
Numerics: Challenge Accepted

Blog post created by timoberthold@fico.com on May 1, 2018

A couple of weeks ago, I went to Dagstuhl castle to attend a seminar on "Designing and Implementing Algorithms for Mixed-Integer Nonlinear Optimization." It was an inspiring event that offered the chance to meet, and discuss with, some of the world's leading researchers in Mixed-Integer Nonlinear Programming (MINLP). My own talk was on "Numerical challenges in MIP and MINLP solvers."

 

The term numerical error covers two sources of error:

 

Errors from approximating numbers

  • This error occurs because each number involved in a computer calculation is represented by only a finite number of digits (more precisely: bits). Thus, there are inevitably some numbers that cannot be represented exactly as floats (a data type used to represent numbers in computer code). Take Pi, with its infinite series of non-repeating decimals, as the most famous example. But there are other, more surprising cases as well. For example, the number 16,777,217 cannot be represented as a float, while 16,777,216 and 16,777,218 can.
    *(for more information, see the detailed explanation at the end of the blog)
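You can watch the 16,777,217 example happen from Python's standard library alone: packing a number into the single-precision "float" format and unpacking it again shows which values survive the round trip (a small illustration, not tied to any solver).

```python
import struct

def to_float32(x):
    # Round-trip a number through IEEE 754 single precision (a C "float")
    return struct.unpack('f', struct.pack('f', x))[0]

print(to_float32(16777216.0))  # 16777216.0  (2^24, exactly representable)
print(to_float32(16777217.0))  # 16777216.0  (the odd value is rounded away)
print(to_float32(16777218.0))  # 16777218.0  (representable again)
```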

 

Errors from approximating mathematical procedures

  • This error arises because a computer program has to evaluate even the most complex mathematical formulas using essentially only addition and multiplication. For example, trigonometric functions like sine and cosine are approximated by a polynomial, namely a truncated Taylor series.
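As a minimal sketch of this idea, here is a truncated Taylor series for sine: summing only finitely many terms yields an approximation whose error shrinks as more terms are added (illustrative only; real math libraries use more refined schemes).

```python
import math

def sin_taylor(x, terms):
    # Partial sum of the Taylor series: sin(x) = x - x^3/3! + x^5/5! - ...
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

# The more terms we keep, the closer we get to the library value:
for n in (2, 4, 8):
    print(n, abs(sin_taylor(1.0, n) - math.sin(1.0)))
```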

 

[Image: iterative.png]

Numerical errors and the use of tolerances often lead to slightly infeasible solutions - which is acceptable in most cases.

 

A single numerical error is usually negligible (and, as said before, unavoidable), but things can get more and more serious when the computation continues with such slightly-off numbers, as the errors can add up and propagate.
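The classic tiny example of such accumulation: 0.1 has no exact binary representation, so repeatedly adding it lets the representation errors pile up.

```python
# 0.1 cannot be stored exactly in binary floating point,
# so each addition contributes a tiny representation error.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)         # 0.9999999999999999 -- not exactly 1.0
print(total == 1.0)  # False
```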

 

The big challenge is figuring out how to keep numerical errors from adding up into a big impact on the solution of your optimization models!

 

One answer, by FICO Xpress, is our solution refiner. Due to numerical errors, the final solution might violate some of the linear constraints, some of the variable bounds, or some of the integrality requirements. The solution refiner triggers extra LP iterations at higher precision to reduce or even eliminate those violations. This procedure is activated by default for both linear and mixed-integer optimization in Xpress.
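To make the kinds of violations mentioned above concrete, here is a toy check of a slightly-off solution against constraints and integrality. This is purely illustrative (the function `max_violations` is hypothetical and has nothing to do with the actual Xpress refiner implementation):

```python
def max_violations(A, b, x, int_vars=()):
    # Toy illustration: measure how much x violates Ax <= b and how far
    # the designated integer variables are from integral values.
    # Hypothetical helper -- not the FICO Xpress solution refiner.
    row_viol = max((sum(a * xj for a, xj in zip(row, x)) - bi
                    for row, bi in zip(A, b)), default=0.0)
    int_viol = max((abs(x[j] - round(x[j])) for j in int_vars), default=0.0)
    return max(row_viol, 0.0), int_viol

# A solution that is "feasible" only up to roughly 1e-6:
A = [[1.0, 1.0]]
b = [2.0]
x = [1.0000004, 1.0000003]
print(max_violations(A, b, x, int_vars=[0, 1]))  # tiny, but nonzero
```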

 

For nonlinear optimization, a lot depends on defining so-called convergence criteria. Since there are more than twenty of these criteria, balancing them to get a decent performance can be a real burden. Fortunately, we recently introduced the "self-tuning default convergence," where a user can specify target values for feasibility and optimality tolerances and FICO Xpress will take care of the rest.

 

So, while a lot of effort has already gone into getting numerical errors under control, many challenges remain, and these challenges open a wide field for research. This is particularly true for Nonlinear Optimization, since it features a lot of issues that are not present in Linear Optimization. Take singularities, like evaluating 1/x close to x=0, as one example. But also some MINLP-specific algorithms, like Outer Approximation, are error-prone by design.
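The 1/x singularity is easy to demonstrate: near x = 0, a perturbation far below typical solver tolerances still changes the function value by a large absolute amount (a standalone illustration, not solver-specific).

```python
# Near a singularity, tiny input errors become huge output errors.
x = 1e-8
dx = 1e-12          # perturbation well below common feasibility tolerances

exact = 1.0 / x             # 1e8
perturbed = 1.0 / (x + dx)  # noticeably different despite the tiny dx

print(abs(exact - perturbed))  # an absolute change on the order of 1e4
```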

 

[Image: IMG_20180328_115159.jpg]

Dagstuhl 2018: Developers of ten different optimization solvers in one picture (I'm on the left).

 

In Dagstuhl, we had another lively discussion about numerics at the breakout session on "Experimentation with MINLP software." It is surely a topic that will keep solver developers and the academic research community busy for the next couple of years. Well then: challenge accepted!

 

You can develop, model, and deploy FICO Xpress Optimization software for free by downloading our Community License. Learn more about FICO Xpress Optimization, ask questions, and hear from experts in the Optimization Community.

 

*A float uses 23 bits for the mantissa, plus one implicit leading bit, giving 24 significant bits. Since 16,777,216 = 2^24, the 2-bit is the last significant bit at this magnitude and the 1-bit is lost. As a consequence, odd numbers between 2^24 and 2^25 cannot be represented. Similarly, from 2^25 to 2^26, only multiples of four can be represented exactly, and so on. As an example in the decimal system, imagine you had to use a number format that allows you to specify three digits of each number plus its order of magnitude. You could represent all numbers from 0 to 999. Also, 1000 would work (since 1000 = 1.00 × 10^3), but the next number you could represent would be 1010 (= 1.01 × 10^3). The numbers 1001 to 1009 would not be representable, but would have to be rounded up or down.
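The decimal analogy above can be mimicked in a few lines: rounding every number to three significant decimal digits shows exactly which values collapse onto their neighbors (a toy sketch of the footnote's hypothetical format).

```python
from math import floor, log10

def round_to_3_digits(x):
    # Keep three significant decimal digits plus an order of magnitude,
    # mimicking the toy number format described in the footnote.
    if x == 0:
        return 0.0
    exp = floor(log10(abs(x)))      # order of magnitude
    scale = 10.0 ** (exp - 2)       # value of the third significant digit
    return round(x / scale) * scale

print(round_to_3_digits(999))   # 999.0  -- still exact
print(round_to_3_digits(1004))  # 1000.0 -- rounded down
print(round_to_3_digits(1006))  # 1010.0 -- rounded up
```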
