Parallelization of the FICO Xpress-Optimizer

Blog Post created by timoberthold@fico.com Advocate on Jun 28, 2017

Original article by timoberthold@fico.com, James Farmer, Stefan Heinz & Michael Perregaard

 

Computing hardware has largely reached the physical limits for speeding up individual computing cores. Consequently, the main line of progress for new hardware is to grow the number of computing cores within a single CPU. This makes the study of efficient parallelization schemes for computation-intensive algorithms more and more important.

 

A natural precondition for achieving reasonable speedups from parallelization is maintaining a high workload on the available computational resources. At the same time, reproducibility and reliability are key requirements for software that is used in industrial applications. In the Parallelization of the FICO Xpress-Optimizer paper, we present the new parallelization concept for the state-of-the-art MIP solver FICO Xpress-Optimizer. MIP solvers like Xpress are expected to be deterministic. This inevitably introduces synchronization latencies, which make the goal of a satisfactory workload a challenge in itself.

 

We address this challenge by following a partial information approach and separating the concepts of simultaneous tasks and independent threads from each other. Our computational results indicate that this leads to a much higher CPU workload and thereby to improved, almost linear scaling on modern high-performance CPUs. As an added value, the solution path that Xpress takes is not only deterministic in a fixed environment, but also, to a certain extent, thread-independent.

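To give a flavor of what separating tasks from threads means in practice, here is a minimal C++ sketch of the general idea. It is not the actual Xpress implementation; the task sizes, the summation workload, and the thread-pool layout are illustrative assumptions. The key point it demonstrates is that work is split into tasks independently of the thread count, and results are merged in a fixed task order rather than in completion order, so the final result is the same no matter how many threads run or how they interleave.

```cpp
// Hypothetical sketch (not the Xpress code): a fixed pool of worker threads
// pulls tasks from a shared counter, each task writes its result into a slot
// indexed by task id, and the merge always happens in task order.  The output
// is therefore deterministic for any number of threads.
#include <atomic>
#include <iostream>
#include <numeric>
#include <string>
#include <thread>
#include <vector>

int main(int argc, char** argv) {
    // Thread count is a runtime choice; the task decomposition below does not depend on it.
    std::size_t numThreads = (argc > 1) ? std::stoul(argv[1])
                                        : std::thread::hardware_concurrency();
    if (numThreads == 0) numThreads = 2;

    const std::size_t numTasks = 32;              // tasks defined independently of threads
    std::vector<int> data(1'000'000);
    std::iota(data.begin(), data.end(), 1);

    std::vector<long long> partial(numTasks, 0);  // one result slot per task
    std::atomic<std::size_t> nextTask{0};         // shared counter handing out tasks

    auto worker = [&]() {
        // Each thread repeatedly claims the next unprocessed task and sums its slice.
        for (std::size_t t = nextTask.fetch_add(1); t < numTasks;
             t = nextTask.fetch_add(1)) {
            const std::size_t chunk = data.size() / numTasks;
            const std::size_t begin = t * chunk;
            const std::size_t end   = (t + 1 == numTasks) ? data.size() : begin + chunk;
            partial[t] = std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
        }
    };

    std::vector<std::thread> pool;
    for (std::size_t i = 0; i < numThreads; ++i) pool.emplace_back(worker);
    for (auto& th : pool) th.join();

    // Deterministic merge: always in task order, never in thread-completion order.
    long long total = 0;
    for (long long p : partial) total += p;

    std::cout << "threads=" << numThreads << "  total=" << total << '\n';
    return 0;
}
```

In a MIP solver the "tasks" would be far more complex (node processing, heuristics, cut separation) and the merge step would exchange bounds and solutions, but the design choice is the same: determinism comes from fixing the order in which task results are combined, not from fixing which thread does what.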
 

Find the full copy of our article Parallelization of the FICO Xpress-Optimizer here, on Taylor & Francis Online. Get a free trial of FICO Optimization Modeler, or learn more about the Xpress Optimization Suite here.
