
The Gurobi hardware and performance guide discusses how Gurobi performs on various computer hardware configurations, emphasizing the benefits of high CPU speeds and low-latency, high-bandwidth memory.

Gurobi Hardware and Performance

How does Gurobi perform on different computer hardware?

  • It is difficult to predict how Gurobi will perform on a given machine. In general, the solver benefits from high CPU speeds and low-latency, high-bandwidth memory.
  • Having multiple cores at your disposal can improve performance, but this is highly problem-dependent. This is also true for the amount of memory (RAM).
  • Multi-channel memory configurations (e.g., dual- or quad-channel DDR4) increase data throughput and are preferred over single-channel RAM.
  • If you are solving a large MIP in parallel, it is best to use a system with the fastest possible clock rate, using the fastest available memory, with as many fully populated memory channels as are available. Current Intel Xeon systems support up to six channels per CPU, while current AMD EPYC systems support up to eight. Desktop and low-end server configurations typically have fewer channels.

Why does Gurobi perform differently on different machines?

  • Solving the same model on different machines can result in different solution paths (performance variability). Hardware plays an important role, and changing the order of variables or constraints, or changing the random seed, can also drastically impact performance even when the mathematical models are identical.
  • The reasons for performance variability are quite diverse and can originate from differences in operating systems, underlying libraries, computer hardware, etc. Gurobi makes many decisions during the search for an optimal solution, and small perturbations can tip these decisions toward a different solution path.
  • Gurobi attempts to exploit performance variability when running in ConcurrentMIP mode. To get a sense of how susceptible your model is to performance variability, compare several runs that differ only in the random seed parameter, as in the sketch below.
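
A minimal sketch of such a comparison, assuming gurobipy is installed and using the placeholder file name model.mps:

    import gurobipy as gp

    runtimes = {}
    for seed in range(5):
        m = gp.read("model.mps")    # placeholder model file
        m.Params.Seed = seed        # only the random seed differs between runs
        m.optimize()
        runtimes[seed] = m.Runtime  # wall-clock solve time reported by Gurobi
        m.dispose()

    print(runtimes)  # a large spread across seeds indicates high variability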

Does using more threads make Gurobi faster?

  • Continuous models (LP, QP, SOCP):
    • Yes for barrier algorithm.
    • No for simplex method.
  • Mixed-Integer Programs:
    • Yes, if a large number of nodes is required.
    • No for MIPs that are solved at or near the root node.
  • The default value of the Threads parameter (0) is an automatic setting that uses up to min(32, number of virtual cores) threads, i.e., up to 32 threads on HSUper; setting Threads explicitly is shown in the sketch after this list.
  • In the following cases, more threads are often not better (and can even be worse):
    • When the first solution found by the MIP solver is almost always optimal, and that solution isn’t found at the root.
    • When memory is tight.
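
A minimal sketch of setting the Threads parameter explicitly, assuming gurobipy and the placeholder file name model.mps; the value 8 is only an illustration:

    import gurobipy as gp

    m = gp.read("model.mps")   # placeholder model file
    m.Params.Threads = 8       # explicit thread count; the default 0 lets Gurobi decide
    m.optimize()
    print(m.Runtime, "seconds wall-clock with", m.Params.Threads, "threads")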

What hardware should I select when running Gurobi?

  • Parallelization effects are model-dependent.
  • GPUs do not help.
  • The faster the CPU, the better.
  • RAM requirements are model-dependent.
  • It is crucial to utilize a wide range of instances to ensure that the obtained results are robust and represent an accurate picture of the expected load on the system, as sketched below.
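
A minimal benchmarking sketch along these lines, assuming gurobipy and hypothetical instance files; the seed values and time limit are illustrative only:

    import gurobipy as gp

    instances = ["instance_a.mps", "instance_b.mps"]  # hypothetical model files

    results = []
    for path in instances:
        for seed in (0, 1, 2):
            m = gp.read(path)
            m.Params.Seed = seed          # vary the seed to account for variability
            m.Params.TimeLimit = 600      # cap each run (seconds)
            m.optimize()
            results.append((path, seed, m.Runtime, m.Status))
            m.dispose()

    for path, seed, runtime, status in results:
        print(f"{path}  seed={seed}  {runtime:.1f}s  status={status}")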

Does Gurobi support GPUs?

  • GPUs aren’t well suited to the needs of an LP/MIP/QP solver (sparse linear algebra is not well suited to SIMD architectures).
  • Gurobi Optimizer is designed to effectively exploit multiple cores in a CPU.

Are there performance differences amongst the different Gurobi APIs?

  • The Gurobi APIs (Java, Python, C++, C#, R, and MATLAB) are thin layers built on top of the C API. These APIs collect the data and pass it on to the core library, which is written in C. From a solver and solving-performance point of view, there is no difference.
  • However, the API can make a difference for the model-building part of the code, since model building runs in the chosen language; see the sketch below.
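
A small sketch of model building in the Python API (gurobipy assumed; the model itself is an arbitrary toy example). Creating variables in bulk with addVars and building expressions with quicksum typically keeps the Python-side model-building overhead lower than adding objects one at a time in explicit loops:

    import gurobipy as gp
    from gurobipy import GRB

    n = 1000
    m = gp.Model("build-speed")

    # Bulk variable creation and aggregated expressions keep model building fast.
    x = m.addVars(n, vtype=GRB.BINARY, name="x")
    m.addConstr(gp.quicksum(x[i] for i in range(n)) <= 10, name="cardinality")
    m.setObjective(x.sum(), GRB.MAXIMIZE)
    m.optimize()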

What is the correlation between problem size and solution time?

  • In general, there is no correlation between problem size and solution time with Gurobi. The time it takes Gurobi to find and prove an optimal solution depends on how well the algorithms are able to tackle the particular problem at hand.
  • It is often possible to identify such correlations for a particular model family, i.e., a set of models that originate from the same algebraic description but use different data.

How do I get similar performance from Gurobi based on parameters tuned for another solver?

  • In terms of performance tuning, there are three types of parameters: termination criteria, tolerance values, and algorithm behavior.
  • For termination criteria such as MIPGap or tolerance values such as IntFeasTol, Gurobi parameters should correspond closely to those of other solvers (see the sketch after this list).
  • In terms of algorithm behavior, it is generally best to start with default values, since default values in Gurobi have been tested to be fast and robust across a wide variety of models.
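
A minimal sketch of carrying over such settings (gurobipy assumed, model.mps is a placeholder, and the parameter values are illustrative):

    import gurobipy as gp

    m = gp.read("model.mps")     # placeholder model file
    m.Params.MIPGap = 0.01       # termination criterion: stop at a 1% relative gap
    m.Params.IntFeasTol = 1e-6   # tolerance: integer feasibility
    # Algorithm-behavior parameters are deliberately left at their Gurobi defaults.
    m.optimize()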

Is Gurobi deterministic?

  • Gurobi is designed to be deterministic: the same inputs (model and parameters) on the same computer with the same Gurobi version always produce the same results, and this also holds for parallel optimization.
  • Concurrent optimization, however, is not deterministic.