
The Gurobi solver guide provides detailed instructions on configuration, setup, and parameter tuning for solving models using Gurobi on HSUper.

Gurobi Solver: Configuration, Setup, Log, Tuning

Solving a Model

Running Gurobi on HSUper

  • Log in to HSUper, e.g., via MobaXterm (Windows only), PuTTY, or the ssh command.
  • Use a SLURM script to submit jobs on HSUper by entering sbatch (followed by the name of the SLURM script) in the terminal.
  • Enter squeue -u $USER to see all your jobs in the queue (or replace $USER by a username); R in the ST column means that the job is currently running.
  • Enter squeue --format="%.18i %.9P %.30j %.8u %.8T %.10M %.9l %.6D %R" --me to extend the length of the output of squeue.
  • Enter scancel -u $USER to cancel all your jobs, or scancel followed by a job ID to cancel the corresponding job.
  • Enter sacct -S now-7days -X --user=$USER --format="jobid,jobname,user,account,partition,AllocCPUS,AllocNodes,State,ConsumedEnergy,Elapsed,TimeLimit,ExitCode" to obtain information about all your jobs from the last 7 days.
  • A basic SLURM script could look like this:
#!/bin/bash

#SBATCH --job-name=name       ### the name of your job
#SBATCH --output=%x_%j.out    ### output file for stdout/stderr (%x = job name, %j = job ID)

#SBATCH --nodes=1             ### number of nodes
#SBATCH --ntasks=1            ### number of (MPI) tasks needed
#SBATCH --partition=small     ### partition (small, small_shared, small_fat, dev)
#SBATCH --cpus-per-task=72    ### number of threads per task (OMP threads)
#SBATCH --time=24:00:00       ### maximum wall clock limit for job execution

gurobi_cl model.mps
  • If you want to solve several problems, e.g., with different .mps files or with varying numbers of threads (see How to change parameters in Gurobi command line), you can add these runs to the same SLURM script, each on a new line after gurobi_cl model.mps, so that the problems are solved one after the other.
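For example (the model file names below are placeholders), the end of such a SLURM script could look like this:

```shell
# Solve three problems one after the other within the same job
# (replace the file names with your own models):
gurobi_cl model1.mps
gurobi_cl Threads=36 model2.mps
gurobi_cl Threads=72 model3.mps
```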
  • If you want to start all your runs as soon as possible, you can use individual SLURM scripts and submit each of them via sbatch (followed by the name of the corresponding SLURM script). Alternatively, you can use a bash script (chmod +x myBash makes the bash script myBash executable; ./myBash executes it) to submit the different SLURM scripts. The following bash script starts runs for 30 different problems (each identical except for the .mps file used, with j = 1, …, 30; start p=1, end P=30) and different numbers of threads (i = 1, 2, 4, 8, 16, 32, 64; start t=1, end T=72) and writes the log to individual log files.
#!/bin/bash

t=1
T=72
p=1
P=30
for (( j=$p ; j<=$P ; j++ ));
do
  for (( i=$t ; i<=$T ; i*=2 ));
  do
    sbatch --job-name="threads$i.problem$j" --export=ALL,i="$i",j="$j" var_param_gurobi.slurm
  done
done
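To check which jobs such a loop would generate without actually submitting anything, you can replace the sbatch call with echo (a dry run; the counter n and the reduced P=2 below are only for illustration):

```shell
#!/bin/bash

# Dry run of the submission loop: print each job name instead of
# calling sbatch. The thread count i doubles each pass (1, 2, 4,
# ..., 64) and the inner loop stops once i exceeds T=72.
t=1
T=72
p=1
P=2          # only 2 problems here to keep the dry run short
n=0          # counts the jobs that would be submitted
for (( j=$p ; j<=$P ; j++ ));
do
  for (( i=$t ; i<=$T ; i*=2 ));
  do
    echo "threads$i.problem$j"
    n=$((n+1))
  done
done
```

With t=1, T=72, and two problems this prints 14 job names, seven per problem.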

The SLURM script var_param_gurobi.slurm is identical to the SLURM script from above, except that the line
gurobi_cl model.mps
is changed to
gurobi_cl Threads=$i LogFile="$HOME/gurobiLog/model$j.threads$i.log" model$j.mps

How to change parameters in Gurobi command line?

  • Add the parameters you want to set and their values between gurobi_cl and model.mps, e.g., use Gurobi with 4 threads via:
    gurobi_cl Threads=4 model.mps
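Several parameters can be combined in a single call. As a sketch (the values are examples; TimeLimit is in seconds, MIPGap is a relative gap):

```shell
# Use 8 threads, stop after one hour or at a 1% optimality gap:
gurobi_cl Threads=8 TimeLimit=3600 MIPGap=0.01 model.mps
```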

How to use concurrent optimization?

  • Concurrent optimization starts multiple independent solves of a model, using a different strategy for each, and terminates when the first one completes. This often leads to a speedup.
  • Concurrent optimization is the default choice for solving LP models (modify via the Method parameter) and a user-selectable option for MIP models (modify the number of instances via the ConcurrentMIP parameter; this divides the available threads evenly among the independent solves, and different values for the MIPFocus and Seed parameters are chosen for each solve).
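On the command line, this could look like the following sketch (parameter values are examples):

```shell
# Concurrent LP: Method=3 selects the concurrent LP algorithm
gurobi_cl Method=3 model.mps

# Two independent MIP solves sharing the available threads
gurobi_cl ConcurrentMIP=2 Threads=72 model.mps
```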

How do I instruct Gurobi to produce a log file?

  • Gurobi produces a log so that you can track the progress of the optimization (on HSUper, the log is additionally written to the output file defined in the SLURM script).
  • A default log file is written only for the Gurobi command line (gurobi_cl) and for the interactive shell (gurobi.sh).
  • When you use one of the APIs to initiate the optimization, you need to specify the name of the log file via the LogFile parameter.
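On the command line, this could look like the following sketch (the log file name is an example):

```shell
# Write the optimization log to a custom file:
gurobi_cl LogFile=mymodel.log model.mps
```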

How do I export a model from Gurobi?

  • Gurobi can export your model to several different file formats.
  • The export functionality is available via the write function of the appropriate API, or via the Gurobi command-line tool by setting the ResultFile parameter, e.g., gurobi_cl ResultFile=model.rew model.mps.
    With this approach, the model file isn’t written until the optimization completes. You may therefore want to solve the model with a very short TimeLimit so that you don’t have to wait for it to solve to optimality.
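For example, to obtain the exported file quickly, a very short time limit can be combined with ResultFile (the values below are illustrative):

```shell
# Stop the solve after one second, then write the anonymized model:
gurobi_cl TimeLimit=1 ResultFile=model.rew model.mps
```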

Exporting MPS files from open-source and commercial solvers

  • The MPS format has not been standardized, and each solver may have its own set of conventions and extensions.
  • CPLEX command line:
    • write model.mps
    • The file is compressed directly if you append .gz after the filename, i.e., model.mps.gz.
    • If you want to anonymize the variable and constraint names, use the file ending .rew instead of .mps.
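From a shell, the CPLEX interactive commands can also be passed via the -c option; a sketch (file names are examples):

```shell
# Read a model and export it as a compressed MPS file:
cplex -c "read model.lp" "write model.mps.gz"
```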

Parameter Tuning Tool

  • The Gurobi tuning tool performs multiple solves on your model, choosing different parameter settings for each solve, in a search for settings that improve runtime. The longer you let it run, the more likely it is to find a significant improvement.
  • Use the grbtune command-line tool, or invoke tuning from one of the Gurobi programming-language APIs.
  • The command-line tool offers more tuning features. For example, it allows you to provide a list of models to tune, or specify a list of base settings to try (TuneBaseSettings).
  • There are cases where subtle changes in the search produce 100X performance swings. While the tuning tool tries to limit the impact of these effects, the final result will typically still be heavily influenced by such issues.
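A typical invocation is sketched below (the tuning time limit is an example):

```shell
# Tune model.mps for at most one hour of tuning time:
grbtune TuneTimeLimit=3600 model.mps
```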