No longer updated!

This page presents benchmarks of solvers with a GAMS interface.

MIP

Date | Solvers | Models | Comment
18.05.2012 | CBC 2.7, CPLEX 12.4.0.0, GUROBI 4.6.1, SCIP 2.1.1 | MIPLIB 2010 |
28.02.2011 | CBC 2.4, CPLEX 12.2.0.2, GUROBI 4.0.1, SCIP 2.0.1 | LINLib |
09.02.2010 | CBC 2.4, CPLEX 12.1, GUROBI 2.0, SCIP 1.2 | LINLib |
18.01.2009 | CPLEX 11.2, CBC 2.2, SCIP 1.1 | LINLib |
04.10.2008 | CPLEX 11.1, CBC 2.2, SCIP 1.1, SCIP 1.0, GLPK 4.30 | LINLib |
14.09.2008 | CPLEX 11.1, CBC 2.2, SCIP 1.0, GLPK 4.30 | LINLib |
25.08.2008 | CBC 1.1, 1.2, 2.0, 2.1, 2.2 | LINLib |
19.07.2008 | CBC 2.1 | LINLib | one vs. two threads

NLP

Date | Solvers | Models | Comment
04.03.2011 | CONOPT 3.14V, IPOPT 3.8, KNITRO 7.0.0, MINOS 5.51, PATH-NLP 4.7.02, SNOPT 7.2-4 | GlobalLib |
20.11.2009 | IPOPT 3.7, CONOPT 3.14T, MINOS 5.51, SNOPT 7.2-4 | GlobalLib |
20.11.2009 | IPOPT 3.7 | GlobalLib | linear solver MA27 vs. MUMPS 4.8.3; GAMS version vs. locally compiled version
19.10.2008 | IPOPT 3.5, KNITRO 5.1.2, CONOPT 3.14S, MINOS 5.51, SNOPT 7.2-4 | GlobalLib |
21.09.2008 | IPOPT 3.5 | GlobalLib | linear solver MA27 vs. MUMPS 4.8.3

LP

Date | Solvers | Models | Comment
07.03.2011 | CPLEX 12.2.0.2, CLP 1.11, GLPK 4.43, GUROBI 4.0.1, MOSEK 6.0.0.96 | LINLib |
21.01.2009 | CPLEX 11.2, CLP 1.8, GLPK 4.35 | LINLib |

MINLP

Date | Solvers | Models | Comment
12.11.2012 | Baron 11.5.2, Couenne 0.4, Lindo API 7.0.1.497, SCIP 3.0.0 | MINLPLib |
13.09.2012 | Baron 11.3.0, Couenne 0.4, Lindo API 7.0.1.497, SCIP 3.0.0 | MINLPLib |
14.09.2012 | AlphaECP 2.09.02, Bonmin 1.6, DICOPT, Knitro 8.0.0, SBB, SCIP 3.0.0 | convex MINLPs from IBM-CMU MINLP, MINLPLib, Günlük and Linderoth |
25.02.2011 | AlphaECP 1.75.04, Bonmin 1.4, DICOPT 2x-C, Knitro 7.0.0, SBB | convex MINLPs from Bonami, Kilinç, Linderoth |
25.02.2011 | Bonmin 1.4 | convex MINLPs from Bonami, Kilinç, Linderoth | B-BB vs. B-ECP vs. B-Hyb vs. B-OA vs. B-QG
29.09.2008 | Baron 8.1, LindoGlobal 5.0 | MINLPLib |
29.09.2008 | Bonmin 0.99 | MINLPLib | B-BB vs. B-Hyb vs. B-OA, only convex models
29.09.2008 | Baron 8.1, LindoGlobal 5.0, AlphaECP 1.63, Bonmin 0.99, DICOPT 2x, SBB, OQNLP | MINLPLib | only convex models
29.09.2008 | Baron 8.1, LindoGlobal 5.0, AlphaECP 1.63, Bonmin 0.99, DICOPT 2x, SBB, OQNLP | MINLPLib |


Older benchmarks:

This page contains details about benchmarks of COIN-OR solvers with a GAMS interface against other GAMS solvers.
The benchmarks were made by creating trace files with the GAMS trace option and processing them with the PAVER server.
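As an illustration, trace files are essentially CSV tables, and a PAVER-style summary can be computed from them. The column names and simplified format below are assumptions for the sketch; real GAMS trace files carry more columns:

```python
import csv
import io

# Hypothetical, simplified trace data: one line per (model, solver) run.
# Solver status 1 is taken to mean normal completion.
sample = """\
model,solver,solverstatus,seconds
10teams,CBC,1,12.3
10teams,CPLEX,1,1.7
danoint,CBC,3,3600.0
"""

def summarize(trace_text):
    """Count completed runs and total time per solver,
    roughly as a PAVER-style tool would aggregate a trace file."""
    stats = {}
    for row in csv.DictReader(io.StringIO(trace_text)):
        s = stats.setdefault(row["solver"], {"solved": 0, "seconds": 0.0})
        if row["solverstatus"] == "1":
            s["solved"] += 1
        s["seconds"] += float(row["seconds"])
    return stats

stats = summarize(sample)  # e.g. stats["CPLEX"]["solved"] == 1
```

Running several solvers over the same model set and merging their trace files this way is what enables the head-to-head comparisons in the tables above.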


Content:

  1. General Notes (README)
  2. Solvers
  3. LP Benchmarks
  4. MIP Benchmarks
  5. NLP Benchmarks
  6. MINLP Benchmarks

General Notes

Solvers used in the benchmarks from 02.09.2007

We used commercial solvers as included in GAMS 22.5 and open-source solvers in the versions that were available on 21.08.2007. For results after 02.09.2007, the solver version is indicated with the results where it differs. In detail, the following solvers were used:

CPLEX 10.2.0

CBC and CLP

The following trunk revisions were used: CBC rev 770, CLP rev 1092, CGL rev 520, CoinUtils rev 837, OSI rev 1073

GAMS/CBC uses the OSI/CLP interface and the CbcSolver routines that are also used by the standalone CBC version. Additionally, the following options are changed:

GLPK 4.20

GAMS/GLPK uses the OSI/GLPK interface. MIPs are solved with GLPK's advanced branch-and-cut solver (lpx_intopt).
Additionally, the following options are set or changed:

CONOPT 3.14r

KNITRO 5.1

IPOPT 3.3

The stable branch revision 1067 was used. The linear solver is MA27.
The GAMS/IPOPT interface changes the defaults of the following IPOPT options:

SBB = Simple Branch and Bound

Since we also deal with nonconvex models, we have set acceptnonopt to 1.

DICOPT = Discrete and Continuous Optimizer

DICOPT was run with standard options, meaning that DICOPT stops when the NLP solution worsens or when the major iteration limit of 20 is reached. It will be interesting to see how the results change when these options are varied.
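The default stopping rule described above can be illustrated with a small sketch. This is a hypothetical illustration assuming a minimization problem, not DICOPT code:

```python
def major_iterations_until_stop(nlp_objectives, iteration_limit=20):
    """Sketch of a DICOPT-style stopping rule for minimization:
    stop at the first major iteration whose NLP subproblem objective
    is worse than the best found so far, or at the iteration limit."""
    best = float("inf")
    for it, obj in enumerate(nlp_objectives, start=1):
        if obj > best:
            return it  # NLP solution worsened -> stop
        best = obj
        if it == iteration_limit:
            return it  # major iteration limit reached
    return len(nlp_objectives)

# With objectives 10, 8, 9 the loop stops at iteration 3,
# because 9 is worse than the best value 8.
```

Note that stopping on the first worsening can terminate early on nonconvex models, which is one reason varying these options is of interest.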

AlphaECP 1.30 = Extended Cutting Plane

BARON 7.8.1 = Branch And Reduce Optimization Navigator

BONMIN

The trunk revision 681 was used.

LaGO

The trunk revision 127 was used.

LP Benchmarks

We ran all LPs from the LINLib that have at least 50000 nonzeros (categories medium, large, and huge). These are 118 models.

02.09.2007:

01.10.2007:

07.01.2008:

28.01.2008:

MIP Benchmarks

We ran all MIPs from the LINLib. These are 125 models.

25.08.2008:

19.07.2008:

06.07.2008:

01.06.2008:

19.05.2008:

07.01.2008:

02.09.2007:

NLP Benchmarks

We ran all NLPs from the GlobalLib. These are 379 models.

02.09.2007:

Next, we ran all QQPs from the GlobalLib with at most 1000 variables. These are 162 models.
The gap tolerance is set to 1% and the feasibility tolerance to 1e-4.
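For reference, the 1% gap criterion can be sketched as follows. The exact denominator convention used by the benchmark tools is an assumption here; this is one common choice:

```python
def relative_gap(best_objective, best_bound):
    """Relative optimality gap: |incumbent - bound| divided by the
    magnitude of the incumbent objective (guarded against zero)."""
    if best_objective == best_bound:
        return 0.0
    return abs(best_objective - best_bound) / max(abs(best_objective), 1e-10)

# Under a 1% tolerance, a bound of 99.5 for an incumbent of 100
# counts as solved, since the gap is 0.005.
assert relative_gap(100.0, 99.5) <= 0.01
```

A run is then counted as solved to tolerance when this gap is at most 0.01 and the solution satisfies the feasibility tolerance.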

02.09.2007:

MINLP Benchmarks

We ran on a selected set of MINLPs from the MINLPLib. The selection aims not to overrepresent a type of model that one solver might be very good at (e.g., AlphaECP is very good on the models fo{7,8,9}*, m7*, no7*, and o{7,8,9}*, so we took only some of them). We also limited ourselves to models with at most 1000 variables. This gives a set of 167 models.
We ran with a gap tolerance of 1%.

02.09.2007:

Solver A vs. Solver B:

27.09.2007:

28.09.2007:

02.09.2007:

Influence of NLP subsolver on SBB performance:

Influence of NLP and MIP subsolver on DICOPT performance:

Influence of MIP subsolver on AlphaECP performance: