Derivative Checker

To use the derivative checker, you need to set the option
`derivative_test`.
By default, this option is set to `none`,
i.e., no finite difference test is performed. If it is set to `first-order`, the first derivatives of the objective function
and the constraints are verified, and for the setting `second-order`, the second derivatives are tested as well.
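For example, the first-order test can be enabled through an `ipopt.opt` options file (a minimal sketch; the file is read at solver startup):

```
# ipopt.opt -- enable checking of first derivatives
derivative_test first-order
```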

The verification is done by a simple finite difference approximation,
where each component of the user-provided starting point is perturbed
one after the other. The relative size of the perturbation is determined
by the option `derivative_test_perturbation`. The default value
(about the square root of the machine precision) is
probably fine in most cases, but if you believe that you see incorrect
warnings, you might want to play with this parameter. When the test is
performed, IPOPT prints out a line for every partial derivative for
which the user-provided derivative value deviates too much from the
finite difference approximation. The relative tolerance for deciding
when a warning should be issued is determined by the option
`derivative_test_tol`.
If you want to see the user-provided and
estimated derivative values with the relative deviation for each
single partial derivative, you can switch the
`derivative_test_print_all`
option to `yes`.
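The procedure above can be sketched in a few lines of Python. The function names, the one-sided difference, and the relative-error formula are illustrative assumptions, not IPOPT's actual implementation:

```python
import numpy as np

def check_gradient(f, grad_f, x0, perturbation=1e-8, tol=1e-4):
    """Sketch of a first-order derivative test: perturb each component of
    the starting point x0 one after the other and compare the user-provided
    gradient against a finite difference estimate.  All names and the exact
    relative-error formula are assumptions for illustration."""
    g_user = np.asarray(grad_f(x0), dtype=float)
    errors = []
    for i in range(len(x0)):
        x = np.array(x0, dtype=float)
        h = perturbation * max(1.0, abs(x[i]))   # relative perturbation size
        x[i] += h
        g_fd = (f(x) - f(x0)) / h                # one-sided finite difference
        rel = abs(g_user[i] - g_fd) / max(1.0, abs(g_fd))
        if rel > tol:                            # report only offending entries
            print(f"* grad_f[{i}] = {g_user[i]:.16e} ~ {g_fd:.16e} [{rel:.3e}]")
            errors.append(i)
    return errors
```

For instance, with `f = lambda x: x[0]**2 + 3*x[1]`, a correct gradient callback passes silently, while a callback returning `5.0` instead of `3.0` for the second component is flagged.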

A typical output is:

```
Starting derivative checker.

* grad_f[ 2] = -6.5159999999999991e+02 ~ -6.5559997134793468e+02 [ 6.101e-03]
* jac_g [ 4, 4] = 0.0000000000000000e+00 ~ 2.2160643690464592e-02 [ 2.216e-02]
* jac_g [ 4, 5] = 1.3798494268463347e+01 v ~ 1.3776333629422766e+01 [ 1.609e-03]
* jac_g [ 6, 7] = 1.4776333636790881e+01 v ~ 1.3776333629422766e+01 [ 7.259e-02]

Derivative checker detected 4 error(s).
```

The star (`*`) in the first column indicates that this line
corresponds to some partial derivative for which the error tolerance
was exceeded. Next, we see which partial derivative is concerned in
this output line. For example, in the first line, it is the second
component of the objective function gradient (or the third, if the
`C_STYLE` numbering is used, i.e., when counting of indices
starts with 0 instead of 1). The first floating point number is the
value given by the user code, and the second number (after
`~`) is the finite difference estimate. Finally, the
number in square brackets is the relative difference between these two
numbers.
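Assuming the bracketed number is the absolute difference divided by the larger of one and the magnitude of the estimate (an assumption, but one that reproduces the sample output), the first line's value can be checked by hand:

```python
user = -6.5159999999999991e+02   # value from the user code (grad_f[2])
fd   = -6.5559997134793468e+02   # finite difference estimate
rel  = abs(user - fd) / max(1.0, abs(fd))
print(f"[{rel:.3e}]")            # prints [6.101e-03], the bracketed value
```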

For constraints, the first index after `jac_g` is the index of
the constraint, and the second one corresponds to the variable index
(again, the choice of the numbering style matters).

Since the sparsity structure of the constraint Jacobian also has to be
provided by the user, it can be faulty as well. For this, the
`v` after a user-provided derivative value indicates that this
component of the Jacobian is part of the user-provided sparsity
structure. If there is no `v`, it means that the user did not
include this partial derivative in the list of non-zero elements. In
the above output, the partial derivative `jac_g[4,4]` is
non-zero (based on the finite difference approximation), but it is not
included in the list of non-zero elements (missing `v`), so
the user probably made a mistake in the sparsity structure. The
other two Jacobian entries are provided in the non-zero structure, but
their values seem to be off.
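The missing-`v` diagnosis amounts to comparing the declared triplet structure against the entries where the finite difference estimate is non-zero. A small sketch with hypothetical variable names, using the numbers from the sample output:

```python
# User-declared sparsity structure of jac_g in triplet form
# (constraint index, variable index), 1-based as in FORTRAN_STYLE numbering.
declared = {(4, 5), (6, 7)}

# Entries where the finite difference estimate came out non-zero:
fd_nonzero = {(4, 4): 2.2160643690464592e-02,
              (4, 5): 1.3776333629422766e+01,
              (6, 7): 1.3776333629422766e+01}

for (i, j), val in sorted(fd_nonzero.items()):
    if (i, j) not in declared:   # would be printed without the 'v' marker
        print(f"jac_g[{i},{j}] ~ {val:.3e} is nonzero but missing from the structure")
```

Here only `jac_g[4,4]` is reported, matching the diagnosis above.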

For second derivatives, the output looks like:

```
* obj_hess[ 1, 1] = 1.8810000000000000e+03 v ~ 1.8820000036612328e+03 [ 5.314e-04]
* 3-th constr_hess[ 2, 4] = 1.0000000000000000e+00 v ~ 0.0000000000000000e+00 [ 1.000e+00]
```

There, the first line shows the deviation of the user-provided partial second derivative in the Hessian of the objective function, and the second line shows an error in a partial derivative of the Hessian of the third constraint (again, the numbering style matters).

Since the second derivatives are approximated by finite differences of the first derivatives, you should first correct any errors in the first derivatives. Also, since the finite difference approximations are quite expensive, you should try to debug a small instance of your problem if you can.
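This is also why gradient bugs propagate into the second-order test: the Hessian estimate is built by differencing the user-supplied gradient. A minimal sketch of that idea (a hypothetical helper, not IPOPT's code):

```python
import numpy as np

def fd_hessian(grad_f, x0, h=1e-6):
    """Approximate the Hessian column by column by differencing the
    user-provided gradient: H[:, j] ~ (grad_f(x0 + h*e_j) - grad_f(x0)) / h.
    If grad_f itself is wrong, this estimate is wrong too."""
    x0 = np.asarray(x0, dtype=float)
    g0 = np.asarray(grad_f(x0), dtype=float)
    n = len(x0)
    H = np.empty((n, n))
    for j in range(n):
        x = x0.copy()
        x[j] += h
        H[:, j] = (np.asarray(grad_f(x), dtype=float) - g0) / h
    return H

# Example: for f(x) = x0^2 * x1 the gradient is [2*x0*x1, x0^2],
# and the exact Hessian at (1, 2) is [[4, 2], [2, 0]].
grad = lambda x: np.array([2.0 * x[0] * x[1], x[0] ** 2])
H = fd_hessian(grad, [1.0, 2.0])
```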

Another useful option is
`derivative_test_first_index`,
which allows you to start the derivative test at variables with a larger
index. Finally, it is of course always a good idea to run your code through
a memory checker, such as `valgrind` on Linux.