Quasi-Newton Approximation of Second Derivatives

IPOPT has an option to approximate the Hessian of the Lagrangian by a limited-memory quasi-Newton method (L-BFGS). You can use this feature by setting the hessian_approximation option to the value limited-memory. In this case, it is not necessary to implement the Hessian computation method eval_h in TNLP. If you are using the C or Fortran interface, you still need to provide the corresponding Hessian callback, but it should simply return false (C interface) or set IERR=1 (Fortran interface) and does not need to do anything else.
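For example, with the C++ interface the option can be set programmatically on the IpoptApplication object (or, equivalently, by adding the line "hessian_approximation limited-memory" to the ipopt.opt options file). The following minimal sketch assumes a user-defined TNLP implementation called MyNLP; that class name is only illustrative.

    #include "IpIpoptApplication.hpp"

    using namespace Ipopt;

    int main()
    {
       SmartPtr<IpoptApplication> app = IpoptApplicationFactory();

       // Replace the exact Hessian of the Lagrangian by an L-BFGS
       // approximation; eval_h is then never called.
       app->Options()->SetStringValue("hessian_approximation", "limited-memory");

       if( app->Initialize() != Solve_Succeeded )
          return 1;

       SmartPtr<TNLP> mynlp = new MyNLP();   // MyNLP: your own TNLP implementation
       app->OptimizeTNLP(mynlp);

       return 0;
    }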

In general, when second derivatives can be computed with reasonable computational effort, it is usually a good idea to use them, since IPOPT then normally converges in fewer iterations and is more robust. An exception might be cases where your optimization problem has a dense Hessian, i.e., a large percentage of non-zero entries in the Hessian. In such a case, using the quasi-Newton approximation might be better even if it increases the number of iterations, since with exact second derivatives the computation time per iteration can be significantly higher due to the large number of non-zero elements in the linear systems that IPOPT solves in order to compute the search direction.

Since the Hessian of the Lagrangian is zero for all variables that appear only linearly in the objective and constraint functions, the Hessian approximation should only take place in the space of the nonlinear variables. By default, it is assumed that all variables are nonlinear, but you can tell IPOPT explicitly which variables are nonlinear using the get_number_of_nonlinear_variables and get_list_of_nonlinear_variables methods of the TNLP class, see Section 3.3.4 and the sketch below. (These methods have been implemented for the AMPL interface, so if you are using the quasi-Newton option for AMPL models, the Hessian is automatically approximated only in the space of the nonlinear variables.) Currently, these two methods are not available through the C or Fortran interface.
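As an illustration, the two methods could be overridden as in the following sketch for a hypothetical problem in which only the first two variables enter the objective or constraints nonlinearly. The other TNLP methods are omitted for brevity, and the 0-based indexing used here is an assumption of this sketch; consult the IpTNLP.hpp documentation for the exact numbering convention expected.

    #include "IpTNLP.hpp"

    using namespace Ipopt;

    class MyNLP : public TNLP
    {
    public:
       // ... the usual TNLP methods (get_nlp_info, get_bounds_info, eval_f,
       //     eval_grad_f, eval_g, eval_jac_g, finalize_solution, ...) ...

       // Tell IPOPT how many variables appear nonlinearly in the objective
       // or the constraints (2 in this hypothetical example).
       virtual Index get_number_of_nonlinear_variables()
       {
          return 2;
       }

       // Fill pos_nonlin_vars with the indices of those variables (0-based
       // here). num_nonlin_vars equals the value returned by
       // get_number_of_nonlinear_variables.
       virtual bool get_list_of_nonlinear_variables(
          Index  num_nonlin_vars,
          Index* pos_nonlin_vars)
       {
          pos_nonlin_vars[0] = 0;   // variable x[0] enters nonlinearly
          pos_nonlin_vars[1] = 1;   // variable x[1] enters nonlinearly
          return true;
       }
    };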