Interfacing with IPOPT through code

In order to solve a problem, IPOPT needs more information than just the problem definition (for example, the derivative information). If you are using a modeling language like AMPL, the extra information is provided by the modeling tool and the IPOPT interface. When interfacing with IPOPT through your own code, however, you must provide this additional information. The following information is required by IPOPT:
  1. Problem dimensions
  2. Problem bounds
  3. Initial starting point
  4. Problem structure
  5. Evaluation of problem functions
     (information evaluated at a given point $ x, \lambda, \sigma_f$ coming from IPOPT)
The problem dimensions and bounds are straightforward and come solely from the problem definition. The initial starting point is used by the algorithm when it begins iterating to solve the problem. If IPOPT has difficulty converging, or if it converges to a locally infeasible point, adjusting the starting point may help. Depending on the starting point, IPOPT may also converge to different local solutions.

Providing the sparsity structure of derivative matrices is a bit more involved. IPOPT is a nonlinear programming solver that is designed for solving large-scale, sparse problems. While IPOPT can be customized for a variety of matrix formats, the triplet format is used for the standard interfaces in this tutorial. For an overview of the triplet format for sparse matrices, see Appendix A. Before solving the problem, IPOPT needs to know the number of nonzero elements and the sparsity structure (row and column indices of each of the nonzero entries) of the constraint Jacobian and the Lagrangian function Hessian. Once defined, this nonzero structure MUST remain constant for the entire optimization procedure. This means that the structure needs to include entries for any element that could ever be nonzero, not only those that are nonzero at the starting point.

As IPOPT iterates, it will need the values for Item 5 in Section 3.2 evaluated at particular points. Before we can begin coding the interface, however, we need to work out the details of these equations symbolically for example problem (4)-(7).

The gradient of the objective $ f(x)$ is given by

$\displaystyle \nabla f(x) = \left[ \begin{array}{c} x_1 x_4 + x_4 (x_1 + x_2 + x_3) \\ x_1 x_4 \\ x_1 x_4 + 1 \\ x_1 (x_1 + x_2 + x_3) \end{array} \right]$

and the Jacobian of the constraints $ g(x)$ is

$\displaystyle \left[ \begin{array}{cccc} x_2 x_3 x_4 & x_1 x_3 x_4 & x_1 x_2 x_4 & x_1 x_2 x_3 \\ 2 x_1 & 2 x_2 & 2 x_3 & 2 x_4 \end{array} \right].$

We also need to determine the Hessian of the Lagrangian (see the footnote at the end of this section). The Lagrangian function for the NLP (4)-(7) is defined as $ f(x) + g(x)^T
\lambda$ and the Hessian of the Lagrangian function is, technically, $ \nabla^2 f(x_k) + \sum_{i=1}^m\lambda_i\nabla^2 g_i(x_k)$. However, we introduce a factor ($ \sigma_f$) in front of the objective term so that IPOPT can ask for the Hessian of the objective or of the constraints independently, if required. Thus, for IPOPT the symbolic form of the Hessian of the Lagrangian is

$\displaystyle \sigma_f \nabla^2 f(x_k) + \sum_{i=1}^m\lambda_i\nabla^2 g_i(x_k)$ (9)

and for the example problem this becomes

$\displaystyle \sigma_f \left[ \begin{array}{cccc} 2 x_4 & x_4 & x_4 & 2 x_1 + x_2 + x_3 \\ x_4 & 0 & 0 & x_1 \\ x_4 & 0 & 0 & x_1 \\ 2 x_1 + x_2 + x_3 & x_1 & x_1 & 0 \end{array} \right] + \lambda_1 \left[ \begin{array}{cccc} 0 & x_3 x_4 & x_2 x_4 & x_2 x_3 \\ x_3 x_4 & 0 & x_1 x_4 & x_1 x_3 \\ x_2 x_4 & x_1 x_4 & 0 & x_1 x_2 \\ x_2 x_3 & x_1 x_3 & x_1 x_2 & 0 \end{array} \right] + \lambda_2 \left[ \begin{array}{cccc} 2 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 2 \end{array} \right]$

where the first term comes from the Hessian of the objective function, and the second and third terms come from the Hessians of constraints (5) and (6), respectively. The dual variables $ \lambda_1$ and $ \lambda_2$ are therefore the multipliers for constraints (5) and (6).

The remaining sections of the tutorial will lead you through the coding required to solve example problem (4)-(7) using first C++, then C, and finally Fortran. Completed versions of these examples can be found in $IPOPTDIR/Ipopt/examples under hs071_cpp, hs071_c, and hs071_f.

As a user, you are responsible for coding two sections of the program that solves a problem using IPOPT: the main executable (e.g., main) and the problem representation. Typically, you will write an executable that prepares the problem, and then passes control over to IPOPT through an Optimize or Solve call. In this call, you will give IPOPT everything that it requires to call back to your code whenever it needs functions evaluated (like the objective function, the Jacobian of the constraints, etc.). In each of the three sections that follow (C++, C, and Fortran), we will first discuss how to code the problem representation, and then how to code the executable.


Footnote (Hessian of the Lagrangian):
If a quasi-Newton option is chosen to approximate the second derivatives, this is not required. However, if second derivatives can be computed, it is often worthwhile to let IPOPT use them, since the algorithm is then usually more robust and converges faster. More on the quasi-Newton approximation in Section 4.2.