- Coding the Problem Representation
  - Method `get_nlp_info`
  - Method `get_bounds_info`
  - Method `get_starting_point`
  - Method `eval_f`
  - Method `eval_grad_f`
  - Method `eval_g`
  - Method `eval_jac_g`
  - Method `eval_h`
  - Method `finalize_solution`
- Coding the Executable (`main`)
- Compiling and Testing the Example
- Additional methods in `TNLP`

The C++ Interface

After `make install` (see Section 2.4), the header files are installed in `$IPOPTDIR/include/coin` (or in `$PREFIX/include/coin` if the switch `--prefix=$PREFIX` was used for `configure`).

Coding the Problem Representation

Start by creating a new directory `MyExample` under `examples` and
create the files `hs071_nlp.hpp` and `hs071_nlp.cpp`. In `hs071_nlp.hpp`, include `IpTNLP.hpp`
(the base class), tell the compiler that we are using the IPOPT
namespace, and create the declaration of the `HS071_NLP` class,
inheriting from `TNLP`. Have a look at the `TNLP` class in
`IpTNLP.hpp`; you will see eight pure virtual methods that we must
implement. Declare these methods in the header file, and implement each
of them in `hs071_nlp.cpp` using the descriptions given
below. In `hs071_nlp.cpp`, first include the header file for your
class and tell the compiler that you are using the IPOPT namespace.
A full version of these files can be found in the `Ipopt/examples/hs071_cpp` directory.

It is very easy to make mistakes when implementing the function evaluation methods, in particular regarding the derivatives. IPOPT has a feature that can help you debug the derivative code using finite differences; see Section 4.1.

Note that the return value of any `bool`-valued function should be
`true`, unless an error occurred, for example, because the value of
a problem function could not be evaluated at the required point.

```cpp
virtual bool get_nlp_info(Index& n, Index& m, Index& nnz_jac_g,
                          Index& nnz_h_lag, IndexStyleEnum& index_style)
```

Give IPOPT the information about the size of the problem (and hence, the size of the arrays that it needs to allocate).

- `n`: (out), the number of variables in the problem (dimension of x).
- `m`: (out), the number of constraints in the problem (dimension of g(x)).
- `nnz_jac_g`: (out), the number of nonzero entries in the Jacobian.
- `nnz_h_lag`: (out), the number of nonzero entries in the Hessian.
- `index_style`: (out), the numbering style used for row/col entries in the sparse matrix format (`C_STYLE`: 0-based, `FORTRAN_STYLE`: 1-based; see also Appendix A).

Our example problem has 4 variables (`n`) and 2 constraints (`m`). The constraint Jacobian for this small problem is actually dense and has 8 nonzeros (we still need to represent this Jacobian using the sparse matrix triplet format). The Hessian of the Lagrangian has 10 "symmetric" nonzeros (i.e., nonzeros in the lower left triangular part). Keep in mind that the number of nonzeros is the total number of elements that may ever be nonzero, not just those that are nonzero at the starting point. This information is set once for the entire problem.

```cpp
bool HS071_NLP::get_nlp_info(Index& n, Index& m, Index& nnz_jac_g,
                             Index& nnz_h_lag, IndexStyleEnum& index_style)
{
  // The problem described in hs071_nlp.hpp has 4 variables, x[0] through x[3]
  n = 4;

  // one equality constraint and one inequality constraint
  m = 2;

  // in this example the Jacobian is dense and contains 8 nonzeros
  nnz_jac_g = 8;

  // the Hessian is also dense and has 16 total nonzeros, but we
  // only need the lower left corner (since it is symmetric)
  nnz_h_lag = 10;

  // use the C style indexing (0-based)
  index_style = TNLP::C_STYLE;

  return true;
}
```

```cpp
virtual bool get_bounds_info(Index n, Number* x_l, Number* x_u,
                             Index m, Number* g_l, Number* g_u)
```

Give IPOPT the value of the bounds on the variables and constraints.

- `n`: (in), the number of variables in the problem (dimension of x).
- `x_l`: (out), the lower bounds for x.
- `x_u`: (out), the upper bounds for x.
- `m`: (in), the number of constraints in the problem (dimension of g(x)).
- `g_l`: (out), the lower bounds for g(x).
- `g_u`: (out), the upper bounds for g(x).

In our example, the first constraint has a lower bound of 25 and no upper
bound, so we set the lower bound of constraint `[0]` to 25 and
the upper bound to some number greater than 1e19. The second
constraint is an equality constraint and we set both bounds to
40. IPOPT recognizes this as an equality constraint and does not
treat it as two inequalities.

```cpp
bool HS071_NLP::get_bounds_info(Index n, Number* x_l, Number* x_u,
                                Index m, Number* g_l, Number* g_u)
{
  // here, the n and m we gave IPOPT in get_nlp_info are passed back to us.
  // If desired, we could assert to make sure they are what we think they are.
  assert(n == 4);
  assert(m == 2);

  // the variables have lower bounds of 1
  for (Index i=0; i<4; i++)
    x_l[i] = 1.0;

  // the variables have upper bounds of 5
  for (Index i=0; i<4; i++)
    x_u[i] = 5.0;

  // the first constraint g1 has a lower bound of 25
  g_l[0] = 25;
  // the first constraint g1 has NO upper bound, here we set it to 2e19.
  // Ipopt interprets any number greater than nlp_upper_bound_inf as
  // infinity. The default value of nlp_upper_bound_inf and nlp_lower_bound_inf
  // is 1e19 and can be changed through ipopt options.
  g_u[0] = 2e19;

  // the second constraint g2 is an equality constraint, so we set the
  // upper and lower bound to the same value
  g_l[1] = g_u[1] = 40.0;

  return true;
}
```

```cpp
virtual bool get_starting_point(Index n, bool init_x, Number* x,
                                bool init_z, Number* z_L, Number* z_U,
                                Index m, bool init_lambda, Number* lambda)
```

Give IPOPT the starting point before it begins iterating.

- `n`: (in), the number of variables in the problem (dimension of x).
- `init_x`: (in), if true, this method must provide an initial value for x.
- `x`: (out), the initial values for the primal variables x.
- `init_z`: (in), if true, this method must provide an initial value for the bound multipliers z_L and z_U.
- `z_L`: (out), the initial values for the lower bound multipliers z_L.
- `z_U`: (out), the initial values for the upper bound multipliers z_U.
- `m`: (in), the number of constraints in the problem (dimension of g(x)).
- `init_lambda`: (in), if true, this method must provide an initial value for the constraint multipliers lambda.
- `lambda`: (out), the initial values for the constraint multipliers lambda.

The variables `n` and `m` are passed in for your convenience.
These variables will have the same values you specified in `get_nlp_info`.

Depending on the options that have been set, IPOPT may or may not
require initial values for the primal variables x, the bound multipliers
z_L and z_U, and the constraint multipliers lambda. The boolean
flags `init_x`, `init_z`, and `init_lambda` tell you
whether or not you should provide initial values for x, z_L and z_U, or
lambda, respectively. The default options only require an initial
value for the primal variables x. Note that the initial values for
bound multiplier components corresponding to "infinity" bounds are ignored.

In our example, we provide initial values for x as specified in the example problem. We do not provide any initial values for the dual variables, but use asserts to immediately let us know if we are ever asked for them.

```cpp
bool HS071_NLP::get_starting_point(Index n, bool init_x, Number* x,
                                   bool init_z, Number* z_L, Number* z_U,
                                   Index m, bool init_lambda, Number* lambda)
{
  // Here, we assume we only have starting values for x, if you code
  // your own NLP, you can provide starting values for the dual variables
  // if you wish to use a warmstart option
  assert(init_x == true);
  assert(init_z == false);
  assert(init_lambda == false);

  // initialize to the given starting point
  x[0] = 1.0;
  x[1] = 5.0;
  x[2] = 5.0;
  x[3] = 1.0;

  return true;
}
```

```cpp
virtual bool eval_f(Index n, const Number* x, bool new_x, Number& obj_value)
```

Return the value of the objective function at the point x.

- `n`: (in), the number of variables in the problem (dimension of x).
- `x`: (in), the values for the primal variables x at which the objective f(x) is to be evaluated.
- `new_x`: (in), false if any evaluation method was previously called with the same values in `x`, true otherwise.
- `obj_value`: (out), the value of the objective function f(x).

The boolean variable `new_x` will be false if the last call to
any of the evaluation methods (`eval_*`) used the same
values. This can be helpful when users have efficient implementations
that calculate multiple outputs at once. IPOPT internally caches
results from the `TNLP` and generally, this flag can be ignored.

The variable `n` is passed in for your convenience. This variable
will have the same value you specified in `get_nlp_info`.

For our example, we ignore the `new_x` flag and calculate the objective.

```cpp
bool HS071_NLP::eval_f(Index n, const Number* x, bool new_x, Number& obj_value)
{
  assert(n == 4);

  obj_value = x[0] * x[3] * (x[0] + x[1] + x[2]) + x[2];

  return true;
}
```

```cpp
virtual bool eval_grad_f(Index n, const Number* x, bool new_x, Number* grad_f)
```

Return the gradient of the objective function at the point x.

- `n`: (in), the number of variables in the problem (dimension of x).
- `x`: (in), the values for the primal variables x at which the gradient is to be evaluated.
- `new_x`: (in), false if any evaluation method was previously called with the same values in `x`, true otherwise.
- `grad_f`: (out), the array of values for the gradient of the objective function.

The gradient array is in the same order as the variables (i.e., the
gradient of the objective with respect to `x[2]` should be put in
`grad_f[2]`).

The boolean variable `new_x` will be false if the last call to
any of the evaluation methods (`eval_*`) used the same
values. This can be helpful when users have efficient implementations
that calculate multiple outputs at once. IPOPT internally caches
results from the `TNLP` and generally, this flag can be ignored.

The variable `n` is passed in for your convenience. This
variable will have the same value you specified in `get_nlp_info`.

In our example, we ignore the `new_x` flag and calculate the
values for the gradient of the objective.

```cpp
bool HS071_NLP::eval_grad_f(Index n, const Number* x, bool new_x, Number* grad_f)
{
  assert(n == 4);

  grad_f[0] = x[0] * x[3] + x[3] * (x[0] + x[1] + x[2]);
  grad_f[1] = x[0] * x[3];
  grad_f[2] = x[0] * x[3] + 1;
  grad_f[3] = x[0] * (x[0] + x[1] + x[2]);

  return true;
}
```

```cpp
virtual bool eval_g(Index n, const Number* x, bool new_x, Index m, Number* g)
```

Return the values of the constraint functions at the point x.

- `n`: (in), the number of variables in the problem (dimension of x).
- `x`: (in), the values for the primal variables x at which the constraint functions g(x) are to be evaluated.
- `new_x`: (in), false if any evaluation method was previously called with the same values in `x`, true otherwise.
- `m`: (in), the number of constraints in the problem (dimension of g(x)).
- `g`: (out), the array of constraint function values g(x).

The values returned in `g` should be the constraint function values g(x) only;
do not add or subtract the bound values g_L or g_U.

The boolean variable `new_x` will be false if the last call to
any of the evaluation methods (`eval_*`) used the same
values. This can be helpful when users have efficient implementations
that calculate multiple outputs at once. IPOPT internally caches
results from the `TNLP` and generally, this flag can be ignored.

The variables `n` and `m` are passed in for your convenience.
These variables will have the same values you specified in `get_nlp_info`.

In our example, we ignore the `new_x` flag and calculate the
values of constraint functions.

```cpp
bool HS071_NLP::eval_g(Index n, const Number* x, bool new_x, Index m, Number* g)
{
  assert(n == 4);
  assert(m == 2);

  g[0] = x[0] * x[1] * x[2] * x[3];
  g[1] = x[0]*x[0] + x[1]*x[1] + x[2]*x[2] + x[3]*x[3];

  return true;
}
```

```cpp
virtual bool eval_jac_g(Index n, const Number* x, bool new_x,
                        Index m, Index nele_jac, Index* iRow, Index *jCol,
                        Number* values)
```

Return either the sparsity structure of the Jacobian of the constraints, or the values of the Jacobian of the constraints at the point x.

- `n`: (in), the number of variables in the problem (dimension of x).
- `x`: (in), the values for the primal variables x at which the constraint Jacobian is to be evaluated.
- `new_x`: (in), false if any evaluation method was previously called with the same values in `x`, true otherwise.
- `m`: (in), the number of constraints in the problem (dimension of g(x)).
- `nele_jac`: (in), the number of nonzero elements in the Jacobian (dimension of `iRow`, `jCol`, and `values`).
- `iRow`: (out), the row indices of entries in the Jacobian of the constraints.
- `jCol`: (out), the column indices of entries in the Jacobian of the constraints.
- `values`: (out), the values of the entries in the Jacobian of the constraints.

The Jacobian is the matrix of derivatives where the derivative of constraint i with respect to variable j is placed in row i and column j. See Appendix A for a discussion of the sparse matrix format used in this method.

If the `iRow` and `jCol` arguments are not `NULL`, then
IPOPT wants you to fill in the sparsity structure of the Jacobian
(the row and column indices only). At this time, the `x` argument
and the `values` argument will be `NULL`.

If the `x` argument and the `values` argument are not `NULL`, then IPOPT wants you to fill in the values of the Jacobian
as calculated from the array `x` (using the same order as you used
when specifying the sparsity structure). At this time, the `iRow`
and `jCol` arguments will be `NULL`.

The boolean variable `new_x` will be false if the last call to
any of the evaluation methods (`eval_*`) used the same
values. This can be helpful when users have efficient implementations
that calculate multiple outputs at once. IPOPT internally caches
results from the `TNLP`, and generally this flag can be ignored.

The variables `n`, `m`, and `nele_jac` are passed in for
your convenience. These arguments will have the same values you
specified in `get_nlp_info`.

In our example, the Jacobian is actually dense, but we still specify it using the sparse format.

```cpp
bool HS071_NLP::eval_jac_g(Index n, const Number* x, bool new_x,
                           Index m, Index nele_jac, Index* iRow, Index *jCol,
                           Number* values)
{
  if (values == NULL) {
    // return the structure of the Jacobian

    // this particular Jacobian is dense
    iRow[0] = 0; jCol[0] = 0;
    iRow[1] = 0; jCol[1] = 1;
    iRow[2] = 0; jCol[2] = 2;
    iRow[3] = 0; jCol[3] = 3;
    iRow[4] = 1; jCol[4] = 0;
    iRow[5] = 1; jCol[5] = 1;
    iRow[6] = 1; jCol[6] = 2;
    iRow[7] = 1; jCol[7] = 3;
  }
  else {
    // return the values of the Jacobian of the constraints

    values[0] = x[1]*x[2]*x[3]; // 0,0
    values[1] = x[0]*x[2]*x[3]; // 0,1
    values[2] = x[0]*x[1]*x[3]; // 0,2
    values[3] = x[0]*x[1]*x[2]; // 0,3

    values[4] = 2*x[0]; // 1,0
    values[5] = 2*x[1]; // 1,1
    values[6] = 2*x[2]; // 1,2
    values[7] = 2*x[3]; // 1,3
  }

  return true;
}
```

```cpp
virtual bool eval_h(Index n, const Number* x, bool new_x, Number obj_factor,
                    Index m, const Number* lambda, bool new_lambda,
                    Index nele_hess, Index* iRow, Index* jCol, Number* values)
```

Return either the sparsity structure of the Hessian of the Lagrangian, or the values of the Hessian of the Lagrangian (9) for the given values of x, the objective factor, and lambda.

- `n`: (in), the number of variables in the problem (dimension of x).
- `x`: (in), the values for the primal variables x at which the Hessian is to be evaluated.
- `new_x`: (in), false if any evaluation method was previously called with the same values in `x`, true otherwise.
- `obj_factor`: (in), the factor in front of the objective term in the Hessian.
- `m`: (in), the number of constraints in the problem (dimension of g(x)).
- `lambda`: (in), the values for the constraint multipliers lambda at which the Hessian is to be evaluated.
- `new_lambda`: (in), false if any evaluation method was previously called with the same values in `lambda`, true otherwise.
- `nele_hess`: (in), the number of nonzero elements in the Hessian (dimension of `iRow`, `jCol`, and `values`).
- `iRow`: (out), the row indices of entries in the Hessian.
- `jCol`: (out), the column indices of entries in the Hessian.
- `values`: (out), the values of the entries in the Hessian.

The Hessian matrix that IPOPT uses is defined in (9). See Appendix A for a discussion of the sparse symmetric matrix format used in this method.

If the `iRow` and `jCol` arguments are not `NULL`, then
IPOPT wants you to fill in the sparsity structure of the Hessian
(the row and column indices for the lower or upper triangular part
only). In this case, the `x`, `lambda`, and `values`
arrays will be `NULL`.

If the `x`, `lambda`, and `values` arrays are not `NULL`, then IPOPT wants you to fill in the values of the Hessian
as calculated using `x` and `lambda` (using the same order as
you used when specifying the sparsity structure). In this case, the
`iRow` and `jCol` arguments will be `NULL`.

The boolean variables `new_x` and `new_lambda` will both be
false if the last call to any of the evaluation methods (`eval_*`) used the same values. This can be helpful when users have
efficient implementations that calculate multiple outputs at once.
IPOPT internally caches results from the `TNLP`, and generally
these flags can be ignored.

The variables `n`, `m`, and `nele_hess` are passed in for
your convenience. These arguments will have the same values you
specified in `get_nlp_info`.

In our example, the Hessian is dense, but we still specify it using the sparse matrix format. Because the Hessian is symmetric, we only need to specify the lower left corner.

```cpp
bool HS071_NLP::eval_h(Index n, const Number* x, bool new_x,
                       Number obj_factor, Index m, const Number* lambda,
                       bool new_lambda, Index nele_hess, Index* iRow,
                       Index* jCol, Number* values)
{
  if (values == NULL) {
    // return the structure. This is a symmetric matrix, fill the lower left
    // triangle only.

    // the Hessian for this problem is actually dense
    Index idx=0;
    for (Index row = 0; row < 4; row++) {
      for (Index col = 0; col <= row; col++) {
        iRow[idx] = row;
        jCol[idx] = col;
        idx++;
      }
    }

    assert(idx == nele_hess);
  }
  else {
    // return the values. This is a symmetric matrix, fill the lower left
    // triangle only

    // fill the objective portion
    values[0] = obj_factor * (2*x[3]);               // 0,0

    values[1] = obj_factor * (x[3]);                 // 1,0
    values[2] = 0;                                   // 1,1

    values[3] = obj_factor * (x[3]);                 // 2,0
    values[4] = 0;                                   // 2,1
    values[5] = 0;                                   // 2,2

    values[6] = obj_factor * (2*x[0] + x[1] + x[2]); // 3,0
    values[7] = obj_factor * (x[0]);                 // 3,1
    values[8] = obj_factor * (x[0]);                 // 3,2
    values[9] = 0;                                   // 3,3

    // add the portion for the first constraint
    values[1] += lambda[0] * (x[2] * x[3]); // 1,0
    values[3] += lambda[0] * (x[1] * x[3]); // 2,0
    values[4] += lambda[0] * (x[0] * x[3]); // 2,1
    values[6] += lambda[0] * (x[1] * x[2]); // 3,0
    values[7] += lambda[0] * (x[0] * x[2]); // 3,1
    values[8] += lambda[0] * (x[0] * x[1]); // 3,2

    // add the portion for the second constraint
    values[0] += lambda[1] * 2; // 0,0
    values[2] += lambda[1] * 2; // 1,1
    values[5] += lambda[1] * 2; // 2,2
    values[9] += lambda[1] * 2; // 3,3
  }

  return true;
}
```

```cpp
virtual void finalize_solution(SolverReturn status, Index n, const Number* x,
                               const Number* z_L, const Number* z_U,
                               Index m, const Number* g, const Number* lambda,
                               Number obj_value, const IpoptData* ip_data,
                               IpoptCalculatedQuantities* ip_cq)
```

This is the only method that is not mentioned in Section 3.2. It is called by IPOPT after the algorithm has finished (successfully or even with most errors).

- `status`: (in), gives the status of the algorithm as specified in `IpAlgTypes.hpp`:
  - `SUCCESS`: Algorithm terminated successfully at a locally optimal point, satisfying the convergence tolerances (can be specified by options).
  - `MAXITER_EXCEEDED`: Maximum number of iterations exceeded (can be specified by an option).
  - `CPUTIME_EXCEEDED`: Maximum number of CPU seconds exceeded (can be specified by an option).
  - `STOP_AT_TINY_STEP`: Algorithm proceeds with very little progress.
  - `STOP_AT_ACCEPTABLE_POINT`: Algorithm stopped at a point that was converged, not to "desired" tolerances, but to "acceptable" tolerances (see the `acceptable-...` options).
  - `LOCAL_INFEASIBILITY`: Algorithm converged to a point of local infeasibility. Problem may be infeasible.
  - `USER_REQUESTED_STOP`: The user call-back function `intermediate_callback` (see Section 3.3.4) returned `false`, i.e., the user code requested a premature termination of the optimization.
  - `DIVERGING_ITERATES`: It seems that the iterates diverge.
  - `RESTORATION_FAILURE`: Restoration phase failed, algorithm doesn't know how to proceed.
  - `ERROR_IN_STEP_COMPUTATION`: An unrecoverable error occurred while IPOPT tried to compute the search direction.
  - `INVALID_NUMBER_DETECTED`: Algorithm received an invalid number (such as `NaN` or `Inf`) from the NLP; see also option `check_derivatives_for_naninf`.
  - `INTERNAL_ERROR`: An unknown internal error occurred. Please contact the IPOPT authors through the mailing list.

- `n`: (in), the number of variables in the problem (dimension of x).
- `x`: (in), the final values for the primal variables x.
- `z_L`: (in), the final values for the lower bound multipliers z_L.
- `z_U`: (in), the final values for the upper bound multipliers z_U.
- `m`: (in), the number of constraints in the problem (dimension of g(x)).
- `g`: (in), the final values of the constraint functions g(x).
- `lambda`: (in), the final values of the constraint multipliers lambda.
- `obj_value`: (in), the final value of the objective f(x).
- `ip_data` and `ip_cq` are provided for expert users.

This method gives you the return status of the algorithm (`SolverReturn`), and the values of the variables, the objective, and the constraint functions when the algorithm exited.

In our example, we will print the values of some of the variables to the screen.

```cpp
void HS071_NLP::finalize_solution(SolverReturn status,
                                  Index n, const Number* x,
                                  const Number* z_L, const Number* z_U,
                                  Index m, const Number* g, const Number* lambda,
                                  Number obj_value,
                                  const IpoptData* ip_data,
                                  IpoptCalculatedQuantities* ip_cq)
{
  // here is where we would store the solution to variables, or write to a file, etc
  // so we could use the solution.

  // For this example, we write the solution to the console
  printf("\n\nSolution of the primal variables, x\n");
  for (Index i=0; i<n; i++) {
    printf("x[%d] = %e\n", i, x[i]);
  }

  printf("\n\nSolution of the bound multipliers, z_L and z_U\n");
  for (Index i=0; i<n; i++) {
    printf("z_L[%d] = %e\n", i, z_L[i]);
  }
  for (Index i=0; i<n; i++) {
    printf("z_U[%d] = %e\n", i, z_U[i]);
  }

  printf("\n\nObjective value\n");
  printf("f(x*) = %e\n", obj_value);
}
```

This is all that is required for our `HS071_NLP` class and
the coding of the problem representation.

Coding the Executable (`main`)

Here, we must create an instance of our problem (`HS071_NLP`),
create an instance of the IPOPT solver (`IpoptApplication`),
initialize it, and ask the solver to find a solution. We always use
the `SmartPtr` template class instead of raw C++ pointers when
creating and passing IPOPT objects. To find out more information
about smart pointers and the `SmartPtr` implementation used in
IPOPT, see Appendix B.

Create the file `MyExample.cpp` in the `MyExample` directory.
Include the header files `hs071_nlp.hpp` and `IpIpoptApplication.hpp`, tell
the compiler to use the `Ipopt` namespace, and implement the `main` function.

```cpp
#include <cstdio>

#include "IpIpoptApplication.hpp"
#include "hs071_nlp.hpp"

using namespace Ipopt;

int main(int argc, char* argv[])
{
  // Create a new instance of your nlp
  //  (use a SmartPtr, not raw)
  SmartPtr<TNLP> mynlp = new HS071_NLP();

  // Create a new instance of IpoptApplication
  //  (use a SmartPtr, not raw)
  // We are using the factory, since this allows us to compile this
  // example with an Ipopt Windows DLL
  SmartPtr<IpoptApplication> app = IpoptApplicationFactory();

  // Change some options
  // Note: The following choices are only examples, they might not be
  //       suitable for your optimization problem.
  app->Options()->SetNumericValue("tol", 1e-9);
  app->Options()->SetStringValue("mu_strategy", "adaptive");
  app->Options()->SetStringValue("output_file", "ipopt.out");

  // Initialize the IpoptApplication and process the options
  ApplicationReturnStatus status;
  status = app->Initialize();
  if (status != Solve_Succeeded) {
    printf("\n\n*** Error during initialization!\n");
    return (int) status;
  }

  // Ask Ipopt to solve the problem
  status = app->OptimizeTNLP(mynlp);

  if (status == Solve_Succeeded) {
    printf("\n\n*** The problem solved!\n");
  }
  else {
    printf("\n\n*** The problem FAILED!\n");
  }

  // As the SmartPtrs go out of scope, the reference count
  // will be decremented and the objects will automatically
  // be deleted.

  return (int) status;
}
```

The first line of code in `main` creates an instance of `HS071_NLP`. We then create an instance of the IPOPT solver, `IpoptApplication`. You could use `new` to create a new
application object, but if you want to make sure that your code would
also work with a Windows DLL, you need to use the factory, as done in
the example above. The call to `app->Initialize(...)` will
initialize that object and process the options (particularly the output
related options). The call to `app->OptimizeTNLP(...)` will
run IPOPT and try to solve the problem. By default, IPOPT will
write its progress to the console, and return the `ApplicationReturnStatus` status.

Compiling and Testing the Example

To build the executable, you can adapt the `Makefile` that comes with the `Ipopt/examples/hs071_cpp` example; in particular:

- change the `EXE` variable: `EXE = my_example`
- change the `OBJS` variable: `OBJS = hs071_nlp.o MyExample.o`

Now run the executable, and you should see output resembling the following.

```
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************

Number of nonzeros in equality constraint Jacobian...:        4
Number of nonzeros in inequality constraint Jacobian.:        4
Number of nonzeros in Lagrangian Hessian.............:       10

Total number of variables............................:        4
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        4
                     variables with only upper bounds:        0
Total number of equality constraints.................:        1
Total number of inequality constraints...............:        1
        inequality constraints with only lower bounds:        1
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  1.6109693e+01 1.12e+01 5.28e-01   0.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  1.7410406e+01 8.38e-01 2.25e+01  -0.3 7.97e-01    -  3.19e-01 1.00e+00f  1
   2  1.8001613e+01 1.06e-02 4.96e+00  -0.3 5.60e-02   2.0 9.97e-01 1.00e+00h  1
   3  1.7199482e+01 9.04e-02 4.24e-01  -1.0 9.91e-01    -  9.98e-01 1.00e+00f  1
   4  1.6940955e+01 2.09e-01 4.58e-02  -1.4 2.88e-01    -  9.66e-01 1.00e+00h  1
   5  1.7003411e+01 2.29e-02 8.42e-03  -2.9 7.03e-02    -  9.68e-01 1.00e+00h  1
   6  1.7013974e+01 2.59e-04 8.65e-05  -4.5 6.22e-03    -  1.00e+00 1.00e+00h  1
   7  1.7014017e+01 2.26e-07 5.71e-08  -8.0 1.43e-04    -  1.00e-00 1.00e+00h  1
   8  1.7014017e+01 4.62e-14 9.09e-14  -8.0 6.95e-08    -  1.00e+00 1.00e+00h  1

Number of Iterations....: 8

Number of objective function evaluations             = 9
Number of objective gradient evaluations             = 9
Number of equality constraint evaluations            = 9
Number of inequality constraint evaluations          = 9
Number of equality constraint Jacobian evaluations   = 9
Number of inequality constraint Jacobian evaluations = 9
Number of Lagrangian Hessian evaluations             = 8
Total CPU secs in IPOPT (w/o function evaluations)   = 0.220
Total CPU secs in NLP function evaluations           = 0.000

EXIT: Optimal Solution Found.

Solution of the primal variables, x
x[0] = 1.000000e+00
x[1] = 4.743000e+00
x[2] = 3.821150e+00
x[3] = 1.379408e+00

Solution of the bound multipliers, z_L and z_U
z_L[0] = 1.087871e+00
z_L[1] = 2.428776e-09
z_L[2] = 3.222413e-09
z_L[3] = 2.396076e-08
z_U[0] = 2.272727e-09
z_U[1] = 3.537314e-08
z_U[2] = 7.711676e-09
z_U[3] = 2.510890e-09

Objective value
f(x*) = 1.701402e+01

*** The problem solved!
```

This completes the basic C++ tutorial, but see Section 6 which explains the standard console output of IPOPT and Section 5 for information about the use of options to customize the behavior of IPOPT.

The `Ipopt/examples/ScalableProblems` directory contains other NLP
problems coded in C++.

Additional methods in `TNLP`

The following methods are available for additional features that are not explained in the example. Default implementations for those methods are provided, so that a user can safely ignore them, unless they want to make use of those features. Of these features, only the intermediate callback is also available in the C and Fortran interfaces.

```cpp
virtual bool intermediate_callback(AlgorithmMode mode, Index iter,
                                   Number obj_value, Number inf_pr, Number inf_du,
                                   Number mu, Number d_norm,
                                   Number regularization_size,
                                   Number alpha_du, Number alpha_pr,
                                   Index ls_trials, const IpoptData* ip_data,
                                   IpoptCalculatedQuantities* ip_cq)
```

It is not required to implement (overload) this method. It is called once per iteration (during the convergence check), and can be used to obtain information about the optimization status while IPOPT solves the problem, and also to request a premature termination.

The information provided by the entities in the argument list
corresponds to what IPOPT prints in the iteration summary (see also
Section 6). Further information can be obtained from
the `ip_data` and `ip_cq` objects (which are available only in the C++ interface and are intended for expert users).

If you let this method return `false`, IPOPT will terminate
with the `User_Requested_Stop` status. If you do not implement
this method (as we do in this example), the default implementation
always returns `true`.

A frequently asked question is how to access the values of the primal and dual variables in this callback. The values are stored in the `ip_cq` object for the internal representation of the problem.
To access the values in a form that corresponds to those used in the evaluation routines, the user has to request IPOPT's `TNLPAdapter` object to "resort" the data vectors and to fill in information about possibly filtered-out fixed variables.
The `TNLPAdapter` can be accessed as follows.
First, add the following includes to your `TNLP` implementation:

```cpp
#include "IpIpoptCalculatedQuantities.hpp"
#include "IpIpoptData.hpp"
#include "IpTNLPAdapter.hpp"
#include "IpOrigIpoptNLP.hpp"
```

Next, add the following code to your implementation of `intermediate_callback`:

```cpp
Ipopt::TNLPAdapter* tnlp_adapter = NULL;
if( ip_cq != NULL )
{
  Ipopt::OrigIpoptNLP* orignlp;
  orignlp = dynamic_cast<OrigIpoptNLP*>(GetRawPtr(ip_cq->GetIpoptNLP()));
  if( orignlp != NULL )
    tnlp_adapter = dynamic_cast<TNLPAdapter*>(GetRawPtr(orignlp->nlp()));
}
```

Note that retrieving the `TNLPAdapter` will fail (i.e., `tnlp_adapter` will remain `NULL`) if IPOPT is currently in restoration mode. If `tnlp_adapter` is not `NULL`, the current primal and dual values can be resorted as follows:

```cpp
double* primals = new double[n];
double* dualeqs = new double[m];
double* duallbs = new double[n];
double* dualubs = new double[n];
tnlp_adapter->ResortX(*ip_data->curr()->x(), primals);
tnlp_adapter->ResortG(*ip_data->curr()->y_c(), *ip_data->curr()->y_d(), dualeqs);
tnlp_adapter->ResortBnds(*ip_data->curr()->z_L(), duallbs,
                         *ip_data->curr()->z_U(), dualubs);
```

Additionally, information about the scaled violation of the constraints (2) and the violation of the complementarity conditions can be obtained via

```cpp
tnlp_adapter->ResortG(*ip_cq->curr_c(), *ip_cq->curr_d_minus_s(), ...)
tnlp_adapter->ResortBnds(*ip_cq->curr_compl_x_L(), ...,
                         *ip_cq->curr_compl_x_U(), ...)
tnlp_adapter->ResortG(*ip_cq->curr_compl_s_L(), ...)
tnlp_adapter->ResortG(*ip_cq->curr_compl_s_U(), ...)
```

```cpp
virtual bool get_scaling_parameters(Number& obj_scaling,
                                    bool& use_x_scaling, Index n,
                                    Number* x_scaling,
                                    bool& use_g_scaling, Index m,
                                    Number* g_scaling)
```

This method is called if the `nlp_scaling_method` is chosen as
`user-scaling`. The user has to provide scaling factors for
the objective function as well as for the optimization variables
and/or constraints. The return value should be true, unless an error
occurred and the program is to be aborted.

The value returned in `obj_scaling` determines how IPOPT
should internally scale the objective function. For example, if this
number is chosen to be 10, then IPOPT internally solves an
optimization problem that has 10 times the value of the original
objective function provided by the `TNLP`. In particular, if this
value is negative, then IPOPT will maximize the objective function
instead of minimizing it.

The scaling factors for the variables can be returned in `x_scaling`, which has the same length as `x` in the other `TNLP` methods, and the factors are ordered like `x`. You need
to set `use_x_scaling` to `true` if you want IPOPT to scale
the variables. If it is `false`, no internal scaling of the
variables is done. Similarly, the scaling factors for the constraints
can be returned in `g_scaling`, and this scaling is activated by
setting `use_g_scaling` to `true`.

As a guideline, we suggest scaling the optimization problem (either directly in the original formulation, or by using scaling factors) so that all sensitivities, i.e., all non-zero first partial derivatives, are typically of the order 0.1 to 10.

virtual Index get_number_of_nonlinear_variables()

This method is only important if the limited-memory quasi-Newton option is used, see Section 4.2. It is used to return the number of variables that appear nonlinearly in the objective function or in at least one constraint function. If a negative number is returned, IPOPT assumes that all variables are nonlinear.

If the user doesn't overload this method in her implementation of the
class derived from `TNLP`, the default implementation returns -1,
i.e., all variables are assumed to be nonlinear.

virtual bool get_list_of_nonlinear_variables(Index num_nonlin_vars, Index* pos_nonlin_vars)

This method is called by IPOPT only if the limited-memory
quasi-Newton option is used and if the `get_number_of_nonlinear_variables` method returns a positive
number; this number is then identical to `num_nonlin_vars` and
the length of the array `pos_nonlin_vars`. In this call, you
need to list the indices of all nonlinear variables in `pos_nonlin_vars`, where the numbering starts with 0 or 1,
depending on the numbering style determined in `get_nlp_info`.

virtual bool get_variables_linearity(Index n, LinearityType* var_types)

This method is never called by IPOPT, but is used by BONMIN to get information about which variables occur only in linear terms.
IPOPT passes a `var_types` array of size `n`, which the user should fill with the appropriate linearity type of the variables (`TNLP::LINEAR` or `TNLP::NON_LINEAR`).

If the user doesn't overload this method in her implementation of the class derived from `TNLP`, the default implementation returns `false`.

virtual bool get_constraints_linearity(Index m, LinearityType* const_types)

This method is never called by IPOPT, but is used by BONMIN to get information about which constraints are linear.
IPOPT passes a `const_types` array of size `m`, which the user should fill with the appropriate linearity type of the constraints (`TNLP::LINEAR` or `TNLP::NON_LINEAR`).

If the user doesn't overload this method in her implementation of the class derived from `TNLP`, the default implementation returns `false`.

virtual bool get_var_con_metadata(Index n, StringMetaDataMapType& var_string_md, IntegerMetaDataMapType& var_integer_md, NumericMetaDataMapType& var_numeric_md, Index m, StringMetaDataMapType& con_string_md, IntegerMetaDataMapType& con_integer_md, NumericMetaDataMapType& con_numeric_md)

This method is used to pass meta data about variables or constraints to
IPOPT. The data can be either of integer, numeric, or string type.
IPOPT passes this data on to its internal problem representation.
The meta data type is a `std::map` with `std::string` as key type and
a `std::vector` as value type.
So far, IPOPT itself only makes use of string meta data under the key `idx_names`. With this key, variable and constraint names can be passed to
IPOPT, which are shown when printing internal vector or matrix data structures
if IPOPT is run with a high value for the `print_level`
option. This allows a user to identify the original variables and constraints
corresponding to IPOPT's internal problem representation.

If the user doesn't overload this method in her implementation of the class
derived from `TNLP`, the default implementation does not set any meta data
and returns `false`.

virtual void finalize_metadata(Index n, const StringMetaDataMapType& var_string_md, const IntegerMetaDataMapType& var_integer_md, const NumericMetaDataMapType& var_numeric_md, Index m, const StringMetaDataMapType& con_string_md, const IntegerMetaDataMapType& con_integer_md, const NumericMetaDataMapType& con_numeric_md)

This method is called just before `finalize_solution` and is used to
return any meta data collected during the algorithm's run, including the meta
data provided by the user with the `get_var_con_metadata` method.

If the user doesn't overload this method in her implementation of the class
derived from `TNLP`, the default implementation does nothing.

virtual bool get_warm_start_iterate(IteratesVector& warm_start_iterate)

Overload this method to provide an IPOPT iterate that is already in the form IPOPT requires internally for a warm start. This method is only for expert users.

If the user doesn't overload this method in her implementation of the class
derived from `TNLP`, the default implementation does not provide a warm
start iterate and returns `false`.