The `ipoptr` package (see Section 2.7 for installation
instructions) offers an R function `ipoptr`, which takes an NLP
specification, a starting point, and IPOPT options as input and returns
information about an IPOPT run (status, message, ...) and a solution point.

In the following, we discuss the necessary steps to implement the HS071 example
with `ipoptr`.
More detailed documentation of `ipoptr` is available in
`Ipopt/contrib/RInterface/inst/doc/ipoptr.pdf`.

First, we define the objective function and its gradient:

> eval_f <- function( x ) {
+   return( x[1]*x[4]*(x[1] + x[2] + x[3]) + x[3] )
+ }
> eval_grad_f <- function( x ) {
+   return( c( x[1] * x[4] + x[4] * (x[1] + x[2] + x[3]),
+              x[1] * x[4],
+              x[1] * x[4] + 1.0,
+              x[1] * (x[1] + x[2] + x[3]) ) )
+ }

Then we define a function that returns the value of the two constraints. We define the bounds of the constraints (in this case, the first constraint has lower bound 25 and no upper bound, and the second constraint is fixed at 40) later.
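Before handing hand-coded derivatives to a solver, it can be worth comparing them against a finite-difference approximation. This is not part of the `ipoptr` interface; `fd_grad` below is a helper defined only for this check:

```r
# Objective and analytic gradient, as defined above.
eval_f <- function( x ) x[1]*x[4]*(x[1] + x[2] + x[3]) + x[3]
eval_grad_f <- function( x ) c( x[1]*x[4] + x[4]*(x[1] + x[2] + x[3]),
                                x[1]*x[4],
                                x[1]*x[4] + 1.0,
                                x[1]*(x[1] + x[2] + x[3]) )

# Central finite-difference gradient (helper for this check only).
fd_grad <- function(f, x, h = 1e-6) {
  sapply(seq_along(x), function(i) {
    e <- replace(numeric(length(x)), i, h)
    (f(x + e) - f(x - e)) / (2 * h)
  })
}

x <- c(1, 5, 5, 1)
max(abs(eval_grad_f(x) - fd_grad(eval_f, x)))  # maximum discrepancy; should be tiny
```

A large discrepancy here usually points to a typo in the analytic gradient rather than a problem with IPOPT itself.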

> # constraint functions
> eval_g <- function( x ) {
+   return( c( x[1] * x[2] * x[3] * x[4],
+              x[1]^2 + x[2]^2 + x[3]^2 + x[4]^2 ) )
+ }

Then we define the structure of the Jacobian, which is a dense matrix in this case, and a function to evaluate it:
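As a small sanity check, we can evaluate the constraints at the starting point that is used further below, `x0 = c(1, 5, 5, 1)` (a standalone sketch, not part of the `ipoptr` call):

```r
eval_g <- function( x ) {
  c( x[1] * x[2] * x[3] * x[4],
     x[1]^2 + x[2]^2 + x[3]^2 + x[4]^2 )
}

eval_g( c(1, 5, 5, 1) )  # 25 52
```

So at the starting point the first constraint sits exactly at its lower bound of 25, while the second exceeds its target value of 40.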

> eval_jac_g_structure <- list( c(1,2,3,4), c(1,2,3,4) )
> eval_jac_g <- function( x ) {
+   return( c( x[2]*x[3]*x[4],
+              x[1]*x[3]*x[4],
+              x[1]*x[2]*x[4],
+              x[1]*x[2]*x[3],
+              2.0*x[1],
+              2.0*x[2],
+              2.0*x[3],
+              2.0*x[4] ) )
+ }

The Hessian is also dense, but it looks slightly more complicated because we have to take the Hessians of the objective function and of the constraints into account at the same time, although you could also write functions to calculate them separately and then return the combined result.
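The sparse (structure, values) representation can be unfamiliar at first. The standalone sketch below assembles the dense 2x4 Jacobian from `eval_jac_g_structure` and the vector returned by `eval_jac_g`, evaluated at `c(1, 5, 5, 1)` (the loop is an illustration written here, not an `ipoptr` function):

```r
eval_jac_g_structure <- list( c(1,2,3,4), c(1,2,3,4) )
eval_jac_g <- function( x ) {
  c( x[2]*x[3]*x[4], x[1]*x[3]*x[4], x[1]*x[2]*x[4], x[1]*x[2]*x[3],
     2.0*x[1], 2.0*x[2], 2.0*x[3], 2.0*x[4] )
}

x <- c(1, 5, 5, 1)
vals <- eval_jac_g(x)

# Scatter the flat value vector into a dense matrix, row by row,
# following the column indices given in the structure.
J <- matrix(0, nrow = length(eval_jac_g_structure), ncol = 4)
k <- 1
for (row in seq_along(eval_jac_g_structure)) {
  for (col in eval_jac_g_structure[[row]]) {
    J[row, col] <- vals[k]
    k <- k + 1
  }
}
J
```

The first row is the gradient of the product constraint, the second the gradient of the sum-of-squares constraint.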

> # The Hessian for this problem is actually dense.
> # This is a symmetric matrix, fill the lower left triangle only.
> eval_h_structure <- list( c(1), c(1,2), c(1,2,3), c(1,2,3,4) )
> eval_h <- function( x, obj_factor, hessian_lambda ) {
+   values <- numeric(10)
+   values[1]  <- obj_factor * (2*x[4])                # 1,1
+   values[2]  <- obj_factor * (x[4])                  # 2,1
+   values[3]  <- 0                                    # 2,2
+   values[4]  <- obj_factor * (x[4])                  # 3,1
+   values[5]  <- 0                                    # 3,2
+   values[6]  <- 0                                    # 3,3
+   values[7]  <- obj_factor * (2*x[1] + x[2] + x[3])  # 4,1
+   values[8]  <- obj_factor * (x[1])                  # 4,2
+   values[9]  <- obj_factor * (x[1])                  # 4,3
+   values[10] <- 0                                    # 4,4
+
+   # add the portion for the first constraint
+   values[2] <- values[2] + hessian_lambda[1] * (x[3] * x[4])  # 2,1
+   values[4] <- values[4] + hessian_lambda[1] * (x[2] * x[4])  # 3,1
+   values[5] <- values[5] + hessian_lambda[1] * (x[1] * x[4])  # 3,2
+   values[7] <- values[7] + hessian_lambda[1] * (x[2] * x[3])  # 4,1
+   values[8] <- values[8] + hessian_lambda[1] * (x[1] * x[3])  # 4,2
+   values[9] <- values[9] + hessian_lambda[1] * (x[1] * x[2])  # 4,3
+
+   # add the portion for the second constraint
+   values[1]  <- values[1]  + hessian_lambda[2] * 2   # 1,1
+   values[3]  <- values[3]  + hessian_lambda[2] * 2   # 2,2
+   values[6]  <- values[6]  + hessian_lambda[2] * 2   # 3,3
+   values[10] <- values[10] + hessian_lambda[2] * 2   # 4,4
+
+   return( values )
+ }

After the hard part is done, we only have to define the initial values, the lower and upper bounds of the control variables, and the lower and upper bounds of the constraints. If a variable or a constraint does not have a lower or upper bound, the values -Inf and Inf can be used (as for the upper bound of the first constraint below).
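The index comments in `eval_h` (`# 2,1`, `# 3,2`, ...) follow directly from `eval_h_structure`: flattening the structure row by row yields the (row, column) pair that each element of `values` corresponds to. The sketch below (written here for illustration, not part of `ipoptr`) makes that mapping explicit:

```r
eval_h_structure <- list( c(1), c(1,2), c(1,2,3), c(1,2,3,4) )

# Flatten the structure: one (row, col) pair per element of `values`.
idx <- do.call(rbind, lapply(seq_along(eval_h_structure), function(r)
  cbind(row = r, col = eval_h_structure[[r]])))

idx
nrow(idx)  # 10 entries: the lower triangle of a symmetric 4x4 matrix
```

For example, the fifth entry of `idx` is (3, 2), which is why `values[5]` carries the `# 3,2` comment above.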

> # initial values
> x0 <- c( 1, 5, 5, 1 )
>
> # lower and upper bounds of control
> lb <- c( 1, 1, 1, 1 )
> ub <- c( 5, 5, 5, 5 )
>
> # lower and upper bounds of constraints
> constraint_lb <- c( 25, 40 )
> constraint_ub <- c( Inf, 40 )

Finally, we can call IPOPT with the `ipoptr` function:

> opts <- list("print_level" = 0,
+              "file_print_level" = 12,
+              "output_file" = "hs071_nlp.out")
> print( ipoptr( x0 = x0,
+                eval_f = eval_f,
+                eval_grad_f = eval_grad_f,
+                lb = lb,
+                ub = ub,
+                eval_g = eval_g,
+                eval_jac_g = eval_jac_g,
+                constraint_lb = constraint_lb,
+                constraint_ub = constraint_ub,
+                eval_jac_g_structure = eval_jac_g_structure,
+                eval_h = eval_h,
+                eval_h_structure = eval_h_structure,
+                opts = opts) )

Call:
ipoptr(x0 = x0, eval_f = eval_f, eval_grad_f = eval_grad_f, lb = lb,
    ub = ub, eval_g = eval_g, eval_jac_g = eval_jac_g,
    eval_jac_g_structure = eval_jac_g_structure, constraint_lb = constraint_lb,
    constraint_ub = constraint_ub, eval_h = eval_h,
    eval_h_structure = eval_h_structure, opts = opts)

Ipopt solver status: 0 ( SUCCESS: Algorithm terminated successfully at a
locally optimal point, satisfying the convergence tolerances (can be
specified by options). )

Number of Iterations....: 8
Optimal value of objective function:  17.0140171451792
Optimal value of controls: 1 4.743 3.82115 1.379408

To pass additional data to the evaluation routines, one can either supply
additional arguments to the user-defined functions and `ipoptr`, or define
an environment that holds the data and pass this environment to `ipoptr`.
Both methods are shown in the file `tests/parameters.R` that comes with
`ipoptr`.

As a very simple example, suppose we want to find the minimum of the quadratic function f(x) = a1*x^2 + a2*x + a3 for given parameters a1, a2 and a3.

First, we define the objective function and its gradient, assuming that
there is some variable `params` that contains the values of the
parameters.

> eval_f_ex1 <- function(x, params) {
+   return( params[1]*x^2 + params[2]*x + params[3] )
+ }
> eval_grad_f_ex1 <- function(x, params) {
+   return( 2*params[1]*x + params[2] )
+ }

Note that the first argument should always be the control variable. All user-defined functions should take the same set of additional parameters; you have to supply them as input arguments to all functions, even if some functions do not use them.

Then we can solve the problem for a specific set of parameters, in this case a1 = 1, a2 = 2 and a3 = 3, from initial value x0 = 0, with the following command:

> # solve using ipoptr with additional parameters
> ipoptr(x0 = 0,
+        eval_f = eval_f_ex1,
+        eval_grad_f = eval_grad_f_ex1,
+        opts = list("print_level" = 0),
+        params = c(1,2,3) )

Call:
ipoptr(x0 = 0, eval_f = eval_f_ex1, eval_grad_f = eval_grad_f_ex1,
    opts = list(print_level = 0), params = c(1, 2, 3))

Ipopt solver status: 0 ( SUCCESS: Algorithm terminated successfully at a
locally optimal point, satisfying the convergence tolerances (can be
specified by options). )

Number of Iterations....: 1
Optimal value of objective function:  2
Optimal value of controls: -1
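For this simple problem the reported solution can be verified by hand: a convex parabola a1*x^2 + a2*x + a3 attains its minimum at x = -a2/(2*a1). A quick standalone check:

```r
params <- c(1, 2, 3)

# Vertex of the parabola: x* = -a2 / (2*a1).
x_star <- -params[2] / (2 * params[1])
f_star <- params[1]*x_star^2 + params[2]*x_star + params[3]

c(x_star, f_star)  # -1  2
```

This matches the optimal control (-1) and objective value (2) reported by IPOPT above.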

For the second method, we do not have to supply the parameters as additional arguments to the functions.

> eval_f_ex2 <- function(x) {
+   return( params[1]*x^2 + params[2]*x + params[3] )
+ }
> eval_grad_f_ex2 <- function(x) {
+   return( 2*params[1]*x + params[2] )
+ }

Instead, we define an environment that contains specific values of `params`:

> # define a new environment that contains params
> auxdata <- new.env()
> auxdata$params <- c(1,2,3)

To solve this, we supply this environment to `ipoptr` via the `ipoptr_environment` argument:

> # pass the environment that should be used to evaluate functions to ipoptr
> ipoptr(x0 = 0,
+        eval_f = eval_f_ex2,
+        eval_grad_f = eval_grad_f_ex2,
+        ipoptr_environment = auxdata,
+        opts = list("print_level" = 0) )

Call:
ipoptr(x0 = 0, eval_f = eval_f_ex2, eval_grad_f = eval_grad_f_ex2,
    opts = list(print_level = 0), ipoptr_environment = auxdata)

Ipopt solver status: 0 ( SUCCESS: Algorithm terminated successfully at a
locally optimal point, satisfying the convergence tolerances (can be
specified by options). )

Number of Iterations....: 1
Optimal value of objective function:  2
Optimal value of controls: -1
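Why does `eval_f_ex2` find `params` even though it is never passed as an argument? R resolves free variables through a function's enclosing environment. The standalone sketch below illustrates the mechanism by setting the function's environment directly; this is only an illustration of R's scoping rules, and `ipoptr`'s internal handling of `ipoptr_environment` may differ in detail:

```r
# A function with `params` as a free (unbound) variable.
eval_f_ex2 <- function(x) params[1]*x^2 + params[2]*x + params[3]

# An environment that supplies a value for `params`.
auxdata <- new.env()
auxdata$params <- c(1, 2, 3)

# Make auxdata the function's enclosing environment, so the
# lookup of `params` succeeds there.
environment(eval_f_ex2) <- auxdata

eval_f_ex2(-1)  # 2
```

This is why no `params` argument appears anywhere in the second `ipoptr` call: the lookup happens through the supplied environment rather than through the argument list.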

^{19} ... Ypma - University College London