
Sparsity Methods

Many solvers, such as Ipopt (projects.coin-or.org/Ipopt) or Knitro (www.ziena.com), require the sparsity pattern of the Jacobian of the constraints and of the Hessian of the Lagrangian function. Note well that the constraint matrix of the example in Section 9.2 constitutes only the last two rows of (24), but it does include the linear terms. The following code illustrates how to get the sparsity pattern of the constraint Jacobian matrix.
\begin{verbatimtab}[4]
SparseJacobianMatrix *sparseJac;
sparseJac = osinstance->getJacobianSparsityPattern();
for(idx = 0; idx < sparseJac->startSize - 1; idx++){
	std::cout << "number constant terms in constraint " << idx
		<< " is " << *(sparseJac->conVals + idx) << std::endl;
	for(k = *(sparseJac->starts + idx); k < *(sparseJac->starts + idx + 1); k++){
		std::cout << "row idx = " << idx
			<< "  col idx = " << *(sparseJac->indexes + k) << std::endl;
	}
}
\end{verbatimtab}
For the example problem this will produce

JACOBIAN SPARSITY PATTERN
number constant terms in constraint 0 is 0
row idx = 0  col idx = 1
row idx = 0  col idx = 3
number constant terms in constraint 1 is 1
row idx = 1  col idx = 2
row idx = 1  col idx = 0
row idx = 1  col idx = 3
The constant term in constraint 1 corresponds to the linear term $ 7x_2$, which is added after the algorithmic differentiation has taken place. However, the linear term $ 5x_1$ in constraint 0 does not contribute a nonzero to the Jacobian, because it is combined with the term $ 1.37x_1$, which is treated as a nonlinear term and therefore accounted for explicitly. The SparseJacobianMatrix object has a data member starts, which gives the index of the start of each constraint row. The int data member indexes gives the variable index of every potentially nonzero derivative. There is also a double data member values that holds the value of the partial derivative for the corresponding index at each iteration. Finally, there is an int data member conVals that gives the number of constant terms in each gradient. A constant term is a partial derivative that cannot change from one iteration to the next. A variable is considered to have a constant derivative if it appears in the <linearConstraintCoefficients> section but not in the <nonlinearExpressions> section. For a row indexed by idx, the variable indices are the entries of the indexes array from position *(sparseJac->starts + idx) up to (but not including) position *(sparseJac->starts + idx + 1). The first *(sparseJac->conVals + idx) variables listed are indices of variables with constant derivatives. In this example, when idx is 1, there is one variable with a constant derivative, and it is variable $ x_{2}$. (Actually, variable $ x_{1}$ also has a constant derivative, but the code does not check whether variables that appear in the <nonlinearExpressions> section have constant derivatives.) The variables with constant derivatives never appear in the AD evaluation.

The following code illustrates how to get the sparsity pattern of the Hessian of the Lagrangian.

SparseHessianMatrix *sparseHessian;
sparseHessian = osinstance->getLagrangianHessianSparsityPattern( );
for(idx = 0; idx < sparseHessian->hessDimension; idx++){
	std::cout << "Row Index = " << *(sparseHessian->hessRowIdx + idx);
	std::cout << "  Column Index = " << *(sparseHessian->hessColIdx + idx) << std::endl;
}
The SparseHessianMatrix class has the int data members hessRowIdx and hessColIdx for indexing potential nonzero elements in the Hessian matrix. The double data member hessValues holds the value of the respective second derivative at each iteration. The data member hessDimension is the number of nonzero elements in the Hessian.
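As a sketch of how a solver might consume these arrays, the stand-in struct below (a simplified illustration, not the real SparseHessianMatrix class) expands the (hessRowIdx, hessColIdx, hessValues) triplets into a dense symmetric matrix. The mirroring step assumes each off-diagonal element appears only once in the pattern, i.e. that a single triangle of the Hessian is stored:

```cpp
#include <cassert>
#include <vector>

// Simplified stand-in for SparseHessianMatrix (illustration only);
// the real object is filled in by getLagrangianHessianSparsityPattern().
struct HessPattern {
    int hessDimension;               // number of potential nonzeros
    std::vector<int> hessRowIdx;     // row index of each nonzero
    std::vector<int> hessColIdx;     // column index of each nonzero
    std::vector<double> hessValues;  // second derivatives, refreshed per iterate
};

// Expand the triplets into a dense n-by-n symmetric matrix, mirroring each
// entry across the diagonal (assumes one triangle is stored in the pattern).
std::vector<std::vector<double>> toDense(const HessPattern& h, int n) {
    std::vector<std::vector<double>> m(n, std::vector<double>(n, 0.0));
    for (int k = 0; k < h.hessDimension; k++) {
        m[h.hessRowIdx[k]][h.hessColIdx[k]] = h.hessValues[k];
        m[h.hessColIdx[k]][h.hessRowIdx[k]] = h.hessValues[k];
    }
    return m;
}
```

For example, a pattern with entries (0,0) and (0,1) and values 4.0 and -1.0 expands to a 2-by-2 matrix whose (1,0) entry is also -1.0.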


Kipp Martin 2008-01-16