
Computing Jacobian and Hessian of Bender's Reduced Objective

Syntax
 # include <cppad/cppad.hpp> BenderQuad(x, y, fun, g, gx, gxx)

See Also
opt_val_hes

Problem
The type ADvector cannot be determined from the arguments above (currently the type ADvector must be CPPAD_TESTVECTOR(Base)). This will be corrected in the future by requiring Fun to define Fun::vector_type, which will specify the type ADvector .

Purpose
We are given the optimization problem $$\begin{array}{rcl} {\rm minimize} & F(x, y) & {\rm w.r.t.} \; (x, y) \in \B{R}^n \times \B{R}^m \end{array}$$ that is convex with respect to $y$.

In addition, we are given a set of equations $H(x, y)$ such that $$H[ x , Y(x) ] = 0 \;\; \Rightarrow \;\; F_y [ x , Y(x) ] = 0$$ (In fact, it is often the case that $H(x, y) = F_y (x, y)$.) Furthermore, it is easy to calculate a Newton step for these equations; i.e., $$dy = - [ \partial_y H(x, y)]^{-1} H(x, y)$$

The purpose of this routine is to compute the value, Jacobian, and Hessian of the reduced objective function $$G(x) = F[ x , Y(x) ]$$ Note that if only the value and Jacobian are needed, they can be computed more quickly using the relation $$G^{(1)} (x) = \partial_x F [ x , Y(x) ]$$
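As a concrete illustration of these relations, consider the following toy problem (a hypothetical example, not part of the CppAD documentation): take $n = m = 1$ and $F(x, y) = x y + y^2 / 2$. Then $H(x, y) = F_y (x, y) = x + y$, the inner solution is $Y(x) = -x$, the reduced objective is $G(x) = -x^2/2$, and $G^{(1)}(x) = \partial_x F[x, Y(x)] = Y(x) = -x$. The sketch below checks these identities numerically without using CppAD:

```cpp
#include <cassert>
#include <cmath>

// Toy problem (hypothetical, for illustration only):
//   F(x, y) = x*y + y^2/2,  H(x, y) = F_y(x, y) = x + y
// The inner minimizer is Y(x) = -x, so the reduced objective is
//   G(x) = F(x, Y(x)) = -x^2/2.
double F(double x, double y) { return x * y + y * y / 2.0; }
double H(double x, double y) { return x + y; }

// Newton step dy = -[partial_y H(x, y)]^{-1} H(x, y); here partial_y H = 1.
double newton_dy(double x, double y) { return -H(x, y); }

// Reduced objective and its derivative via the relations in the text:
//   G(x)  = F(x, Y(x))
//   G'(x) = partial_x F at (x, Y(x)), and partial_x F = y, so G'(x) = Y(x) = -x
double G(double x)  { return F(x, -x); }
double Gx(double x) { return -x; }
```

Because $H$ is linear in $y$ for this toy problem, a single Newton step from any starting $y$ lands exactly on $Y(x)$.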

x
The BenderQuad argument x has prototype       const BAvector &x  (see BAvector below) and its size must be equal to n . It specifies the point at which we are evaluating the reduced objective function and its derivatives.

y
The BenderQuad argument y has prototype       const BAvector &y  and its size must be equal to m . It must be equal to $Y(x)$; i.e., it must solve the problem in $y$ for this given value of $x$ $$\begin{array}{rcl} {\rm minimize} & F(x, y) & {\rm w.r.t.} \; y \in \B{R}^m \end{array}$$

fun
The BenderQuad object fun must support the member functions listed below. The AD<Base> arguments will be variables for a tape created by a call to Independent from BenderQuad (hence they cannot be combined with variables corresponding to a different tape).

fun.f
The BenderQuad argument fun supports the syntax       f = fun.f(x, y)  The fun.f argument x has prototype       const ADvector &x  (see ADvector below) and its size must be equal to n . The fun.f argument y has prototype       const ADvector &y  and its size must be equal to m . The fun.f result f has prototype       ADvector f  and its size must be equal to one. The value of f is $$f = F(x, y)$$.

fun.h
The BenderQuad argument fun supports the syntax       h = fun.h(x, y)  The fun.h argument x has prototype       const ADvector &x  and its size must be equal to n . The fun.h argument y has prototype       const BAvector &y  and its size must be equal to m . The fun.h result h has prototype       ADvector h  and its size must be equal to m . The value of h is $$h = H(x, y)$$.

fun.dy
The BenderQuad argument fun supports the syntax       dy = fun.dy(x, y, h)

x
The fun.dy argument x has prototype       const BAvector &x  and its size must be equal to n . Its value will be exactly equal to the BenderQuad argument x , and values depending on it can be stored as private objects in fun and need not be recalculated.

y
The fun.dy argument y has prototype       const BAvector &y  and its size must be equal to m . Its value will be exactly equal to the BenderQuad argument y , and values depending on it can be stored as private objects in fun and need not be recalculated.

h
The fun.dy argument h has prototype       const ADvector &h  and its size must be equal to m .

dy
The fun.dy result dy has prototype       ADvector dy  and its size must be equal to m . The return value dy is given by $$dy = - [ \partial_y H (x , y) ]^{-1} h$$ Note that if h is equal to $H(x, y)$, then $dy$ is the Newton step for finding a zero of $H(x, y)$ with respect to $y$; i.e., $y + dy$ is an approximate solution of the equation $H (x, y + dy) = 0$.
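The member functions above can be sketched for the hypothetical toy problem $F(x, y) = x_0 y_0 + y_0^2 / 2$ (so $H(x, y) = x_0 + y_0$ and $\partial_y H = 1$). This is a structural sketch only: in real use the vectors marked ADvector would hold AD<Base> elements, but here std::vector<double> stands in for both BAvector and ADvector so the sketch compiles without CppAD:

```cpp
#include <cassert>
#include <vector>

// Structural sketch of a Fun object for BenderQuad (assumption: in actual
// CppAD use, the arguments documented as ADvector would be vectors of
// AD<double>; std::vector<double> stands in for both vector types here).
// Toy problem: F(x, y) = x[0]*y[0] + y[0]^2/2, so H(x, y) = x[0] + y[0].
class Fun {
public:
    typedef std::vector<double> BAvector;
    typedef std::vector<double> ADvector;

    // f = F(x, y); result has size one
    ADvector f(const ADvector& x, const ADvector& y) {
        ADvector out(1);
        out[0] = x[0] * y[0] + y[0] * y[0] / 2.0;
        return out;
    }
    // h = H(x, y); result has size m
    ADvector h(const ADvector& x, const BAvector& y) {
        ADvector out(1);
        out[0] = x[0] + y[0];
        return out;
    }
    // dy = -[partial_y H(x, y)]^{-1} * h; here partial_y H is the 1x1 matrix [1],
    // so x and y are not needed for this particular problem
    ADvector dy(const BAvector& x, const BAvector& y, const ADvector& h) {
        (void)x; (void)y;
        ADvector out(1);
        out[0] = -h[0];
        return out;
    }
};
```

Since $H$ is linear in $y$ here, `y[0] + fun.dy(x, y, fun.h(x, y))[0]` returns the exact inner solution $Y(x) = -x_0$ in one step.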

g
The argument g has prototype       BAvector &g  and has size one. The input value of its element does not matter. On output, it contains the value of $G (x)$; i.e., $$g[0] = G (x)$$

gx
The argument gx has prototype       BAvector &gx  and has size $n$. The input values of its elements do not matter. On output, it contains the Jacobian of $G (x)$; i.e., for $j = 0 , \ldots , n-1$, $$gx[ j ] = G^{(1)} (x)_j$$

gxx
The argument gxx has prototype       BAvector &gxx  and has size $n \times n$. The input values of its elements do not matter. On output, it contains the Hessian of $G (x)$; i.e., for $i = 0 , \ldots , n-1$, and $j = 0 , \ldots , n-1$, $$gxx[ i * n + j ] = G^{(2)} (x)_{i,j}$$

BAvector
The type BAvector must be a SimpleVector class. We use Base to refer to the type of the elements of BAvector ; i.e.,       BAvector::value_type 
ADvector
The type ADvector must be a SimpleVector class with elements of type AD<Base> ; i.e.,       ADvector::value_type  must be the same type as       AD< BAvector::value_type > .

Example
The file bender_quad.cpp contains an example and test of this operation.