cppad-20171217: A Package for Differentiation of C++ Algorithms

a: Syntax
# include <cppad/cppad.hpp>


b: Introduction
We refer to the step-by-step conversion from an algorithm that computes function values to an algorithm that computes derivative values as Algorithmic Differentiation (often referred to as Automatic Differentiation). Given a C++ algorithm that computes function values, CppAD generates an algorithm that computes the corresponding derivative values. A brief introduction to Algorithmic Differentiation can be found on Wikipedia (http://en.wikipedia.org/wiki/Automatic_differentiation). The web site autodiff.org (http://www.autodiff.org) is dedicated to research about, and promoting the use of, AD.
  1. CppAD (http://www.coin-or.org/CppAD/) uses operator overloading to compute derivatives of algorithms defined in C++. It is distributed by the COIN-OR Foundation (http://www.coin-or.org/foundation.html) under the Eclipse Public License EPL-1.0 (http://www.opensource.org/licenses/EPL-1.0) or the GNU General Public License GPL-3.0 (http://www.opensource.org/licenses/GPL-3.0). Testing and installation are supported for Unix, Microsoft, and Apple operating systems. Extensive user and developer documentation is included.
  2. An AD of Base 12.4.g.b: operation sequence is stored as an 5: AD function object which can evaluate function values and derivatives. Arbitrary order 5.3: forward and 5.4: reverse mode derivative calculations can be performed on the operation sequence; see the sketch at the end of this introduction. Logical comparisons can be included in an operation sequence using AD 4.4.4: conditional expressions. Evaluation of user defined unary 4.4.5: discrete functions can also be included in the sequence of operations; i.e., functions that depend on the 12.4.k.c: independent variables but have identically zero derivatives (e.g., a step function).
  3. Derivatives of functions that are defined in terms of other derivatives can be computed using multiple levels of AD; see 10.2.10.1: mul_level.cpp for a simple example and 10.2.12: mul_level_ode.cpp for a more realistic example. To this end, CppAD can also be used with other AD types; for example, see 10.2.13: mul_level_adolc_ode.cpp.
  4. A set of programs for doing 11: speed comparisons between Adolc (https://projects.coin-or.org/ADOL-C), CppAD, Fadbad (http://www.fadbad.com/), and Sacado (http://trilinos.sandia.gov/packages/sacado/) is included.
  5. Includes a set of C++ 8: utilities that are useful for general operator overloaded numerical methods. Allows for replacement of the 10.5: testvector template vector class, which is used for extensive testing; for example, you can do your testing with the uBlas (http://www.boost.org/libs/numeric/ublas/doc/index.htm) template vector class.
  6. See 12.7: whats_new for a list of recent extensions and bug fixes.
You can find out about other algorithmic differentiation tools, and about algorithmic differentiation in general, at the following web sites: Wikipedia (http://en.wikipedia.org/wiki/Automatic_differentiation) and autodiff.org (http://www.autodiff.org).
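As a concrete illustration of item 2 above, the following minimal sketch (it is not taken from the package's example set; the function and evaluation point are chosen only for illustration) tapes a simple algorithm and then evaluates first order derivatives using both forward and reverse mode:
     # include <cppad/cppad.hpp>
     # include <iostream>

     int main(void)
     {   using CppAD::AD;

         // domain space vector, with the values used while taping
         size_t n = 2;
         CppAD::vector< AD<double> > ax(n);
         ax[0] = 1.0;
         ax[1] = 2.0;

         // declare independent variables and start recording
         CppAD::Independent(ax);

         // the algorithm being differentiated: y = x0 * x1 + sin(x0)
         size_t m = 1;
         CppAD::vector< AD<double> > ay(m);
         ay[0] = ax[0] * ax[1] + sin(ax[0]);

         // stop recording and store the operation sequence in f
         CppAD::ADFun<double> f(ax, ay);

         // first order forward mode: directional derivative along dx
         CppAD::vector<double> dx(n), dy(m);
         dx[0] = 1.0;
         dx[1] = 0.0;
         dy = f.Forward(1, dx);   // dy[0] = x1 + cos(x0)

         // first order reverse mode: gradient of y[0]
         CppAD::vector<double> w(m), dw(n);
         w[0] = 1.0;
         dw = f.Reverse(1, w);    // dw = ( x1 + cos(x0) , x0 )

         std::cout << dy[0] << " " << dw[0] << " " << dw[1] << "\n";
         return 0;
     }
At x = (1, 2) both modes recover the partial x1 + cos(x0); one forward sweep yields a single directional derivative, while one reverse sweep yields the entire gradient of one dependent variable.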

c: Example
The file 10.1: get_started.cpp contains an example and test of using CppAD to compute the derivative of a polynomial. There are many other 10: examples.

d: Include File
The following include directive
     # include <cppad/cppad.hpp>
includes the CppAD package for the rest of the current compilation unit.

e: Preprocessor Symbols
All the 6: preprocessor symbols used by CppAD begin with CppAD or CPPAD_.

f: Namespace
All of the functions and objects defined by CppAD are in the CppAD namespace; for example, you can access the 4: AD types as
     size_t n = 2;
     CppAD::vector< CppAD::AD<Base> > x(n);
You can abbreviate access to one object or function with a using command of the form
     using CppAD::AD;
     CppAD::vector< AD<Base> > x(n);
You can abbreviate access to all CppAD objects and functions with a command of the form
     using namespace CppAD;
     vector< AD<Base> > x(n);
If you include other namespaces in a similar manner, this can cause naming conflicts.
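Putting the three alternatives above together, the following sketch (with the Base type instantiated as double, purely for illustration) compiles as a single translation unit:
     # include <cppad/cppad.hpp>

     // fully qualified names
     void fully_qualified(void)
     {   size_t n = 2;
         CppAD::vector< CppAD::AD<double> > x(n);
     }

     // abbreviate one name with a using declaration
     void one_name(void)
     {   using CppAD::AD;
         size_t n = 2;
         CppAD::vector< AD<double> > x(n);
     }

     // abbreviate all CppAD names (may cause naming conflicts)
     void whole_namespace(void)
     {   using namespace CppAD;
         size_t n = 2;
         vector< AD<double> > x(n);
     }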

g: Contents
_contents: 1: Table of Contents
Install: 2: CppAD Download, Test, and Install Instructions
Introduction: 3: An Introduction by Example to Algorithmic Differentiation
AD: 4: AD Objects
ADFun: 5: ADFun Objects
preprocessor: 6: CppAD API Preprocessor Symbols
multi_thread: 7: Using CppAD in a Multi-Threading Environment
utility: 8: Some General Purpose Utilities
ipopt_solve: 9: Use Ipopt to Solve a Nonlinear Programming Problem
Example: 10: Examples
speed: 11: Speed Test an Operator Overloading AD Package
Appendix: 12: Appendix
_reference: 13: Alphabetic Listing of Cross Reference Tags
_index: 14: Keyword Index
_external: 15: External Internet References

Input File: doc.omh
1: Table of Contents
cppad-20171217: A Package for Differentiation of C++ Algorithms: CppAD
    Table of Contents: 1: _contents
    CppAD Download, Test, and Install Instructions: 2: Install
        Download The CppAD Source Code: 2.1: download
        Using CMake to Configure CppAD: 2.2: cmake
            Including the ADOL-C Examples and Tests: 2.2.1: adolc_prefix
                Download and Install Adolc in Build Directory: 2.2.1.1: get_adolc.sh
            Including the ColPack Sparsity Calculations: 2.2.2: colpack_prefix
                ColPack: Sparse Jacobian Example and Test: 2.2.2.1: colpack_jac.cpp
                ColPack: Sparse Jacobian Example and Test: 2.2.2.2: colpack_jacobian.cpp
                ColPack: Sparse Hessian Example and Test: 2.2.2.3: colpack_hes.cpp
                ColPack: Sparse Hessian Example and Test: 2.2.2.4: colpack_hessian.cpp
                Download and Install ColPack in Build Directory: 2.2.2.5: get_colpack.sh
            Including the Eigen Examples and Tests: 2.2.3: eigen_prefix
                Download and Install Eigen in Build Directory: 2.2.3.1: get_eigen.sh
            Including the FADBAD Speed Tests: 2.2.4: fadbad_prefix
                Download and Install Fadbad in Build Directory: 2.2.4.1: get_fadbad.sh
            Including the cppad_ipopt Library and Tests: 2.2.5: ipopt_prefix
                Download and Install Ipopt in Build Directory: 2.2.5.1: get_ipopt.sh
            Including the Sacado Speed Tests: 2.2.6: sacado_prefix
                Download and Install Sacado in Build Directory: 2.2.6.1: get_sacado.sh
            Choosing the CppAD Test Vector Template Class: 2.2.7: cppad_testvector
        Checking the CppAD Examples and Tests: 2.3: cmake_check
        CppAD pkg-config Files: 2.4: pkgconfig
    An Introduction by Example to Algorithmic Differentiation: 3: Introduction
        Second Order Exponential Approximation: 3.1: exp_2
            exp_2: Implementation: 3.1.1: exp_2.hpp
            exp_2: Test: 3.1.2: exp_2.cpp
            exp_2: Operation Sequence and Zero Order Forward Mode: 3.1.3: exp_2_for0
                exp_2: Verify Zero Order Forward Sweep: 3.1.3.1: exp_2_for0.cpp
            exp_2: First Order Forward Mode: 3.1.4: exp_2_for1
                exp_2: Verify First Order Forward Sweep: 3.1.4.1: exp_2_for1.cpp
            exp_2: First Order Reverse Mode: 3.1.5: exp_2_rev1
                exp_2: Verify First Order Reverse Sweep: 3.1.5.1: exp_2_rev1.cpp
            exp_2: Second Order Forward Mode: 3.1.6: exp_2_for2
                exp_2: Verify Second Order Forward Sweep: 3.1.6.1: exp_2_for2.cpp
            exp_2: Second Order Reverse Mode: 3.1.7: exp_2_rev2
                exp_2: Verify Second Order Reverse Sweep: 3.1.7.1: exp_2_rev2.cpp
            exp_2: CppAD Forward and Reverse Sweeps: 3.1.8: exp_2_cppad
        An Epsilon Accurate Exponential Approximation: 3.2: exp_eps
            exp_eps: Implementation: 3.2.1: exp_eps.hpp
            exp_eps: Test of exp_eps: 3.2.2: exp_eps.cpp
            exp_eps: Operation Sequence and Zero Order Forward Sweep: 3.2.3: exp_eps_for0
                exp_eps: Verify Zero Order Forward Sweep: 3.2.3.1: exp_eps_for0.cpp
            exp_eps: First Order Forward Sweep: 3.2.4: exp_eps_for1
                exp_eps: Verify First Order Forward Sweep: 3.2.4.1: exp_eps_for1.cpp
            exp_eps: First Order Reverse Sweep: 3.2.5: exp_eps_rev1
                exp_eps: Verify First Order Reverse Sweep: 3.2.5.1: exp_eps_rev1.cpp
            exp_eps: Second Order Forward Mode: 3.2.6: exp_eps_for2
                exp_eps: Verify Second Order Forward Sweep: 3.2.6.1: exp_eps_for2.cpp
            exp_eps: Second Order Reverse Sweep: 3.2.7: exp_eps_rev2
                exp_eps: Verify Second Order Reverse Sweep: 3.2.7.1: exp_eps_rev2.cpp
            exp_eps: CppAD Forward and Reverse Sweeps: 3.2.8: exp_eps_cppad
        Correctness Tests For Exponential Approximation in Introduction: 3.3: exp_apx.cpp
    AD Objects: 4: AD
        AD Constructors: 4.1: ad_ctor
            AD Constructors: Example and Test: 4.1.1: ad_ctor.cpp
        AD Assignment Operator: 4.2: ad_assign
            AD Assignment: Example and Test: 4.2.1: ad_assign.cpp
        Conversion and I/O of AD Objects: 4.3: Convert
            Convert From an AD Type to its Base Type: 4.3.1: Value
                Convert From AD to its Base Type: Example and Test: 4.3.1.1: value.cpp
            Convert From AD to Integer: 4.3.2: Integer
                Convert From AD to Integer: Example and Test: 4.3.2.1: integer.cpp
            Convert An AD or Base Type to String: 4.3.3: ad_to_string
            AD Input Stream Operator: 4.3.4: ad_input
                AD Input Operator: Example and Test: 4.3.4.1: ad_input.cpp
            AD Output Stream Operator: 4.3.5: ad_output
                AD Output Operator: Example and Test: 4.3.5.1: ad_output.cpp
            Printing AD Values During Forward Mode: 4.3.6: PrintFor
                Printing During Forward Mode: Example and Test: 4.3.6.1: print_for_cout.cpp
                Print During Zero Order Forward Mode: Example and Test: 4.3.6.2: print_for_string.cpp
            Convert an AD Variable to a Parameter: 4.3.7: Var2Par
                Convert an AD Variable to a Parameter: Example and Test: 4.3.7.1: var2par.cpp
        AD Valued Operations and Functions: 4.4: ADValued
            AD Arithmetic Operators and Compound Assignments: 4.4.1: Arithmetic
                AD Unary Plus Operator: 4.4.1.1: UnaryPlus
                    AD Unary Plus Operator: Example and Test: 4.4.1.1.1: unary_plus.cpp
                AD Unary Minus Operator: 4.4.1.2: UnaryMinus
                    AD Unary Minus Operator: Example and Test: 4.4.1.2.1: unary_minus.cpp
                AD Binary Arithmetic Operators: 4.4.1.3: ad_binary
                    AD Binary Addition: Example and Test: 4.4.1.3.1: add.cpp
                    AD Binary Subtraction: Example and Test: 4.4.1.3.2: sub.cpp
                    AD Binary Multiplication: Example and Test: 4.4.1.3.3: mul.cpp
                    AD Binary Division: Example and Test: 4.4.1.3.4: div.cpp
                AD Compound Assignment Operators: 4.4.1.4: compound_assign
                    AD Compound Assignment Addition: Example and Test: 4.4.1.4.1: AddEq.cpp
                    AD Compound Assignment Subtraction: Example and Test: 4.4.1.4.2: sub_eq.cpp
                    AD Compound Assignment Multiplication: Example and Test: 4.4.1.4.3: mul_eq.cpp
                    AD Compound Assignment Division: Example and Test: 4.4.1.4.4: div_eq.cpp
            The Unary Standard Math Functions: 4.4.2: unary_standard_math
                Inverse Cosine Function: acos: 4.4.2.1: acos
                    The AD acos Function: Example and Test: 4.4.2.1.1: acos.cpp
                Inverse Sine Function: asin: 4.4.2.2: asin
                    The AD asin Function: Example and Test: 4.4.2.2.1: asin.cpp
                Inverse Tangent Function: atan: 4.4.2.3: atan
                    The AD atan Function: Example and Test: 4.4.2.3.1: atan.cpp
                The Cosine Function: cos: 4.4.2.4: cos
                    The AD cos Function: Example and Test: 4.4.2.4.1: cos.cpp
                The Hyperbolic Cosine Function: cosh: 4.4.2.5: cosh
                    The AD cosh Function: Example and Test: 4.4.2.5.1: cosh.cpp
                The Exponential Function: exp: 4.4.2.6: exp
                    The AD exp Function: Example and Test: 4.4.2.6.1: exp.cpp
                The Natural Logarithm Function: log: 4.4.2.7: log
                    The AD log Function: Example and Test: 4.4.2.7.1: log.cpp
                The Base 10 Logarithm Function: log10: 4.4.2.8: log10
                    The AD log10 Function: Example and Test: 4.4.2.8.1: log10.cpp
                The Sine Function: sin: 4.4.2.9: sin
                    The AD sin Function: Example and Test: 4.4.2.9.1: sin.cpp
                The Hyperbolic Sine Function: sinh: 4.4.2.10: sinh
                    The AD sinh Function: Example and Test: 4.4.2.10.1: sinh.cpp
                The Square Root Function: sqrt: 4.4.2.11: sqrt
                    The AD sqrt Function: Example and Test: 4.4.2.11.1: sqrt.cpp
                The Tangent Function: tan: 4.4.2.12: tan
                    The AD tan Function: Example and Test: 4.4.2.12.1: tan.cpp
                The Hyperbolic Tangent Function: tanh: 4.4.2.13: tanh
                    The AD tanh Function: Example and Test: 4.4.2.13.1: tanh.cpp
                AD Absolute Value Functions: abs, fabs: 4.4.2.14: abs
                    AD Absolute Value Function: Example and Test: 4.4.2.14.1: fabs.cpp
                The Inverse Hyperbolic Cosine Function: acosh: 4.4.2.15: acosh
                    The AD acosh Function: Example and Test: 4.4.2.15.1: acosh.cpp
                The Inverse Hyperbolic Sine Function: asinh: 4.4.2.16: asinh
                    The AD asinh Function: Example and Test: 4.4.2.16.1: asinh.cpp
                The Inverse Hyperbolic Tangent Function: atanh: 4.4.2.17: atanh
                    The AD atanh Function: Example and Test: 4.4.2.17.1: atanh.cpp
                The Error Function: 4.4.2.18: erf
                    The AD erf Function: Example and Test: 4.4.2.18.1: erf.cpp
                The Exponential Function Minus One: expm1: 4.4.2.19: expm1
                    The AD expm1 Function: Example and Test: 4.4.2.19.1: expm1.cpp
                The Logarithm of One Plus Argument: log1p: 4.4.2.20: log1p
                    The AD log1p Function: Example and Test: 4.4.2.20.1: log1p.cpp
                The Sign: sign: 4.4.2.21: sign
                    Sign Function: Example and Test: 4.4.2.21.1: sign.cpp
            The Binary Math Functions: 4.4.3: binary_math
                AD Two Argument Inverse Tangent Function: 4.4.3.1: atan2
                    The AD atan2 Function: Example and Test: 4.4.3.1.1: atan2.cpp
                The AD Power Function: 4.4.3.2: pow
                    The AD Power Function: Example and Test: 4.4.3.2.1: pow.cpp
                Absolute Zero Multiplication: 4.4.3.3: azmul
                    AD Absolute Zero Multiplication: Example and Test: 4.4.3.3.1: azmul.cpp
            AD Conditional Expressions: 4.4.4: CondExp
                Conditional Expressions: Example and Test: 4.4.4.1: cond_exp.cpp
            Discrete AD Functions: 4.4.5: Discrete
                Taping Array Index Operation: Example and Test: 4.4.5.1: tape_index.cpp
                Interpolation Without Retaping: Example and Test: 4.4.5.2: interp_onetape.cpp
                Interpolation With Retaping: Example and Test: 4.4.5.3: interp_retape.cpp
            Numeric Limits For an AD and Base Types: 4.4.6: numeric_limits
                Numeric Limits: Example and Test: 4.4.6.1: num_limits.cpp
            Atomic AD Functions: 4.4.7: atomic
                Checkpointing Functions: 4.4.7.1: checkpoint
                    Simple Checkpointing: Example and Test: 4.4.7.1.1: checkpoint.cpp
                    Atomic Operations and Multiple-Levels of AD: Example and Test: 4.4.7.1.2: atomic_mul_level.cpp
                    Checkpointing an ODE Solver: Example and Test: 4.4.7.1.3: checkpoint_ode.cpp
                    Checkpointing an Extended ODE Solver: Example and Test: 4.4.7.1.4: checkpoint_extended_ode.cpp
                User Defined Atomic AD Functions: 4.4.7.2: atomic_base
                    Atomic Function Constructor: 4.4.7.2.1: atomic_ctor
                    Set Atomic Function Options: 4.4.7.2.2: atomic_option
                    Using AD Version of Atomic Function: 4.4.7.2.3: atomic_afun
                    Atomic Forward Mode: 4.4.7.2.4: atomic_forward
                        Atomic Forward: Example and Test: 4.4.7.2.4.1: atomic_forward.cpp
                    Atomic Reverse Mode: 4.4.7.2.5: atomic_reverse
                        Atomic Reverse: Example and Test: 4.4.7.2.5.1: atomic_reverse.cpp
                    Atomic Forward Jacobian Sparsity Patterns: 4.4.7.2.6: atomic_for_sparse_jac
                        Atomic Forward Jacobian Sparsity: Example and Test: 4.4.7.2.6.1: atomic_for_sparse_jac.cpp
                    Atomic Reverse Jacobian Sparsity Patterns: 4.4.7.2.7: atomic_rev_sparse_jac
                        Atomic Reverse Jacobian Sparsity: Example and Test: 4.4.7.2.7.1: atomic_rev_sparse_jac.cpp
                    Atomic Forward Hessian Sparsity Patterns: 4.4.7.2.8: atomic_for_sparse_hes
                        Atomic Forward Hessian Sparsity: Example and Test: 4.4.7.2.8.1: atomic_for_sparse_hes.cpp
                    Atomic Reverse Hessian Sparsity Patterns: 4.4.7.2.9: atomic_rev_sparse_hes
                        Atomic Reverse Hessian Sparsity: Example and Test: 4.4.7.2.9.1: atomic_rev_sparse_hes.cpp
                    Free Static Variables: 4.4.7.2.10: atomic_base_clear
                    Getting Started with Atomic Operations: Example and Test: 4.4.7.2.11: atomic_get_started.cpp
                    Atomic Euclidean Norm Squared: Example and Test: 4.4.7.2.12: atomic_norm_sq.cpp
                    Reciprocal as an Atomic Operation: Example and Test: 4.4.7.2.13: atomic_reciprocal.cpp
                    Atomic Sparsity with Set Patterns: Example and Test: 4.4.7.2.14: atomic_set_sparsity.cpp
                    Tan and Tanh as User Atomic Operations: Example and Test: 4.4.7.2.15: atomic_tangent.cpp
                    Atomic Eigen Matrix Multiply: Example and Test: 4.4.7.2.16: atomic_eigen_mat_mul.cpp
                        Atomic Eigen Matrix Multiply Class: 4.4.7.2.16.1: atomic_eigen_mat_mul.hpp
                    Atomic Eigen Matrix Inverse: Example and Test: 4.4.7.2.17: atomic_eigen_mat_inv.cpp
                        Atomic Eigen Matrix Inversion Class: 4.4.7.2.17.1: atomic_eigen_mat_inv.hpp
                    Atomic Eigen Cholesky Factorization: Example and Test: 4.4.7.2.18: atomic_eigen_cholesky.cpp
                        AD Theory for Cholesky Factorization: 4.4.7.2.18.1: cholesky_theory
                        Atomic Eigen Cholesky Factorization Class: 4.4.7.2.18.2: atomic_eigen_cholesky.hpp
                    User Atomic Matrix Multiply: Example and Test: 4.4.7.2.19: atomic_mat_mul.cpp
                        Matrix Multiply as an Atomic Operation: 4.4.7.2.19.1: atomic_mat_mul.hpp
        Bool Valued Operations and Functions with AD Arguments: 4.5: BoolValued
            AD Binary Comparison Operators: 4.5.1: Compare
                AD Binary Comparison Operators: Example and Test: 4.5.1.1: compare.cpp
            Compare AD and Base Objects for Nearly Equal: 4.5.2: NearEqualExt
                Compare AD with Base Objects: Example and Test: 4.5.2.1: near_equal_ext.cpp
            AD Boolean Functions: 4.5.3: BoolFun
                AD Boolean Functions: Example and Test: 4.5.3.1: bool_fun.cpp
            Is an AD Object a Parameter or Variable: 4.5.4: ParVar
                AD Parameter and Variable Functions: Example and Test: 4.5.4.1: par_var.cpp
            Check if Two Values are Identically Equal: 4.5.5: EqualOpSeq
                EqualOpSeq: Example and Test: 4.5.5.1: equal_op_seq.cpp
        AD Vectors that Record Index Operations: 4.6: VecAD
            AD Vectors that Record Index Operations: Example and Test: 4.6.1: vec_ad.cpp
        AD<Base> Requirements for a CppAD Base Type: 4.7: base_require
            Required Base Class Member Functions: 4.7.1: base_member
            Base Type Requirements for Conditional Expressions: 4.7.2: base_cond_exp
            Base Type Requirements for Identically Equal Comparisons: 4.7.3: base_identical
            Base Type Requirements for Ordered Comparisons: 4.7.4: base_ordered
            Base Type Requirements for Standard Math Functions: 4.7.5: base_std_math
            Base Type Requirements for Numeric Limits: 4.7.6: base_limits
            Extending to_string To Another Floating Point Type: 4.7.7: base_to_string
            Base Type Requirements for Hash Coding Values: 4.7.8: base_hash
            Example AD Base Types That are not AD<OtherBase>: 4.7.9: base_example
                Example AD<Base> Where Base Constructor Allocates Memory: 4.7.9.1: base_alloc.hpp
                Using a User Defined AD Base Type: Example and Test: 4.7.9.2: base_require.cpp
                Enable use of AD<Base> where Base is Adolc's adouble Type: 4.7.9.3: base_adolc.hpp
                    Using Adolc with Multiple Levels of Taping: Example and Test: 4.7.9.3.1: mul_level_adolc.cpp
                Enable use of AD<Base> where Base is float: 4.7.9.4: base_float.hpp
                Enable use of AD<Base> where Base is double: 4.7.9.5: base_double.hpp
                Enable use of AD<Base> where Base is std::complex<double>: 4.7.9.6: base_complex.hpp
                    Complex Polynomial: Example and Test: 4.7.9.6.1: complex_poly.cpp
    ADFun Objects: 5: ADFun
        Create an ADFun Object (Record an Operation Sequence): 5.1: record_adfun
            Declare Independent Variables and Start Recording: 5.1.1: Independent
                Independent and ADFun Constructor: Example and Test: 5.1.1.1: independent.cpp
            Construct an ADFun Object and Stop Recording: 5.1.2: FunConstruct
                ADFun Assignment: Example and Test: 5.1.2.1: fun_assign.cpp
            Stop Recording and Store Operation Sequence: 5.1.3: Dependent
            Abort Recording of an Operation Sequence: 5.1.4: abort_recording
                Abort Current Recording: Example and Test: 5.1.4.1: abort_recording.cpp
            ADFun Sequence Properties: 5.1.5: seq_property
                ADFun Sequence Properties: Example and Test: 5.1.5.1: seq_property.cpp
        First and Second Order Derivatives: Easy Drivers: 5.2: drivers
            Jacobian: Driver Routine: 5.2.1: Jacobian
                Jacobian: Example and Test: 5.2.1.1: jacobian.cpp
            Hessian: Easy Driver: 5.2.2: Hessian
                Hessian: Example and Test: 5.2.2.1: hessian.cpp
                Hessian of Lagrangian and ADFun Default Constructor: Example and Test: 5.2.2.2: hes_lagrangian.cpp
            First Order Partial Derivative: Driver Routine: 5.2.3: ForOne
                First Order Partial Driver: Example and Test: 5.2.3.1: for_one.cpp
            First Order Derivative: Driver Routine: 5.2.4: RevOne
                First Order Derivative Driver: Example and Test: 5.2.4.1: rev_one.cpp
            Forward Mode Second Partial Derivative Driver: 5.2.5: ForTwo
                Subset of Second Order Partials: Example and Test: 5.2.5.1: for_two.cpp
            Reverse Mode Second Partial Derivative Driver: 5.2.6: RevTwo
                Second Partials Reverse Driver: Example and Test: 5.2.6.1: rev_two.cpp
        Forward Mode: 5.3: Forward
            Zero Order Forward Mode: Function Values: 5.3.1: forward_zero
            First Order Forward Mode: Derivative Values: 5.3.2: forward_one
            Second Order Forward Mode: Derivative Values: 5.3.3: forward_two
            Multiple Order Forward Mode: 5.3.4: forward_order
                Forward Mode: Example and Test: 5.3.4.1: forward.cpp
                Forward Mode: Example and Test of Multiple Orders: 5.3.4.2: forward_order.cpp
            Multiple Directions Forward Mode: 5.3.5: forward_dir
                Forward Mode: Example and Test of Multiple Directions: 5.3.5.1: forward_dir.cpp
            Number Taylor Coefficient Orders Currently Stored: 5.3.6: size_order
            Comparison Changes Between Taping and Zero Order Forward: 5.3.7: compare_change
                CompareChange and Re-Tape: Example and Test: 5.3.7.1: compare_change.cpp
            Controlling Taylor Coefficients Memory Allocation: 5.3.8: capacity_order
                Controlling Taylor Coefficient Memory Allocation: Example and Test: 5.3.8.1: capacity_order.cpp
            Number of Variables that Can be Skipped: 5.3.9: number_skip
                Number of Variables That Can be Skipped: Example and Test: 5.3.9.1: number_skip.cpp
        Reverse Mode: 5.4: Reverse
            First Order Reverse Mode: 5.4.1: reverse_one
                First Order Reverse Mode: Example and Test: 5.4.1.1: reverse_one.cpp
            Second Order Reverse Mode: 5.4.2: reverse_two
                Second Order Reverse Mode: Example and Test: 5.4.2.1: reverse_two.cpp
                Hessian Times Direction: Example and Test: 5.4.2.2: hes_times_dir.cpp
            Any Order Reverse Mode: 5.4.3: reverse_any
                Third Order Reverse Mode: Example and Test: 5.4.3.1: reverse_three.cpp
                Reverse Mode General Case (Checkpointing): Example and Test: 5.4.3.2: reverse_checkpoint.cpp
            Reverse Mode Using Subgraphs: 5.4.4: subgraph_reverse
                Computing Reverse Mode on Subgraphs: Example and Test: 5.4.4.1: subgraph_reverse.cpp
        Calculating Sparsity Patterns: 5.5: sparsity_pattern
            Forward Mode Jacobian Sparsity Patterns: 5.5.1: for_jac_sparsity
                Forward Mode Jacobian Sparsity: Example and Test: 5.5.1.1: for_jac_sparsity.cpp
            Jacobian Sparsity Pattern: Forward Mode: 5.5.2: ForSparseJac
                Forward Mode Jacobian Sparsity: Example and Test: 5.5.2.1: for_sparse_jac.cpp
            Reverse Mode Jacobian Sparsity Patterns: 5.5.3: rev_jac_sparsity
                Reverse Mode Jacobian Sparsity: Example and Test: 5.5.3.1: rev_jac_sparsity.cpp
            Jacobian Sparsity Pattern: Reverse Mode: 5.5.4: RevSparseJac
                Reverse Mode Jacobian Sparsity: Example and Test: 5.5.4.1: rev_sparse_jac.cpp
            Reverse Mode Hessian Sparsity Patterns: 5.5.5: rev_hes_sparsity
                Reverse Mode Hessian Sparsity: Example and Test: 5.5.5.1: rev_hes_sparsity.cpp
            Hessian Sparsity Pattern: Reverse Mode: 5.5.6: RevSparseHes
                Reverse Mode Hessian Sparsity: Example and Test: 5.5.6.1: rev_sparse_hes.cpp
                Sparsity Patterns For a Subset of Variables: Example and Test: 5.5.6.2: sparsity_sub.cpp
            Forward Mode Hessian Sparsity Patterns: 5.5.7: for_hes_sparsity
                Forward Mode Hessian Sparsity: Example and Test: 5.5.7.1: for_hes_sparsity.cpp
            Hessian Sparsity Pattern: Forward Mode: 5.5.8: ForSparseHes
                Forward Mode Hessian Sparsity: Example and Test: 5.5.8.1: for_sparse_hes.cpp
            Computing Dependency: Example and Test: 5.5.9: dependency.cpp
            Preferred Sparsity Patterns: Row and Column Indices: Example and Test: 5.5.10: rc_sparsity.cpp
            Subgraph Dependency Sparsity Patterns: 5.5.11: subgraph_sparsity
                Subgraph Dependency Sparsity Patterns: Example and Test: 5.5.11.1: subgraph_sparsity.cpp
        Calculating Sparse Derivatives: 5.6: sparse_derivative
            Computing Sparse Jacobians: 5.6.1: sparse_jac
                Computing Sparse Jacobian Using Forward Mode: Example and Test: 5.6.1.1: sparse_jac_for.cpp
                Computing Sparse Jacobian Using Reverse Mode: Example and Test: 5.6.1.2: sparse_jac_rev.cpp
            Sparse Jacobian: 5.6.2: sparse_jacobian
                Sparse Jacobian: Example and Test: 5.6.2.1: sparse_jacobian.cpp
            Computing Sparse Hessians: 5.6.3: sparse_hes
                Computing Sparse Hessian: Example and Test: 5.6.3.1: sparse_hes.cpp
            Sparse Hessian: 5.6.4: sparse_hessian
                Sparse Hessian: Example and Test: 5.6.4.1: sparse_hessian.cpp
                Computing Sparse Hessian for a Subset of Variables: 5.6.4.2: sub_sparse_hes.cpp
                Subset of a Sparse Hessian: Example and Test: 5.6.4.3: sparse_sub_hes.cpp
            Compute Sparse Jacobians Using Subgraphs: 5.6.5: subgraph_jac_rev
                Computing Sparse Jacobian Using Reverse Mode: Example and Test: 5.6.5.1: subgraph_jac_rev.cpp
                Sparse Hessian Using Subgraphs and Jacobian: Example and Test: 5.6.5.2: subgraph_hes2jac.cpp
        Optimize an ADFun Object Tape: 5.7: optimize
            Example Optimization and Forward Activity Analysis: 5.7.1: optimize_forward_active.cpp
            Example Optimization and Reverse Activity Analysis: 5.7.2: optimize_reverse_active.cpp
            Example Optimization and Comparison Operators: 5.7.3: optimize_compare_op.cpp
            Example Optimization and Print Forward Operators: 5.7.4: optimize_print_for.cpp
            Example Optimization and Conditional Expressions: 5.7.5: optimize_conditional_skip.cpp
            Example Optimization and Nested Conditional Expressions: 5.7.6: optimize_nest_conditional.cpp
            Example Optimization and Cumulative Sum Operations: 5.7.7: optimize_cumulative_sum.cpp
        Abs-normal Representation of Non-Smooth Functions: 5.8: abs_normal
            Create An Abs-normal Representation of a Function: 5.8.1: abs_normal_fun
                abs_normal Getting Started: Example and Test: 5.8.1.1: abs_get_started.cpp
            abs_normal: Print a Vector or Matrix: 5.8.2: abs_print_mat
            abs_normal: Evaluate First Order Approximation: 5.8.3: abs_eval
                abs_eval: Example and Test: 5.8.3.1: abs_eval.cpp
                abs_eval Source Code: 5.8.3.2: abs_eval.hpp
            abs_normal: Solve a Linear Program Using Simplex Method: 5.8.4: simplex_method
                abs_normal simplex_method: Example and Test: 5.8.4.1: simplex_method.cpp
                simplex_method Source Code: 5.8.4.2: simplex_method.hpp
            abs_normal: Solve a Linear Program With Box Constraints: 5.8.5: lp_box
                abs_normal lp_box: Example and Test: 5.8.5.1: lp_box.cpp
                lp_box Source Code: 5.8.5.2: lp_box.hpp
            abs_normal: Minimize a Linear Abs-normal Approximation: 5.8.6: abs_min_linear
                abs_min_linear: Example and Test: 5.8.6.1: abs_min_linear.cpp
                abs_min_linear Source Code: 5.8.6.2: abs_min_linear.hpp
            Non-Smooth Optimization Using Abs-normal Linear Approximations: 5.8.7: min_nso_linear
                abs_normal min_nso_linear: Example and Test: 5.8.7.1: min_nso_linear.cpp
                min_nso_linear Source Code: 5.8.7.2: min_nso_linear.hpp
            Solve a Quadratic Program Using Interior Point Method: 5.8.8: qp_interior
                abs_normal qp_interior: Example and Test: 5.8.8.1: qp_interior.cpp
                qp_interior Source Code: 5.8.8.2: qp_interior.hpp
            abs_normal: Solve a Quadratic Program With Box Constraints: 5.8.9: qp_box
                abs_normal qp_box: Example and Test: 5.8.9.1: qp_box.cpp
                qp_box Source Code: 5.8.9.2: qp_box.hpp
            abs_normal: Minimize a Quadratic Abs-normal Approximation: 5.8.10: abs_min_quad
                abs_min_quad: Example and Test: 5.8.10.1: abs_min_quad.cpp
                abs_min_quad Source Code: 5.8.10.2: abs_min_quad.hpp
            Non-Smooth Optimization Using Abs-normal Quadratic Approximations: 5.8.11: min_nso_quad
                abs_normal min_nso_quad: Example and Test: 5.8.11.1: min_nso_quad.cpp
                min_nso_quad Source Code: 5.8.11.2: min_nso_quad.hpp
        Check an ADFun Sequence of Operations: 5.9: FunCheck
            ADFun Check and Re-Tape: Example and Test: 5.9.1: fun_check.cpp
        Check an ADFun Object For Nan Results: 5.10: check_for_nan
            ADFun Checking For Nan: Example and Test: 5.10.1: check_for_nan.cpp
    CppAD API Preprocessor Symbols: 6: preprocessor
    Using CppAD in a Multi-Threading Environment: 7: multi_thread
        Enable AD Calculations During Parallel Mode: 7.1: parallel_ad
        Run Multi-Threading Examples and Speed Tests: 7.2: thread_test.cpp
            A Simple OpenMP Example and Test: 7.2.1: a11c_openmp.cpp
            A Simple Boost Thread Example and Test: 7.2.2: a11c_bthread.cpp
            A Simple Parallel Pthread Example and Test: 7.2.3: a11c_pthread.cpp
            A Simple OpenMP AD: Example and Test: 7.2.4: simple_ad_openmp.cpp
            A Simple Boost Threading AD: Example and Test: 7.2.5: simple_ad_bthread.cpp
            A Simple pthread AD: Example and Test: 7.2.6: simple_ad_pthread.cpp
            Using a Team of AD Threads: Example and Test: 7.2.7: team_example.cpp
            Multi-Threading Harmonic Summation Example / Test: 7.2.8: harmonic.cpp
                Common Variables Used by Multi-threading Sum of 1/i: 7.2.8.1: harmonic_common
                Set Up Multi-threading Sum of 1/i: 7.2.8.2: harmonic_setup
                Do One Thread's Work for Sum of 1/i: 7.2.8.3: harmonic_worker
                Take Down Multi-threading Sum of 1/i: 7.2.8.4: harmonic_takedown
                Multi-Threaded Implementation of Summation of 1/i: 7.2.8.5: harmonic_sum
                Timing Test of Multi-Threaded Summation of 1/i: 7.2.8.6: harmonic_time
            Multi-Threading User Atomic Example / Test: 7.2.9: multi_atomic.cpp
                Defines a User Atomic Operation that Computes Square Root: 7.2.9.1: multi_atomic_user
                Multi-Threaded User Atomic Common Information: 7.2.9.2: multi_atomic_common
                Multi-Threaded User Atomic Set Up: 7.2.9.3: multi_atomic_setup
                Multi-Threaded User Atomic Worker: 7.2.9.4: multi_atomic_worker
                Multi-Threaded User Atomic Take Down: 7.2.9.5: multi_atomic_takedown
                Run Multi-Threaded User Atomic Calculation: 7.2.9.6: multi_atomic_run
                Timing Test for Multi-Threaded User Atomic Calculation: 7.2.9.7: multi_atomic_time
            Multi-Threaded Newton Method Example / Test: 7.2.10: multi_newton.cpp
                Common Variables Used by Multi-Threaded Newton Method: 7.2.10.1: multi_newton_common
                Set Up Multi-Threaded Newton Method: 7.2.10.2: multi_newton_setup
                Do One Thread's Work for Multi-Threaded Newton Method: 7.2.10.3: multi_newton_worker
                Take Down Multi-threaded Newton Method: 7.2.10.4: multi_newton_takedown
                A Multi-Threaded Newton's Method: 7.2.10.5: multi_newton_run
                Timing Test of Multi-Threaded Newton Method: 7.2.10.6: multi_newton_time
            Specifications for A Team of AD Threads: 7.2.11: team_thread.hpp
                OpenMP Implementation of a Team of AD Threads: 7.2.11.1: team_openmp.cpp
                Boost Thread Implementation of a Team of AD Threads: 7.2.11.2: team_bthread.cpp
                Pthread Implementation of a Team of AD Threads: 7.2.11.3: team_pthread.cpp
    Some General Purpose Utilities: 8: utility
        Replacing the CppAD Error Handler: 8.1: ErrorHandler
            Replacing The CppAD Error Handler: Example and Test: 8.1.1: error_handler.cpp
            CppAD Assertions During Execution: 8.1.2: cppad_assert
        Determine if Two Values Are Nearly Equal: 8.2: NearEqual
            NearEqual Function: Example and Test: 8.2.1: near_equal.cpp
        Run One Speed Test and Return Results: 8.3: speed_test
            speed_test: Example and test: 8.3.1: speed_test.cpp
        Run One Speed Test and Print Results: 8.4: SpeedTest
            Example Use of SpeedTest: 8.4.1: speed_program.cpp
        Determine Amount of Time to Execute a Test: 8.5: time_test
            Returns Elapsed Number of Seconds: 8.5.1: elapsed_seconds
                Elapsed Seconds: Example and Test: 8.5.1.1: elapsed_seconds.cpp
            time_test: Example and test: 8.5.2: time_test.cpp
        Object that Runs a Group of Tests: 8.6: test_boolofvoid
        Definition of a Numeric Type: 8.7: NumericType
            The NumericType: Example and Test: 8.7.1: numeric_type.cpp
        Check NumericType Class Concept: 8.8: CheckNumericType
            The CheckNumericType Function: Example and Test: 8.8.1: check_numeric_type.cpp
        Definition of a Simple Vector: 8.9: SimpleVector
            Simple Vector Template Class: Example and Test: 8.9.1: simple_vector.cpp
        Check Simple Vector Concept: 8.10: CheckSimpleVector
            The CheckSimpleVector Function: Example and Test: 8.10.1: check_simple_vector.cpp
        Obtain Nan or Determine if a Value is Nan: 8.11: nan
            nan: Example and Test: 8.11.1: nan.cpp
        The Integer Power Function: 8.12: pow_int
            The Pow Integer Exponent: Example and Test: 8.12.1: pow_int.cpp
        Evaluate a Polynomial or its Derivative: 8.13: Poly
            Polynomial Evaluation: Example and Test: 8.13.1: poly.cpp
            Source: Poly: 8.13.2: poly.hpp
        Compute Determinants and Solve Equations by LU Factorization: 8.14: LuDetAndSolve
            Compute Determinant and Solve Linear Equations: 8.14.1: LuSolve
                LuSolve With Complex Arguments: Example and Test: 8.14.1.1: lu_solve.cpp
                Source: LuSolve: 8.14.1.2: lu_solve.hpp
            LU Factorization of A Square Matrix: 8.14.2: LuFactor
                LuFactor: Example and Test: 8.14.2.1: lu_factor.cpp
                Source: LuFactor: 8.14.2.2: lu_factor.hpp
            Invert an LU Factored Equation: 8.14.3: LuInvert
                LuInvert: Example and Test: 8.14.3.1: lu_invert.cpp
                Source: LuInvert: 8.14.3.2: lu_invert.hpp
        One Dimensional Romberg Integration: 8.15: RombergOne
            One Dimensional Romberg Integration: Example and Test: 8.15.1: romberg_one.cpp
        Multi-dimensional Romberg Integration: 8.16: RombergMul
            Multi-dimensional Romberg Integration: Example and Test: 8.16.1: Rombergmul.cpp
        An Embedded 4th and 5th Order Runge-Kutta ODE Solver: 8.17: Runge45
            Runge45: Example and Test: 8.17.1: runge45_1.cpp
            Runge45: Example and Test: 8.17.2: runge45_2.cpp
        A 3rd and 4th Order Rosenbrock ODE Solver: 8.18: Rosen34
            Rosen34: Example and Test: 8.18.1: rosen_34.cpp
        An Error Controller for ODE Solvers: 8.19: OdeErrControl
            OdeErrControl: Example and Test: 8.19.1: ode_err_control.cpp
            OdeErrControl: Example and Test Using Maxabs Argument: 8.19.2: ode_err_maxabs.cpp
        An Arbitrary Order Gear Method: 8.20: OdeGear
            OdeGear: Example and Test: 8.20.1: ode_gear.cpp
        An Error Controller for Gear's Ode Solvers: 8.21: OdeGearControl
            OdeGearControl: Example and Test: 8.21.1: ode_gear_control.cpp
        The CppAD::vector Template Class: 8.22: CppAD_vector
            CppAD::vector Template Class: Example and Test: 8.22.1: cppad_vector.cpp
            CppAD::vectorBool Class: Example and Test: 8.22.2: vector_bool.cpp
        A Fast Multi-Threading Memory Allocator: 8.23: thread_alloc
            Fast Multi-Threading Memory Allocator: Example and Test: 8.23.1: thread_alloc.cpp
            Setup thread_alloc For Use in Multi-Threading Environment: 8.23.2: ta_parallel_setup
            Get Number of Threads: 8.23.3: ta_num_threads
            Is The Current Execution in Parallel Mode: 8.23.4: ta_in_parallel
            Get the Current Thread Number: 8.23.5: ta_thread_num
            Get At Least A Specified Amount of Memory: 8.23.6: ta_get_memory
            Return Memory to thread_alloc: 8.23.7: ta_return_memory
            Free Memory Currently Available for Quick Use by a Thread: 8.23.8: ta_free_available
            Control When Thread Alloc Retains Memory For Future Use: 8.23.9: ta_hold_memory
            Amount of Memory a Thread is Currently Using: 8.23.10: ta_inuse
            Amount of Memory Available for Quick Use by a Thread: 8.23.11: ta_available
            Allocate An Array and Call Default Constructor for its Elements: 8.23.12: ta_create_array
            Deallocate An Array and Call Destructor for its Elements: 8.23.13: ta_delete_array
            Free All Memory That Was Allocated for Use by thread_alloc: 8.23.14: ta_free_all
        Returns Indices that Sort a Vector: 8.24: index_sort
            Index Sort: Example and Test: 8.24.1: index_sort.cpp
        Convert Certain Types to a String: 8.25: to_string
            to_string: Example and Test: 8.25.1: to_string.cpp
        Union of Standard Sets: 8.26: set_union
            Set Union: Example and Test: 8.26.1: set_union.cpp
        Row and Column Index Sparsity Patterns: 8.27: sparse_rc
            sparse_rc: Example and Test: 8.27.1: sparse_rc.cpp
        Sparse Matrix Row, Column, Value Representation: 8.28: sparse_rcv
            sparse_rcv: Example and Test: 8.28.1: sparse_rcv.cpp
    Use Ipopt to Solve a Nonlinear Programming Problem: 9: ipopt_solve
        Nonlinear Programming Using CppAD and Ipopt: Example and Test: 9.1: ipopt_solve_get_started.cpp
        Nonlinear Programming Retaping: Example and Test: 9.2: ipopt_solve_retape.cpp
        ODE Inverse Problem Definitions: Source Code: 9.3: ipopt_solve_ode_inverse.cpp
    Examples: 10: Example
        Getting Started Using CppAD to Compute Derivatives: 10.1: get_started.cpp
        General Examples: 10.2: General
            Creating Your Own Interface to an ADFun Object: 10.2.1: ad_fun.cpp
            Example and Test Linking CppAD to Languages Other than C++: 10.2.2: ad_in_c.cpp
            Differentiate Conjugate Gradient Algorithm: Example and Test: 10.2.3: conj_grad.cpp
            Enable Use of Eigen Linear Algebra Package with CppAD: 10.2.4: cppad_eigen.hpp
                Source Code for eigen_plugin.hpp: 10.2.4.1: eigen_plugin.hpp
                Using Eigen Arrays: Example and Test: 10.2.4.2: eigen_array.cpp
                Using Eigen To Compute Determinant: Example and Test: 10.2.4.3: eigen_det.cpp
            Gradient of Determinant Using Expansion by Minors: Example and Test: 10.2.5: hes_minor_det.cpp
            Gradient of Determinant Using LU Factorization: Example and Test: 10.2.6: hes_lu_det.cpp
            Interfacing to C: Example and Test: 10.2.7: interface2c.cpp
            Gradient of Determinant Using Expansion by Minors: Example and Test: 10.2.8: jac_minor_det.cpp
            Gradient of Determinant Using Lu Factorization: Example and Test: 10.2.9: jac_lu_det.cpp
            Using Multiple Levels of AD: 10.2.10: mul_level
                Multiple Level of AD: Example and Test: 10.2.10.1: mul_level.cpp
                Computing a Jacobian With Constants that Change: 10.2.10.2: change_param.cpp
            A Stiff Ode: Example and Test: 10.2.11: ode_stiff.cpp
            Taylor's Ode Solver: A Multi-Level AD Example and Test: 10.2.12: mul_level_ode.cpp
            Taylor's Ode Solver: A Multi-Level Adolc Example and Test: 10.2.13: mul_level_adolc_ode.cpp
            Taylor's Ode Solver: An Example and Test: 10.2.14: ode_taylor.cpp
            Example Differentiating a Stack Machine Interpreter: 10.2.15: stack_machine.cpp
        Utility Routines used by CppAD Examples: 10.3: ExampleUtility
            CppAD Examples and Tests: 10.3.1: general.cpp
            Run the Speed Examples: 10.3.2: speed_example.cpp
            Lu Factor and Solve with Recorded Pivoting: 10.3.3: lu_vec_ad.cpp
                Lu Factor and Solve With Recorded Pivoting: Example and Test: 10.3.3.1: lu_vec_ad_ok.cpp
        List All (Except Deprecated) CppAD Examples: 10.4: ListAllExamples
        Using The CppAD Test Vector Template Class: 10.5: testvector
        Suppress Suspect Implicit Conversion Warnings: 10.6: wno_conversion
    Speed Test an Operator Overloading AD Package: 11: speed
        Running the Speed Test Program: 11.1: speed_main
            Speed Testing Gradient of Determinant Using Lu Factorization: 11.1.1: link_det_lu
            Speed Testing Gradient of Determinant by Minor Expansion: 11.1.2: link_det_minor
            Speed Testing Derivative of Matrix Multiply: 11.1.3: link_mat_mul
            Speed Testing the Jacobian of Ode Solution: 11.1.4: link_ode
            Speed Testing Second Derivative of a Polynomial: 11.1.5: link_poly
            Speed Testing Sparse Hessian: 11.1.6: link_sparse_hessian
            Speed Testing Sparse Jacobian: 11.1.7: link_sparse_jacobian
            Microsoft Version of Elapsed Number of Seconds: 11.1.8: microsoft_timer
        Speed Testing Utilities: 11.2: speed_utility
            Determinant Using Lu Factorization: 11.2.1: det_by_lu
                Determinant Using Lu Factorization: Example and Test: 11.2.1.1: det_by_lu.cpp
                Source: det_by_lu: 11.2.1.2: det_by_lu.hpp
            Determinant of a Minor: 11.2.2: det_of_minor
                Determinant of a Minor: Example and Test: 11.2.2.1: det_of_minor.cpp
                Source: det_of_minor: 11.2.2.2: det_of_minor.hpp
            Determinant Using Expansion by Minors: 11.2.3: det_by_minor
                Determinant Using Expansion by Minors: Example and Test: 11.2.3.1: det_by_minor.cpp
                Source: det_by_minor: 11.2.3.2: det_by_minor.hpp
            Check Determinant of 3 by 3 matrix: 11.2.4: det_33
                Source: det_33: 11.2.4.1: det_33.hpp
            Check Gradient of Determinant of 3 by 3 matrix: 11.2.5: det_grad_33
                Source: det_grad_33: 11.2.5.1: det_grad_33.hpp
            Sum Elements of a Matrix Times Itself: 11.2.6: mat_sum_sq
                Sum of the Elements of the Square of a Matrix: Example and Test: 11.2.6.1: mat_sum_sq.cpp
                Source: mat_sum_sq: 11.2.6.2: mat_sum_sq.hpp
            Evaluate a Function Defined in Terms of an ODE: 11.2.7: ode_evaluate
                ode_evaluate: Example and test: 11.2.7.1: ode_evaluate.cpp
                Source: ode_evaluate: 11.2.7.2: ode_evaluate.hpp
            Evaluate a Function That Has a Sparse Jacobian: 11.2.8: sparse_jac_fun
                sparse_jac_fun: Example and test: 11.2.8.1: sparse_jac_fun.cpp
                Source: sparse_jac_fun: 11.2.8.2: sparse_jac_fun.hpp
            Evaluate a Function That Has a Sparse Hessian: 11.2.9: sparse_hes_fun
                sparse_hes_fun: Example and test: 11.2.9.1: sparse_hes_fun.cpp
                Source: sparse_hes_fun: 11.2.9.2: sparse_hes_fun.hpp
            Simulate a [0,1] Uniform Random Variate: 11.2.10: uniform_01
                Source: uniform_01: 11.2.10.1: uniform_01.hpp
        Speed Test of Functions in Double: 11.3: speed_double
            Double Speed: Determinant by Minor Expansion: 11.3.1: double_det_minor.cpp
            Double Speed: Determinant Using Lu Factorization: 11.3.2: double_det_lu.cpp
            CppAD Speed: Matrix Multiplication (Double Version): 11.3.3: double_mat_mul.cpp
            Double Speed: Ode Solution: 11.3.4: double_ode.cpp
            Double Speed: Evaluate a Polynomial: 11.3.5: double_poly.cpp
            Double Speed: Sparse Hessian: 11.3.6: double_sparse_hessian.cpp
            Double Speed: Sparse Jacobian: 11.3.7: double_sparse_jacobian.cpp
        Speed Test of Derivatives Using Adolc: 11.4: speed_adolc
            Adolc Speed: Gradient of Determinant by Minor Expansion: 11.4.1: adolc_det_minor.cpp
            Adolc Speed: Gradient of Determinant Using Lu Factorization: 11.4.2: adolc_det_lu.cpp
            Adolc Speed: Matrix Multiplication: 11.4.3: adolc_mat_mul.cpp
            Adolc Speed: Ode: 11.4.4: adolc_ode.cpp
            Adolc Speed: Second Derivative of a Polynomial: 11.4.5: adolc_poly.cpp
            Adolc Speed: Sparse Hessian: 11.4.6: adolc_sparse_hessian.cpp
            Adolc Speed: Sparse Jacobian: 11.4.7: adolc_sparse_jacobian.cpp
            Adolc Test Utility: Allocate and Free Memory For a Matrix: 11.4.8: adolc_alloc_mat
        Speed Test Derivatives Using CppAD: 11.5: speed_cppad
            CppAD Speed: Gradient of Determinant by Minor Expansion: 11.5.1: cppad_det_minor.cpp
            CppAD Speed: Gradient of Determinant Using Lu Factorization: 11.5.2: cppad_det_lu.cpp
            CppAD Speed: Matrix Multiplication: 11.5.3: cppad_mat_mul.cpp
            CppAD Speed: Gradient of Ode Solution: 11.5.4: cppad_ode.cpp
            CppAD Speed: Second Derivative of a Polynomial: 11.5.5: cppad_poly.cpp
            CppAD Speed: Sparse Hessian: 11.5.6: cppad_sparse_hessian.cpp
            CppAD Speed: Sparse Jacobian: 11.5.7: cppad_sparse_jacobian.cpp
        Speed Test Derivatives Using Fadbad: 11.6: speed_fadbad
            Fadbad Speed: Gradient of Determinant by Minor Expansion: 11.6.1: fadbad_det_minor.cpp
            Fadbad Speed: Gradient of Determinant Using Lu Factorization: 11.6.2: fadbad_det_lu.cpp
            Fadbad Speed: Matrix Multiplication: 11.6.3: fadbad_mat_mul.cpp
            Fadbad Speed: Ode: 11.6.4: fadbad_ode.cpp
            Fadbad Speed: Second Derivative of a Polynomial: 11.6.5: fadbad_poly.cpp
            Fadbad Speed: Sparse Hessian: 11.6.6: fadbad_sparse_hessian.cpp
            Fadbad Speed: Sparse Jacobian: 11.6.7: fadbad_sparse_jacobian.cpp
        Speed Test Derivatives Using Sacado: 11.7: speed_sacado
            Sacado Speed: Gradient of Determinant by Minor Expansion: 11.7.1: sacado_det_minor.cpp
            Sacado Speed: Gradient of Determinant Using Lu Factorization: 11.7.2: sacado_det_lu.cpp
            Sacado Speed: Matrix Multiplication: 11.7.3: sacado_mat_mul.cpp
            Sacado Speed: Gradient of Ode Solution: 11.7.4: sacado_ode.cpp
            Sacado Speed: Second Derivative of a Polynomial: 11.7.5: sacado_poly.cpp
            Sacado Speed: Sparse Hessian: 11.7.6: sacado_sparse_hessian.cpp
            Sacado Speed: Sparse Jacobian: 11.7.7: sacado_sparse_jacobian.cpp
    Appendix: 12: Appendix
        Frequently Asked Questions and Answers: 12.1: Faq
        Directory Structure: 12.2: directory
        The Theory of Derivative Calculations: 12.3: Theory
            The Theory of Forward Mode: 12.3.1: ForwardTheory
                Exponential Function Forward Mode Theory: 12.3.1.1: exp_forward
                Logarithm Function Forward Mode Theory: 12.3.1.2: log_forward
                Square Root Function Forward Mode Theory: 12.3.1.3: sqrt_forward
                Trigonometric and Hyperbolic Sine and Cosine Forward Theory: 12.3.1.4: sin_cos_forward
                Inverse Tangent and Hyperbolic Tangent Forward Mode Theory: 12.3.1.5: atan_forward
                Inverse Sine and Hyperbolic Sine Forward Mode Theory: 12.3.1.6: asin_forward
                Inverse Cosine and Hyperbolic Cosine Forward Mode Theory: 12.3.1.7: acos_forward
                Tangent and Hyperbolic Tangent Forward Taylor Polynomial Theory: 12.3.1.8: tan_forward
                Error Function Forward Taylor Polynomial Theory: 12.3.1.9: erf_forward
            The Theory of Reverse Mode: 12.3.2: ReverseTheory
                Exponential Function Reverse Mode Theory: 12.3.2.1: exp_reverse
                Logarithm Function Reverse Mode Theory: 12.3.2.2: log_reverse
                Square Root Function Reverse Mode Theory: 12.3.2.3: sqrt_reverse
                Trigonometric and Hyperbolic Sine and Cosine Reverse Theory: 12.3.2.4: sin_cos_reverse
                Inverse Tangent and Hyperbolic Tangent Reverse Mode Theory: 12.3.2.5: atan_reverse
                Inverse Sine and Hyperbolic Sine Reverse Mode Theory: 12.3.2.6: asin_reverse
                Inverse Cosine and Hyperbolic Cosine Reverse Mode Theory: 12.3.2.7: acos_reverse
                Tangent and Hyperbolic Tangent Reverse Mode Theory: 12.3.2.8: tan_reverse
                Error Function Reverse Mode Theory: 12.3.2.9: erf_reverse
            An Important Reverse Mode Identity: 12.3.3: reverse_identity
        Glossary: 12.4: glossary
        Bibliography: 12.5: Bib
        The CppAD Wish List: 12.6: wish_list
        Changes and Additions to CppAD: 12.7: whats_new
            Changes and Additions to CppAD During 2017: 12.7.1: whats_new_17
            Changes and Additions to CppAD During 2016: 12.7.2: whats_new_16
            CppAD Changes and Additions During 2015: 12.7.3: whats_new_15
            CppAD Changes and Additions During 2014: 12.7.4: whats_new_14
            CppAD Changes and Additions During 2013: 12.7.5: whats_new_13
            CppAD Changes and Additions During 2012: 12.7.6: whats_new_12
            Changes and Additions to CppAD During 2011: 12.7.7: whats_new_11
            Changes and Additions to CppAD During 2010: 12.7.8: whats_new_10
            Changes and Additions to CppAD During 2009: 12.7.9: whats_new_09
            Changes and Additions to CppAD During 2008: 12.7.10: whats_new_08
            Changes and Additions to CppAD During 2007: 12.7.11: whats_new_07
            Changes and Additions to CppAD During 2006: 12.7.12: whats_new_06
            Changes and Additions to CppAD During 2005: 12.7.13: whats_new_05
            Changes and Additions to CppAD During 2004: 12.7.14: whats_new_04
            Changes and Additions to CppAD During 2003: 12.7.15: whats_new_03
        CppAD Deprecated API Features: 12.8: deprecated
            Deprecated Include Files: 12.8.1: include_deprecated
            ADFun Object Deprecated Member Functions: 12.8.2: FunDeprecated
            Comparison Changes During Zero Order Forward Mode: 12.8.3: CompareChange
            OpenMP Parallel Setup: 12.8.4: omp_max_thread
            Routines That Track Use of New and Delete: 12.8.5: TrackNewDel
                Tracking Use of New and Delete: Example and Test: 12.8.5.1: TrackNewDel.cpp
            A Quick OpenMP Memory Allocator Used by CppAD: 12.8.6: omp_alloc
                Set and Get Maximum Number of Threads for omp_alloc Allocator: 12.8.6.1: omp_max_num_threads
                Is The Current Execution in OpenMP Parallel Mode: 12.8.6.2: omp_in_parallel
                Get the Current OpenMP Thread Number: 12.8.6.3: omp_get_thread_num
                Get At Least A Specified Amount of Memory: 12.8.6.4: omp_get_memory
                Return Memory to omp_alloc: 12.8.6.5: omp_return_memory
                Free Memory Currently Available for Quick Use by a Thread: 12.8.6.6: omp_free_available
                Amount of Memory a Thread is Currently Using: 12.8.6.7: omp_inuse
                Amount of Memory Available for Quick Use by a Thread: 12.8.6.8: omp_available
                Allocate Memory and Create A Raw Array: 12.8.6.9: omp_create_array
                Return A Raw Array to The Available Memory for a Thread: 12.8.6.10: omp_delete_array
                Check If A Memory Allocation is Efficient for Another Use: 12.8.6.11: omp_efficient
                Set Maximum Number of Threads for omp_alloc Allocator: 12.8.6.12: old_max_num_threads
                OpenMP Memory Allocator: Example and Test: 12.8.6.13: omp_alloc.cpp
            Memory Leak Detection: 12.8.7: memory_leak
            Machine Epsilon For AD Types: 12.8.8: epsilon
            Choosing The Vector Testing Template Class: 12.8.9: test_vector
            Nonlinear Programming Using the CppAD Interface to Ipopt: 12.8.10: cppad_ipopt_nlp
                Nonlinear Programming Using CppAD and Ipopt: Example and Test: 12.8.10.1: ipopt_nlp_get_started.cpp
                Example Simultaneous Solution of Forward and Inverse Problem: 12.8.10.2: ipopt_nlp_ode
                    An ODE Inverse Problem Example: 12.8.10.2.1: ipopt_nlp_ode_problem
                        ODE Inverse Problem Definitions: Source Code: 12.8.10.2.1.1: ipopt_nlp_ode_problem.hpp
                    ODE Fitting Using Simple Representation: 12.8.10.2.2: ipopt_nlp_ode_simple
                        ODE Fitting Using Simple Representation: 12.8.10.2.2.1: ipopt_nlp_ode_simple.hpp
                    ODE Fitting Using Fast Representation: 12.8.10.2.3: ipopt_nlp_ode_fast
                        ODE Fitting Using Fast Representation: 12.8.10.2.3.1: ipopt_nlp_ode_fast.hpp
                    Driver for Running the Ipopt ODE Example: 12.8.10.2.4: ipopt_nlp_ode_run.hpp
                    Correctness Check for Both Simple and Fast Representations: 12.8.10.2.5: ipopt_nlp_ode_check.cpp
                Speed Test for Both Simple and Fast Representations: 12.8.10.3: ipopt_ode_speed.cpp
            User Defined Atomic AD Functions: 12.8.11: old_atomic
                Old Atomic Operation Reciprocal: Example and Test: 12.8.11.1: old_reciprocal.cpp
                Using AD to Compute Atomic Function Derivatives: 12.8.11.2: old_usead_1.cpp
                Using AD to Compute Atomic Function Derivatives: 12.8.11.3: old_usead_2.cpp
                Old Tan and Tanh as User Atomic Operations: Example and Test: 12.8.11.4: old_tan.cpp
                Old Matrix Multiply as a User Atomic Operation: Example and Test: 12.8.11.5: old_mat_mul.cpp
                    Define Matrix Multiply as a User Atomic Operation: 12.8.11.5.1: old_mat_mul.hpp
            zdouble: An AD Base Type With Absolute Zero: 12.8.12: zdouble
                zdouble: Example and Test: 12.8.12.1: zdouble.cpp
            Autotools Unix Test and Installation: 12.8.13: autotools
        Compare Speed of C and C++: 12.9: compare_c
            Determinant of a Minor: 12.9.1: det_of_minor_c
            Compute Determinant using Expansion by Minors: 12.9.2: det_by_minor_c
            Simulate a [0,1] Uniform Random Variate: 12.9.3: uniform_01_c
            Correctness Test of det_by_minor Routine: 12.9.4: correct_det_by_minor_c
            Repeat det_by_minor Routine A Specified Number of Times: 12.9.5: repeat_det_by_minor_c
            Returns Elapsed Number of Seconds: 12.9.6: elapsed_seconds_c
            Determine Amount of Time to Execute det_by_minor: 12.9.7: time_det_by_minor_c
            Main Program For Comparing C and C++ Speed: 12.9.8: main_compare_c
        Some Numerical AD Utilities: 12.10: numeric_ad
            Computing Jacobian and Hessian of Bender's Reduced Objective: 12.10.1: BenderQuad
                BenderQuad: Example and Test: 12.10.1.1: bender_quad.cpp
            Jacobian and Hessian of Optimal Values: 12.10.2: opt_val_hes
                opt_val_hes: Example and Test: 12.10.2.1: opt_val_hes.cpp
            LU Factorization of A Square Matrix and Stability Calculation: 12.10.3: LuRatio
                LuRatio: Example and Test: 12.10.3.1: lu_ratio.cpp
        CppAD Addons: 12.11: addon
        Your License for the CppAD Software: 12.12: License
    Alphabetic Listing of Cross Reference Tags: 13: _reference
    Keyword Index: 14: _index
    External Internet References: 15: _external

2: CppAD Download, Test, and Install Instructions

2.a: Instructions

2.a.a: Step 1: Download
Use the 2.1: download instructions to obtain a copy of CppAD.

2.a.b: Step 2: Cmake
Use the 2.2: cmake instructions to configure CppAD.

2.a.c: Step 3: Check
Use the 2.3: cmake_check instructions to check the CppAD examples and tests.

2.a.d: Step 4: Installation
Use the command
 
     make install
to install CppAD. If you created nmake makefiles, you will have to use
 
     nmake install
instead; see the 2.2.e: generator option for the cmake command.

2.b: Contents
download: 2.1: Download The CppAD Source Code
cmake: 2.2: Using CMake to Configure CppAD
cmake_check: 2.3: Checking the CppAD Examples and Tests
pkgconfig: 2.4: CppAD pkg-config Files

2.c: Deprecated
12.8.13: autotools
Input File: omh/install/install.omh
2.1: Download The CppAD Source Code

2.1.a: Purpose
CppAD is an include file library and you therefore need the source code to use it. This section discusses how to download the different versions of CppAD.

2.1.b: Distribution Directory
We refer to the CppAD source directory created by the download instructions below as the distribution directory. To be specific, the distribution directory contains the file cppad/cppad.hpp.

2.1.c: Version
A CppAD version number has the following fields: yyyy is four decimal digits denoting a year, mm is two decimal digits denoting a month, and dd is two decimal digits denoting a day. For example version = 20160101 corresponds to January 1, 2016.

2.1.d: Release
Special versions corresponding to the beginning of each year have mm and dd equal to zero. These version numbers are combined with release numbers denoted by rel . Higher release numbers correspond to more bug fixes. For example version.rel = 20160000.0 corresponds to the first release of the version for 2016, 20160000.1 corresponds to the first bug fix for 2016.

2.1.e: License
We use lic to denote the license corresponding to an archived version of CppAD. The GNU General Public License is denoted by lic = gpl and the Eclipse Public License is denoted by lic = epl .

2.1.f: Compressed Archives
The Coin compressed archives have the documentation built into them. If you download an old version using another method, see 2.1.k: building documentation .

2.1.f.a: Coin
The compressed archive names on the Coin download page (http://www.coin-or.org/download/source/CppAD/) have one of the following formats:
     cppad-version.rel.lic.tgz
     cppad-version.lic.tgz
In Unix, you can extract these compressed archives using tar. For example,
     tar -xzf cppad-version.rel.lic.tgz
No matter what the format of the name, the corresponding distribution directory is cppad-version . To see that the extraction has been done correctly, check for the following file:
     cppad-version/cppad/cppad.hpp

2.1.f.b: Github
The compressed archive names on the Github download page (https://github.com/coin-or/CppAD/releases) have the format
     cppad-version.rel.tgz
These archives correspond to the Eclipse Public License.

2.1.g: Source Code Control
These methods only provide the Eclipse Public License version of CppAD.

2.1.g.a: Git
CppAD source code development is currently done using git. You can obtain a git clone of the current version using the command
    git clone https://github.com/coin-or/CppAD.git cppad.git
This procedure requires that git (https://en.wikipedia.org/wiki/Git_%28software%29) is installed on your system.

2.1.g.b: Subversion
A subversion copy of the source code is kept on the Coin web site. You can obtain this subversion copy using the command
     svn checkout https://projects.coin-or.org/svn/CppAD/trunk cppad.svn/trunk
This procedure requires that the subversion (http://subversion.tigris.org/) program is installed on your system.

2.1.h: Monthly Versions
Monthly versions of the compressed tar files are available on the Coin download page (http://www.coin-or.org/download/source/CppAD/) . These are kept until the end of the current year, when the next release is created. The monthly versions have the form
     cppad-yyyy0101.lic.tgz

2.1.i: Windows File Extraction and Testing
If you know how to extract the distribution directory from the tar file, just do so. Otherwise, below is one way you can do it. (Note that if 7z.exe, cmake.exe, and nmake.exe are in your execution path, you will not need to specify their paths below.)
  1. Download and install the open source program 7-Zip (http://www.7-zip.org) .
  2. Download and install Visual Studio Express; for example, Visual Studio 2013 (http://www.microsoft.com/en-us/download/confirmation.aspx?id=44914)
  3. In a command window, execute the following commands:
         set PATH=path_to_7_zip;%PATH%
         set PATH=path_to_cmake;%PATH%
         set VCDIR=path_to_vcdir
         call "%VCDIR%\vcvarsall.bat" x86
    For example, on one machine these paths had the following values:
         path_to_7_zip=C:\Program Files\7-zip
         path_to_cmake=C:\Program Files (x86)\CMake\bin
         path_to_vcdir=C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC
  4. Use the following commands to extract the distribution from the compressed archive:
         7z x cppad-version.lic.tgz
         7z x cppad-version.lic.tar
  5. To see if this has been done correctly, check for the following file:
         cppad-version\cppad\cppad.hpp
  6. The commands below are optional. They run the CppAD tests using the default 2.2: cmake settings (except for the 2.2.e: generator option):
         mkdir build
         cd build
         cmake -G "NMake Makefiles" ..
         nmake check

2.1.j: Install Instructions
The 2: install instructions on this web site correspond to the current version of CppAD. If you are using an old version of CppAD, these instructions may still work. If you have trouble (or just to be careful), you should follow the instructions in the doc subdirectory of the distribution directory. If there is no such documentation, you can build it; see 2.1.k: building documentation .

2.1.k: Building Documentation
If you are using one of the download methods above that does not include the documentation, you can build the documentation to get the corresponding install instructions. The documentation for CppAD is built from the source code files using OMhelp (http://www.seanet.com/~bradbell/omhelp/) . You will need to install the omhelp command so that
 
     which omhelp
shows it is in your path. Once you have done this, in the distribution directory execute the following command:
     bin/run_omhelp.sh htm
You will then be able to follow the install instructions in the doc subdirectory of the distribution directory.
Input File: omh/install/download.omh
2.2: Using CMake to Configure CppAD

2.2.a: The CMake Program
The cmake (http://www.cmake.org/cmake/help/install.html) program enables one to create a single set of scripts, called CMakeLists.txt, that can be used to test and install a program on Unix, Microsoft, or Apple operating systems. For example, one can use it to automatically generate Microsoft project files.

2.2.b: CMake Command
The command below assumes that cmake is in your execution path with version greater than or equal to 2.8. If not, you can put the path to the version of cmake in front of the command. Only the cmake command and the path to the distribution directory (.. at the end of the command below) are required. In other words, the first and last lines below are required and all of the other lines are optional.

2.2.b.a: Build Directory
Create a build subdirectory of the 2.1.b: distribution directory , change into the build directory, and execute the following command:
cmake                                                                   \
    -D CMAKE_VERBOSE_MAKEFILE=cmake_verbose_makefile                    \
    -G generator                                                        \
    \
    -D cppad_prefix=cppad_prefix                                        \
    -D cppad_postfix=cppad_postfix                                      \
    \
    -D cmake_install_includedirs=cmake_install_includedirs              \
    -D cmake_install_libdirs=cmake_install_libdirs                      \
    \
    -D cmake_install_datadir=cmake_install_datadir                      \
    -D cmake_install_docdir=cmake_install_docdir                        \
    \
    -D adolc_prefix=adolc_prefix                                        \
    -D colpack_prefix=colpack_prefix                                    \
    -D eigen_prefix=eigen_prefix                                        \
    -D fadbad_prefix=fadbad_prefix                                      \
    -D ipopt_prefix=ipopt_prefix                                        \
    -D sacado_prefix=sacado_prefix                                      \
    \
    -D cppad_cxx_flags=cppad_cxx_flags                                  \
    -D cppad_profile_flag=cppad_profile_flag                            \
    \
    -D cppad_testvector=cppad_testvector                                \
    -D cppad_max_num_threads=cppad_max_num_threads                      \
    -D cppad_tape_id_type=cppad_tape_id_type                            \
    -D cppad_tape_addr_type=cppad_tape_addr_type                        \
    -D cppad_debug_which=cppad_debug_which                              \
    -D cppad_deprecated=cppad_deprecated                                \
    \
    ..
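For example, a minimal configuration might look as follows; the generator, prefix, and compiler flags shown here are illustrative choices, not requirements:
     cmake                                         \
         -G "Unix Makefiles"                       \
         -D cppad_prefix=/usr/local                \
         -D cppad_cxx_flags="-Wall -std=c++11"     \
         ..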

2.2.c: make check
Important information about the CppAD configuration is output by this command. If you have the grep program and store the output in cmake.log, you can get a list of all the test options with the command:
 
     grep 'make check' cmake.log

2.2.d: cmake_verbose_makefile
This value should be either YES or NO. The default value, when it is not present, is NO. If it is YES, then the output of the make commands will include all of the files and flags used to run the compiler and linker. This can be useful for seeing how to compile and link your own applications.

2.2.e: generator
The CMake program is capable of generating different kinds of files. Below is a table with a few of the possible choices:
generator            Description
"Unix Makefiles"     make files for Unix operating systems
"NMake Makefiles"    make files for Visual Studio
Other generator choices are available; see the cmake generators (http://www.cmake.org/cmake/help/cmake2.6docs.html#section_Generators) documentation.

2.2.f: cppad_prefix
This is the top level absolute path below which all of the CppAD files are installed by the command
     make install
For example, if cppad_prefix is /usr, cmake_install_includedirs is include, and cppad_postfix is not specified, the file cppad.hpp is installed in the location
     /usr/include/cppad/cppad.hpp
The default value for cppad_prefix is /usr.

2.2.g: cppad_postfix
This is the bottom level relative path below which all of the CppAD files are installed. For example, if cppad_prefix is /usr, cmake_install_includedirs is include, and cppad_postfix is coin, the file cppad.hpp is installed in the location
     /usr/include/coin/cppad/cppad.hpp
The default value for cppad_postfix is empty; i.e., there is no bottom level relative directory for the installed files.
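As a concrete sketch of how these two settings compose (reusing the example values above):
     cmake -D cppad_prefix=/usr -D cppad_postfix=coin ..
     make install
after which cppad.hpp is installed at /usr/include/coin/cppad/cppad.hpp.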

2.2.h: cmake_install_includedirs
This is one directory, or a list of directories separated by spaces or by semi-colons. The first entry in the list is the middle level relative path below which the CppAD include files are installed. The entire list is used for searching for include files. For example, if cppad_prefix is /usr, cmake_install_includedirs is include, and cppad_postfix is not specified, the file cppad.hpp is installed in the location
     /usr/include/cppad/cppad.hpp
The default value for this directory list is include.

2.2.i: cmake_install_libdirs
This is one directory, or a list of directories separated by spaces or by semi-colons. The first entry in the list is the middle level relative path below which the CppAD library files are installed. The entire list is used for searching for library files. For example, if cppad_prefix is /usr, cmake_install_libdirs is lib, cppad_postfix is not specified, and ipopt_prefix is specified, the file libcppad_ipopt.a is installed in the location
     /usr/lib/libcppad_ipopt.a
The default value for this directory list is lib.

2.2.j: cmake_install_datadir
This is the middle level relative path below which the CppAD data files are installed. For example, if cppad_prefix is /usr, cmake_install_datadir is share, and cppad_postfix is not specified, the 2.4: pkgconfig file cppad.pc is installed in the location
     /usr/share/pkgconfig/cppad.pc
The default value for cmake_install_datadir is share.

2.2.k: cmake_install_docdir
This is the middle level relative path below which the CppAD documentation files are installed. For example, if cppad_prefix is /usr, cmake_install_docdir is share/doc, and cppad_postfix is not specified, the file cppad.xml is installed in the location
     /usr/share/doc/cppad/cppad.xml
There is no default value for cmake_install_docdir . If it is not specified, the documentation files are not installed.

2.2.l: package_prefix
Each of these packages corresponds to optional CppAD examples that can be compiled and tested if the corresponding prefix is provided:
2.2.1: adolc_prefix Including the ADOL-C Examples and Tests
2.2.2: colpack_prefix Including the ColPack Sparsity Calculations
2.2.3: eigen_prefix Including the Eigen Examples and Tests
2.2.4: fadbad_prefix Including the FADBAD Speed Tests
2.2.5: ipopt_prefix Including the cppad_ipopt Library and Tests
2.2.6: sacado_prefix Including the Sacado Speed Tests

2.2.m: cppad_cxx_flags
This specifies additional compiler flags that are used when compiling the CppAD examples and tests. The default value for these flags is the empty string "". These flags must be valid for the C++ compiler on your system. For example, if you are using g++ you could specify
 
     -D cppad_cxx_flags="-Wall -ansi -pedantic-errors -std=c++11 -Wshadow"

2.2.m.a: C++11
In order for the compiler to take advantage of features that are new in C++11, the cppad_cxx_flags must enable these features. The compiler may still be used with a flag that disables the new features (unless it is a Microsoft compiler; i.e., _MSC_VER is defined).

2.2.m.b: debug and release
The CppAD examples and tests decide which files to compile for debugging and which to compile for release. Hence debug and release flags should not be included in cppad_cxx_flags . See also the 6.b.a: CPPAD_DEBUG_AND_RELEASE compiler flag (which should not be included in cppad_cxx_flags ).

2.2.n: cppad_profile_flag
This specifies an additional compile and link flag that is used for 11.1.c.c: profiling the speed tests. A profile version of the speed tests is only built when this argument is present.

2.2.n.a: Eigen and Fadbad
The packages 2.2.3: eigen and 2.2.4: fadbad currently generate a lot of shadowed variable warnings. If the -Wshadow flag is present, it is automatically removed when compiling examples and tests that use these packages.

2.2.o: cppad_testvector
See 2.2.7: Choosing the CppAD Test Vector Template Class.

2.2.p: cppad_max_num_threads
The value cppad_max_num_threads must be greater than or equal to four; i.e., max_num_threads >= 4 . The current default value for cppad_max_num_threads is 48, but it may change in future versions of CppAD. The value cppad_max_num_threads in turn specifies the default value for the preprocessor symbol 7.b: CPPAD_MAX_NUM_THREADS .
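For example, a hypothetical fragment of the cmake command line that raises this limit would be
     -D cppad_max_num_threads=64
where 64 is an illustrative choice, not a recommended value.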

2.2.q: cppad_tape_id_type
The type cppad_tape_id_type is used for identifying different tapes. The valid values for this type are unsigned char, unsigned short int, unsigned int, and size_t. The smaller the value of sizeof(cppad_tape_id_type) , the less memory is used. On the other hand, the value
     std::numeric_limits<cppad_tape_id_type>::max()
must be larger than the maximum number of tapes used by one thread times 7.b: CPPAD_MAX_NUM_THREADS .
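As a sketch of this constraint, the stand-alone program below checks whether a candidate type is large enough; the bound on tapes per thread used here is an illustrative assumption, not a CppAD value:

# include <iostream>
# include <limits>

int main(void)
{    // candidate value for cppad_tape_id_type
     typedef unsigned short tape_id_type;
     // current default for cppad_max_num_threads
     unsigned long max_num_threads      = 48;
     // assumed bound on tapes used by one thread (illustrative only)
     unsigned long max_tapes_per_thread = 100;
     //
     unsigned long limit = std::numeric_limits<tape_id_type>::max();
     bool ok = limit > max_tapes_per_thread * max_num_threads;
     std::cout << "unsigned short is " << (ok ? "" : "not ") << "large enough\n";
     return ok ? 0 : 1;
}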

2.2.q.a: cstdint
If all of the following cstdint types are defined, they can also be used as the value of cppad_tape_id_type : uint8_t, uint16_t, uint32_t, uint64_t.

2.2.r: cppad_tape_addr_type
The type cppad_tape_addr_type is used for addresses in the AD recordings (tapes). The valid values for this argument are unsigned char, unsigned short int, unsigned int, and size_t. The smaller the value of sizeof(cppad_tape_addr_type) , the less memory is used. On the other hand, the value
     std::numeric_limits<cppad_tape_addr_type>::max()
must be larger than any of the following: 5.1.5.i: size_op , 5.1.5.j: size_op_arg , 5.1.5.h: size_par , 5.1.5.k: size_text , 5.1.5.l: size_VecAD .

2.2.r.a: cstdint
If all of the following cstdint types are defined, they can also be used as the value of cppad_tape_addr_type : uint8_t, uint16_t, uint32_t, uint64_t.

2.2.s: cppad_debug_which
All of the CppAD examples and tests can optionally be tested in debug or release mode (see the exception below). This option controls which mode is chosen for the corresponding files. The value cppad_debug_which must be one of the following: debug_even, debug_odd, debug_all, debug_none. If it is debug_even (debug_odd), files with an even (odd) index in a list for each case will be compiled in debug mode. The remaining files will be compiled in release mode. If it is debug_all (debug_none), all the files will be compiled in debug (release) mode. If cppad_debug_which does not appear on the command line, the default value debug_all is used.
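For example, to compile all of these files for speed (release mode), one could add the fragment
     -D cppad_debug_which=debug_none
to the cmake command line.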

2.2.s.a: Exception
The test corresponding to make cppad_ipopt_speed always gets compiled in release mode (to avoid the extra time it would take to run in debug mode). Note that this test corresponds to a deprecated interface; see 12.8.10: cppad_ipopt_nlp .

2.2.t: cppad_deprecated
The default value for cppad_deprecated is NO (the value YES is not currently being used).
Input File: omh/install/cmake.omh
2.2.1: Including the ADOL-C Examples and Tests

2.2.1.a: Purpose
CppAD includes examples and tests that can use the AD package ADOL-C (https://projects.coin-or.org/ADOL-C) . This includes speed comparisons with other AD packages; see 11.4: speed_adolc . It also includes examples that combine ADOL-C with CppAD; see
4.7.9.3: base_adolc.hpp Enable use of AD<Base> where Base is Adolc's adouble Type
4.7.9.3.1: mul_level_adolc.cpp Using Adolc with Multiple Levels of Taping: Example and Test
10.2.13: mul_level_adolc_ode.cpp Taylor's Ode Solver: A Multi-Level Adolc Example and Test

2.2.1.b: adolc_prefix
If ADOL-C is installed on your system, you can specify a value for its install adolc_prefix on the 2.2: cmake command line. The value of adolc_prefix must be such that, for one of the directories dir in 2.2.h: cmake_install_includedirs ,
     adolc_prefix/dir/adolc/adouble.h
is a valid way to reference the include file adouble.h. Note that CppAD assumes ADOL-C has been configured with its sparse matrix computations enabled; i.e., using
     --with-colpack=adolc_prefix
In other words, ColPack is installed with the same prefix as ADOL-C; see 2.2.2.5: get_colpack.sh .

2.2.1.c: Examples
If you include adolc_prefix on the 2.2: cmake command line, you will be able to run the ADOL-C examples listed above by executing the following commands starting in the 2.1.b: distribution directory :
     cd build/example
     make check_example
If you do this, you will see an indication that the examples mul_level_adolc and mul_level_adolc_ode have passed their correctness check.

2.2.1.d: Speed Tests
If you include adolc_prefix on the 2.2: cmake command line, you will be able to run the ADOL-C speed correctness tests by executing the following commands starting in the 2.1.b: distribution directory :
     cd build/speed/adolc
     make check_speed_adolc
After executing make check_speed_adolc, you can run a specific ADOL-C speed test by executing the command ./speed_adolc; see 11.1: speed_main for the meaning of the command line options to this program.

2.2.1.e: Unix
If you are using Unix, you may have to add adolc_prefix to LD_LIBRARY_PATH. For example, if you use the bash shell to run your programs, you could include
     LD_LIBRARY_PATH=adolc_prefix/lib:${LD_LIBRARY_PATH}
     export LD_LIBRARY_PATH
in your $HOME/.bashrc file.

2.2.1.f: Cygwin
If you are using Cygwin, you may have to add the following lines to the file .bashrc in your home directory:
     PATH=adolc_prefix/bin:${PATH}
     export PATH
in order for ADOL-C to run properly. If adolc_prefix begins with a disk specification, you must use the Cygwin format for the disk specification. For example, if d:/adolc_base is the proper directory, /cygdrive/d/adolc_base should be used for adolc_prefix .

2.2.1.g: get_adolc
If you are using Unix, you can download and install a copy of Adolc using 2.2.1.1: get_adolc.sh . The corresponding adolc_prefix would be build/prefix.
Input File: omh/install/adolc_prefix.omh
2.2.1.1: Download and Install Adolc in Build Directory

2.2.1.1.a: Syntax
bin/get_adolc.sh


2.2.1.1.b: Purpose
If you are using Unix, this command will download and install ADOL-C (https://projects.coin-or.org/ADOL-C) in the CppAD build directory.

2.2.1.1.c: Requirements
You must first use 2.2.2.5: get_colpack.sh to download and install ColPack (coloring algorithms used for sparse matrix derivatives).

2.2.1.1.d: Distribution Directory
This command must be executed in the 2.1.b: distribution directory .

2.2.1.1.e: External Directory
The Adolc source code is downloaded into the sub-directory build/external below the distribution directory.

2.2.1.1.f: Prefix Directory
The Adolc include files are installed in the sub-directory build/prefix/include/adolc below the distribution directory.

2.2.1.1.g: Reuse
The file build/external/ADOL-C-version.tgz and the directory build/external/ADOL-C-version will be reused if they exist. Delete this file and directory to get a complete rebuild.
Input File: bin/get_adolc.sh
2.2.2: Including the ColPack Sparsity Calculations

2.2.2.a: Purpose
If you specify a colpack_prefix on the 2.2.b: cmake command line, the CppAD 5.6.2: sparse_jacobian calculations use the ColPack (http://cscapes.cs.purdue.edu/dox/ColPack/html) package.

2.2.2.b: colpack_prefix
If ColPack is installed on your system, you can specify a value for its install colpack_prefix on the 2.2: cmake command line. The value of colpack_prefix must be such that, for one of the directories dir in 2.2.h: cmake_install_includedirs ,
     colpack_prefix/dir/ColPack/ColPackHeaders.h
is a valid way to reference the include file ColPackHeaders.h.

2.2.2.c: cppad_lib
The ColPack header files have a
     using namespace std
at the global level. For this reason, CppAD does not include these files. It is therefore necessary to link the object library cppad_lib when using ColPack.
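For example, a hypothetical compile and link line, assuming cppad_prefix is /usr and cmake_install_libdirs is lib (the program name my_colpack_program is illustrative), might be:
     g++ my_colpack_program.cpp -I/usr/include -L/usr/lib -lcppad_lib -o my_colpack_program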

2.2.2.d: Example
The file 2.2.2.1: colpack_jac.cpp (2.2.2.3: colpack_hes.cpp ) contains an example and test of using ColPack to compute the coloring for sparse Jacobians (Hessians). It returns true if it succeeds and false otherwise.

2.2.2.e: get_colpack
If you are using Unix, you can download and install a copy of ColPack using 2.2.2.5: get_colpack.sh . The corresponding colpack_prefix would be build/prefix.
Input File: omh/install/colpack_prefix.omh
2.2.2.1: ColPack: Sparse Jacobian Example and Test

# include <cppad/cppad.hpp>
bool colpack_jac(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     typedef CPPAD_TESTVECTOR(AD<double>)            a_vector;
     typedef CPPAD_TESTVECTOR(double)                d_vector;
     typedef CppAD::vector<size_t>                   i_vector;
     typedef CppAD::sparse_rc<i_vector>              sparsity;
     typedef CppAD::sparse_rcv<i_vector, d_vector>   sparse_matrix;

     // domain space vector
     size_t n = 4;
     a_vector  a_x(n);
     for(size_t j = 0; j < n; j++)
          a_x[j] = AD<double> (0);

     // declare independent variables and starting recording
     CppAD::Independent(a_x);

     size_t m = 3;
     a_vector  a_y(m);
     a_y[0] = a_x[0] + a_x[1];
     a_y[1] = a_x[2] + a_x[3];
     a_y[2] = a_x[0] + a_x[1] + a_x[2] + a_x[3] * a_x[3] / 2.;

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(a_x, a_y);

     // new value for the independent variable vector
     d_vector x(n);
     for(size_t j = 0; j < n; j++)
          x[j] = double(j);

     /*
           [ 1 1 0 0  ]
     jac = [ 0 0 1 1  ]
           [ 1 1 1 x_3]
     */
     // Normally one would use CppAD to compute sparsity pattern, but for this
     // example we set it directly
     size_t nr  = m;
     size_t nc  = n;
     size_t nnz = 8;
     sparsity pattern(nr, nc, nnz);
     d_vector check(nnz);
     for(size_t k = 0; k < nnz; k++)
     {     size_t r, c;
          if( k < 2 )
          {     r = 0;
               c = k;
          }
          else if( k < 4 )
          {     r = 1;
               c = k;
          }
          else
          {     r = 2;
               c = k - 4;
          }
          pattern.set(k, r, c);
          if( k == nnz - 1 )
               check[k] = x[3];
          else
               check[k] = 1.0;
     }

     // using row and column indices to compute non-zero in rows 1 and 2
     sparse_matrix subset( pattern );

     // check results for both CppAD and Colpack
     for(size_t i_method = 0; i_method < 4; i_method++)
     {     // coloring method
          std::string coloring;
          if( i_method % 2 == 0 )
               coloring = "cppad";
          else
               coloring = "colpack";
          //
          CppAD::sparse_jac_work work;
          size_t group_max = 1;
          if( i_method / 2 == 0 )
          {     size_t n_sweep = f.sparse_jac_for(
                    group_max, x, subset, pattern, coloring, work
               );
               ok &= n_sweep == 4;
          }
          else
          {     size_t n_sweep = f.sparse_jac_rev(
                    x, subset, pattern, coloring, work
               );
               ok &= n_sweep == 2;
          }
          const d_vector& jac( subset.val() );
          for(size_t k = 0; k < nnz; k++)
               ok &= check[k] == jac[k];
     }
     return ok;
}

Input File: example/sparse/colpack_jac.cpp
2.2.2.2: ColPack: Sparse Jacobian Example and Test

# include <cppad/cppad.hpp>
bool colpack_jacobian(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     typedef CPPAD_TESTVECTOR(AD<double>) a_vector;
     typedef CPPAD_TESTVECTOR(double)     d_vector;
     typedef CppAD::vector<size_t>        i_vector;
     size_t i, j, k, ell;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n = 4;
     a_vector  a_x(n);
     for(j = 0; j < n; j++)
          a_x[j] = AD<double> (0);

     // declare independent variables and starting recording
     CppAD::Independent(a_x);

     size_t m = 3;
     a_vector  a_y(m);
     a_y[0] = a_x[0] + a_x[1];
     a_y[1] = a_x[2] + a_x[3];
     a_y[2] = a_x[0] + a_x[1] + a_x[2] + a_x[3] * a_x[3] / 2.;

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(a_x, a_y);

     // new value for the independent variable vector
     d_vector x(n);
     for(j = 0; j < n; j++)
          x[j] = double(j);

     /*
           [ 1 1 0 0  ]
     jac = [ 0 0 1 1  ]
           [ 1 1 1 x_3]
     */
     d_vector check(m * n);
     check[0] = 1.; check[1] = 1.; check[2]  = 0.; check[3]  = 0.;
     check[4] = 0.; check[5] = 0.; check[6]  = 1.; check[7]  = 1.;
     check[8] = 1.; check[9] = 1.; check[10] = 1.; check[11] = x[3];

     // Normally one would use f.ForSparseJac or f.RevSparseJac to compute
     // sparsity pattern, but for this example we extract it from check.
     std::vector< std::set<size_t> >  p(m);

     // using row and column indices to compute non-zero in rows 1 and 2
     i_vector row, col;
     for(i = 0; i < m; i++)
     {     for(j = 0; j < n; j++)
          {     ell = i * n + j;
               if( check[ell] != 0. )
               {     row.push_back(i);
                    col.push_back(j);
                    p[i].insert(j);
               }
          }
     }
     size_t K = row.size();
     d_vector jac(K);

     // empty work structure
     CppAD::sparse_jacobian_work work;
     ok &= work.color_method == "cppad";

     // choose to use ColPack
     work.color_method = "colpack";

     // forward mode
     size_t n_sweep = f.SparseJacobianForward(x, p, row, col, jac, work);
     for(k = 0; k < K; k++)
     {     ell = row[k] * n + col[k];
          ok &= NearEqual(check[ell], jac[k], eps, eps);
     }
     ok &= n_sweep == 4;

     // reverse mode
     work.clear();
     work.color_method = "colpack";
     n_sweep = f.SparseJacobianReverse(x, p, row, col, jac, work);
     for(k = 0; k < K; k++)
     {     ell = row[k] * n + col[k];
          ok &= NearEqual(check[ell], jac[k], eps, eps);
     }
     ok &= n_sweep == 2;

     return ok;
}

Input File: example/sparse/colpack_jacobian.cpp
2.2.2.3: ColPack: Sparse Hessian Example and Test

# include <cppad/cppad.hpp>
bool colpack_hes(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     typedef CPPAD_TESTVECTOR(AD<double>)            a_vector;
     typedef CPPAD_TESTVECTOR(double)                d_vector;
     typedef CppAD::vector<size_t>                   i_vector;
     typedef CppAD::sparse_rc<i_vector>              sparsity;
     typedef CppAD::sparse_rcv<i_vector, d_vector>   sparse_matrix;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();
     //
     // domain space vector
     size_t n = 5;
     a_vector  a_x(n);
     for(size_t j = 0; j < n; j++)
          a_x[j] = AD<double> (0);
     //
     // declare independent variables and starting recording
     CppAD::Independent(a_x);

     // colpack example case where hessian is a spear head
     // i.e, H(i, j) non zero implies i = 0, j = 0, or i = j
     AD<double> sum = 0.0;
     // partial_0 partial_j = x[j]
     // partial_j partial_j = x[0]
     for(size_t j = 1; j < n; j++)
          sum += a_x[0] * a_x[j] * a_x[j] / 2.0;
     //
     // partial_i partial_i = 2 * x[i]
     for(size_t i = 0; i < n; i++)
          sum += a_x[i] * a_x[i] * a_x[i] / 3.0;

     // declare dependent variables
     size_t m = 1;
     a_vector  a_y(m);
     a_y[0] = sum;

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(a_x, a_y);

     // new value for the independent variable vector
     d_vector x(n);
     for(size_t j = 0; j < n; j++)
          x[j] = double(j + 1);

     /*
           [ 2  2  3  4  5 ]
     hes = [ 2  5  0  0  0 ]
           [ 3  0  7  0  0 ]
           [ 4  0  0  9  0 ]
           [ 5  0  0  0 11 ]
     */
     // Normally one would use CppAD to compute sparsity pattern, but for this
     // example we set it directly
     size_t nr  = n;
     size_t nc  = n;
     size_t nnz = n + 2 * (n - 1);
     sparsity pattern(nr, nc, nnz);
     for(size_t k = 0; k < n; k++)
     {     size_t r = k;
          size_t c = k;
          pattern.set(k, r, c);
     }
     for(size_t i = 1; i < n; i++)
     {     size_t k = n + 2 * (i - 1);
          size_t r = i;
          size_t c = 0;
          pattern.set(k,   r, c);
          pattern.set(k+1, c, r);
     }

     // subset of elements to compute
     // (only compute lower triangle)
     nnz = n + (n - 1);
     sparsity lower_triangle(nr, nc, nnz);
     d_vector check(nnz);
     for(size_t k = 0; k < n; k++)
     {     size_t r = k;
          size_t c = k;
          lower_triangle.set(k, r, c);
          check[k] = 2.0 * x[k];
          if( k > 0 )
               check[k] += x[0];
     }
     for(size_t j = 1; j < n; j++)
     {     size_t k = n + (j - 1);
          size_t r = 0;
          size_t c = j;
          lower_triangle.set(k, r, c);
          check[k] = x[c];
     }
     sparse_matrix subset( lower_triangle );

     // check results for both CppAD and Colpack
     for(size_t i_method = 0; i_method < 4; i_method++)
     {     // coloring method
          std::string coloring;
          switch(i_method)
          {     case 0:
               coloring = "cppad.symmetric";
               break;

               case 1:
               coloring = "cppad.general";
               break;

               case 2:
               coloring = "colpack.symmetric";
               break;

               case 3:
               coloring = "colpack.general";
               break;
          }
          //
          // compute Hessian
          CppAD::sparse_hes_work work;
          d_vector w(m);
          w[0] = 1.0;
          size_t n_sweep = f.sparse_hes(
               x, w, subset, pattern, coloring, work
          );
          //
          // check result
          const d_vector& hes( subset.val() );
          for(size_t k = 0; k < nnz; k++)
               ok &= NearEqual(check[k], hes[k], eps, eps);
          if(
               coloring == "cppad.symmetric"
          ||     coloring == "colpack.symmetric"
          )
               ok &= n_sweep == 2;
          else
               ok &= n_sweep == 5;
     }

     return ok;
}

Input File: example/sparse/colpack_hes.cpp
2.2.2.4: ColPack: Sparse Hessian Example and Test

# include <cppad/cppad.hpp>
bool colpack_hessian(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     typedef CPPAD_TESTVECTOR(AD<double>) a_vector;
     typedef CPPAD_TESTVECTOR(double)     d_vector;
     typedef CppAD::vector<size_t>        i_vector;
     size_t i, j, k, ell;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n = 5;
     a_vector  a_x(n);
     for(j = 0; j < n; j++)
          a_x[j] = AD<double> (0);

     // declare independent variables and starting recording
     CppAD::Independent(a_x);

     // colpack example case where hessian is a spear head
     // i.e, H(i, j) non zero implies i = 0, j = 0, or i = j
     AD<double> sum = 0.0;
     // partial_0 partial_j = x[j]
     // partial_j partial_j = x[0]
     for(j = 1; j < n; j++)
          sum += a_x[0] * a_x[j] * a_x[j] / 2.0;
     //
     // partial_i partial_i = 2 * x[i]
     for(i = 0; i < n; i++)
          sum += a_x[i] * a_x[i] * a_x[i] / 3.0;

     // declare dependent variables
     size_t m = 1;
     a_vector  a_y(m);
     a_y[0] = sum;

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(a_x, a_y);

     // new value for the independent variable vector
     d_vector x(n);
     for(j = 0; j < n; j++)
          x[j] = double(j + 1);

     /*
           [ 2  2  3  4  5 ]
     hes = [ 2  5  0  0  0 ]
           [ 3  0  7  0  0 ]
           [ 4  0  0  9  0 ]
           [ 5  0  0  0 11 ]
     */
     d_vector check(n * n);
     for(i = 0; i < n; i++)
     {     for(j = 0; j < n; j++)
          {     size_t index = i * n + j;
               check[index] = 0.0;
               if( i == 0 && 1 <= j )
                    check[index] += x[j];
               if( 1 <= i && j == 0 )
                    check[index] += x[i];
               if( i == j )
               {     check[index] += 2.0 * x[i];
                    if( i != 0 )
                         check[index] += x[0];
               }
          }
     }
     // Normally one would use f.RevSparseHes to compute
     // sparsity pattern, but for this example we extract it from check.
     std::vector< std::set<size_t> >  p(n);
     i_vector row, col;
     for(i = 0; i < n; i++)
     {     for(j = 0; j < n; j++)
          {     ell = i * n + j;
               if( check[ell] != 0. )
               {     // insert this non-zero entry in sparsity pattern
                    p[i].insert(j);

                    // the Hessian is symmetric, so only lower triangle
                    if( j <= i )
                    {     row.push_back(i);
                         col.push_back(j);
                    }
               }
          }
     }
     size_t K = row.size();
     d_vector hes(K);

     // default coloring method is cppad.symmetric
     CppAD::sparse_hessian_work work;
     ok &= work.color_method == "cppad.symmetric";

     // contrast and check results for both CppAD and Colpack
     for(size_t i_method = 0; i_method < 4; i_method++)
     {     // empty work structure
          switch(i_method)
          {     case 0:
               work.color_method = "cppad.symmetric";
               break;

               case 1:
               work.color_method = "cppad.general";
               break;

               case 2:
               work.color_method = "colpack.symmetric";
               break;

               case 3:
               work.color_method = "colpack.general";
               break;
          }

          // compute Hessian
          d_vector w(m);
          w[0] = 1.0;
          size_t n_sweep = f.SparseHessian(x, w, p, row, col, hes, work);
          //
          // check result
          for(k = 0; k < K; k++)
          {     ell = row[k] * n + col[k];
               ok &= NearEqual(check[ell], hes[k], eps, eps);
          }
          if(
               work.color_method == "cppad.symmetric"
          ||     work.color_method == "colpack.symmetric"
          )
               ok &= n_sweep == 2;
          else
               ok &= n_sweep == 5;
          //
          // check that clear resets color_method to cppad.symmetric
          work.clear();
          ok &= work.color_method == "cppad.symmetric";
     }

     return ok;
}

Input File: example/sparse/colpack_hessian.cpp
2.2.2.5: Download and Install ColPack in Build Directory

2.2.2.5.a: Syntax
bin/get_colpack.sh


2.2.2.5.b: Purpose
If you are using Unix, this command will download and install ColPack (http://cscapes.cs.purdue.edu/dox/ColPack/html/) in the CppAD build directory.

2.2.2.5.c: Distribution Directory
This command must be executed in the 2.1.b: distribution directory .

2.2.2.5.d: External Directory
The ColPack source code is downloaded into the sub-directory build/external below the distribution directory.

2.2.2.5.e: Prefix Directory
The ColPack include files are installed in the sub-directory build/prefix/include/ColPack below the distribution directory.

2.2.2.5.f: Reuse
The file build/external/ColPack-version.tar.gz and the directory build/external/ColPack-version will be reused if they exist. Delete this file and directory to get a complete rebuild.
Input File: bin/get_colpack.sh
2.2.3: Including the Eigen Examples and Tests

2.2.3.a: Purpose
CppAD can include the following examples and tests that use the linear algebra package Eigen (http://eigen.tuxfamily.org) :
10.2.4: cppad_eigen.hpp Enable Use of Eigen Linear Algebra Package with CppAD
10.2.4.2: eigen_array.cpp Using Eigen Arrays: Example and Test
10.2.4.3: eigen_det.cpp Using Eigen To Compute Determinant: Example and Test
4.4.7.2.16.1: atomic_eigen_mat_mul.hpp Atomic Eigen Matrix Multiply Class
4.4.7.2.17.1: atomic_eigen_mat_inv.hpp Atomic Eigen Matrix Inversion Class

2.2.3.b: eigen_prefix
If Eigen is installed on your system, you can specify a value for its install eigen_prefix on the 2.2: cmake command line. The value of eigen_prefix must be such that, for one of the directories dir in 2.2.h: cmake_install_includedirs ,
     eigen_prefix/dir/Eigen/Core
is a valid way to reference the include file Core.

2.2.3.c: Examples
If you include eigen_prefix on the 2.2: cmake command line, you will be able to run the Eigen examples listed above by executing the following commands starting in the 2.1.b: distribution directory :
     cd build/example
     make check_example
If you do this, you will see an indication that the examples eigen_array and eigen_det have passed their correctness check.

2.2.3.d: Test Vector
If you have specified eigen_prefix you can choose
     -D cppad_testvector=eigen
on the 2.2.b: cmake command line. This sets the CppAD 10.5: testvector to use Eigen vectors.

2.2.3.e: get_eigen
If you are using Unix, you can download and install a copy of Eigen using 2.2.3.1: get_eigen.sh . The corresponding eigen_prefix would be build/prefix.
Input File: omh/install/eigen_prefix.omh
2.2.3.1: Download and Install Eigen in Build Directory

2.2.3.1.a: Syntax
bin/get_eigen.sh


2.2.3.1.b: Purpose
If you are using Unix, this command will download and install Eigen (http://eigen.tuxfamily.org) in the CppAD build directory.

2.2.3.1.c: Distribution Directory
This command must be executed in the 2.1.b: distribution directory .

2.2.3.1.d: External Directory
The Eigen source code is downloaded into the sub-directory build/external below the distribution directory.

2.2.3.1.e: Prefix Directory
The Eigen include files are installed in the sub-directory build/prefix/include/Eigen below the distribution directory.

2.2.3.1.f: Reuse
The file build/external/eigen-version.tar.gz and the directory build/external/eigen-version will be reused if they exist. Delete this file and directory to get a complete rebuild.
Input File: bin/get_eigen.sh
2.2.4: Including the FADBAD Speed Tests

2.2.4.a: Purpose
CppAD includes speed comparisons for the AD package FADBAD (http://www.fadbad.com) ; see 11.6: speed_fadbad .

2.2.4.b: fadbad_prefix
If FADBAD is installed on your system, you can specify a value for its install fadbad_prefix on the 2.2: cmake command line. The value of fadbad_prefix must be such that, for one of the directories dir in 2.2.h: cmake_install_includedirs ,
     fadbad_prefix/dir/FADBAD++/badiff.h
is a valid way to reference the include file badiff.h.

2.2.4.c: Speed Tests
If you include fadbad_prefix on the 2.2: cmake command line, you will be able to run the FADBAD speed correctness tests by executing the following commands starting in the 2.1.b: distribution directory :
     cd build/speed/fadbad
     make check_speed_fadbad
After executing make check_speed_fadbad, you can run a specific FADBAD speed test by executing the command ./speed_fadbad; see 11.1: speed_main for the meaning of the command line options to this program.

2.2.4.d: get_fadbad
If you are using Unix, you can download and install a copy of Fadbad using 2.2.4.1: get_fadbad.sh . The corresponding fadbad_prefix would be build/prefix.
Input File: omh/install/fadbad_prefix.omh
2.2.4.1: Download and Install Fadbad in Build Directory

2.2.4.1.a: Syntax
bin/get_fadbad.sh


2.2.4.1.b: Purpose
If you are using Unix, this command will download and install Fadbad (http://www.fadbad.com) in the CppAD build directory.

2.2.4.1.c: Distribution Directory
This command must be executed in the 2.1.b: distribution directory .

2.2.4.1.d: External Directory
The Fadbad source code is downloaded into the sub-directory build/external below the distribution directory.

2.2.4.1.e: Prefix Directory
The Fadbad include files are installed in the sub-directory build/prefix/include/FADBAD++ below the distribution directory.
Input File: bin/get_fadbad.sh
2.2.5: Including the cppad_ipopt Library and Tests

2.2.5.a: Purpose
Includes the 12.8.10: cppad_ipopt_nlp example and tests, and installs the cppad_ipopt library during the make install step.

2.2.5.b: ipopt_prefix
If you have Ipopt (http://www.coin-or.org/projects/Ipopt.xml) installed on your system, you can specify the value for ipopt_prefix on the 2.2: cmake command line. The value of ipopt_prefix must be such that, for one of the directories dir in 2.2.h: cmake_install_includedirs ,
     ipopt_prefix/dir/coin/IpIpoptApplication.hpp
is a valid way to reference the include file IpIpoptApplication.hpp.

2.2.5.c: Examples and Tests
If you include ipopt_prefix on the 2.2: cmake command line, you will be able to run the Ipopt examples and tests by executing the following commands starting in the 2.1.b: distribution directory :
     cd cppad_ipopt
     make check_ipopt

2.2.5.d: get_ipopt
If you are using Unix, you can download and install a copy of Ipopt using 2.2.5.1: get_ipopt.sh . The corresponding ipopt_prefix would be build/prefix.
Input File: omh/install/ipopt_prefix.omh
2.2.5.1: Download and Install Ipopt in Build Directory

2.2.5.1.a: Syntax
bin/get_ipopt.sh


2.2.5.1.b: Purpose
If you are using Unix, this command will download and install Ipopt (http://www.coin-or.org/projects/Ipopt.xml) in the CppAD build directory.

2.2.5.1.c: Distribution Directory
This command must be executed in the 2.1.b: distribution directory .

2.2.5.1.d: External Directory
The Ipopt source code is downloaded and compiled in the sub-directory build/external below the distribution directory.

2.2.5.1.e: Prefix Directory
The Ipopt libraries and include files are installed in the sub-directory build/prefix below the distribution directory.

2.2.5.1.f: Reuse
The file build/external/Ipopt-version.tgz and the directory build/external/Ipopt-version will be reused if they exist. Delete this file and directory to get a complete rebuild.
Input File: bin/get_ipopt.sh
2.2.6: Including the Sacado Speed Tests

2.2.6.a: Purpose
CppAD includes speed comparisons for the AD package Sacado (http://trilinos.sandia.gov/packages/sacado) ; see 11.7: speed_sacado .

2.2.6.b: sacado_prefix
If Sacado is installed on your system, you can specify a value for its install sacado_prefix on the 2.2: cmake command line. The value of sacado_prefix must be such that, for one of the directories dir in 2.2.h: cmake_install_includedirs ,
     sacado_prefix/dir/Sacado.hpp
is a valid way to reference the include file Sacado.hpp.

2.2.6.c: Speed Tests
If you include sacado_prefix on the 2.2: cmake command line, you will be able to run the Sacado speed correctness tests by executing the following commands starting in the 2.1.b: distribution directory :
     cd build/speed/sacado
     make check_speed_sacado
After executing make check_speed_sacado, you can run a specific Sacado speed test by executing the command ./speed_sacado; see 11.1: speed_main for the meaning of the command line options to this program.

2.2.6.d: get_sacado
If you are using Unix, you can download and install a copy of Sacado using 2.2.6.1: get_sacado.sh . The corresponding sacado_prefix would be build/prefix.
Input File: omh/install/sacado_prefix.omh
2.2.6.1: Download and Install Sacado in Build Directory

2.2.6.1.a: Syntax
bin/get_sacado.sh


2.2.6.1.b: Purpose
If you are using Unix, this command will download and install Sacado (http://trilinos.sandia.gov/packages/sacado) in the CppAD build directory.

2.2.6.1.c: Distribution Directory
This command must be executed in the 2.1.b: distribution directory .

2.2.6.1.d: External Directory
The Sacado source code is downloaded into the sub-directory build/external below the distribution directory.

2.2.6.1.e: Prefix Directory
The Sacado libraries and include files are installed in the sub-directory build/prefix below the distribution directory.

2.2.6.1.f: Reuse
The file build/external/trilinos-version-Source.tar.gz and the directory build/external/trilinos-version-Source will be reused if they exist. Delete this file and directory to get a complete rebuild.
Input File: bin/get_sacado.sh
2.2.7: Choosing the CppAD Test Vector Template Class

2.2.7.a: Purpose
The value cppad_testvector in the 2.2.b: cmake command must be one of the following: boost, cppad, eigen, or std. It specifies the type of vector that corresponds to the template class 10.5: CPPAD_TESTVECTOR , which is used for many of the CppAD examples and tests.

2.2.7.b: std
If cppad_testvector is std , the std::vector template class is used to define CPPAD_TESTVECTOR.

2.2.7.c: cppad
If cppad_testvector is cppad , the 8.22: cppad_vector template class is used to define CPPAD_TESTVECTOR.

2.2.7.d: boost
If cppad_testvector is boost , the boost ublas vector (http://www.boost.org/doc/libs/1_52_0/libs/numeric/ublas/doc/vector.htm) template class is used to define CPPAD_TESTVECTOR. In this case, the cmake FindBoost (http://www.cmake.org/cmake/help/cmake2.6docs.html#module:FindBoost) module must be able to automatically figure out where Boost is installed.

2.2.7.e: eigen
If cppad_testvector is eigen , one of the eigen template classes is used to define CPPAD_TESTVECTOR. In this case, 2.2.3: eigen_prefix must be specified on the cmake command line.
Input File: omh/install/testvector.omh
2.3: Checking the CppAD Examples and Tests

2.3.a: Purpose
After you configure your system with the 2.2.b: cmake command you can run the CppAD example and tests to make sure that CppAD functions properly on your system.

2.3.b: Check All
In the build subdirectory of the 2.1.b: distribution directory execute the command
 
     make check
This will build and run all of the tests that are supported by your system and the 2.2: cmake command options.

2.3.b.a: Windows
If you created nmake makefiles, you will have to use nmake instead of make in the commands above and below; see 2.1.i: windows file extraction and testing .

2.3.c: Subsets of make check
In Unix, you can determine which subsets of make check are available by putting the output of the 2.2.b: cmake command in a file (called cmake.out below) and executing:
     grep 'make check.*available' cmake.out

2.3.d: First Level
The first level of subsets of make check are described below:
Command Description
make check_introduction the 3: Introduction functions
make check_example the normal 10.4: example functions plus some deprecated examples.
make check_test_more correctness tests that are not examples
make check_speed correctness for single thread 11: speed tests
make check_cppad_ipopt the deprecated 12.8.10: cppad_ipopt_nlp speed and correctness tests
Note that make check_example_multi_thread is used for the 7: multi-threading speed tests.
Input File: omh/install/cmake_check.omh
2.4: CppAD pkg-config Files

2.4.a: Purpose
The pkg-config package helps with the use of installed libraries; see its guide (http://people.freedesktop.org/~dbn/pkg-config-guide.html) for more information.

2.4.b: Usage
The necessary flags for compiling code that includes CppAD can be obtained with the command
 
     pkg-config --cflags cppad
Note that this command assumes 2.4: cppad.pc is in the search path PKG_CONFIG_PATH. If 2.2.5: ipopt_prefix is specified, the necessary flags for linking 12.8.10: cppad_ipopt can be obtained with the command
 
     pkg-config --libs cppad
Note that this command assumes ipopt.pc is in the search path PKG_CONFIG_PATH.
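For example, a minimal sketch of a command that compiles and links a program prog.cpp (a hypothetical file name used only for illustration) with these flags is
     g++ prog.cpp -o prog $(pkg-config --cflags cppad) $(pkg-config --libs cppad)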

2.4.c: Defined Fields
Name A human-readable name for the CppAD package.
Description A brief description of the CppAD package.
URL A URL where people can get more information about the CppAD package.
Version A string specifically defining the version of the CppAD package.
Cflags The necessary flags for using any of the CppAD include files.
Libs If 2.2.5: ipopt_prefix is specified, the necessary flags for using the 12.8.10: cppad_ipopt library.
Requires If 2.2.5: ipopt_prefix is specified, the packages required to use the 12.8.10: cppad_ipopt library.

2.4.d: CppAD Configuration Files
In the table below, builddir is the build directory; i.e., the directory where the CppAD 2.2.b: cmake command is executed. The directory prefix is the value of 2.2.f: cppad_prefix during configuration. The directory datadir is the value of 2.2.j: cmake_install_datadir . The following configuration files contain the information above
File Description
prefix/datadir/pkgconfig/cppad.pc for use after 2.a.d: make install
builddir/pkgconfig/cppad-uninstalled.pc for testing before make install
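For example, assuming a Bourne-compatible shell and the prefix and datadir values above, one way to add the installed cppad.pc to the pkg-config search path is
     export PKG_CONFIG_PATH="$PKG_CONFIG_PATH:prefix/datadir/pkgconfig"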

Input File: omh/install/pkgconfig.omh
3: An Introduction by Example to Algorithmic Differentiation

3.a: Purpose
This is an introduction by example to Algorithmic Differentiation. Its purpose is to aid in understanding what AD calculates, how the calculations are performed, and the amount of computation and memory required for a forward or reverse sweep.

3.b: Preface

3.b.a: Algorithmic Differentiation
Algorithmic Differentiation (often referred to as Automatic Differentiation or just AD) uses the software representation of a function to obtain an efficient method for calculating its derivatives. These derivatives can be of arbitrary order and are analytic in nature (do not have any truncation error).

3.b.b: Forward Mode
A forward mode sweep computes the partial derivative of all the dependent variables with respect to one independent variable (or independent variable direction).

3.b.c: Reverse Mode
A reverse mode sweep computes the derivative of one dependent variable (or one dependent variable direction) with respect to all the independent variables.

3.b.d: Operation Count
The number of floating point operations for either a forward or reverse mode sweep is a small multiple of the number required to evaluate the original function. Thus, using reverse mode, you can evaluate the derivative of a scalar valued function with respect to thousands of variables in a small multiple of the work to evaluate the original function.

3.b.e: Efficiency
AD automatically takes advantage of the speed of your algorithmic representation of a function. For example, if you calculate a determinant using LU factorization, AD will use the LU representation for the derivative of the determinant (which is faster than using the definition of the determinant).

3.c: Outline
  1. Demonstrate the use of CppAD to calculate derivatives of a polynomial: 10.1: get_started.cpp .
  2. Present two algorithms that approximate the exponential function. The first algorithm 3.1.1: exp_2.hpp is simpler and does not include any logical variables or loops. The second algorithm 3.2.1: exp_eps.hpp includes logical operations and a while loop. For each of these algorithms, do the following:
    1. Define the mathematical function corresponding to the algorithm (3.1: exp_2 and 3.2: exp_eps ).
    2. Write out the floating point operation sequence, and the corresponding values, that result from executing the algorithm for a specific input (3.1.3: exp_2_for0 and 3.2.3: exp_eps_for0 ).
    3. Compute a forward sweep derivative of the operation sequence (3.1.4: exp_2_for1 and 3.2.4: exp_eps_for1 ).
    4. Compute a reverse sweep derivative of the operation sequence (3.1.5: exp_2_rev1 and 3.2.5: exp_eps_rev1 ).
    5. Use CppAD to compute both a forward and reverse sweep of the operation sequence (3.1.8: exp_2_cppad and 3.2.8: exp_eps_cppad ).
  3. The program 3.3: exp_apx.cpp runs all of the test routines that validate the calculations in the 3.1: exp_2 and 3.2: exp_eps presentation.


3.d: Reference
An in-depth review of AD theory and methods can be found in the book Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, Andreas Griewank, SIAM Frontiers in Applied Mathematics, 2000.

3.e: Contents
exp_2: 3.1 Second Order Exponential Approximation
exp_eps: 3.2 An Epsilon Accurate Exponential Approximation
exp_apx.cpp: 3.3 Correctness Tests For Exponential Approximation in Introduction

Input File: omh/introduction.omh
3.1: Second Order Exponential Approximation

3.1.a: Syntax
# include "exp_2.hpp"
y = exp_2(x)

3.1.b: Purpose
This is a simple example algorithm that is used to demonstrate Algorithmic Differentiation (see 3.2: exp_eps for a more complex example).

3.1.c: Mathematical Form
The exponential function can be defined by @[@ \exp (x) = 1 + x^1 / 1 ! + x^2 / 2 ! + \cdots @]@ The second order approximation for the exponential function is @[@ {\rm exp\_2} (x) = 1 + x + x^2 / 2 @]@

3.1.d: include
The include command in the syntax is relative to
     cppad-yyyymmdd/introduction/exp_apx
where cppad-yyyymmdd is the distribution directory created during the beginning steps of the 2: installation of CppAD.

3.1.e: x
The argument x has prototype
     const Type &x
(see Type below). It specifies the point at which to evaluate the second order exponential approximation.

3.1.f: y
The result y has prototype
     Type y
It is the value of the exponential function approximation defined above.

3.1.g: Type
If u and v are Type objects and i is an int:
Operation Result Type Description
Type(i) Type construct object with value equal to i
Type u = v Type construct u with value equal to v
u * v Type result is value of @(@ u * v @)@
u / v Type result is value of @(@ u / v @)@
u + v Type result is value of @(@ u + v @)@

3.1.h: Contents
exp_2.hpp: 3.1.1 exp_2: Implementation
exp_2.cpp: 3.1.2 exp_2: Test
exp_2_for0: 3.1.3 exp_2: Operation Sequence and Zero Order Forward Mode
exp_2_for1: 3.1.4 exp_2: First Order Forward Mode
exp_2_rev1: 3.1.5 exp_2: First Order Reverse Mode
exp_2_for2: 3.1.6 exp_2: Second Order Forward Mode
exp_2_rev2: 3.1.7 exp_2: Second Order Reverse Mode
exp_2_cppad: 3.1.8 exp_2: CppAD Forward and Reverse Sweeps

3.1.i: Implementation
The file 3.1.1: exp_2.hpp contains a C++ implementation of this function.

3.1.j: Test
The file 3.1.2: exp_2.cpp contains a test of this implementation. It returns true for success and false for failure.

3.1.k: Exercises
  1. Suppose that we make the call
     
         double x = .1;
         double y = exp_2(x);
    
    What is the value assigned to v1, v2, ... ,v5 in 3.1.1: exp_2.hpp ?
  2. Extend the routine exp_2.hpp to a routine exp_3.hpp that computes @[@ 1 + x^2 / 2 ! + x^3 / 3 ! @]@ Do this in a way that only assigns one value to each variable (as exp_2 does).
  3. Suppose that we make the call
     
         double x = .5;
         double y = exp_3(x);
    
    using exp_3 created in the previous problem. What is the value assigned to the new variables in exp_3 (variables that are in exp_3 and not in exp_2) ?

Input File: introduction/exp_2.hpp
3.1.1: exp_2: Implementation
template <class Type>
Type exp_2(const Type &x)
{       Type v1  = x;                // v1 = x
        Type v2  = Type(1) + v1;     // v2 = 1 + x
        Type v3  = v1 * v1;          // v3 = x^2
        Type v4  = v3 / Type(2);     // v4 = x^2 / 2
        Type v5  = v2 + v4;          // v5 = 1 + x + x^2 / 2
        return v5;                   // exp_2(x) = 1 + x + x^2 / 2
}

Input File: introduction/exp_2.omh
3.1.2: exp_2: Test
# include <cmath>           // define fabs function
# include "exp_2.hpp"       // definition of exp_2 algorithm
bool exp_2(void)
{     double x     = .5;
     double check = 1 + x + x * x / 2.;
     bool   ok    = std::fabs( exp_2(x) - check ) <= 1e-10;
     return ok;
}

Input File: introduction/exp_2.omh
3.1.3: exp_2: Operation Sequence and Zero Order Forward Mode

3.1.3.a: Mathematical Form
The operation sequence (see below) corresponding to the algorithm 3.1.1: exp_2.hpp is the same for all values of x . The mathematical form for the corresponding function is @[@ f(x) = 1 + x + x^2 / 2 @]@ An algorithmic differentiation package does not operate on the mathematical function @(@ f(x) @)@ but rather on the particular algorithm used to compute the function (in this case 3.1.1: exp_2.hpp ).

3.1.3.b: Zero Order Expansion
In general, a zero order forward sweep is given a vector @(@ x^{(0)} \in \B{R}^n @)@ and it returns the corresponding vector @(@ y^{(0)} \in \B{R}^m @)@ given by @[@ y^{(0)} = f( x^{(0)} ) @]@ The superscript @(@ (0) @)@ denotes zero order derivative; i.e., it is equal to the value of the corresponding variable. For the example we are considering here, both @(@ n @)@ and @(@ m @)@ are equal to one.

3.1.3.c: Operation Sequence
An atomic Type operation is an operation that has a Type result and is not made up of other more basic operations. A sequence of atomic Type operations is called a Type operation sequence. Given a C++ algorithm and its inputs, there is a corresponding Type operation sequence for each type. If Type is clear from the context, we drop it and just refer to the operation sequence.

We consider the case where 3.1.1: exp_2.hpp is executed with @(@ x^{(0)} = .5 @)@. The table below contains the corresponding operation sequence and the results of a zero order sweep.

3.1.3.c.a: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation and variable. A Forward sweep starts with the first operation and ends with the last.

3.1.3.c.b: Code
The Code column contains the C++ source code for the corresponding atomic operation in the sequence.

3.1.3.c.c: Operation
The Operation column contains the mathematical function corresponding to each atomic operation in the sequence.

3.1.3.c.d: Zero Order
The Zero Order column contains the zero order derivative for the corresponding variable in the operation sequence. Forward mode refers to the fact that these coefficients are computed in the same order as the original algorithm; i.e., in order of increasing index in the operation sequence.

3.1.3.c.e: Sweep
Index    Code    Operation    Zero Order
1    Type v1 = x; @(@ v_1 = x @)@ @(@ v_1^{(0)} = 0.5 @)@
2    Type v2 = Type(1) + v1; @(@ v_2 = 1 + v_1 @)@ @(@ v_2^{(0)} = 1.5 @)@
3    Type v3 = v1 * v1; @(@ v_3 = v_1 * v_1 @)@ @(@ v_3^{(0)} = 0.25 @)@
4    Type v4 = v3 / Type(2); @(@ v_4 = v_3 / 2 @)@ @(@ v_4^{(0)} = 0.125 @)@
5    Type v5 = v2 + v4; @(@ v_5 = v_2 + v_4 @)@ @(@ v_5^{(0)} = 1.625 @)@
3.1.3.d: Return Value
The return value for this case is @[@ 1.625 = v_5^{(0)} = f( x^{(0)} ) @]@

3.1.3.e: Verification
The file 3.1.3.1: exp_2_for0.cpp contains a routine that verifies the values computed above. It returns true for success and false for failure.

3.1.3.f: Exercises
  1. Suppose that @(@ x^{(0)} = .2 @)@, what is the result of a zero order forward sweep for the operation sequence above; i.e., what are the corresponding values for @[@ v_1^{(0)} , v_2^{(0)} , \cdots , v_5^{(0)} @]@
  2. Create a modified version of 3.1.3.1: exp_2_for0.cpp that verifies the values you obtained for the previous exercise.
  3. Create and run a main program that reports the result of calling the modified version of 3.1.3.1: exp_2_for0.cpp in the previous exercise.

Input File: introduction/exp_2.omh
3.1.3.1: exp_2: Verify Zero Order Forward Sweep
# include <cmath>            // for fabs function
bool exp_2_for0(double *v0)  // double v0[6]
{     bool  ok = true;
     double x = .5;

     v0[1] = x;                                  // v1 = x
     ok  &= std::fabs( v0[1] - 0.5) < 1e-10;

     v0[2] = 1. + v0[1];                         // v2 = 1 + v1
     ok  &= std::fabs( v0[2] - 1.5) < 1e-10;

     v0[3] = v0[1] * v0[1];                      // v3 = v1 * v1
     ok  &= std::fabs( v0[3] - 0.25) < 1e-10;

     v0[4] = v0[3] / 2.;                         // v4 = v3 / 2
     ok  &= std::fabs( v0[4] - 0.125) < 1e-10;

     v0[5] = v0[2] + v0[4];                      // v5  = v2 + v4
     ok  &= std::fabs( v0[5] - 1.625) < 1e-10;

     return ok;
}
bool exp_2_for0(void)
{     double v0[6];
     return exp_2_for0(v0);
}
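A main program is needed to run this test routine (several exercises in this introduction ask you to create one). The following is a minimal sketch, not part of the distribution, that assumes it is compiled and linked together with the file above:
# include <cstdio>                  // for std::printf
extern bool exp_2_for0(void);       // test routine defined above
int main(void)
{     bool ok = exp_2_for0();       // run the verification routine
      if( ok )
           std::printf("exp_2_for0: OK\n");
      else
           std::printf("exp_2_for0: Error\n");
      return ok ? 0 : 1;            // exit status reports pass / fail
}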

Input File: introduction/exp_2_for0.cpp
3.1.4: exp_2: First Order Forward Mode

3.1.4.a: First Order Expansion
We define @(@ x(t) @)@ near @(@ t = 0 @)@ by the first order expansion @[@ x(t) = x^{(0)} + x^{(1)} * t @]@ It follows that @(@ x^{(0)} @)@ is the zero order, and @(@ x^{(1)} @)@ is the first order, derivative of @(@ x(t) @)@ at @(@ t = 0 @)@.

3.1.4.b: Purpose
In general, a first order forward sweep is given the 3.1.3.b: zero order derivative for all of the variables in an operation sequence, and the first order derivatives for the independent variables. It uses these to compute the first order derivatives, and thereby obtain the first order expansion, for all the other variables in the operation sequence.

3.1.4.c: Mathematical Form
Suppose that we use the algorithm 3.1.1: exp_2.hpp to compute @[@ f(x) = 1 + x + x^2 / 2 @]@ The corresponding derivative function is @[@ \partial_x f (x) = 1 + x @]@ An algorithmic differentiation package does not operate on the mathematical form of the function, or its derivative, but rather on the 3.1.3.c: operation sequence for the algorithm that is used to evaluate the function.

3.1.4.d: Operation Sequence
We consider the case where 3.1.1: exp_2.hpp is executed with @(@ x = .5 @)@. The corresponding operation sequence and zero order forward mode values (see 3.1.3.c.e: zero order sweep ) are inputs and are used by a first order forward sweep.

3.1.4.d.a: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation. A Forward sweep starts with the first operation and ends with the last.

3.1.4.d.b: Operation
The Operation column contains the mathematical function corresponding to each atomic operation in the sequence.

3.1.4.d.c: Zero Order
The Zero Order column contains the zero order derivatives for the corresponding variable in the operation sequence (see 3.1.3.c.e: zero order sweep ).

3.1.4.d.d: Derivative
The Derivative column contains the mathematical function corresponding to the derivative with respect to @(@ t @)@, at @(@ t = 0 @)@, for each variable in the sequence.

3.1.4.d.e: First Order
The First Order column contains the first order derivatives for the corresponding variable in the operation sequence; i.e., @[@ v_j (t) = v_j^{(0)} + v_j^{(1)} t @]@ We use @(@ x^{(1)} = 1 @)@ so that differentiation with respect to @(@ t @)@, at @(@ t = 0 @)@, is the same as partial differentiation with respect to @(@ x @)@ at @(@ x = x^{(0)} @)@.

3.1.4.d.f: Sweep
Index    Operation    Zero Order    Derivative    First Order
1    @(@ v_1 = x @)@ 0.5 @(@ v_1^{(1)} = x^{(1)} @)@ @(@ v_1^{(1)} = 1 @)@
2    @(@ v_2 = 1 + v_1 @)@ 1.5 @(@ v_2^{(1)} = v_1^{(1)} @)@ @(@ v_2^{(1)} = 1 @)@
3    @(@ v_3 = v_1 * v_1 @)@ 0.25 @(@ v_3^{(1)} = 2 * v_1^{(0)} * v_1^{(1)} @)@ @(@ v_3^{(1)} = 1 @)@
4    @(@ v_4 = v_3 / 2 @)@ 0.125 @(@ v_4^{(1)} = v_3^{(1)} / 2 @)@ @(@ v_4^{(1)} = 0.5 @)@
5   @(@ v_5 = v_2 + v_4 @)@ 1.625 @(@ v_5^{(1)} = v_2^{(1)} + v_4^{(1)} @)@ @(@ v_5^{(1)} = 1.5 @)@
3.1.4.e: Return Value
The derivative of the return value for this case is @[@ \begin{array}{rcl} 1.5 & = & v_5^{(1)} = \left[ \D{v_5}{t} \right]_{t=0} = \left[ \D{}{t} f ( x^{(0)} + x^{(1)} t ) \right]_{t=0} \\ & = & f^{(1)} ( x^{(0)} ) * x^{(1)} = f^{(1)} ( x^{(0)} ) \end{array} @]@ (We have used the fact that @(@ x^{(1)} = 1 @)@.)

3.1.4.f: Verification
The file 3.1.4.1: exp_2_for1.cpp contains a routine which verifies the values computed above. It returns true for success and false for failure.

3.1.4.g: Exercises
  1. Which statement in the routine defined by 3.1.4.1: exp_2_for1.cpp uses the values that are calculated by the routine defined by 3.1.3.1: exp_2_for0.cpp ?
  2. Suppose that @(@ x = .1 @)@, what are the results of a zero and first order forward sweep for the operation sequence above; i.e., what are the corresponding values for @(@ v_1^{(0)}, v_2^{(0)}, \cdots , v_5^{(0)} @)@ and @(@ v_1^{(1)}, v_2^{(1)}, \cdots , v_5^{(1)} @)@ ?
  3. Create a modified version of 3.1.4.1: exp_2_for1.cpp that verifies the derivative values from the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.1.4.1: exp_2_for1.cpp .

Input File: introduction/exp_2.omh
3.1.4.1: exp_2: Verify First Order Forward Sweep
# include <cmath>                   // prototype for fabs
extern bool exp_2_for0(double *v0); // computes zero order forward sweep
bool exp_2_for1(double *v1)         // double v1[6]
{     bool ok = true;
     double v0[6];

     // set the value of v0[j] for j = 1 , ... , 5
     ok &= exp_2_for0(v0);

     v1[1] = 1.;                                     // v1 = x
     ok    &= std::fabs( v1[1] - 1. ) <= 1e-10;

     v1[2] = v1[1];                                  // v2 = 1 + v1
     ok    &= std::fabs( v1[2] - 1. ) <= 1e-10;

     v1[3] = v1[1] * v0[1] + v0[1] * v1[1];          // v3 = v1 * v1
     ok    &= std::fabs( v1[3] - 1. ) <= 1e-10;

     v1[4] = v1[3] / 2.;                             // v4 = v3 / 2
     ok    &= std::fabs( v1[4] - 0.5) <= 1e-10;

     v1[5] = v1[2] + v1[4];                          // v5 = v2 + v4
     ok    &= std::fabs( v1[5] - 1.5) <= 1e-10;

     return ok;
}
bool exp_2_for1(void)
{     double v1[6];
     return exp_2_for1(v1);
}

Input File: introduction/exp_2_for1.cpp
3.1.5: exp_2: First Order Reverse Mode

3.1.5.a: Purpose
First order reverse mode uses the 3.1.3.c: operation sequence , and zero order forward sweep values, to compute the first order derivative of one dependent variable with respect to all the independent variables. The computations are done in reverse of the order of the computations in the original algorithm.

3.1.5.b: Mathematical Form
Suppose that we use the algorithm 3.1.1: exp_2.hpp to compute @[@ f(x) = 1 + x + x^2 / 2 @]@ The corresponding derivative function is @[@ \partial_x f (x) = 1 + x @]@

3.1.5.c: f_5
For our example, we chose to compute the derivative of the value returned by 3.1.1: exp_2.hpp which is equal to the symbol @(@ v_5 @)@ in the 3.1.3.c: exp_2 operation sequence . We begin with the function @(@ f_5 @)@ where @(@ v_5 @)@ is both an argument and the value of the function; i.e., @[@ \begin{array}{rcl} f_5 ( v_1 , v_2 , v_3 , v_4 , v_5 ) & = & v_5 \\ \D{f_5}{v_5} & = & 1 \end{array} @]@ All the other partial derivatives of @(@ f_5 @)@ are zero.

3.1.5.d: Index 5: f_4
Reverse mode starts with the last operation in the sequence. For the case in question, this is the operation with index 5, @[@ v_5 = v_2 + v_4 @]@ We define the function @(@ f_4 ( v_1 , v_2 , v_3 , v_4 ) @)@ as equal to @(@ f_5 @)@ except that @(@ v_5 @)@ is eliminated using this operation; i.e. @[@ f_4 = f_5 [ v_1 , v_2 , v_3 , v_4 , v_5 ( v_2 , v_4 ) ] @]@ It follows that @[@ \begin{array}{rcll} \D{f_4}{v_2} & = & \D{f_5}{v_2} + \D{f_5}{v_5} * \D{v_5}{v_2} & = 1 \\ \D{f_4}{v_4} & = & \D{f_5}{v_4} + \D{f_5}{v_5} * \D{v_5}{v_4} & = 1 \end{array} @]@ All the other partial derivatives of @(@ f_4 @)@ are zero.

3.1.5.e: Index 4: f_3
The next operation has index 4, @[@ v_4 = v_3 / 2 @]@ We define the function @(@ f_3 ( v_1 , v_2 , v_3 ) @)@ as equal to @(@ f_4 @)@ except that @(@ v_4 @)@ is eliminated using this operation; i.e., @[@ f_3 = f_4 [ v_1 , v_2 , v_3 , v_4 ( v_3 ) ] @]@ It follows that @[@ \begin{array}{rcll} \D{f_3}{v_1} & = & \D{f_4}{v_1} & = 0 \\ \D{f_3}{v_2} & = & \D{f_4}{v_2} & = 1 \\ \D{f_3}{v_3} & = & \D{f_4}{v_3} + \D{f_4}{v_4} * \D{v_4}{v_3} & = 0.5 \end{array} @]@

3.1.5.f: Index 3: f_2
The next operation has index 3, @[@ v_3 = v_1 * v_1 @]@ We define the function @(@ f_2 ( v_1 , v_2 ) @)@ as equal to @(@ f_3 @)@ except that @(@ v_3 @)@ is eliminated using this operation; i.e., @[@ f_2 = f_3 [ v_1 , v_2 , v_3 ( v_1 ) ] @]@ Note that the value of @(@ v_1 @)@ is equal to @(@ x @)@ which is .5 for this evaluation. It follows that @[@ \begin{array}{rcll} \D{f_2}{v_1} & = & \D{f_3}{v_1} + \D{f_3}{v_3} * \D{v_3}{v_1} & = 0.5 \\ \D{f_2}{v_2} & = & \D{f_3}{v_2} & = 1 \end{array} @]@

3.1.5.g: Index 2: f_1
The next operation has index 2, @[@ v_2 = 1 + v_1 @]@ We define the function @(@ f_1 ( v_1 ) @)@ as equal to @(@ f_2 @)@ except that @(@ v_2 @)@ is eliminated using this operation; i.e., @[@ f_1 = f_2 [ v_1 , v_2 ( v_1 ) ] @]@ It follows that @[@ \begin{array}{rcll} \D{f_1}{v_1} & = & \D{f_2}{v_1} + \D{f_2}{v_2} * \D{v_2}{v_1} & = 1.5 \end{array} @]@ Note that @(@ v_1 @)@ is equal to @(@ x @)@, so the derivative of this is the derivative of the function defined by 3.1.1: exp_2.hpp at @(@ x = .5 @)@.

3.1.5.h: Verification
The file 3.1.5.1: exp_2_rev1.cpp contains a routine which verifies the values computed above. It returns true for success and false for failure. It only tests the partial derivatives of @(@ f_j @)@ that might not be equal to the corresponding partials of @(@ f_{j+1} @)@; i.e., the other partials of @(@ f_j @)@ must be equal to the corresponding partials of @(@ f_{j+1} @)@.

3.1.5.i: Exercises
  1. Which statement in the routine defined by 3.1.5.1: exp_2_rev1.cpp uses the values that are calculated by the routine defined by 3.1.3.1: exp_2_for0.cpp ?
  2. Consider the case where @(@ x = .1 @)@ and we first perform a zero order forward sweep for the operation sequence used above. What are the results of a first order reverse sweep; i.e., what are the corresponding derivatives of @(@ f_5 , f_4 , \ldots , f_1 @)@?
  3. Create a modified version of 3.1.5.1: exp_2_rev1.cpp that verifies the values you obtained for the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.1.5.1: exp_2_rev1.cpp .

Input File: introduction/exp_2.omh
3.1.5.1: exp_2: Verify First Order Reverse Sweep
# include <cstddef>                 // define size_t
# include <cmath>                   // prototype for fabs
extern bool exp_2_for0(double *v0); // computes zero order forward sweep
bool exp_2_rev1(void)
{     bool ok = true;

     // set the value of v0[j] for j = 1 , ... , 5
     double v0[6];
     ok &= exp_2_for0(v0);

     // initialize all partial derivatives as zero
     double f_v[6];
     size_t j;
     for(j = 0; j < 6; j++)
          f_v[j] = 0.;

     // set partial derivative for f5
     f_v[5] = 1.;
     ok &= std::fabs( f_v[5] - 1. ) <= 1e-10; // f5_v5

     // f4 = f5( v1 , v2 , v3 , v4 , v2 + v4 )
     f_v[2] += f_v[5] * 1.;
     f_v[4] += f_v[5] * 1.;
     ok &= std::fabs( f_v[2] - 1. ) <= 1e-10; // f4_v2
     ok &= std::fabs( f_v[4] - 1. ) <= 1e-10; // f4_v4

     // f3 = f4( v1 , v2 , v3 , v3 / 2 )
     f_v[3] += f_v[4] / 2.;
     ok &= std::fabs( f_v[3] - 0.5) <= 1e-10; // f3_v3

     // f2 = f3( v1 , v2 , v1 * v1 )
     f_v[1] += f_v[3] * 2. * v0[1];
     ok &= std::fabs( f_v[1] - 0.5) <= 1e-10; // f2_v1

     // f1 = f2( v1 , 1 + v1 )
     f_v[1] += f_v[2] * 1.;
     ok &= std::fabs( f_v[1] - 1.5) <= 1e-10; // f1_v1

     return ok;
}

Input File: introduction/exp_2_rev1.cpp
3.1.6: exp_2: Second Order Forward Mode

3.1.6.a: Second Order Expansion
We define @(@ x(t) @)@ near @(@ t = 0 @)@ by the second order expansion @[@ x(t) = x^{(0)} + x^{(1)} * t + x^{(2)} * t^2 / 2 @]@ It follows that for @(@ k = 0 , 1 , 2 @)@, @[@ x^{(k)} = \dpow{k}{t} x (0) @]@

3.1.6.b: Purpose
In general, a second order forward sweep is given the 3.1.4.a: first order expansion for all of the variables in an operation sequence, and the second order derivatives for the independent variables. It uses these to compute the second order derivative, and thereby obtain the second order expansion, for all the variables in the operation sequence.

3.1.6.c: Mathematical Form
Suppose that we use the algorithm 3.1.1: exp_2.hpp to compute @[@ f(x) = 1 + x + x^2 / 2 @]@ The corresponding second derivative function is @[@ \Dpow{2}{x} f (x) = 1 @]@

3.1.6.d: Operation Sequence
We consider the case where 3.1.1: exp_2.hpp is executed with @(@ x = .5 @)@. The corresponding operation sequence, zero order forward sweep values, and first order forward sweep values are inputs and are used by a second order forward sweep.

3.1.6.d.a: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation. A Forward sweep starts with the first operation and ends with the last.

3.1.6.d.b: Zero
The Zero column contains the zero order sweep results for the corresponding variable in the operation sequence (see 3.1.3.c.e: zero order sweep ).

3.1.6.d.c: Operation
The Operation column contains the first order sweep operation for this variable.

3.1.6.d.d: First
The First column contains the first order sweep results for the corresponding variable in the operation sequence (see 3.1.4.d.f: first order sweep ).

3.1.6.d.e: Derivative
The Derivative column contains the mathematical function corresponding to the second derivative with respect to @(@ t @)@, at @(@ t = 0 @)@, for each variable in the sequence.

3.1.6.d.f: Second
The Second column contains the second order derivatives for the corresponding variable in the operation sequence; i.e., the second order expansion for the i-th variable is given by @[@ v_i (t) = v_i^{(0)} + v_i^{(1)} * t + v_i^{(2)} * t^2 / 2 @]@ We use @(@ x^{(1)} = 1 @)@ and @(@ x^{(2)} = 0 @)@ so that second order differentiation with respect to @(@ t @)@, at @(@ t = 0 @)@, is the same as the second partial differentiation with respect to @(@ x @)@ at @(@ x = x^{(0)} @)@.

3.1.6.d.g: Sweep
Index    Zero    Operation    First    Derivative    Second
1 0.5    @(@ v_1^{(1)} = x^{(1)} @)@ 1 @(@ v_1^{(2)} = x^{(2)} @)@ @(@ v_1^{(2)} = 0 @)@
2 1.5    @(@ v_2^{(1)} = v_1^{(1)} @)@ 1 @(@ v_2^{(2)} = v_1^{(2)} @)@ @(@ v_2^{(2)} = 0 @)@
3 0.25    @(@ v_3^{(1)} = 2 * v_1^{(0)} * v_1^{(1)} @)@ 1 @(@ v_3^{(2)} = 2 * (v_1^{(1)} * v_1^{(1)} + v_1^{(0)} * v_1^{(2)} ) @)@ @(@ v_3^{(2)} = 2 @)@
4 0.125    @(@ v_4^{(1)} = v_3^{(1)} / 2 @)@ .5 @(@ v_4^{(2)} = v_3^{(2)} / 2 @)@ @(@ v_4^{(2)} = 1 @)@
5 1.625   @(@ v_5^{(1)} = v_2^{(1)} + v_4^{(1)} @)@ 1.5 @(@ v_5^{(2)} = v_2^{(2)} + v_4^{(2)} @)@ @(@ v_5^{(2)} = 1 @)@
3.1.6.e: Return Value
The second derivative of the return value for this case is @[@ \begin{array}{rcl} 1 & = & v_5^{(2)} = \left[ \Dpow{2}{t} v_5 \right]_{t=0} = \left[ \Dpow{2}{t} f( x^{(0)} + x^{(1)} * t ) \right]_{t=0} \\ & = & x^{(1)} * \Dpow{2}{x} f ( x^{(0)} ) * x^{(1)} = \Dpow{2}{x} f ( x^{(0)} ) \end{array} @]@ (We have used the fact that @(@ x^{(1)} = 1 @)@ and @(@ x^{(2)} = 0 @)@.)

3.1.6.f: Verification
The file 3.1.6.1: exp_2_for2.cpp contains a routine which verifies the values computed above. It returns true for success and false for failure.

3.1.6.g: Exercises
  1. Which statement in the routine defined by 3.1.6.1: exp_2_for2.cpp uses the values that are calculated by the routine defined by 3.1.4.1: exp_2_for1.cpp ?
  2. Suppose that @(@ x = .1 @)@, what are the results of a zero, first, and second order forward sweep for the operation sequence above; i.e., what are the corresponding values for @(@ v_i^{(k)} @)@ for @(@ i = 1, \ldots , 5 @)@ and @(@ k = 0, 1, 2 @)@.
  3. Create a modified version of 3.1.6.1: exp_2_for2.cpp that verifies the derivative values from the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.1.6.1: exp_2_for2.cpp .

Input File: introduction/exp_2.omh
3.1.6.1: exp_2: Verify Second Order Forward Sweep
# include <cmath>                   // prototype for fabs
extern bool exp_2_for0(double *v0); // computes zero order forward sweep
extern bool exp_2_for1(double *v1); // computes first order forward sweep
bool exp_2_for2(void)
{     bool ok = true;
     double v0[6], v1[6], v2[6];

     // set the value of v0[j], v1[j], for j = 1 , ... , 5
     ok &= exp_2_for0(v0);
     ok &= exp_2_for1(v1);

     v2[1] = 0.;                                     // v1 = x
     ok    &= std::fabs( v2[1] - 0. ) <= 1e-10;

     v2[2] = v2[1];                                  // v2 = 1 + v1
     ok    &= std::fabs( v2[2] - 0. ) <= 1e-10;

     v2[3] = 2.*(v0[1]*v2[1] + v1[1]*v1[1]);         // v3 = v1 * v1
     ok    &= std::fabs( v2[3] - 2. ) <= 1e-10;

     v2[4] = v2[3] / 2.;                             // v4 = v3 / 2
     ok    &= std::fabs( v2[4] - 1. ) <= 1e-10;

     v2[5] = v2[2] + v2[4];                          // v5 = v2 + v4
     ok    &= std::fabs( v2[5] - 1. ) <= 1e-10;

     return ok;
}

Input File: introduction/exp_2_for2.cpp
3.1.7: exp_2: Second Order Reverse Mode

3.1.7.a: Purpose
In general, a second order reverse sweep is given the 3.1.4.a: first order expansion for all of the variables in an operation sequence. Given a choice of a particular variable, it computes the derivative of that variable's first order expansion coefficient with respect to all of the independent variables.

3.1.7.b: Mathematical Form
Suppose that we use the algorithm 3.1.1: exp_2.hpp to compute @[@ f(x) = 1 + x + x^2 / 2 @]@ The corresponding second derivative is @[@ \Dpow{2}{x} f (x) = 1 @]@

3.1.7.c: f_5
For our example, we chose to compute the derivative of @(@ v_5^{(1)} @)@ with respect to all the independent variables. For the case computed for the 3.1.4.d.f: first order sweep , @(@ v_5^{(1)} @)@ is the derivative of the value returned by 3.1.1: exp_2.hpp . Thus the value computed will be the second derivative of the value returned by 3.1.1: exp_2.hpp . We begin with the function @(@ f_5 @)@ where @(@ v_5^{(1)} @)@ is both an argument and the value of the function; i.e., @[@ \begin{array}{rcl} f_5 \left( v_1^{(0)}, v_1^{(1)} , \ldots , v_5^{(0)} , v_5^{(1)} \right) & = & v_5^{(1)} \\ \D{f_5}{v_5^{(1)}} & = & 1 \end{array} @]@ All the other partial derivatives of @(@ f_5 @)@ are zero.

3.1.7.d: Index 5: f_4
Second order reverse mode starts with the last operation in the sequence. For the case in question, this is the operation with index 5. The zero and first order sweep representations of this operation are @[@ \begin{array}{rcl} v_5^{(0)} & = & v_2^{(0)} + v_4^{(0)} \\ v_5^{(1)} & = & v_2^{(1)} + v_4^{(1)} \end{array} @]@ We define the function @(@ f_4 \left( v_1^{(0)} , \ldots , v_4^{(1)} \right) @)@ as equal to @(@ f_5 @)@ except that @(@ v_5^{(0)} @)@ and @(@ v_5^{(1)} @)@ are eliminated using this operation; i.e., @[@ f_4 = f_5 \left[ v_1^{(0)} , \ldots , v_4^{(1)} , v_5^{(0)} \left( v_2^{(0)} , v_4^{(0)} \right) , v_5^{(1)} \left( v_2^{(1)} , v_4^{(1)} \right) \right] @]@ It follows that @[@ \begin{array}{rcll} \D{f_4}{v_2^{(1)}} & = & \D{f_5}{v_2^{(1)}} + \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_2^{(1)}} & = 1 \\ \D{f_4}{v_4^{(1)}} & = & \D{f_5}{v_4^{(1)}} + \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_4^{(1)}} & = 1 \end{array} @]@ All the other partial derivatives of @(@ f_4 @)@ are zero.

3.1.7.e: Index 4: f_3
The next operation has index 4, @[@ \begin{array}{rcl} v_4^{(0)} & = & v_3^{(0)} / 2 \\ v_4^{(1)} & = & v_3^{(1)} / 2 \end{array} @]@ We define the function @(@ f_3 \left( v_1^{(0)} , \ldots , v_3^{(1)} \right) @)@ as equal to @(@ f_4 @)@ except that @(@ v_4^{(0)} @)@ and @(@ v_4^{(1)} @)@ are eliminated using this operation; i.e., @[@ f_3 = f_4 \left[ v_1^{(0)} , \ldots , v_3^{(1)} , v_4^{(0)} \left( v_3^{(0)} \right) , v_4^{(1)} \left( v_3^{(1)} \right) \right] @]@ It follows that @[@ \begin{array}{rcll} \D{f_3}{v_2^{(1)}} & = & \D{f_4}{v_2^{(1)}} & = 1 \\ \D{f_3}{v_3^{(1)}} & = & \D{f_4}{v_3^{(1)}} + \D{f_4}{v_4^{(1)}} * \D{v_4^{(1)}}{v_3^{(1)}} & = 0.5 \end{array} @]@ All the other partial derivatives of @(@ f_3 @)@ are zero.

3.1.7.f: Index 3: f_2
The next operation has index 3, @[@ \begin{array}{rcl} v_3^{(0)} & = & v_1^{(0)} * v_1^{(0)} \\ v_3^{(1)} & = & 2 * v_1^{(0)} * v_1^{(1)} \end{array} @]@ We define the function @(@ f_2 \left( v_1^{(0)} , \ldots , v_2^{(1)} \right) @)@ as equal to @(@ f_3 @)@ except that @(@ v_3^{(0)} @)@ and @(@ v_3^{(1)} @)@ are eliminated using this operation; i.e., @[@ f_2 = f_3 \left[ v_1^{(0)} , \ldots , v_2^{(1)} , v_3^{(0)} ( v_1^{(0)} ) , v_3^{(1)} ( v_1^{(0)} , v_1^{(1)} ) \right] @]@ Note that, from the 3.1.4.d.f: first order forward sweep , the value of @(@ v_1^{(0)} @)@ is equal to @(@ .5 @)@ and @(@ v_1^{(1)} @)@ is equal to 1. It follows that @[@ \begin{array}{rcll} \D{f_2}{v_1^{(0)}} & = & \D{f_3}{v_1^{(0)}} + \D{f_3}{v_3^{(0)}} * \D{v_3^{(0)}}{v_1^{(0)}} + \D{f_3}{v_3^{(1)}} * \D{v_3^{(1)}}{v_1^{(0)}} & = 1 \\ \D{f_2}{v_1^{(1)}} & = & \D{f_3}{v_1^{(1)}} + \D{f_3}{v_3^{(1)}} * \D{v_3^{(1)}}{v_1^{(1)}} & = 0.5 \\ \D{f_2}{v_2^{(0)}} & = & \D{f_3}{v_2^{(0)}} & = 0 \\ \D{f_2}{v_2^{(1)}} & = & \D{f_3}{v_2^{(1)}} & = 1 \end{array} @]@

3.1.7.g: Index 2: f_1
The next operation has index 2, @[@ \begin{array}{rcl} v_2^{(0)} & = & 1 + v_1^{(0)} \\ v_2^{(1)} & = & v_1^{(1)} \end{array} @]@ We define the function @(@ f_1 ( v_1^{(0)} , v_1^{(1)} ) @)@ as equal to @(@ f_2 @)@ except that @(@ v_2^{(0)} @)@ and @(@ v_2^{(1)} @)@ are eliminated using this operation; i.e., @[@ f_1 = f_2 \left[ v_1^{(0)} , v_1^{(1)} , v_2^{(0)} ( v_1^{(0)} ) , v_2^{(1)} ( v_1^{(1)} ) \right] @]@ It follows that @[@ \begin{array}{rcll} \D{f_1}{v_1^{(0)}} & = & \D{f_2}{v_1^{(0)}} + \D{f_2}{v_2^{(0)}} * \D{v_2^{(0)}}{v_1^{(0)}} & = 1 \\ \D{f_1}{v_1^{(1)}} & = & \D{f_2}{v_1^{(1)}} + \D{f_2}{v_2^{(1)}} * \D{v_2^{(1)}}{v_1^{(1)}} & = 1.5 \end{array} @]@ Note that @(@ v_1 @)@ is equal to @(@ x @)@, so the second derivative of the function defined by 3.1.1: exp_2.hpp at @(@ x = .5 @)@ is given by @[@ \Dpow{2}{x} v_5^{(0)} = \D{ v_5^{(1)} }{x} = \D{ v_5^{(1)} }{v_1^{(0)}} = \D{f_1}{v_1^{(0)}} = 1 @]@ There is a theorem about Algorithmic Differentiation that explains why the other partial of @(@ f_1 @)@ is equal to the first derivative of the function defined by 3.1.1: exp_2.hpp at @(@ x = .5 @)@.

3.1.7.h: Verification
The file 3.1.7.1: exp_2_rev2.cpp contains a routine which verifies the values computed above. It returns true for success and false for failure. It only tests the partial derivatives of @(@ f_j @)@ that might not be equal to the corresponding partials of @(@ f_{j+1} @)@; i.e., the other partials of @(@ f_j @)@ must be equal to the corresponding partials of @(@ f_{j+1} @)@.

3.1.7.i: Exercises
  1. Which statement in the routine defined by 3.1.7.1: exp_2_rev2.cpp uses the values that are calculated by the routine defined by 3.1.3.1: exp_2_for0.cpp ? Which statements use values that are calculated by the routine defined in 3.1.4.1: exp_2_for1.cpp ?
  2. Consider the case where @(@ x = .1 @)@ and we first perform a zero order forward sweep, then a first order sweep, for the operation sequence used above. What are the results of a second order reverse sweep; i.e., what are the corresponding derivatives of @(@ f_5 , f_4 , \ldots , f_1 @)@?
  3. Create a modified version of 3.1.7.1: exp_2_rev2.cpp that verifies the values you obtained for the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.1.7.1: exp_2_rev2.cpp .

Input File: introduction/exp_2.omh
3.1.7.1: exp_2: Verify Second Order Reverse Sweep
# include <cstddef>                 // define size_t
# include <cmath>                   // prototype for fabs
extern bool exp_2_for0(double *v0); // computes zero order forward sweep
extern bool exp_2_for1(double *v1); // computes first order forward sweep
bool exp_2_rev2(void)
{     bool ok = true;

     // set the value of v0[j], v1[j] for j = 1 , ... , 5
     double v0[6], v1[6];
     ok &= exp_2_for0(v0);
     ok &= exp_2_for1(v1);

     // initialize all partial derivatives as zero
     double f_v0[6], f_v1[6];
     size_t j;
     for(j = 0; j < 6; j++)
     {     f_v0[j] = 0.;
          f_v1[j] = 0.;
     }

     // set partial derivative for f_5
     f_v1[5] = 1.;
     ok &= std::fabs( f_v1[5] - 1. ) <= 1e-10; // partial f_5 w.r.t v_5^1

     // f_4 = f_5( v_1^0 , ... , v_4^1 , v_2^0 + v_4^0 , v_2^1 + v_4^1 )
     f_v0[2] += f_v0[5] * 1.;
     f_v0[4] += f_v0[5] * 1.;
     f_v1[2] += f_v1[5] * 1.;
     f_v1[4] += f_v1[5] * 1.;
     ok &= std::fabs( f_v0[2] - 0. ) <= 1e-10; // partial f_4 w.r.t. v_2^0
     ok &= std::fabs( f_v0[4] - 0. ) <= 1e-10; // partial f_4 w.r.t. v_4^0
     ok &= std::fabs( f_v1[2] - 1. ) <= 1e-10; // partial f_4 w.r.t. v_2^1
     ok &= std::fabs( f_v1[4] - 1. ) <= 1e-10; // partial f_4 w.r.t. v_4^1

     // f_3 = f_4( v_1^0 , ... , v_3^1, v_3^0 / 2 , v_3^1 / 2 )
     f_v0[3] += f_v0[4] / 2.;
     f_v1[3] += f_v1[4] / 2.;
     ok &= std::fabs( f_v0[3] - 0.  ) <= 1e-10; // partial f_3 w.r.t. v_3^0
     ok &= std::fabs( f_v1[3] - 0.5 ) <= 1e-10; // partial f_3 w.r.t. v_3^1

     // f_2 = f_3(  v_1^0 , ... , v_2^1, v_1^0 * v_1^0 , 2 * v_1^0 * v_1^1 )
     f_v0[1] += f_v0[3] * 2. * v0[1];
     f_v0[1] += f_v1[3] * 2. * v1[1];
     f_v1[1] += f_v1[3] * 2. * v0[1];
     ok &= std::fabs( f_v0[1] - 1.  ) <= 1e-10; // partial f_2 w.r.t. v_1^0
     ok &= std::fabs( f_v1[1] - 0.5 ) <= 1e-10; // partial f_2 w.r.t. v_1^1

     // f_1 = f_2( v_1^0 , v_1^1 , 1 + v_1^0 , v_1^1 )
     f_v0[1] += f_v0[2] * 1.;
     f_v1[1] += f_v1[2] * 1.;
     ok &= std::fabs( f_v0[1] - 1. ) <= 1e-10; // partial f_1 w.r.t. v_1^0
     ok &= std::fabs( f_v1[1] - 1.5) <= 1e-10; // partial f_1 w.r.t. v_1^1

     return ok;
}

Input File: introduction/exp_2_rev2.cpp
3.1.8: exp_2: CppAD Forward and Reverse Sweeps

3.1.8.a: Purpose
Use CppAD forward and reverse modes to compute the partial derivative with respect to @(@ x @)@, at the point @(@ x = .5 @)@, of the function
     exp_2(x)
as defined by the 3.1.1: exp_2.hpp include file.

3.1.8.b: Exercises
  1. Create and test a modified version of the routine below that computes the same order derivatives with respect to @(@ x @)@, at the point @(@ x = .1 @)@ of the function
         exp_2(x)
  2. Create a routine called
         exp_3(x)
    that evaluates the function @[@ f(x) = 1 + x^2 / 2 + x^3 / 6 @]@ Test a modified version of the routine below that computes the derivative of @(@ f(x) @)@ at the point @(@ x = .5 @)@.

# include <cppad/cppad.hpp>  // http://www.coin-or.org/CppAD/
# include "exp_2.hpp"        // second order exponential approximation
bool exp_2_cppad(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::vector;    // can use any simple vector template class
     using CppAD::NearEqual; // checks if values are nearly equal

     // domain space vector
     size_t n = 1; // dimension of the domain space
     vector< AD<double> > X(n);
     X[0] = .5;    // value of x for this operation sequence

     // declare independent variables and start recording operation sequence
     CppAD::Independent(X);

     // evaluate our exponential approximation
     AD<double> x   = X[0];
     AD<double> apx = exp_2(x);

     // range space vector
     size_t m = 1;  // dimension of the range space
     vector< AD<double> > Y(m);
     Y[0] = apx;    // variable that represents only range space component

     // Create f: X -> Y corresponding to this operation sequence
     // and stop recording. This also executes a zero order forward
     // sweep using values in X for x.
     CppAD::ADFun<double> f(X, Y);

     // first order forward sweep that computes
     // partial of exp_2(x) with respect to x
     vector<double> dx(n);  // differential in domain space
     vector<double> dy(m);  // differential in range space
     dx[0] = 1.;            // direction for partial derivative
     dy    = f.Forward(1, dx);
     double check = 1.5;
     ok   &= NearEqual(dy[0], check, 1e-10, 1e-10);

     // first order reverse sweep that computes the derivative
     vector<double>  w(m);   // weights for components of the range
     vector<double> dw(n);   // derivative of the weighted function
     w[0] = 1.;              // there is only one weight
     dw   = f.Reverse(1, w); // derivative of w[0] * exp_2(x)
     check = 1.5;            // partial of exp_2(x) with respect to x
     ok   &= NearEqual(dw[0], check, 1e-10, 1e-10);

     // second order forward sweep that computes
     // second partial of exp_2(x) with respect to x
     vector<double> x2(n);     // second order Taylor coefficients
     vector<double> y2(m);
     x2[0] = 0.;               // evaluate second partial w.r.t. x
     y2    = f.Forward(2, x2);
     check = 0.5 * 1.;         // Taylor coef is 1/2 second derivative
     ok   &= NearEqual(y2[0], check, 1e-10, 1e-10);

     // second order reverse sweep that computes
     // derivative of partial of exp_2(x) w.r.t. x
     dw.resize(2 * n);         // space for first and second derivatives
     dw    = f.Reverse(2, w);
     check = 1.;               // result should be second derivative
     ok   &= NearEqual(dw[0*2+1], check, 1e-10, 1e-10);

     return ok;
}

Input File: introduction/exp_2_cppad.cpp
3.2: An Epsilon Accurate Exponential Approximation

3.2.a: Syntax
# include "exp_eps.hpp"
y = exp_eps(x, epsilon)

3.2.b: Purpose
This is an example algorithm that is used to demonstrate how Algorithmic Differentiation works with loops and boolean decision variables (see 3.1: exp_2 for a simpler example).

3.2.c: Mathematical Function
The exponential function can be defined by @[@ \exp (x) = 1 + x^1 / 1 ! + x^2 / 2 ! + \cdots @]@ We define @(@ k ( x, \varepsilon ) @)@ as the smallest non-negative integer such that @(@ \varepsilon \geq x^k / k ! @)@; i.e., @[@ k( x, \varepsilon ) = \min \{ k \in {\rm Z}_+ \; | \; \varepsilon \geq x^k / k ! \} @]@ The mathematical form for our approximation of the exponential function is @[@ \begin{array}{rcl} {\rm exp\_eps} (x , \varepsilon ) & = & \left\{ \begin{array}{ll} \frac{1}{ {\rm exp\_eps} (-x , \varepsilon ) } & {\rm if} \; x < 0 \\ 1 + x^1 / 1 ! + \cdots + x^{k( x, \varepsilon)} / k( x, \varepsilon ) ! & {\rm otherwise} \end{array} \right. \end{array} @]@
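For example, if @(@ x = .5 @)@ and @(@ \varepsilon = .2 @)@, then @(@ x^0 / 0 ! = 1 @)@ and @(@ x^1 / 1 ! = .5 @)@ are both greater than @(@ \varepsilon @)@, while @(@ x^2 / 2 ! = .125 \leq \varepsilon @)@. Hence @(@ k(.5, .2) = 2 @)@ and @(@ {\rm exp\_eps} (.5, .2) = 1 + .5 + .125 = 1.625 @)@.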

3.2.d: include
The include command in the syntax is relative to
     cppad-yyyymmdd/introduction/exp_apx
where cppad-yyyymmdd is the distribution directory created during the beginning steps of the 2: installation of CppAD.

3.2.e: x
The argument x has prototype
     const Type &x
(see Type below). It specifies the point at which to evaluate the approximation for the exponential function.

3.2.f: epsilon
The argument epsilon has prototype
     const Type &epsilon
It specifies the accuracy with which to approximate the exponential function value; i.e., it is the value of @(@ \varepsilon @)@ in the exponential function approximation defined above.

3.2.g: y
The result y has prototype
     Type y
It is the value of the exponential function approximation defined above.

3.2.h: Type
If u and v are Type objects and i is an int:
Operation Result Type Description
Type(i) Type object with value equal to i
Type u = v Type construct u with value equal to v
u > v bool true if u is greater than v , and false otherwise
u = v Type new u (and result) is value of v
u * v Type result is value of @(@ u * v @)@
u / v Type result is value of @(@ u / v @)@
u + v Type result is value of @(@ u + v @)@
-u Type result is value of @(@ - u @)@

3.2.i: Implementation
The file 3.2.1: exp_eps.hpp contains a C++ implementation of this function.

3.2.j: Test
The file 3.2.2: exp_eps.cpp contains a test of this implementation. It returns true for success and false for failure.

3.2.k: Exercises
  1. Using the definition of @(@ k( x, \varepsilon ) @)@ above, what is the value of @(@ k(.5, 1) @)@, @(@ k(.5, .1) @)@, and @(@ k(.5, .01) @)@ ?
  2. Suppose that we make the following call to exp_eps:
     
         double x       = 1.;
         double epsilon = .01;
         double y = exp_eps(x, epsilon);
    
    What is the value assigned to k, temp, term, and sum the first time through the while loop in 3.2.1: exp_eps.hpp ?
  3. Continuing the previous exercise, what is the value assigned to k, temp, term, and sum the second time through the while loop in 3.2.1: exp_eps.hpp ?

Input File: introduction/exp_eps.hpp
3.2.1: exp_eps: Implementation
template <class Type>
Type exp_eps(const Type &x, const Type &epsilon)
{     // abs_x = |x|
     Type abs_x = x;
     if( Type(0) > x )
          abs_x = - x;
     // initialize
     int  k    = 0;          // initial order
     Type term = 1.;         // term = |x|^k / k !
     Type sum  = term;       // initial sum
     while(term > epsilon)
     {     k         = k + 1;          // order for next term
          Type temp = term * abs_x;   // term = |x|^k / (k-1)!
          term      = temp / Type(k); // term = |x|^k / k !
          sum       = sum + term;     // sum  = 1 + ... + |x|^k / k !
     }
     // In the case where x is negative, use exp(x) = 1 / exp(-|x|)
     if( Type(0) > x )
          sum = Type(1) / sum;
     return sum;
}
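For example, because the terms compared in the while loop depend only on @(@ |x| @)@, a call exp_eps(-.5, .2) computes the same sum, 1.625, as exp_eps(.5, .2) and then returns @(@ 1 / 1.625 \approx 0.6154 @)@.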

Input File: introduction/exp_eps.omh
3.2.2: exp_eps: Test of exp_eps
# include <cmath>             // for fabs function
# include "exp_eps.hpp"       // definition of exp_eps algorithm
bool exp_eps(void)
{     double x       = .5;
     double epsilon = .2;
     double check   = 1 + .5 + .125; // include 1 term less than epsilon
     bool   ok      = std::fabs( exp_eps(x, epsilon) - check ) <= 1e-10;
     return ok;
}

Input File: introduction/exp_eps.omh
3.2.3: exp_eps: Operation Sequence and Zero Order Forward Sweep

3.2.3.a: Mathematical Form
Suppose that we use the algorithm 3.2.1: exp_eps.hpp to compute exp_eps(x, epsilon) with x equal to .5 and epsilon equal to .2. For this case, the mathematical form for the operation sequence corresponding to exp_eps is @[@ f( x , \varepsilon ) = 1 + x + x^2 / 2 @]@ Note that, for these particular values of x and epsilon , this is the same as the mathematical form for 3.1.3.a: exp_2 .

3.2.3.b: Operation Sequence
We consider the 12.4.g.b: operation sequence corresponding to the algorithm 3.2.1: exp_eps.hpp with the argument x equal to .5 and epsilon equal to .2.

3.2.3.b.a: Variable
We refer to values that depend on the input variables x and epsilon as variables.

3.2.3.b.b: Parameter
We refer to values that do not depend on the input variables x or epsilon as parameters. Operations where the result is a parameter are not included in the zero order sweep below.

3.2.3.b.c: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation and variable. A Forward sweep starts with the first operation and ends with the last.

3.2.3.b.d: Code
The Code column contains the C++ source code for the corresponding atomic operation in the sequence.

3.2.3.b.e: Operation
The Operation column contains the mathematical function corresponding to each atomic operation in the sequence.

3.2.3.b.f: Zero Order
The Zero Order column contains the 3.1.3.b: zero order derivative for the corresponding variable in the operation sequence. Forward mode refers to the fact that these coefficients are computed in the same order as the original algorithm; i.e., in order of increasing index.

3.2.3.b.g: Sweep
Index    Code    Operation    Zero Order
1    abs_x = x; @(@ v_1 = x @)@ @(@ v_1^{(0)} = 0.5 @)@
2    temp = term * abs_x; @(@ v_2 = 1 * v_1 @)@ @(@ v_2^{(0)} = 0.5 @)@
3    term = temp / Type(k); @(@ v_3 = v_2 / 1 @)@ @(@ v_3^{(0)} = 0.5 @)@
4    sum = sum + term; @(@ v_4 = 1 + v_3 @)@ @(@ v_4^{(0)} = 1.5 @)@
5    temp = term * abs_x; @(@ v_5 = v_3 * v_1 @)@ @(@ v_5^{(0)} = 0.25 @)@
6    term = temp / Type(k); @(@ v_6 = v_5 / 2 @)@ @(@ v_6^{(0)} = 0.125 @)@
7    sum = sum + term; @(@ v_7 = v_4 + v_6 @)@ @(@ v_7^{(0)} = 1.625 @)@
3.2.3.c: Return Value
The return value for this case is @[@ 1.625 = v_7^{(0)} = f ( x^{(0)} , \varepsilon^{(0)} ) @]@

3.2.3.d: Comparisons
If x were negative, or if epsilon were a much smaller or much larger value, the results of the following comparisons could be different:
 
     if( Type(0) > x )
     while(term > epsilon)
This in turn would result in a different operation sequence. Thus the operation sequence above only corresponds to 3.2.1: exp_eps.hpp for values of x and epsilon within a certain range. Note that there is a neighborhood of @(@ x = 0.5 @)@ for which the comparisons would have the same result and hence the operation sequence would be the same.

3.2.3.e: Verification
The file 3.2.3.1: exp_eps_for0.cpp contains a routine that verifies the values computed above. It returns true for success and false for failure.

3.2.3.f: Exercises
  1. Suppose that @(@ x^{(0)} = .1 @)@; what is the result of a zero order forward sweep for the operation sequence above; i.e., what are the corresponding values for @(@ v_1^{(0)} , v_2^{(0)} , \ldots , v_7^{(0)} @)@ ?
  2. Create a modified version of 3.2.3.1: exp_eps_for0.cpp that verifies the values you obtained for the previous exercise.
  3. Create and run a main program that reports the result of calling the modified version of 3.2.3.1: exp_eps_for0.cpp in the previous exercise.

Input File: introduction/exp_eps.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
3.2.3.1: exp_eps: Verify Zero Order Forward Sweep
# include <cmath>                // for fabs function
bool exp_eps_for0(double *v0)    // double v0[8]
{     bool  ok = true;
     double x = .5;

     v0[1] = x;                                  // abs_x = x;
     ok  &= std::fabs( v0[1] - 0.5) < 1e-10;

     v0[2] = 1. * v0[1];                         // temp = term * abs_x;
     ok  &= std::fabs( v0[2] - 0.5) < 1e-10;

     v0[3] = v0[2] / 1.;                         // term = temp / Type(k);
     ok  &= std::fabs( v0[3] - 0.5) < 1e-10;

     v0[4] = 1. + v0[3];                         // sum = sum + term;
     ok  &= std::fabs( v0[4] - 1.5) < 1e-10;

     v0[5] = v0[3] * v0[1];                      // temp = term * abs_x;
     ok  &= std::fabs( v0[5] - 0.25) < 1e-10;

     v0[6] = v0[5] / 2.;                         // term = temp / Type(k);
     ok  &= std::fabs( v0[6] - 0.125) < 1e-10;

     v0[7] = v0[4] + v0[6];                      // sum = sum + term;
     ok  &= std::fabs( v0[7] - 1.625) < 1e-10;

     return ok;
}
bool exp_eps_for0(void)
{     double v0[8];
     return exp_eps_for0(v0);
}

Input File: introduction/exp_eps_for0.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
3.2.4: exp_eps: First Order Forward Sweep

3.2.4.a: First Order Expansion
We define @(@ x(t) @)@ and @(@ \varepsilon(t) @)@ near @(@ t = 0 @)@ by the first order expansions @[@ \begin{array}{rcl} x(t) & = & x^{(0)} + x^{(1)} * t \\ \varepsilon(t) & = & \varepsilon^{(0)} + \varepsilon^{(1)} * t \end{array} @]@ It follows that @(@ x^{(0)} @)@ and @(@ \varepsilon^{(0)} @)@ are the zero order, and @(@ x^{(1)} @)@ and @(@ \varepsilon^{(1)} @)@ the first order, derivatives of @(@ x(t) @)@ and @(@ \varepsilon(t) @)@ at @(@ t = 0 @)@.

3.2.4.b: Mathematical Form
Suppose that we use the algorithm 3.2.1: exp_eps.hpp to compute exp_eps(x, epsilon) with x equal to .5 and epsilon equal to .2. For this case, the mathematical function for the operation sequence corresponding to exp_eps is @[@ f ( x , \varepsilon ) = 1 + x + x^2 / 2 @]@ The corresponding partial derivative with respect to @(@ x @)@, and the value of the derivative, are @[@ \partial_x f ( x , \varepsilon ) = 1 + x = 1.5 @]@

3.2.4.c: Operation Sequence

3.2.4.c.a: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation. A Forward sweep starts with the first operation and ends with the last.

3.2.4.c.b: Operation
The Operation column contains the mathematical function corresponding to each atomic operation in the sequence.

3.2.4.c.c: Zero Order
The Zero Order column contains the zero order derivatives for the corresponding variable in the operation sequence (see 3.1.4.d.f: zero order sweep ).

3.2.4.c.d: Derivative
The Derivative column contains the mathematical function corresponding to the derivative with respect to @(@ t @)@, at @(@ t = 0 @)@, for each variable in the sequence.

3.2.4.c.e: First Order
The First Order column contains the first order derivatives for the corresponding variable in the operation sequence; i.e., @[@ v_j (t) = v_j^{(0)} + v_j^{(1)} t @]@ We use @(@ x^{(1)} = 1 @)@ and @(@ \varepsilon^{(1)} = 0 @)@, so that differentiation with respect to @(@ t @)@, at @(@ t = 0 @)@, is the same as partial differentiation with respect to @(@ x @)@ at @(@ x = x^{(0)} @)@.
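For reference, the Derivative column in the sweep below is obtained by differentiating each atomic operation with respect to @(@ t @)@ and setting @(@ t = 0 @)@. For example, for a product of two variables, and for division of a variable by a parameter @(@ c @)@, @[@ \begin{array}{rcl} w = u * v & \Rightarrow & w^{(1)} = u^{(1)} * v^{(0)} + u^{(0)} * v^{(1)} \\ w = u / c & \Rightarrow & w^{(1)} = u^{(1)} / c \end{array} @]@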

3.2.4.c.f: Sweep
Index    Operation    Zero Order    Derivative    First Order
1    @(@ v_1 = x @)@ 0.5 @(@ v_1^{(1)} = x^{(1)} @)@ @(@ v_1^{(1)} = 1 @)@
2    @(@ v_2 = 1 * v_1 @)@ 0.5 @(@ v_2^{(1)} = 1 * v_1^{(1)} @)@ @(@ v_2^{(1)} = 1 @)@
3    @(@ v_3 = v_2 / 1 @)@ 0.5 @(@ v_3^{(1)} = v_2^{(1)} / 1 @)@ @(@ v_3^{(1)} = 1 @)@
4    @(@ v_4 = 1 + v_3 @)@ 1.5 @(@ v_4^{(1)} = v_3^{(1)} @)@ @(@ v_4^{(1)} = 1 @)@
5    @(@ v_5 = v_3 * v_1 @)@ 0.25 @(@ v_5^{(1)} = v_3^{(1)} * v_1^{(0)} + v_3^{(0)} * v_1^{(1)} @)@ @(@ v_5^{(1)} = 1 @)@
6    @(@ v_6 = v_5 / 2 @)@ 0.125 @(@ v_6^{(1)} = v_5^{(1)} / 2 @)@ @(@ v_6^{(1)} = 0.5 @)@
7    @(@ v_7 = v_4 + v_6 @)@ 1.625 @(@ v_7^{(1)} = v_4^{(1)} + v_6^{(1)} @)@ @(@ v_7^{(1)} = 1.5 @)@
3.2.4.d: Return Value
The derivative of the return value for this case is @[@ \begin{array}{rcl} 1.5 & = & v_7^{(1)} = \left[ \D{v_7}{t} \right]_{t=0} = \left[ \D{}{t} f( x^{(0)} + x^{(1)} * t , \varepsilon^{(0)} ) \right]_{t=0} \\ & = & \partial_x f ( x^{(0)} , \varepsilon^{(0)} ) * x^{(1)} = \partial_x f ( x^{(0)} , \varepsilon^{(0)} ) \end{array} @]@ (We have used the fact that @(@ x^{(1)} = 1 @)@ and @(@ \varepsilon^{(1)} = 0 @)@.)

3.2.4.e: Verification
The file 3.2.4.1: exp_eps_for1.cpp contains a routine that verifies the values computed above. It returns true for success and false for failure.

3.2.4.f: Exercises
  1. Suppose that @(@ x = .1 @)@; what are the results of a zero and first order forward mode sweep for the operation sequence above; i.e., what are the corresponding values for @(@ v_1^{(0)}, v_2^{(0)}, \cdots , v_7^{(0)} @)@ and @(@ v_1^{(1)}, v_2^{(1)}, \cdots , v_7^{(1)} @)@ ?
  2. Create a modified version of 3.2.4.1: exp_eps_for1.cpp that verifies the derivative values from the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.2.4.1: exp_eps_for1.cpp .
  3. Suppose that @(@ x = .1 @)@ and @(@ \varepsilon = .2 @)@; what is the operation sequence corresponding to exp_eps(x, epsilon) ?

Input File: introduction/exp_eps.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
3.2.4.1: exp_eps: Verify First Order Forward Sweep
# include <cmath>                     // for fabs function
extern bool exp_eps_for0(double *v0); // computes zero order forward sweep
bool exp_eps_for1(double *v1)         // double v1[8]
{     bool ok = true;
     double v0[8];

     // set the value of v0[j] for j = 1 , ... , 7
     ok &= exp_eps_for0(v0);

     v1[1] = 1.;                                      // v1 = x
     ok    &= std::fabs( v1[1] - 1. ) <= 1e-10;

     v1[2] = 1. * v1[1];                              // v2 = 1 * v1
     ok    &= std::fabs( v1[2] - 1. ) <= 1e-10;

     v1[3] = v1[2] / 1.;                              // v3 = v2 / 1
     ok    &= std::fabs( v1[3] - 1. ) <= 1e-10;

     v1[4] = v1[3];                                   // v4 = 1 + v3
     ok    &= std::fabs( v1[4] - 1. ) <= 1e-10;

     v1[5] = v1[3] * v0[1] + v0[3] * v1[1];           // v5 = v3 * v1
     ok    &= std::fabs( v1[5] - 1. ) <= 1e-10;

     v1[6] = v1[5] / 2.;                              // v6 = v5 / 2
     ok    &= std::fabs( v1[6] - 0.5 ) <= 1e-10;

     v1[7] = v1[4] + v1[6];                           // v7 = v4 + v6
     ok    &= std::fabs( v1[7] - 1.5 ) <= 1e-10;

     return ok;
}
bool exp_eps_for1(void)
{     double v1[8];
     return exp_eps_for1(v1);
}

Input File: introduction/exp_eps_for1.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
3.2.5: exp_eps: First Order Reverse Sweep

3.2.5.a: Purpose
First order reverse mode uses the 3.2.3.b: operation sequence , and zero order forward sweep values, to compute the first order derivative of one dependent variable with respect to all the independent variables. The computations are done in reverse of the order of the computations in the original algorithm.
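In general, each step below eliminates one operation of the form @(@ v_j = g( \cdots ) @)@ and updates the partial derivatives of the remaining function by the rule @[@ \D{f_{j-1}}{v_k} = \D{f_j}{v_k} + \D{f_j}{v_j} * \D{g}{v_k} @]@ Each derivation in this section is an instance of this rule.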

3.2.5.b: Mathematical Form
Suppose that we use the algorithm 3.2.1: exp_eps.hpp to compute exp_eps(x, epsilon) with x equal to .5 and epsilon equal to .2. For this case, the mathematical function for the operation sequence corresponding to exp_eps is @[@ f ( x , \varepsilon ) = 1 + x + x^2 / 2 @]@ The corresponding partial derivatives, and the value of the derivatives, are @[@ \begin{array}{rcl} \partial_x f ( x , \varepsilon ) & = & 1 + x = 1.5 \\ \partial_\varepsilon f ( x , \varepsilon ) & = & 0 \end{array} @]@

3.2.5.c: epsilon
Since @(@ \varepsilon @)@ is an independent variable, it could be included as an argument to all of the @(@ f_j @)@ functions below. The result would be that all the partials with respect to @(@ \varepsilon @)@ would be zero; hence we drop it to simplify the presentation.

3.2.5.d: f_7
In reverse mode we choose one dependent variable and compute its derivative with respect to all the independent variables. For our example, we chose the value returned by 3.2.1: exp_eps.hpp which is @(@ v_7 @)@. We begin with the function @(@ f_7 @)@ where @(@ v_7 @)@ is both an argument and the value of the function; i.e., @[@ \begin{array}{rcl} f_7 ( v_1 , v_2 , v_3 , v_4 , v_5 , v_6 , v_7 ) & = & v_7 \\ \D{f_7}{v_7} & = & 1 \end{array} @]@ All the other partial derivatives of @(@ f_7 @)@ are zero.

3.2.5.e: Index 7: f_6
The last operation has index 7, @[@ v_7 = v_4 + v_6 @]@ We define the function @(@ f_6 ( v_1 , v_2 , v_3 , v_4 , v_5 , v_6 ) @)@ as equal to @(@ f_7 @)@ except that @(@ v_7 @)@ is eliminated using this operation; i.e. @[@ f_6 = f_7 [ v_1 , v_2 , v_3 , v_4 , v_5 , v_6 , v_7 ( v_4 , v_6 ) ] @]@ It follows that @[@ \begin{array}{rcll} \D{f_6}{v_4} & = & \D{f_7}{v_4} + \D{f_7}{v_7} * \D{v_7}{v_4} & = 1 \\ \D{f_6}{v_6} & = & \D{f_7}{v_6} + \D{f_7}{v_7} * \D{v_7}{v_6} & = 1 \end{array} @]@ All the other partial derivatives of @(@ f_6 @)@ are zero.

3.2.5.f: Index 6: f_5
The previous operation has index 6, @[@ v_6 = v_5 / 2 @]@ We define the function @(@ f_5 ( v_1 , v_2 , v_3 , v_4 , v_5 ) @)@ as equal to @(@ f_6 @)@ except that @(@ v_6 @)@ is eliminated using this operation; i.e., @[@ f_5 = f_6 [ v_1 , v_2 , v_3 , v_4 , v_5 , v_6 ( v_5 ) ] @]@ It follows that @[@ \begin{array}{rcll} \D{f_5}{v_4} & = & \D{f_6}{v_4} & = 1 \\ \D{f_5}{v_5} & = & \D{f_6}{v_5} + \D{f_6}{v_6} * \D{v_6}{v_5} & = 0.5 \end{array} @]@ All the other partial derivatives of @(@ f_5 @)@ are zero.

3.2.5.g: Index 5: f_4
The previous operation has index 5, @[@ v_5 = v_3 * v_1 @]@ We define the function @(@ f_4 ( v_1 , v_2 , v_3 , v_4 ) @)@ as equal to @(@ f_5 @)@ except that @(@ v_5 @)@ is eliminated using this operation; i.e., @[@ f_4 = f_5 [ v_1 , v_2 , v_3 , v_4 , v_5 ( v_3 , v_1 ) ] @]@ Given the information from the forward sweep, we have @(@ v_3 = 0.5 @)@ and @(@ v_1 = 0.5 @)@. It follows that @[@ \begin{array}{rcll} \D{f_4}{v_1} & = & \D{f_5}{v_1} + \D{f_5}{v_5} * \D{v_5}{v_1} & = 0.25 \\ \D{f_4}{v_2} & = & \D{f_5}{v_2} & = 0 \\ \D{f_4}{v_3} & = & \D{f_5}{v_3} + \D{f_5}{v_5} * \D{v_5}{v_3} & = 0.25 \\ \D{f_4}{v_4} & = & \D{f_5}{v_4} & = 1 \end{array} @]@

3.2.5.h: Index 4: f_3
The previous operation has index 4, @[@ v_4 = 1 + v_3 @]@ We define the function @(@ f_3 ( v_1 , v_2 , v_3 ) @)@ as equal to @(@ f_4 @)@ except that @(@ v_4 @)@ is eliminated using this operation; i.e., @[@ f_3 = f_4 [ v_1 , v_2 , v_3 , v_4 ( v_3 ) ] @]@ It follows that @[@ \begin{array}{rcll} \D{f_3}{v_1} & = & \D{f_4}{v_1} & = 0.25 \\ \D{f_3}{v_2} & = & \D{f_4}{v_2} & = 0 \\ \D{f_3}{v_3} & = & \D{f_4}{v_3} + \D{f_4}{v_4} * \D{v_4}{v_3} & = 1.25 \end{array} @]@

3.2.5.i: Index 3: f_2
The previous operation has index 3, @[@ v_3 = v_2 / 1 @]@ We define the function @(@ f_2 ( v_1 , v_2 ) @)@ as equal to @(@ f_3 @)@ except that @(@ v_3 @)@ is eliminated using this operation; i.e. @[@ f_2 = f_3 [ v_1 , v_2 , v_3 ( v_2 ) ] @]@ It follows that @[@ \begin{array}{rcll} \D{f_2}{v_1} & = & \D{f_3}{v_1} & = 0.25 \\ \D{f_2}{v_2} & = & \D{f_3}{v_2} + \D{f_3}{v_3} * \D{v_3}{v_2} & = 1.25 \end{array} @]@

3.2.5.j: Index 2: f_1
The previous operation has index 2, @[@ v_2 = 1 * v_1 @]@ We define the function @(@ f_1 ( v_1 ) @)@ as equal to @(@ f_2 @)@ except that @(@ v_2 @)@ is eliminated using this operation; i.e. @[@ f_1 = f_2 [ v_1 , v_2 ( v_1 ) ] @]@ It follows that @[@ \begin{array}{rcll} \D{f_1}{v_1} & = & \D{f_2}{v_1} + \D{f_2}{v_2} * \D{v_2}{v_1} & = 1.5 \end{array} @]@ Note that @(@ v_1 @)@ is equal to @(@ x @)@, so the derivative of exp_eps(x, epsilon) at x equal to .5 and epsilon equal to .2 is 1.5 in the x direction and zero in the epsilon direction. We also note that 3.2.4: forward mode gave the same result for the partial in the x direction.

3.2.5.k: Verification
The file 3.2.5.1: exp_eps_rev1.cpp contains a routine that verifies the values computed above. It returns true for success and false for failure. It only tests the partial derivatives of @(@ f_j @)@ that might not be equal to the corresponding partials of @(@ f_{j+1} @)@; i.e., the other partials of @(@ f_j @)@ must be equal to the corresponding partials of @(@ f_{j+1} @)@.

3.2.5.l: Exercises
  1. Consider the case where @(@ x = .1 @)@ and we first perform a zero order forward mode sweep for the operation sequence used above. What are the results of a first order reverse mode sweep (which processes the operations in reverse order); i.e., what are the corresponding values for @(@ \D{f_j}{v_k} @)@ for all @(@ j, k @)@ such that @(@ \D{f_j}{v_k} \neq 0 @)@ ?
  2. Create a modified version of 3.2.5.1: exp_eps_rev1.cpp that verifies the values you obtained for the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.2.5.1: exp_eps_rev1.cpp .

Input File: introduction/exp_eps.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
3.2.5.1: exp_eps: Verify First Order Reverse Sweep
# include <cstddef>                     // define size_t
# include <cmath>                       // for fabs function
extern bool exp_eps_for0(double *v0);   // computes zero order forward sweep
bool exp_eps_rev1(void)
{     bool ok = true;

     // set the value of v0[j] for j = 1 , ... , 7
     double v0[8];
     ok &= exp_eps_for0(v0);

     // initialize all partial derivatives to zero
     double f_v[8];
     size_t j;
     for(j = 0; j < 8; j++)
          f_v[j] = 0.;

     // set partial derivative for f7
     f_v[7] = 1.;
     ok    &= std::fabs( f_v[7] - 1. ) <= 1e-10;     // f7_v7

     // f6( v1 , v2 , v3 , v4 , v5 , v6 )
     f_v[4] += f_v[7] * 1.;
     f_v[6] += f_v[7] * 1.;
     ok     &= std::fabs( f_v[4] - 1.  ) <= 1e-10;   // f6_v4
     ok     &= std::fabs( f_v[6] - 1.  ) <= 1e-10;   // f6_v6

     // f5( v1 , v2 , v3 , v4 , v5 )
     f_v[5] += f_v[6] / 2.;
     ok     &= std::fabs( f_v[5] - 0.5 ) <= 1e-10;   // f5_v5

     // f4( v1 , v2 , v3 , v4 )
     f_v[1] += f_v[5] * v0[3];
     f_v[3] += f_v[5] * v0[1];
     ok     &= std::fabs( f_v[1] - 0.25) <= 1e-10;   // f4_v1
     ok     &= std::fabs( f_v[3] - 0.25) <= 1e-10;   // f4_v3

     // f3( v1 , v2 , v3 )
     f_v[3] += f_v[4] * 1.;
     ok     &= std::fabs( f_v[3] - 1.25) <= 1e-10;   // f3_v3

     // f2( v1 , v2 )
     f_v[2] += f_v[3] / 1.;
     ok     &= std::fabs( f_v[2] - 1.25) <= 1e-10;   // f2_v2

     // f1( v1 )
     f_v[1] += f_v[2] * 1.;
     ok     &= std::fabs( f_v[1] - 1.5 ) <= 1e-10;   // f1_v1

     return ok;
}

Input File: introduction/exp_eps_rev1.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
3.2.6: exp_eps: Second Order Forward Mode

3.2.6.a: Second Order Expansion
We define @(@ x(t) @)@ and @(@ \varepsilon(t) @)@ near @(@ t = 0 @)@ by the second order expansions @[@ \begin{array}{rcl} x(t) & = & x^{(0)} + x^{(1)} * t + x^{(2)} * t^2 / 2 \\ \varepsilon(t) & = & \varepsilon^{(0)} + \varepsilon^{(1)} * t + \varepsilon^{(2)} * t^2 / 2 \end{array} @]@ It follows that for @(@ k = 0 , 1 , 2 @)@, @[@ \begin{array}{rcl} x^{(k)} & = & \dpow{k}{t} x (0) \\ \varepsilon^{(k)} & = & \dpow{k}{t} \varepsilon (0) \end{array} @]@

3.2.6.b: Purpose
In general, a second order forward sweep is given the 3.1.4.a: first order expansion for all of the variables in an operation sequence, and the second order derivatives for the independent variables. It uses these to compute the second order derivative, and thereby obtain the second order expansion, for all the variables in the operation sequence.

3.2.6.c: Mathematical Form
Suppose that we use the algorithm 3.2.1: exp_eps.hpp to compute exp_eps(x, epsilon) with x equal to .5 and epsilon equal to .2. For this case, the mathematical function for the operation sequence corresponding to exp_eps is @[@ f ( x , \varepsilon ) = 1 + x + x^2 / 2 @]@ The corresponding second partial derivative with respect to @(@ x @)@, and the value of the derivative, are @[@ \Dpow{2}{x} f ( x , \varepsilon ) = 1. @]@

3.2.6.d: Operation Sequence

3.2.6.d.a: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation. A Forward sweep starts with the first operation and ends with the last.

3.2.6.d.b: Zero
The Zero column contains the zero order sweep results for the corresponding variable in the operation sequence (see 3.1.3.c.e: zero order sweep ).

3.2.6.d.c: Operation
The Operation column contains the first order sweep operation for this variable.

3.2.6.d.d: First
The First column contains the first order sweep results for the corresponding variable in the operation sequence (see 3.1.4.d.f: first order sweep ).

3.2.6.d.e: Derivative
The Derivative column contains the mathematical function corresponding to the second derivative with respect to @(@ t @)@, at @(@ t = 0 @)@, for each variable in the sequence.

3.2.6.d.f: Second
The Second column contains the second order derivatives for the corresponding variable in the operation sequence; i.e., the second order expansion for the i-th variable is given by @[@ v_i (t) = v_i^{(0)} + v_i^{(1)} * t + v_i^{(2)} * t^2 / 2 @]@ We use @(@ x^{(1)} = 1 @)@, @(@ x^{(2)} = 0 @)@, @(@ \varepsilon^{(1)} = 0 @)@, and @(@ \varepsilon^{(2)} = 0 @)@ so that second order differentiation with respect to @(@ t @)@, at @(@ t = 0 @)@, is the same as the second partial differentiation with respect to @(@ x @)@ at @(@ x = x^{(0)} @)@.
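For reference, the second order Derivative column in the sweep below is obtained by differentiating each atomic operation twice with respect to @(@ t @)@. For example, for a product of two variables, and for division of a variable by a parameter @(@ c @)@, @[@ \begin{array}{rcl} w = u * v & \Rightarrow & w^{(2)} = u^{(2)} * v^{(0)} + 2 * u^{(1)} * v^{(1)} + u^{(0)} * v^{(2)} \\ w = u / c & \Rightarrow & w^{(2)} = u^{(2)} / c \end{array} @]@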

3.2.6.d.g: Sweep
Index    Zero    Operation    First    Derivative    Second
1 0.5 @(@ v_1^{(1)} = x^{(1)} @)@ 1 @(@ v_1^{(2)} = x^{(2)} @)@ 0
2 0.5 @(@ v_2^{(1)} = 1 * v_1^{(1)} @)@ 1 @(@ v_2^{(2)} = 1 * v_1^{(2)} @)@ 0
3 0.5 @(@ v_3^{(1)} = v_2^{(1)} / 1 @)@ 1 @(@ v_3^{(2)} = v_2^{(2)} / 1 @)@ 0
4 1.5 @(@ v_4^{(1)} = v_3^{(1)} @)@ 1 @(@ v_4^{(2)} = v_3^{(2)} @)@ 0
5 0.25 @(@ v_5^{(1)} = v_3^{(1)} * v_1^{(0)} + v_3^{(0)} * v_1^{(1)} @)@ 1 @(@ v_5^{(2)} = v_3^{(2)} * v_1^{(0)} + 2 * v_3^{(1)} * v_1^{(1)} + v_3^{(0)} * v_1^{(2)} @)@ 2
6 0.125 @(@ v_6^{(1)} = v_5^{(1)} / 2 @)@ 0.5 @(@ v_6^{(2)} = v_5^{(2)} / 2 @)@ 1
7 1.625 @(@ v_7^{(1)} = v_4^{(1)} + v_6^{(1)} @)@ 1.5 @(@ v_7^{(2)} = v_4^{(2)} + v_6^{(2)} @)@ 1
3.2.6.e: Return Value
The second derivative of the return value for this case is @[@ \begin{array}{rcl} 1 & = & v_7^{(2)} = \left[ \Dpow{2}{t} v_7 \right]_{t=0} = \left[ \Dpow{2}{t} f( x^{(0)} + x^{(1)} * t , \varepsilon^{(0)} ) \right]_{t=0} \\ & = & x^{(1)} * \Dpow{2}{x} f ( x^{(0)} , \varepsilon^{(0)} ) * x^{(1)} = \Dpow{2}{x} f ( x^{(0)} , \varepsilon^{(0)} ) \end{array} @]@ (We have used the fact that @(@ x^{(1)} = 1 @)@, @(@ x^{(2)} = 0 @)@, @(@ \varepsilon^{(1)} = 0 @)@, and @(@ \varepsilon^{(2)} = 0 @)@.)

3.2.6.f: Verification
The file 3.2.6.1: exp_eps_for2.cpp contains a routine which verifies the values computed above. It returns true for success and false for failure.

3.2.6.g: Exercises
  1. Which statement in the routine defined by 3.2.6.1: exp_eps_for2.cpp uses the values that are calculated by the routine defined by 3.2.4.1: exp_eps_for1.cpp ?
  2. Suppose that @(@ x = .1 @)@; what are the results of a zero, first, and second order forward sweep for the operation sequence above; i.e., what are the corresponding values for @(@ v_i^{(k)} @)@ for @(@ i = 1, \ldots , 7 @)@ and @(@ k = 0, 1, 2 @)@ ?
  3. Create a modified version of 3.2.6.1: exp_eps_for2.cpp that verifies the derivative values from the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.2.6.1: exp_eps_for2.cpp .

Input File: introduction/exp_eps.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
3.2.6.1: exp_eps: Verify Second Order Forward Sweep
# include <cmath>                     // for fabs function
extern bool exp_eps_for0(double *v0); // computes zero order forward sweep
extern bool exp_eps_for1(double *v1); // computes first order forward sweep
bool exp_eps_for2(void)
{     bool ok = true;
     double v0[8], v1[8], v2[8];

     // set the value of v0[j], v1[j] for j = 1 , ... , 7
     ok &= exp_eps_for0(v0);
     ok &= exp_eps_for1(v1);

     v2[1] = 0.;                                      // v1 = x
     ok    &= std::fabs( v2[1] - 0. ) <= 1e-10;

     v2[2] = 1. * v2[1];                              // v2 = 1 * v1
     ok    &= std::fabs( v2[2] - 0. ) <= 1e-10;

     v2[3] = v2[2] / 1.;                              // v3 = v2 / 1
     ok    &= std::fabs( v2[3] - 0. ) <= 1e-10;

     v2[4] = v2[3];                                   // v4 = 1 + v3
     ok    &= std::fabs( v2[4] - 0. ) <= 1e-10;

     v2[5] = v2[3] * v0[1] + 2. * v1[3] * v1[1]       // v5 = v3 * v1
           + v0[3] * v2[1];
     ok    &= std::fabs( v2[5] - 2. ) <= 1e-10;

     v2[6] = v2[5] / 2.;                              // v6 = v5 / 2
     ok    &= std::fabs( v2[6] - 1. ) <= 1e-10;

     v2[7] = v2[4] + v2[6];                           // v7 = v4 + v6
     ok    &= std::fabs( v2[7] - 1. ) <= 1e-10;

     return ok;
}

Input File: introduction/exp_eps_for2.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
3.2.7: exp_eps: Second Order Reverse Sweep

3.2.7.a: Purpose
In general, a second order reverse sweep is given the 3.2.4.a: first order expansion for all of the variables in an operation sequence. Given a choice of a particular variable, it computes the derivative of that variable's first order expansion coefficient with respect to all of the independent variables. In the example below the chosen coefficient is @(@ v_7^{(1)} @)@, which (because @(@ x^{(1)} = 1 @)@) equals @(@ \partial_x f ( x , \varepsilon ) @)@; its derivative with respect to @(@ x @)@ is therefore the second partial derivative @(@ \Dpow{2}{x} f ( x , \varepsilon ) @)@.

3.2.7.b: Mathematical Form
Suppose that we use the algorithm 3.2.1: exp_eps.hpp to compute exp_eps(x, epsilon) with x equal to .5 and epsilon equal to .2. For this case, the mathematical function for the operation sequence corresponding to exp_eps is @[@ f ( x , \varepsilon ) = 1 + x + x^2 / 2 @]@ The corresponding derivatives of the partial derivative with respect to @(@ x @)@ are @[@ \begin{array}{rcl} \Dpow{2}{x} f ( x , \varepsilon ) & = & 1 \\ \partial_\varepsilon \partial_x f ( x , \varepsilon ) & = & 0 \end{array} @]@

3.2.7.c: epsilon
Since @(@ \varepsilon @)@ is an independent variable, it could be included as an argument to all of the @(@ f_j @)@ functions below. The result would be that all the partials with respect to @(@ \varepsilon @)@ would be zero; hence we drop it to simplify the presentation.

3.2.7.d: f_7
In reverse mode we choose one dependent variable and compute its derivative with respect to all the independent variables. For our example, we chose the value returned by 3.2.1: exp_eps.hpp which is @(@ v_7 @)@. We begin with the function @(@ f_7 @)@ where @(@ v_7 @)@ is both an argument and the value of the function; i.e., @[@ \begin{array}{rcl} f_7 \left( v_1^{(0)} , v_1^{(1)} , \ldots , v_7^{(0)} , v_7^{(1)} \right) & = & v_7^{(1)} \\ \D{f_7}{v_7^{(1)}} & = & 1 \end{array} @]@ All the other partial derivatives of @(@ f_7 @)@ are zero.

3.2.7.e: Index 7: f_6
The last operation has index 7, @[@ \begin{array}{rcl} v_7^{(0)} & = & v_4^{(0)} + v_6^{(0)} \\ v_7^{(1)} & = & v_4^{(1)} + v_6^{(1)} \end{array} @]@ We define the function @(@ f_6 \left( v_1^{(0)} , \ldots , v_6^{(1)} \right) @)@ as equal to @(@ f_7 @)@ except that @(@ v_7^{(0)} @)@ and @(@ v_7^{(1)} @)@ are eliminated using this operation; i.e. @[@ f_6 = f_7 \left[ v_1^{(0)} , \ldots , v_6^{(1)} , v_7^{(0)} \left( v_4^{(0)} , v_6^{(0)} \right) , v_7^{(1)} \left( v_4^{(1)} , v_6^{(1)} \right) \right] @]@ It follows that @[@ \begin{array}{rcll} \D{f_6}{v_4^{(1)}} & = & \D{f_7}{v_4^{(1)}} + \D{f_7}{v_7^{(1)}} * \D{v_7^{(1)}}{v_4^{(1)}} & = 1 \\ \D{f_6}{v_6^{(1)}} & = & \D{f_7}{v_6^{(1)}} + \D{f_7}{v_7^{(1)}} * \D{v_7^{(1)}}{v_6^{(1)}} & = 1 \end{array} @]@ All the other partial derivatives of @(@ f_6 @)@ are zero.

3.2.7.f: Index 6: f_5
The previous operation has index 6, @[@ \begin{array}{rcl} v_6^{(0)} & = & v_5^{(0)} / 2 \\ v_6^{(1)} & = & v_5^{(1)} / 2 \end{array} @]@ We define the function @(@ f_5 \left( v_1^{(0)} , \ldots , v_5^{(1)} \right) @)@ as equal to @(@ f_6 @)@ except that @(@ v_6^{(0)} @)@ and @(@ v_6^{(1)} @)@ are eliminated using this operation; i.e. @[@ f_5 = f_6 \left[ v_1^{(0)} , \ldots , v_5^{(1)} , v_6^{(0)} \left( v_5^{(0)} \right) , v_6^{(1)} \left( v_5^{(1)} \right) \right] @]@ It follows that @[@ \begin{array}{rcll} \D{f_5}{v_4^{(1)}} & = & \D{f_6}{v_4^{(1)}} & = 1 \\ \D{f_5}{v_5^{(1)}} & = & \D{f_6}{v_5^{(1)}} + \D{f_6}{v_6^{(1)}} * \D{v_6^{(1)}}{v_5^{(1)}} & = 0.5 \end{array} @]@ All the other partial derivatives of @(@ f_5 @)@ are zero.

3.2.7.g: Index 5: f_4
The previous operation has index 5, @[@ \begin{array}{rcl} v_5^{(0)} & = & v_3^{(0)} * v_1^{(0)} \\ v_5^{(1)} & = & v_3^{(1)} * v_1^{(0)} + v_3^{(0)} * v_1^{(1)} \end{array} @]@ We define the function @(@ f_4 \left( v_1^{(0)} , \ldots , v_4^{(1)} \right) @)@ as equal to @(@ f_5 @)@ except that @(@ v_5^{(0)} @)@ and @(@ v_5^{(1)} @)@ are eliminated using this operation; i.e. @[@ f_4 = f_5 \left[ v_1^{(0)} , \ldots , v_4^{(1)} , v_5^{(0)} \left( v_1^{(0)}, v_3^{(0)} \right) , v_5^{(1)} \left( v_1^{(0)}, v_1^{(1)}, v_3^{(0)} , v_3^{(1)} \right) \right] @]@ Using the values from the forward sweep, @(@ v_1^{(0)} = 0.5 @)@, @(@ v_3^{(0)} = 0.5 @)@, @(@ v_1^{(1)} = 1 @)@, and @(@ v_3^{(1)} = 1 @)@, together with the fact that the partial of @(@ f_5 @)@ with respect to @(@ v_5^{(0)} @)@ is zero, it follows that @[@ \begin{array}{rcll} \D{f_4}{v_1^{(0)}} & = & \D{f_5}{v_1^{(0)}} + \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_1^{(0)}} & = 0.5 \\ \D{f_4}{v_1^{(1)}} & = & \D{f_5}{v_1^{(1)}} + \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_1^{(1)}} & = 0.25 \\ \D{f_4}{v_3^{(0)}} & = & \D{f_5}{v_3^{(0)}} + \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_3^{(0)}} & = 0.5 \\ \D{f_4}{v_3^{(1)}} & = & \D{f_5}{v_3^{(1)}} + \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_3^{(1)}} & = 0.25 \\ \D{f_4}{v_4^{(1)}} & = & \D{f_5}{v_4^{(1)}} & = 1 \end{array} @]@ All the other partial derivatives of @(@ f_4 @)@ are zero.

3.2.7.h: Index 4: f_3
The previous operation has index 4, @[@ \begin{array}{rcl} v_4^{(0)} & = & 1 + v_3^{(0)} \\ v_4^{(1)} & = & v_3^{(1)} \end{array} @]@ We define the function @(@ f_3 \left( v_1^{(0)} , \ldots , v_3^{(1)} \right) @)@ as equal to @(@ f_4 @)@ except that @(@ v_4^{(0)} @)@ and @(@ v_4^{(1)} @)@ are eliminated using this operation; i.e. @[@ f_3 = f_4 \left[ v_1^{(0)} , \ldots , v_3^{(1)} , v_4^{(0)} \left( v_3^{(0)} \right) , v_4^{(1)} \left( v_3^{(1)} \right) \right] @]@ It follows that @[@ \begin{array}{rcll} \D{f_3}{v_1^{(0)}} & = & \D{f_4}{v_1^{(0)}} & = 0.5 \\ \D{f_3}{v_1^{(1)}} & = & \D{f_4}{v_1^{(1)}} & = 0.25 \\ \D{f_3}{v_2^{(0)}} & = & \D{f_4}{v_2^{(0)}} & = 0 \\ \D{f_3}{v_2^{(1)}} & = & \D{f_4}{v_2^{(1)}} & = 0 \\ \D{f_3}{v_3^{(0)}} & = & \D{f_4}{v_3^{(0)}} + \D{f_4}{v_4^{(0)}} * \D{v_4^{(0)}}{v_3^{(0)}} & = 0.5 \\ \D{f_3}{v_3^{(1)}} & = & \D{f_4}{v_3^{(1)}} + \D{f_4}{v_4^{(1)}} * \D{v_4^{(1)}}{v_3^{(1)}} & = 1.25 \end{array} @]@

3.2.7.i: Index 3: f_2
The previous operation has index 3, @[@ \begin{array}{rcl} v_3^{(0)} & = & v_2^{(0)} / 1 \\ v_3^{(1)} & = & v_2^{(1)} / 1 \end{array} @]@ We define the function @(@ f_2 \left( v_1^{(0)} , \ldots , v_2^{(1)} \right) @)@ as equal to @(@ f_3 @)@ except that @(@ v_3^{(0)} @)@ and @(@ v_3^{(1)} @)@ are eliminated using this operation; i.e. @[@ f_2 = f_3 \left[ v_1^{(0)} , \ldots , v_2^{(1)} , v_3^{(0)} \left( v_2^{(0)} \right) , v_3^{(1)} \left( v_2^{(1)} \right) \right] @]@ It follows that @[@ \begin{array}{rcll} \D{f_2}{v_1^{(0)}} & = & \D{f_3}{v_1^{(0)}} & = 0.5 \\ \D{f_2}{v_1^{(1)}} & = & \D{f_3}{v_1^{(1)}} & = 0.25 \\ \D{f_2}{v_2^{(0)}} & = & \D{f_3}{v_2^{(0)}} + \D{f_3}{v_3^{(0)}} * \D{v_3^{(0)}}{v_2^{(0)}} & = 0.5 \\ \D{f_2}{v_2^{(1)}} & = & \D{f_3}{v_2^{(1)}} + \D{f_3}{v_3^{(1)}} * \D{v_3^{(1)}}{v_2^{(1)}} & = 1.25 \end{array} @]@

3.2.7.j: Index 2: f_1
The previous operation has index 2, @[@ \begin{array}{rcl} v_2^{(0)} & = & 1 * v_1^{(0)} \\ v_2^{(1)} & = & 1 * v_1^{(1)} \end{array} @]@ We define the function @(@ f_1 \left( v_1^{(0)} , v_1^{(1)} \right) @)@ as equal to @(@ f_2 @)@ except that @(@ v_2^{(0)} @)@ and @(@ v_2^{(1)} @)@ are eliminated using this operation; i.e. @[@ f_1 = f_2 \left[ v_1^{(0)} , v_1^{(1)} , v_2^{(0)} \left( v_1^{(0)} \right) , v_2^{(1)} \left( v_1^{(1)} \right) \right] @]@ It follows that @[@ \begin{array}{rcll} \D{f_1}{v_1^{(0)}} & = & \D{f_2}{v_1^{(0)}} + \D{f_2}{v_2^{(0)}} * \D{v_2^{(0)}}{v_1^{(0)}} & = 1 \\ \D{f_1}{v_1^{(1)}} & = & \D{f_2}{v_1^{(1)}} + \D{f_2}{v_2^{(1)}} * \D{v_2^{(1)}}{v_1^{(1)}} & = 1.5 \end{array} @]@ Note that @(@ v_1 @)@ is equal to @(@ x @)@, so the second partial derivative of exp_eps(x, epsilon) at x equal to .5 and epsilon equal to .2 is @[@ \Dpow{2}{x} v_7^{(0)} = \D{v_7^{(1)}}{x} = \D{f_1}{v_1^{(0)}} = 1 @]@ There is a theorem about algorithmic differentiation that explains why the other partial of @(@ f_1 @)@ is equal to the first partial of exp_eps(x, epsilon) with respect to @(@ x @)@.

3.2.7.k: Verification
The file 3.2.7.1: exp_eps_rev2.cpp contains a routine that verifies the values computed above. It returns true for success and false for failure. It only tests the partial derivatives of @(@ f_j @)@ that might not be equal to the corresponding partials of @(@ f_{j+1} @)@; i.e., the other partials of @(@ f_j @)@ must be equal to the corresponding partials of @(@ f_{j+1} @)@.

3.2.7.l: Exercises
  1. Consider the case where @(@ x = .1 @)@ and we first perform a zero and first order forward mode sweep for the operation sequence used above. What are the results of a second order reverse mode sweep (which processes the operations in reverse order); i.e., what are the corresponding values for @(@ \D{f_j}{v_k} @)@ for all @(@ j, k @)@ such that @(@ \D{f_j}{v_k} \neq 0 @)@ ?
  2. Create a modified version of 3.2.7.1: exp_eps_rev2.cpp that verifies the values you obtained for the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.2.7.1: exp_eps_rev2.cpp .

Input File: introduction/exp_eps.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
3.2.7.1: exp_eps: Verify Second Order Reverse Sweep
# include <cstddef>                     // define size_t
# include <cmath>                       // for fabs function
extern bool exp_eps_for0(double *v0);   // computes zero order forward sweep
extern bool exp_eps_for1(double *v1);   // computes first order forward sweep
bool exp_eps_rev2(void)
{     bool ok = true;

     // set the value of v0[j], v1[j] for j = 1 , ... , 7
     double v0[8], v1[8];
     ok &= exp_eps_for0(v0);
     ok &= exp_eps_for1(v1);

     // initial all partial derivatives as zero
     double f_v0[8], f_v1[8];
     size_t j;
     for(j = 0; j < 8; j++)
     {     f_v0[j] = 0.;
          f_v1[j] = 0.;
     }

     // set partial derivative for f_7
     f_v1[7] = 1.;
     ok &= std::fabs( f_v1[7] - 1.  ) <= 1e-10; // partial f_7 w.r.t. v_7^1

     // f_6 = f_7( v_1^0 , ... , v_6^1 , v_4^0 + v_6^0 , v_4^1 + v_6^1 )
     f_v0[4] += f_v0[7];
     f_v0[6] += f_v0[7];
     f_v1[4] += f_v1[7];
     f_v1[6] += f_v1[7];
     ok &= std::fabs( f_v0[4] - 0.  ) <= 1e-10; // partial f_6 w.r.t. v_4^0
     ok &= std::fabs( f_v0[6] - 0.  ) <= 1e-10; // partial f_6 w.r.t. v_6^0
     ok &= std::fabs( f_v1[4] - 1.  ) <= 1e-10; // partial f_6 w.r.t. v_4^1
     ok &= std::fabs( f_v1[6] - 1.  ) <= 1e-10; // partial f_6 w.r.t. v_6^1

     // f_5 = f_6( v_1^0 , ... , v_5^1 , v_5^0 / 2 , v_5^1 / 2 )
     f_v0[5] += f_v0[6] / 2.;
     f_v1[5] += f_v1[6] / 2.;
     ok &= std::fabs( f_v0[5] - 0.  ) <= 1e-10; // partial f_5 w.r.t. v_5^0
     ok &= std::fabs( f_v1[5] - 0.5 ) <= 1e-10; // partial f_5 w.r.t. v_5^1

     // f_4 = f_5( v_1^0 , ... , v_4^1 , v_3^0 * v_1^0 ,
     //            v_3^1 * v_1^0 + v_3^0 * v_1^1 )
     f_v0[1] += f_v0[5] * v0[3] + f_v1[5] * v1[3];
     f_v0[3] += f_v0[5] * v0[1] + f_v1[5] * v1[1];
     f_v1[1] += f_v1[5] * v0[3];
     f_v1[3] += f_v1[5] * v0[1];
     ok &= std::fabs( f_v0[1] - 0.5  ) <= 1e-10; // partial f_4 w.r.t. v_1^0
     ok &= std::fabs( f_v0[3] - 0.5  ) <= 1e-10; // partial f_4 w.r.t. v_3^0
     ok &= std::fabs( f_v1[1] - 0.25 ) <= 1e-10; // partial f_4 w.r.t. v_1^1
     ok &= std::fabs( f_v1[3] - 0.25 ) <= 1e-10; // partial f_4 w.r.t. v_3^1

     // f_3 = f_4(  v_1^0 , ... , v_3^1 , 1 + v_3^0 , v_3^1 )
     f_v0[3] += f_v0[4];
     f_v1[3] += f_v1[4];
     ok &= std::fabs( f_v0[3] - 0.5 ) <= 1e-10;  // partial f_3 w.r.t. v_3^0
     ok &= std::fabs( f_v1[3] - 1.25) <= 1e-10;  // partial f_3 w.r.t. v_3^1

     // f_2 = f_3( v_1^0 , ... , v_2^1 , v_2^0 / 1 , v_2^1 / 1 )
     f_v0[2] += f_v0[3];
     f_v1[2] += f_v1[3];
     ok &= std::fabs( f_v0[2] - 0.5 ) <= 1e-10;  // partial f_2 w.r.t. v_2^0
     ok &= std::fabs( f_v1[2] - 1.25) <= 1e-10;  // partial f_2 w.r.t. v_2^1

     // f_1 = f_2( v_1^0 , v_1^1 , 1 * v_1^0 , 1 * v_1^1 )
     f_v0[1] += f_v0[2];
     f_v1[1] += f_v1[2];
     ok &= std::fabs( f_v0[1] - 1.  ) <= 1e-10;  // partial f_1 w.r.t. v_1^0
     ok &= std::fabs( f_v1[1] - 1.5 ) <= 1e-10;  // partial f_1 w.r.t. v_1^1

     return ok;
}

Input File: introduction/exp_eps_rev2.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
3.2.8: exp_eps: CppAD Forward and Reverse Sweeps

3.2.8.a: Purpose
Use CppAD forward and reverse modes to compute the partial derivative with respect to @(@ x @)@, at the point @(@ x = .5 @)@ and @(@ \varepsilon = .2 @)@, of the function exp_eps(x, epsilon) as defined by the 3.2.1: exp_eps.hpp include file.

3.2.8.b: Exercises
  1. Create and test a modified version of the routine below that computes the same order derivatives with respect to @(@ x @)@, at the point @(@ x = .1 @)@ and @(@ \varepsilon = .2 @)@, of the function exp_eps(x, epsilon) .
  2. Create and test a modified version of the routine below that computes the partial derivative with respect to @(@ x @)@, at the point @(@ x = .1 @)@ and @(@ \varepsilon = .2 @)@, of the function corresponding to the operation sequence for @(@ x = .5 @)@ and @(@ \varepsilon = .2 @)@. Hint: you could define a vector u with two components and use f.Forward(0, u) to run zero order forward mode at a point different from the point where the operation sequence corresponding to f was recorded.
# include <cppad/cppad.hpp>  // http://www.coin-or.org/CppAD/
# include "exp_eps.hpp"      // our example exponential function approximation
bool exp_eps_cppad(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::vector;    // can use any simple vector template class
     using CppAD::NearEqual; // checks if values are nearly equal

     // domain space vector
     size_t n = 2; // dimension of the domain space
     vector< AD<double> > U(n);
     U[0] = .5;    // value of x for this operation sequence
     U[1] = .2;    // value of e for this operation sequence

     // declare independent variables and start recording operation sequence
     CppAD::Independent(U);

     // evaluate our exponential approximation
     AD<double> x       = U[0];
     AD<double> epsilon = U[1];
     AD<double> apx = exp_eps(x, epsilon);

     // range space vector
     size_t m = 1;  // dimension of the range space
     vector< AD<double> > Y(m);
     Y[0] = apx;    // variable that represents only range space component

     // Create f: U -> Y corresponding to this operation sequence
     // and stop recording. This also executes a zero order forward
     // mode sweep using values in U for x and e.
     CppAD::ADFun<double> f(U, Y);

     // first order forward mode sweep that computes partial w.r.t x
     vector<double> du(n);      // differential in domain space
     vector<double> dy(m);      // differential in range space
     du[0] = 1.;                // x direction in domain space
     du[1] = 0.;
     dy    = f.Forward(1, du);  // partial w.r.t. x
     double check = 1.5;
     ok   &= NearEqual(dy[0], check, 1e-10, 1e-10);

     // first order reverse mode sweep that computes the derivative
     vector<double>  w(m);     // weights for components of the range
     vector<double> dw(n);     // derivative of the weighted function
     w[0] = 1.;                // there is only one weight
     dw   = f.Reverse(1, w);   // derivative of w[0] * exp_eps(x, epsilon)
     check = 1.5;              // partial w.r.t. x
     ok   &= NearEqual(dw[0], check, 1e-10, 1e-10);
     check = 0.;               // partial w.r.t. epsilon
     ok   &= NearEqual(dw[1], check, 1e-10, 1e-10);

     // second order forward sweep that computes
     // second partial of exp_eps(x, epsilon) w.r.t. x
     vector<double> x2(n);     // second order Taylor coefficients
     vector<double> y2(m);
     x2[0] = 0.;               // second order Taylor coefficient for x
     x2[1] = 0.;               // second order Taylor coefficient for epsilon
     y2    = f.Forward(2, x2);
     check = 0.5 * 1.;         // Taylor coef is 1/2 second derivative
     ok   &= NearEqual(y2[0], check, 1e-10, 1e-10);

     // second order reverse sweep that computes
     // derivative of partial of exp_eps(x, epsilon) w.r.t. x
     dw.resize(2 * n);         // space for first and second derivative
     dw    = f.Reverse(2, w);
     check = 1.;               // result should be second derivative
     ok   &= NearEqual(dw[0*2+1], check, 1e-10, 1e-10);

     return ok;
}

Input File: introduction/exp_eps_cppad.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
3.3: Correctness Tests For Exponential Approximation in Introduction

3.3.a: Running Tests
To build this program and run its correctness tests see 2.3: cmake_check .

3.3.b: Source


// system include files used for I/O
# include <iostream>

// memory allocation routine
# include <cppad/utility/thread_alloc.hpp>

// test runner
# include <cppad/utility/test_boolofvoid.hpp>

// external compiled tests
extern bool exp_2(void);
extern bool exp_2_cppad(void);
extern bool exp_2_for1(void);
extern bool exp_2_for2(void);
extern bool exp_2_rev1(void);
extern bool exp_2_rev2(void);
extern bool exp_2_for0(void);
extern bool exp_eps(void);
extern bool exp_eps_cppad(void);
extern bool exp_eps_for1(void);
extern bool exp_eps_for2(void);
extern bool exp_eps_for0(void);
extern bool exp_eps_rev1(void);
extern bool exp_eps_rev2(void);

// main program that runs all the tests
int main(void)
{     std::string group = "introduction";
     size_t      width = 20;
     CppAD::test_boolofvoid Run(group, width);

     // This comment is used by OneTest

     // external compiled tests
     Run( exp_2,           "exp_2"          );
     Run( exp_2_cppad,     "exp_2_cppad"    );
     Run( exp_2_for0,      "exp_2_for0"     );
     Run( exp_2_for1,      "exp_2_for1"     );
     Run( exp_2_for2,      "exp_2_for2"     );
     Run( exp_2_rev1,      "exp_2_rev1"     );
     Run( exp_2_rev2,      "exp_2_rev2"     );
     Run( exp_eps,         "exp_eps"        );
     Run( exp_eps_cppad,   "exp_eps_cppad"  );
     Run( exp_eps_for0,    "exp_eps_for0"   );
     Run( exp_eps_for1,    "exp_eps_for1"   );
     Run( exp_eps_for2,    "exp_eps_for2"   );
     Run( exp_eps_rev1,    "exp_eps_rev1"   );
     Run( exp_eps_rev2,    "exp_eps_rev2"   );
     //
     // check for memory leak
     bool memory_ok = CppAD::thread_alloc::free_all();
     // print summary at end
     bool ok = Run.summary(memory_ok);
     //
     return static_cast<int>( ! ok );
}

Input File: introduction/introduction.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4: AD Objects

4.a: Purpose
The sections listed below describe the operations that are available to 12.4.b: AD of Base objects. These objects are used to 12.4.k: tape an AD of Base 12.4.g.b: operation sequence . This operation sequence can be transferred to an 5: ADFun object where it can be used to evaluate the corresponding function and derivative values.
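As a minimal sketch of this tape-and-evaluate workflow (the recorded function @(@ y = x * x @)@ and the routine name here are illustrative only):
# include <cppad/cppad.hpp>

bool tape_and_eval(void)
{     bool ok = true;
     using CppAD::AD;

     // independent variable: start taping an AD of double operation sequence
     CPPAD_TESTVECTOR(AD<double>) x(1);
     x[0] = 2.;
     CppAD::Independent(x);

     // operations recorded on the tape
     CPPAD_TESTVECTOR(AD<double>) y(1);
     y[0] = x[0] * x[0];

     // transfer the operation sequence to an ADFun object
     CppAD::ADFun<double> f(x, y);

     // evaluate the derivative dy/dx = 2 * x at x = 2
     CPPAD_TESTVECTOR(double) dx(1), dy(1);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= dy[0] == 4.;

     return ok;
}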

4.b: Base Type Requirements
The Base requirements are provided by the CppAD package for the following base types: float, double, std::complex<float>, std::complex<double>. Otherwise, see 4.7: base_require .

4.c: Contents
ad_ctor: 4.1 AD Constructors
ad_assign: 4.2 AD Assignment Operator
Convert: 4.3 Conversion and I/O of AD Objects
ADValued: 4.4 AD Valued Operations and Functions
BoolValued: 4.5 Bool Valued Operations and Functions with AD Arguments
VecAD: 4.6 AD Vectors that Record Index Operations
base_require: 4.7 AD<Base> Requirements for a CppAD Base Type

Input File: cppad/core/user_ad.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.1: AD Constructors

4.1.a: Syntax
AD<Base> y()
AD<Base> y(x)

4.1.b: Purpose
creates a new AD<Base> object y and, in the second case, initializes its value equal to x .

4.1.c: x

4.1.c.a: implicit
There is an implicit constructor where x has one of the following prototypes:
     const Base&          x
     const VecAD<Base>&   x

4.1.c.b: explicit
There is an explicit constructor where x has prototype
     const Type&   x
for any type that has an explicit constructor of the form Base(x) .

4.1.d: y
The target y has prototype
     AD<Base> y

4.1.e: Example
The file 4.1.1: ad_ctor.cpp contains examples and tests of these operations. It returns true if it succeeds and false otherwise.
Input File: cppad/core/ad_ctor.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.1.1: AD Constructors: Example and Test

# include <cppad/cppad.hpp>

bool ad_ctor(void)
{     bool ok = true;   // initialize test result flag
     using CppAD::AD;  // so can use AD in place of CppAD::AD

     // default constructor
     AD<double> a;
     a = 0.;
     ok &= a == 0.;

     // constructor from base type
     AD<double> b(1.);
     ok &= b == 1.;

     // constructor from another type that converts to the base type
     AD<double> c(2);
     ok &= c == 2.;

     // constructor from AD<Base>
     AD<double> d(c);
     ok &= d == 2.;

     // constructor from a VecAD<Base> element
     CppAD::VecAD<double> v(1);
     v[0] = 3.;
     AD<double> e( v[0] );
     ok &= e == 3.;

     return ok;
}

Input File: example/general/ad_ctor.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.2: AD Assignment Operator

4.2.a: Syntax
y = x

4.2.b: Purpose
Assigns the value in x to the object y .

4.2.c: x
The argument x has prototype
     const Type &x
where Type is VecAD<Base>::reference , AD<Base> , Base , or any type that has an implicit constructor of the form Base(x) .

4.2.d: y
The target y has prototype
     AD<Base> y

4.2.e: Example
The file 4.2.1: ad_assign.cpp contains examples and tests of these operations. It returns true if it succeeds and false otherwise.
Input File: cppad/core/ad_assign.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.2.1: AD Assignment: Example and Test

# include <cppad/cppad.hpp>

bool ad_assign(void)
{     bool ok = true;   // initialize test result flag
     using CppAD::AD;  // so can use AD in place of CppAD::AD

     // assignment to base value
     AD<double> a;
     a = 1.;
     ok &= a == 1.;

     // assignment to a value that converts to the base type
     a = 2;
     ok &= a == 2.;

     // assignment to an AD<Base>
     AD<double> b(3.);
     a = b;
     ok &= a == 3.;

     // assignment to an VecAD<Base> element
     CppAD::VecAD<double> v(1);
     v[0] = 4.;
     a = v[0];
     ok &= a == 4.;

     return ok;
}

Input File: example/general/ad_assign.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.3: Conversion and I/O of AD Objects
Value: 4.3.1 Convert From an AD Type to its Base Type
Integer: 4.3.2 Convert From AD to Integer
ad_to_string: 4.3.3 Convert An AD or Base Type to String
ad_input: 4.3.4 AD Input Stream Operator
ad_output: 4.3.5 AD Output Stream Operator
PrintFor: 4.3.6 Printing AD Values During Forward Mode
Var2Par: 4.3.7 Convert an AD Variable to a Parameter

Input File: cppad/core/convert.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.3.1: Convert From an AD Type to its Base Type

4.3.1.a: Syntax
b = Value(x)

4.3.1.b: See Also
4.3.7: var2par

4.3.1.c: Purpose
Converts from an AD type to the corresponding 12.4.e: base type .

4.3.1.d: x
The argument x has prototype
     const AD<Base> &x

4.3.1.e: b
The return value b has prototype
     Base b

4.3.1.f: Operation Sequence
The result of this operation is not an 12.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 12.4.g.b: operation sequence .

4.3.1.g: Restriction
If the argument x is a 12.4.m: variable , its dependency information would not be included in the Value result (see above). For this reason, the argument x must be a 12.4.h: parameter ; i.e., it cannot depend on the current 12.4.k.c: independent variables .
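One way to obtain the value of a variable despite this restriction is to first convert it to a parameter using 4.3.7: Var2Par . The following minimal sketch illustrates this (the routine is illustrative and not part of the distributed examples):
# include <cppad/cppad.hpp>

bool value_var2par(void)
{     bool ok = true;
     using CppAD::AD;

     // declare independent variables and start tape recording
     CPPAD_TESTVECTOR(AD<double>) x(1);
     x[0] = 2.;
     CppAD::Independent(x);

     // y[0] is a variable, so Value(y[0]) cannot be used here
     CPPAD_TESTVECTOR(AD<double>) y(1);
     y[0] = 3. * x[0];

     // Var2Par returns a parameter with the same value as its argument
     ok &= CppAD::Value( CppAD::Var2Par(y[0]) ) == 6.;

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     return ok;
}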

4.3.1.h: Example
The file 4.3.1.1: value.cpp contains an example and test of this operation.
Input File: cppad/core/value.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.3.1.1: Convert From AD to its Base Type: Example and Test

# include <cppad/cppad.hpp>

bool Value(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::Value;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0] = 3.;
     x[1] = 4.;

     // check value before recording
     ok &= (Value(x[0]) == 3.);
     ok &= (Value(x[1]) == 4.);

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = - x[1];

     // cannot call Value(x[j]) or Value(y[0]) here (currently variables)
     AD<double> p = 5.;        // p is a parameter (does not depend on x)
     ok &= (Value(p) == 5.);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // can call Value(x[j]) or Value(y[0]) here (currently parameters)
     ok &= (Value(x[0]) ==  3.);
     ok &= (Value(x[1]) ==  4.);
     ok &= (Value(y[0]) == -4.);

     return ok;
}

Input File: example/general/value.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.3.2: Convert From AD to Integer

4.3.2.a: Syntax
i = Integer(x)

4.3.2.b: Purpose
Converts from an AD type to the corresponding integer value.

4.3.2.c: i
The result i has prototype
     int i

4.3.2.d: x

4.3.2.d.a: Real Types
If the argument x has either of the following prototypes:
     const float    &x
     const double   &x
the fractional part is dropped to form the integer value. For example, if x is 1.5, i is 1. In general, if @(@ x \geq 0 @)@, i is the greatest integer less than or equal to x . If @(@ x \leq 0 @)@, i is the smallest integer greater than or equal to x .
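For instance, the following minimal sketch (illustrative, not part of the distributed examples) checks that the conversion truncates toward zero for both signs:
# include <cppad/cppad.hpp>

bool integer_trunc(void)
{     bool ok = true;
     // dropping the fractional part rounds toward zero
     ok &= CppAD::Integer(  1.5 ) ==  1;
     ok &= CppAD::Integer( -1.5 ) == -1;
     return ok;
}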

4.3.2.d.b: Complex Types
If the argument x has either of the following prototypes:
     const std::complex<float>   &x
     const std::complex<double>  &x
The result i is given by
     i = Integer(x.real())

4.3.2.d.c: AD Types
If the argument x has either of the following prototypes:
     const AD<Base>               &x
     const VecAD<Base>::reference &x
Base must support the Integer function and the conversion has the same meaning as for Base .

4.3.2.e: Operation Sequence
The result of this operation is not an 12.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 12.4.g.b: operation sequence .

4.3.2.f: Example
The file 4.3.2.1: integer.cpp contains an example and test of this operation.
Input File: cppad/core/integer.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.3.2.1: Convert From AD to Integer: Example and Test

# include <cppad/cppad.hpp>

bool Integer(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::Integer;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0] = 3.5;
     x[1] = 4.5;

     // check integer before recording
     ok &= (Integer(x[0]) == 3);
     ok &= (Integer(x[1]) == 4);

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // check integer during recording
     ok &= (Integer(x[0]) == 3);
     ok &= (Integer(x[1]) == 4);

     // check integer for VecAD element
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero] = 2;
     ok &= (Integer(v[zero]) == 2);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = - x[1];

     // create f: x -> y and stop recording
     CppAD::ADFun<double> f(x, y);

     // check integer after recording
     ok &= (Integer(x[0]) ==  3.);
     ok &= (Integer(x[1]) ==  4.);
     ok &= (Integer(y[0]) == -4.);

     return ok;
}

Input File: example/general/integer.cpp
4.3.3: Convert An AD or Base Type to String

4.3.3.a: Syntax
s = to_string(value)

4.3.3.b: See Also
8.25: to_string , 4.7.7: base_to_string

4.3.3.c: value
The argument value has prototype
     const AD<Base>& value
     const Base&     value
where Base is a type that supports the 4.7.7: base_to_string type requirement.

4.3.3.d: s
The return value has prototype
     std::string s
and contains a representation of the specified value . If value is an AD type, the result has the same precision as for the Base type.
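A minimal sketch of the call (not one of the distributed examples):

# include <cppad/cppad.hpp>
# include <string>

int main(void)
{     CppAD::AD<double> x = 2.5;

     // s holds a decimal representation of 2.5 with double precision
     std::string s = CppAD::to_string(x);
     return 0;
}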

4.3.3.e: Example
The file 8.25.1: to_string.cpp includes an example and test of to_string with AD types. It returns true if it succeeds and false otherwise.
Input File: cppad/core/ad_to_string.hpp
4.3.4: AD Input Stream Operator

4.3.4.a: Syntax
is >> x

4.3.4.b: Purpose
Sets x to a 12.4.h: parameter with value b corresponding to
     is >> b
where b is a Base object. It is assumed that this Base input operation returns a reference to is .
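A minimal standalone sketch (not one of the distributed examples); the full test 4.3.4.1: ad_input.cpp below exercises this operator during a recording:

# include <cppad/cppad.hpp>
# include <cassert>
# include <sstream>

int main(void)
{     std::istringstream is("3.5");
     CppAD::AD<double> x;
     is >> x;                  // reads the Base value 3.5 into x
     assert( Parameter(x) );   // x is now a parameter
     assert( x == 3.5 );
     return 0;
}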

4.3.4.c: is
The operand is has prototype
     std::istream& is

4.3.4.d: x
The operand x has prototype
     AD<Base>& x

4.3.4.e: Result
The result of this operation can be used as a reference to is . For example, if the operand y has prototype
     AD<Base> y
then the syntax
     is >> x >> y
will first read a Base value from is into x , and then read the next Base value into y .

4.3.4.f: Operation Sequence
The result of this operation is not an 12.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 12.4.g.b: operation sequence .

4.3.4.g: Example
The file 4.3.4.1: ad_input.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/core/ad_io.hpp
4.3.4.1: AD Input Operator: Example and Test

# include <cppad/cppad.hpp>

# include <sstream>  // std::istringstream
# include <string>   // std::string

bool ad_input(void)
{     bool ok = true;

     // create the input string stream is.
     std::string str ("123 456");
     std::istringstream is(str);

     // start an AD<double> recording
     CPPAD_TESTVECTOR( CppAD::AD<double> ) x(1), y(1);
     x[0] = 1.0;
     CppAD::Independent(x);
     CppAD::AD<double> z = x[0];
     ok &= Variable(z);

     // read first number into z and second into y[0]
     is >> z >> y[0];
     ok   &= Parameter(z);
     ok   &= (z == 123.);
     ok   &= Parameter(y[0]);
     ok   &= (y[0] == 456.);
     //
     // terminate the recording started by the call to Independent
     CppAD::ADFun<double> f(x, y);

     return ok;
}

Input File: example/general/ad_input.cpp
4.3.5: AD Output Stream Operator

4.3.5.a: Syntax
os << x

4.3.5.b: Purpose
Writes the Base value, corresponding to x , to the output stream os .

4.3.5.c: Assumption
If b is a Base object,
     os << b
returns a reference to os .

4.3.5.d: os
The operand os has prototype
     std::ostream& os

4.3.5.e: x
The operand x has one of the following prototypes
     const AD<Base>&               x
     const VecAD<Base>::reference& x

4.3.5.f: Result
The result of this operation can be used as a reference to os . For example, if the operand y has prototype
     AD<Base> y
then the syntax
     os << x << y
will output the value corresponding to x followed by the value corresponding to y .

4.3.5.g: Operation Sequence
The result of this operation is not an 12.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 12.4.g.b: operation sequence .

4.3.5.h: Example
The file 4.3.5.1: ad_output.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/core/ad_io.hpp
4.3.5.1: AD Output Operator: Example and Test

# include <cppad/cppad.hpp>

# include <sstream>  // std::ostringstream
# include <string>   // std::string
# include <iomanip>  // std::setprecision, setw, setfill, right

namespace {
     template <class S>
     void set_ostream(S &os)
     {     os
          << std::setprecision(4) // 4 digits of precision
          << std::setw(6)         // 6 characters per field
          << std::setfill(' ')    // fill with spaces
          << std::right;          // adjust value to the right
     }
}

bool ad_output(void)
{     bool ok = true;

     // This output stream is an ostringstream for testing purposes.
     // You can use << with other types of streams; e.g., std::cout.
     std::ostringstream stream;

     // output an AD<double> object
     CppAD::AD<double>  pi = 4. * atan(1.); // 3.1415926536
     set_ostream(stream);
     stream << pi;

     // output a VecAD<double>::reference object
     CppAD::VecAD<double> v(1);
     CppAD::AD<double> zero(0);
     v[zero]   = exp(1.);                  // 2.7182818285
     set_ostream(stream);
     stream << v[zero];

     // convert output from stream to string
     std::string str = stream.str();

     // check the output
     ok      &= (str == " 3.142 2.718");

     return ok;
}

Input File: example/general/ad_output.cpp
4.3.6: Printing AD Values During Forward Mode

4.3.6.a: Syntax
f.Forward(0, x)
PrintFor(before, var)
PrintFor(pos, before, var, after)

4.3.6.b: Purpose
The 5.3.1: zero order forward mode command
     f.Forward(0, x)
assigns the 12.4.k.c: independent variable vector equal to x . It then computes a value for all of the dependent variables in the 12.4.g.b: operation sequence corresponding to f . Putting a PrintFor in the operation sequence will cause the value of var , corresponding to x , to be printed during zero order forward operations.

4.3.6.c: f.Forward(0, x)
The objects f , x , and the purpose for this operation, are documented in 5.3: Forward .

4.3.6.d: pos
If present, the argument pos has one of the following prototypes
     const AD<Base>&               pos
     const VecAD<Base>::reference& pos
In this case, the text and var will be printed if and only if pos is not greater than zero and is a finite number.
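For instance, in the following minimal sketch (not one of the distributed examples) the message appears during zero order forward because pos is the independent variable and its value is not greater than zero:

# include <cppad/cppad.hpp>

int main(void)
{     using CppAD::AD;

     CPPAD_TESTVECTOR(AD<double>) ax(1), ay(1);
     ax[0] = 1.0;
     CppAD::Independent(ax);

     // printed during zero order forward if and only if ax[0] <= 0
     CppAD::PrintFor(ax[0], "ax[0] = ", ax[0], " is not positive\n");
     ay[0] = ax[0];
     CppAD::ADFun<double> f(ax, ay);

     CPPAD_TESTVECTOR(double) x(1);
     x[0] = -1.0;       // pos = -1 is not greater than zero
     f.Forward(0, x);   // writes: ax[0] = -1 is not positive
     return 0;
}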

4.3.6.e: before
The argument before has prototype
     const char* before
This text is written to std::cout before var .

4.3.6.f: var
The argument var has one of the following prototypes
     const AD<Base>&               var
     const VecAD<Base>::reference& var
The value of var corresponding to x is written to std::cout during the execution of
     f.Forward(0, x)
Note that var may be a 12.4.m: variable or 12.4.h: parameter . (A parameter's value does not depend on the value of the independent variable vector x .)

4.3.6.g: after
The argument after has prototype
     const char* after
This text is written to std::cout after var .

4.3.6.h: Redirecting Output
You can redirect this output to any standard output stream; see the argument 5.3.4.h: s in the forward mode documentation.

4.3.6.i: Discussion
This is helpful for understanding why tape evaluations have trouble. For example, if one of the operations in f is log(var) and var <= 0 , the corresponding result will be 8.11: nan .

4.3.6.j: Alternative
The 4.3.5: ad_output section describes the normal printing of values; i.e., printing when the corresponding code is executed.

4.3.6.k: Example
The program 4.3.6.1: print_for_cout.cpp is an example and test that prints to standard output. The output of this program states the conditions for passing and failing the test. The function 4.3.6.2: print_for_string.cpp is an example and test that prints to a standard string stream. This function automatically checks for correct output.
Input File: cppad/core/print_for.hpp
4.3.6.1: Printing During Forward Mode: Example and Test

4.3.6.1.a: Running
To build this program and run its correctness test see 2.3: cmake_check .

4.3.6.1.b: Source Code
# include <cppad/cppad.hpp>

namespace {
     using std::cout;
     using std::endl;
     using CppAD::AD;

     // use of PrintFor to check for invalid function arguments
     AD<double> check_log(const AD<double>& y)
     {     // check during recording
          if( y <= 0. )
               cout << "check_log: y = " << y << " is <= 0" << endl;

          // check during zero order forward calculation
          PrintFor(y, "check_log: y == ", y , " which is <= 0\n");

          return log(y);
     }
}

void print_for(void)
{     using CppAD::PrintFor;

     // independent variable vector
     size_t n = 1;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 1.;
     Independent(ax);

     // print a VecAD<double>::reference object that is a parameter
     CppAD::VecAD<double> av(1);
     AD<double> Zero(0);
     av[Zero] = 0.;
     PrintFor("v[0] = ", av[Zero]);

     // Print a newline to separate this from previous output,
     // then print an AD<double> object that is a variable.
     PrintFor("\nv[0] + x[0] = ", av[0] + ax[0]);

     // A conditional print that will not generate output when x[0] = 2.
     PrintFor(ax[0], "\n  2. + x[0] = ",   2. + ax[0], "\n");

     // A conditional print that will generate output when x[0] = 2.
     PrintFor(ax[0] - 2., "\n  3. + x[0] = ",   3. + ax[0], "\n");

     // A log evaluation that will result in an error message when x[0] = 2.
     AD<double> var     = 2. - ax[0];
     AD<double> log_var = check_log(var);

     // dependent variable vector
     size_t m = 2;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = av[Zero] + ax[0];
     ay[1] = log_var;

     // define f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // zero order forward with x[0] = 2
     CPPAD_TESTVECTOR(double) x(n);
     x[0] = 2.;

     cout << "v[0] = 0" << endl;
     cout << "v[0] + x[0] = 2" << endl;
     cout << "  3. + x[0] = 5" << endl;
     cout << "check_log: y == 0 which is <= 0" << endl;
     // ./makefile.am expects "Test passes" at beginning of next output line
     cout << "Test passes if four lines above repeat below:" << endl;
     f.Forward(0, x);

     return;
}
int main(void)
{     print_for();

     return 0;
}

4.3.6.1.c: Output
Executing the program above generates the following output:
 
     v[0] = 0
     v[0] + x[0] = 2
       3. + x[0] = 5
     check_log: y == 0 which is <= 0
     Test passes if four lines above repeat below:
     v[0] = 0
     v[0] + x[0] = 2
       3. + x[0] = 5
     check_log: y == 0 which is <= 0

Input File: example/print_for/print_for.cpp
4.3.6.2: Print During Zero Order Forward Mode: Example and Test
# include <cppad/cppad.hpp>

namespace {
     using std::endl;
     using CppAD::AD;

     // use of PrintFor to check for invalid function arguments
     AD<double> check_log(const AD<double>& y, std::ostream& s_out)
     {     // check AD<double> value during recording
          if( y <= 0 )
               s_out << "check_log: y == " << y << " which is <= 0\n";

          // check double value during zero order forward calculation
          PrintFor(y, "check_log: y == ", y , " which is <= 0\n");

          return log(y);
     }
}

bool print_for(void)
{     bool ok = true;
     using CppAD::PrintFor;
     std::stringstream stream_out;

     // independent variable vector
     size_t n = 1;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 1.;         // value of the independent variable during recording
     Independent(ax);

     // A log evaluation that is OK when x[0] = 1 but not when x[0] = 2.
     AD<double> var     = 2. - ax[0];
     AD<double> log_var = check_log(var, stream_out);
     ok &= stream_out.str() == "";

     // dependent variable vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0]    = log_var;

     // define f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // zero order forward with x[0] = 2
     CPPAD_TESTVECTOR(double) x(n);
     x[0] = 2.;
     f.Forward(0, x, stream_out);

     std::string string_out = stream_out.str();
     ok &= string_out == "check_log: y == 0 which is <= 0\n";

     return ok;
}

Input File: example/general/print_for.cpp
4.3.7: Convert an AD Variable to a Parameter

4.3.7.a: Syntax
y = Var2Par(x)

4.3.7.b: See Also
4.3.1: value

4.3.7.c: Purpose
Returns a 12.4.h: parameter y with the same value as the 12.4.m: variable x .

4.3.7.d: x
The argument x has prototype
     const AD<Base> &x
The argument x may be a variable or parameter.

4.3.7.e: y
The result y has prototype
     AD<Base> &y
The return value y will be a parameter.

4.3.7.f: Example
The file 4.3.7.1: var2par.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/core/var2par.hpp
4.3.7.1: Convert an AD Variable to a Parameter: Example and Test

# include <cppad/cppad.hpp>


bool Var2Par(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::Value;
     using CppAD::Var2Par;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0] = 3.;
     x[1] = 4.;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = - x[1] * Var2Par(x[0]);    // same as y[0] = -x[1] * 3.;

     // cannot call Value(x[j]) or Value(y[0]) here (currently variables)
     ok &= ( Value( Var2Par(x[0]) ) == 3. );
     ok &= ( Value( Var2Par(x[1]) ) == 4. );
     ok &= ( Value( Var2Par(y[0]) ) == -12. );

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // can call Value(x[j]) or Value(y[0]) here (currently parameters)
     ok &= (Value(x[0]) ==  3.);
     ok &= (Value(x[1]) ==  4.);
     ok &= (Value(y[0]) == -12.);

     // evaluate derivative of y w.r.t x
     CPPAD_TESTVECTOR(double) w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0] = 1.;
     dw   = f.Reverse(1, w);
     ok  &= (dw[0] == 0.);  // derivative of y[0] w.r.t x[0] is zero
     ok  &= (dw[1] == -3.); // derivative of y[0] w.r.t x[1] is -3

     return ok;
}

Input File: example/general/var2par.cpp
4.4: AD Valued Operations and Functions

4.4.a: Contents
Arithmetic: 4.4.1: AD Arithmetic Operators and Compound Assignments
unary_standard_math: 4.4.2: The Unary Standard Math Functions
binary_math: 4.4.3: The Binary Math Functions
CondExp: 4.4.4: AD Conditional Expressions
Discrete: 4.4.5: Discrete AD Functions
numeric_limits: 4.4.6: Numeric Limits For an AD and Base Types
atomic: 4.4.7: Atomic AD Functions

Input File: cppad/core/ad_valued.hpp
4.4.1: AD Arithmetic Operators and Compound Assignments

4.4.1.a: Contents
UnaryPlus: 4.4.1.1: AD Unary Plus Operator
UnaryMinus: 4.4.1.2: AD Unary Minus Operator
ad_binary: 4.4.1.3: AD Binary Arithmetic Operators
compound_assign: 4.4.1.4: AD Compound Assignment Operators

Input File: cppad/core/arithmetic.hpp
4.4.1.1: AD Unary Plus Operator

4.4.1.1.a: Syntax
y = + x

4.4.1.1.b: Purpose
Performs the unary plus operation (the result y is equal to the operand x ).

4.4.1.1.c: x
The operand x has one of the following prototypes
     const AD<Base>               &x
     const VecAD<Base>::reference &x

4.4.1.1.d: y
The result y has type
     AD<Base> y
It is equal to the operand x .

4.4.1.1.e: Operation Sequence
This is an AD of Base 12.4.g.a: atomic operation and hence is part of the current AD of Base 12.4.g.b: operation sequence .

4.4.1.1.f: Derivative
If @(@ f @)@ is a 12.4.d: Base function , @[@ \D{[ + f(x) ]}{x} = \D{f(x)}{x} @]@

4.4.1.1.g: Example
The file 4.4.1.1.1: unary_plus.cpp contains an example and test of this operation.
Input File: cppad/core/unary_plus.hpp
4.4.1.1.1: AD Unary Plus Operator: Example and Test

# include <cppad/cppad.hpp>

bool UnaryPlus(void)
{     bool ok = true;
     using CppAD::AD;


     // domain space vector
     size_t n = 1;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = 3.;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = + x[0];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check values
     ok &= ( y[0] == 3. );

     // forward computation of partials w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     size_t p = 1;
     dx[0]    = 1.;
     dy       = f.Forward(p, dx);
     ok      &= ( dy[0] == 1. );   // dy[0] / dx[0]

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0] = 1.;
     dw   = f.Reverse(p, w);
     ok &= ( dw[0] == 1. );       // dy[0] / dx[0]

     // use a VecAD<Base>::reference object with unary plus
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero] = x[0];
     AD<double> result = + v[zero];
     ok     &= (result == y[0]);

     return ok;
}

Input File: example/general/unary_plus.cpp
4.4.1.2: AD Unary Minus Operator

4.4.1.2.a: Syntax
y = - x

4.4.1.2.b: Purpose
Computes the negative of x .

4.4.1.2.c: Base
The operation in the syntax above must be supported for the case where the operand is a const Base object.

4.4.1.2.d: x
The operand x has one of the following prototypes
     const AD<Base>               &x
     const VecAD<Base>::reference &x

4.4.1.2.e: y
The result y has type
     AD<Base> y
It is equal to the negative of the operand x .

4.4.1.2.f: Operation Sequence
This is an AD of Base 12.4.g.a: atomic operation and hence is part of the current AD of Base 12.4.g.b: operation sequence .

4.4.1.2.g: Derivative
If @(@ f @)@ is a 12.4.d: Base function , @[@ \D{[ - f(x) ]}{x} = - \D{f(x)}{x} @]@

4.4.1.2.h: Example
The file 4.4.1.2.1: unary_minus.cpp contains an example and test of this operation.
Input File: cppad/core/unary_minus.hpp
4.4.1.2.1: AD Unary Minus Operator: Example and Test

# include <cppad/cppad.hpp>

bool UnaryMinus(void)
{     bool ok = true;
     using CppAD::AD;


     // domain space vector
     size_t n = 1;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = 3.;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = - x[0];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check values
     ok &= ( y[0] == -3. );

     // forward computation of partials w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     size_t p = 1;
     dx[0]    = 1.;
     dy       = f.Forward(p, dx);
     ok      &= ( dy[0] == -1. );   // dy[0] / dx[0]

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0] = 1.;
     dw   = f.Reverse(p, w);
     ok &= ( dw[0] == -1. );       // dy[0] / dx[0]

     // use a VecAD<Base>::reference object with unary minus
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero] = x[0];
     AD<double> result = - v[zero];
     ok     &= (result == y[0]);

     return ok;
}

Input File: example/general/unary_minus.cpp
4.4.1.3: AD Binary Arithmetic Operators

4.4.1.3.a: Syntax
z = x Op y

4.4.1.3.b: Purpose
Performs arithmetic operations where either x or y has type AD<Base> or 4.6.d: VecAD<Base>::reference .

4.4.1.3.c: Op
The operator Op is one of the following
Op Meaning
+ z is x plus y
- z is x minus y
* z is x times y
/ z is x divided by y

4.4.1.3.d: Base
The type Base is determined by the operand that has type AD<Base> or VecAD<Base>::reference .

4.4.1.3.e: x
The operand x has the following prototype
     const Type &x
where Type is VecAD<Base>::reference , AD<Base> , Base , or double.

4.4.1.3.f: y
The operand y has the following prototype
     const Type &y
where Type is VecAD<Base>::reference , AD<Base> , Base , or double.

4.4.1.3.g: z
The result z has the following prototype
     Type z
where Type is AD<Base> .

4.4.1.3.h: Operation Sequence
This is an 12.4.g.a: atomic 12.4.b: AD of Base operation and hence it is part of the current AD of Base 12.4.g.b: operation sequence .

4.4.1.3.i: Example
The following files contain examples and tests of these functions. Each test returns true if it succeeds and false otherwise.
4.4.1.3.1: add.cpp AD Binary Addition: Example and Test
4.4.1.3.2: sub.cpp AD Binary Subtraction: Example and Test
4.4.1.3.3: mul.cpp AD Binary Multiplication: Example and Test
4.4.1.3.4: div.cpp AD Binary Division: Example and Test

4.4.1.3.j: Derivative
If @(@ f @)@ and @(@ g @)@ are 12.4.d: Base functions

4.4.1.3.j.a: Addition
@[@ \D{[ f(x) + g(x) ]}{x} = \D{f(x)}{x} + \D{g(x)}{x} @]@
4.4.1.3.j.b: Subtraction
@[@ \D{[ f(x) - g(x) ]}{x} = \D{f(x)}{x} - \D{g(x)}{x} @]@
4.4.1.3.j.c: Multiplication
@[@ \D{[ f(x) * g(x) ]}{x} = g(x) * \D{f(x)}{x} + f(x) * \D{g(x)}{x} @]@
4.4.1.3.j.d: Division
@[@ \D{[ f(x) / g(x) ]}{x} = [1/g(x)] * \D{f(x)}{x} - [f(x)/g(x)^2] * \D{g(x)}{x} @]@
Input File: cppad/core/ad_binary.hpp
4.4.1.3.1: AD Binary Addition: Example and Test
# include <cppad/cppad.hpp>

bool Add(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // some binary addition operations
     AD<double> a = x[0] + 1.; // AD<double> + double
     AD<double> b = a    + 2;  // AD<double> + int
     AD<double> c = 3.   + b;  // double     + AD<double>
     AD<double> d = 4    + c;  // int        + AD<double>

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = d + x[0];          // AD<double> + AD<double>

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , 2. * x0 + 10, eps99, eps99);

     // forward computation of partials w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 2., eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 2., eps99, eps99);

     // use a VecAD<Base>::reference object with addition
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero] = a;
     AD<double> result = v[zero] + 2;
     ok     &= (result == b);

     return ok;
}

Input File: example/general/add.cpp
4.4.1.3.2: AD Binary Subtraction: Example and Test
# include <cppad/cppad.hpp>

bool Sub(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t  n =  1;
     double x0 = .5;
     CPPAD_TESTVECTOR(AD<double>) x(1);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     AD<double> a = 2. * x[0] - 1.; // AD<double> - double
     AD<double> b = a  - 2;         // AD<double> - int
     AD<double> c = 3. - b;         // double     - AD<double>
     AD<double> d = 4  - c;         // int        - AD<double>

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = x[0] - d;              // AD<double> - AD<double>

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0], x0-4.+3.+2.-2.*x0+1., eps99, eps99);

     // forward computation of partials w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], -1., eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], -1., eps99, eps99);

     // use a VecAD<Base>::reference object with subtraction
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero] = b;
     AD<double> result = 3. - v[zero];
     ok     &= (result == c);

     return ok;
}

Input File: example/general/sub.cpp
4.4.1.3.3: AD Binary Multiplication: Example and Test
# include <cppad/cppad.hpp>

bool Mul(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = .5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // some binary multiplication operations
     AD<double> a = x[0] * 1.; // AD<double> * double
     AD<double> b = a    * 2;  // AD<double> * int
     AD<double> c = 3.   * b;  // double     * AD<double>
     AD<double> d = 4    * c;  // int        * AD<double>

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = x[0] * d;          // AD<double> * AD<double>

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , x0*(4.*3.*2.*1.)*x0,  eps99 , eps99);

     // forward computation of partials w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], (4.*3.*2.*1.)*2.*x0, eps99 , eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], (4.*3.*2.*1.)*2.*x0, eps99 , eps99);

     // use a VecAD<Base>::reference object with multiplication
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero] = c;
     AD<double> result = 4 * v[zero];
     ok     &= (result == d);

     return ok;
}

Input File: example/general/mul.cpp
4.4.1.3.4: AD Binary Division: Example and Test
# include <cppad/cppad.hpp>

bool Div(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();


     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // some binary division operations
     AD<double> a = x[0] / 1.; // AD<double> / double
     AD<double> b = a  / 2;    // AD<double> / int
     AD<double> c = 3. / b;    // double     / AD<double>
     AD<double> d = 4  / c;    // int        / AD<double>

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = (x[0] * x[0]) / d;   // AD<double> / AD<double>

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0], x0*x0*3.*2.*1./(4.*x0), eps99, eps99);

     // forward computation of partials w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 3.*2.*1./4., eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 3.*2.*1./4., eps99, eps99);

     // use a VecAD<Base>::reference object with division
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero] = d;
     AD<double> result = (x[0] * x[0]) / v[zero];
     ok     &= (result == y[0]);

     return ok;
}

Input File: example/general/div.cpp
4.4.1.4: AD Compound Assignment Operators

4.4.1.4.a: Syntax
x Op y

4.4.1.4.b: Purpose
Performs compound assignment operations where x has type AD<Base> .

4.4.1.4.c: Op
The operator Op is one of the following
Op Meaning
+= x is assigned x plus y
-= x is assigned x minus y
*= x is assigned x times y
/= x is assigned x divided by y

4.4.1.4.d: Base
The type Base is determined by the operand x .

4.4.1.4.e: x
The operand x has the following prototype
     AD<Base> &x

4.4.1.4.f: y
The operand y has the following prototype
     const Type &y
where Type is VecAD<Base>::reference , AD<Base> , Base , or double.

4.4.1.4.g: Result
The result of this assignment can be used as a reference to x . For example, if z has the following type
     AD<Base> z
then the syntax
     z = x += y
will compute x plus y and then assign this value to both x and z .

4.4.1.4.h: Operation Sequence
This is an 12.4.g.a: atomic 12.4.b: AD of Base operation and hence it is part of the current AD of Base 12.4.g.b: operation sequence .

4.4.1.4.i: Example
The following files contain examples and tests of these functions. Each test returns true if it succeeds and false otherwise.
4.4.1.4.1: add_eq.cpp AD Compound Assignment Addition: Example and Test
4.4.1.4.2: sub_eq.cpp AD Compound Assignment Subtraction: Example and Test
4.4.1.4.3: mul_eq.cpp AD Compound Assignment Multiplication: Example and Test
4.4.1.4.4: div_eq.cpp AD Compound Assignment Division: Example and Test

4.4.1.4.j: Derivative
If @(@ f @)@ and @(@ g @)@ are 12.4.d: Base functions

4.4.1.4.j.a: Addition
@[@ \D{[ f(x) + g(x) ]}{x} = \D{f(x)}{x} + \D{g(x)}{x} @]@
4.4.1.4.j.b: Subtraction
@[@ \D{[ f(x) - g(x) ]}{x} = \D{f(x)}{x} - \D{g(x)}{x} @]@
4.4.1.4.j.c: Multiplication
@[@ \D{[ f(x) * g(x) ]}{x} = g(x) * \D{f(x)}{x} + f(x) * \D{g(x)}{x} @]@
4.4.1.4.j.d: Division
@[@ \D{[ f(x) / g(x) ]}{x} = [1/g(x)] * \D{f(x)}{x} - [f(x)/g(x)^2] * \D{g(x)}{x} @]@
Input File: cppad/core/compound_assign.hpp
4.4.1.4.1: AD Compound Assignment Addition: Example and Test
# include <cppad/cppad.hpp>

bool AddEq(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t  n = 1;
     double x0 = .5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 2;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = x[0];         // initial value
     y[0] += 2;           // AD<double> += int
     y[0] += 4.;          // AD<double> += double
     y[1] = y[0] += x[0]; // use the result of a compound assignment

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , x0+2.+4.+x0, eps99, eps99);
     ok &= NearEqual(y[1] ,        y[0], eps99, eps99);

     // forward computation of partials w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 2., eps99, eps99);
     ok   &= NearEqual(dy[1], 2., eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     w[1]  = 0.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 2., eps99, eps99);

     // use a VecAD<Base>::reference object with computed addition
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     AD<double> result = 1;
     v[zero] = 2;
     result += v[zero];
     ok     &= (result == 3);

     return ok;
}

Input File: example/general/add_eq.cpp
4.4.1.4.2: AD Compound Assignment Subtraction: Example and Test
# include <cppad/cppad.hpp>

bool SubEq(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t  n = 1;
     double x0 = .5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 2;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = 3. * x[0];    // initial value
     y[0] -= 2;           // AD<double> -= int
     y[0] -= 4.;          // AD<double> -= double
     y[1] = y[0] -= x[0]; // use the result of a compound assignment

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , 3.*x0-(2.+4.+x0), eps99, eps99);
     ok &= NearEqual(y[1] ,             y[0], eps99, eps99);

     // forward computation of partials w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 2., eps99, eps99);
     ok   &= NearEqual(dy[1], 2., eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     w[1]  = 0.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 2., eps99, eps99);

     // use a VecAD<Base>::reference object with computed subtraction
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     AD<double> result = 1;
     v[zero] = 2;
     result -= v[zero];
     ok     &= (result == -1);

     return ok;
}

Input File: example/general/sub_eq.cpp
4.4.1.4.3: AD Compound Assignment Multiplication: Example and Test
# include <cppad/cppad.hpp>

bool MulEq(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t  n = 1;
     double x0 = .5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 2;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = x[0];         // initial value
     y[0] *= 2;           // AD<double> *= int
     y[0] *= 4.;          // AD<double> *= double
     y[1] = y[0] *= x[0]; // use the result of a compound assignment

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , x0*2.*4.*x0, eps99, eps99);
     ok &= NearEqual(y[1] ,        y[0], eps99, eps99);

     // forward computation of partials w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 8.*2.*x0, eps99, eps99);
     ok   &= NearEqual(dy[1], 8.*2.*x0, eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     w[1]  = 0.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 8.*2.*x0, eps99, eps99);

     // use a VecAD<Base>::reference object with computed multiplication
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     AD<double> result = 1;
     v[zero] = 2;
     result *= v[zero];
     ok     &= (result == 2);

     return ok;
}

Input File: example/general/mul_eq.cpp
4.4.1.4.4: AD Compound Assignment Division: Example and Test
# include <cppad/cppad.hpp>

bool DivEq(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t  n = 1;
     double x0 = .5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 2;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = x[0] * x[0];  // initial value
     y[0] /= 2;           // AD<double> /= int
     y[0] /= 4.;          // AD<double> /= double
     y[1] = y[0] /= x[0]; // use the result of a compound assignment

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , x0*x0/(2.*4.*x0), eps99, eps99);
     ok &= NearEqual(y[1] ,             y[0], eps99, eps99);

     // forward computation of partials w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 1./8., eps99, eps99);
     ok   &= NearEqual(dy[1], 1./8., eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     w[1]  = 0.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 1./8., eps99, eps99);

     // use a VecAD<Base>::reference object with computed division
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     AD<double> result = 2;
     v[zero] = 1;
     result /= v[zero];
     ok     &= (result == 2);

     return ok;
}

Input File: example/general/div_eq.cpp
4.4.2: The Unary Standard Math Functions

4.4.2.a: Syntax
y = fun(x)

4.4.2.b: Purpose
Evaluates the standard math function fun .

4.4.2.c: Possible Types

4.4.2.c.a: Base
If Base satisfies the 4.7: base type requirements and argument x has prototype
     const Base& x
then the result y has prototype
     Base y

4.4.2.c.b: AD<Base>
If the argument x has prototype
     const AD<Base>& x
then the result y has prototype
     AD<Base> y

4.4.2.c.c: VecAD<Base>
If the argument x has prototype
     const VecAD<Base>::reference& x
then the result y has prototype
     AD<Base> y
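The following minimal sketch (not one of the distributed examples) shows all three argument types using sqrt:

# include <cppad/cppad.hpp>

int main(void)
{     using CppAD::AD;

     // Base argument, Base result
     double y0 = CppAD::sqrt(4.0);

     // AD<Base> argument, AD<Base> result
     AD<double> x1(4.0);
     AD<double> y1 = CppAD::sqrt(x1);

     // VecAD<Base>::reference argument, AD<Base> result
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero] = 4.0;
     AD<double> y2 = CppAD::sqrt(v[zero]);

     return (y0 == 2. && y1 == 2. && y2 == 2.) ? 0 : 1;
}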

4.4.2.d: fun
The possible values for fun are
 fun    Description
4.4.2.14: abs AD Absolute Value Functions: abs, fabs
4.4.2.1: acos Inverse Cosine Function: acos
4.4.2.15: acosh The Inverse Hyperbolic Cosine Function: acosh
4.4.2.2: asin Inverse Sine Function: asin
4.4.2.16: asinh The Inverse Hyperbolic Sine Function: asinh
4.4.2.3: atan Inverse Tangent Function: atan
4.4.2.17: atanh The Inverse Hyperbolic Tangent Function: atanh
4.4.2.4: cos The Cosine Function: cos
4.4.2.5: cosh The Hyperbolic Cosine Function: cosh
4.4.2.18: erf The Error Function
4.4.2.6: exp The Exponential Function: exp
4.4.2.19: expm1 The Exponential Function Minus One: expm1
4.4.2.14: fabs AD Absolute Value Functions: abs, fabs
4.4.2.8: log10 The Base 10 Logarithm Function: log10
4.4.2.20: log1p The Logarithm of One Plus Argument: log1p
4.4.2.7: log The Natural Logarithm Function: log
4.4.2.21: sign The Sign: sign
4.4.2.9: sin The Sine Function: sin
4.4.2.10: sinh The Hyperbolic Sine Function: sinh
4.4.2.11: sqrt The Square Root Function: sqrt
4.4.2.12: tan The Tangent Function: tan
4.4.2.13: tanh The Hyperbolic Tangent Function: tanh

Input File: cppad/core/standard_math.hpp
4.4.2.1: Inverse Cosine Function: acos

4.4.2.1.a: Syntax
y = acos(x)

4.4.2.1.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.1.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.1.d: Derivative
@[@ \begin{array}{lcr} \R{acos}^{(1)} (x) & = & - (1 - x * x)^{-1/2} \end{array} @]@
4.4.2.1.e: Example
The file 4.4.2.1.1: acos.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/std_math_98.hpp
4.4.2.1.1: The AD acos Function: Example and Test

# include <cppad/cppad.hpp>

bool acos(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;

     // 10 times machine epsilon
     double eps = 10. * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // a temporary value
     AD<double> cos_of_x0 = CppAD::cos(x[0]);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::acos(cos_of_x0);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , x0,  eps, eps);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 1., eps, eps);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 1., eps, eps);

     // use a VecAD<Base>::reference object with acos
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero] = cos_of_x0;
     AD<double> result = CppAD::acos(v[zero]);
     ok     &= NearEqual(result, x0, eps, eps);

     return ok;
}

Input File: example/general/acos.cpp
4.4.2.2: Inverse Sine Function: asin

4.4.2.2.a: Syntax
y = asin(x)

4.4.2.2.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.2.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.2.d: Derivative
@[@ \begin{array}{lcr} \R{asin}^{(1)} (x) & = & (1 - x * x)^{-1/2} \end{array} @]@
4.4.2.2.e: Example
The file 4.4.2.2.1: asin.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/std_math_98.hpp
4.4.2.2.1: The AD asin Function: Example and Test

# include <cppad/cppad.hpp>

bool asin(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;

     // 10 times machine epsilon
     double eps = 10. * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // a temporary value
     AD<double> sin_of_x0 = CppAD::sin(x[0]);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::asin(sin_of_x0);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , x0,  eps, eps);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 1., eps, eps);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 1., eps, eps);

     // use a VecAD<Base>::reference object with asin
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero] = sin_of_x0;
     AD<double> result = CppAD::asin(v[zero]);
     ok     &= NearEqual(result, x0, eps, eps);

     return ok;
}

Input File: example/general/asin.cpp
4.4.2.3: Inverse Tangent Function: atan

4.4.2.3.a: Syntax
y = atan(x)

4.4.2.3.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.3.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.3.d: Derivative
@[@ \begin{array}{lcr} \R{atan}^{(1)} (x) & = & \frac{1}{1 + x^2} \end{array} @]@
4.4.2.3.e: Example
The file 4.4.2.3.1: atan.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/std_math_98.hpp
4.4.2.3.1: The AD atan Function: Example and Test

# include <cppad/cppad.hpp>

bool atan(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // a temporary value
     AD<double> tan_of_x0 = CppAD::tan(x[0]);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::atan(tan_of_x0);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , x0, eps99, eps99);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 1., eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 1., eps99, eps99);

     // use a VecAD<Base>::reference object with atan
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero] = tan_of_x0;
     AD<double> result = CppAD::atan(v[zero]);
     ok     &= NearEqual(result, x0, eps99, eps99);

     return ok;
}

Input File: example/general/atan.cpp
4.4.2.4: The Cosine Function: cos

4.4.2.4.a: Syntax
y = cos(x)

4.4.2.4.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.4.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.4.d: Derivative
@[@ \begin{array}{lcr} \R{cos}^{(1)} (x) & = & - \sin(x) \end{array} @]@
4.4.2.4.e: Example
The file 4.4.2.4.1: cos.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/std_math_98.hpp
4.4.2.4.1: The AD cos Function: Example and Test

# include <cppad/cppad.hpp>
# include <cmath>

bool Cos(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::cos(x[0]);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     double check = std::cos(x0);
     ok &= NearEqual(y[0] , check, eps99, eps99);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     check = - std::sin(x0);
     ok   &= NearEqual(dy[0], check, eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], check, eps99, eps99);

     // use a VecAD<Base>::reference object with cos
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero]           = x0;
     AD<double> result = CppAD::cos(v[zero]);
     check = std::cos(x0);
     ok   &= NearEqual(result, check, eps99, eps99);

     return ok;
}

Input File: example/general/cos.cpp
4.4.2.5: The Hyperbolic Cosine Function: cosh

4.4.2.5.a: Syntax
y = cosh(x)

4.4.2.5.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.5.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.5.d: Derivative
@[@ \begin{array}{lcr} \R{cosh}^{(1)} (x) & = & \sinh(x) \end{array} @]@
4.4.2.5.e: Example
The file 4.4.2.5.1: cosh.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/std_math_98.hpp
4.4.2.5.1: The AD cosh Function: Example and Test

# include <cppad/cppad.hpp>
# include <cmath>

bool Cosh(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::cosh(x[0]);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     double check = std::cosh(x0);
     ok &= NearEqual(y[0] , check, eps99, eps99);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     check = std::sinh(x0);
     ok   &= NearEqual(dy[0], check, eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], check, eps99, eps99);

     // use a VecAD<Base>::reference object with cosh
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero]           = x0;
     AD<double> result = CppAD::cosh(v[zero]);
     check = std::cosh(x0);
     ok   &= NearEqual(result, check, eps99, eps99);

     return ok;
}

Input File: example/general/cosh.cpp
4.4.2.6: The Exponential Function: exp

4.4.2.6.a: Syntax
y = exp(x)

4.4.2.6.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.6.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.6.d: Derivative
@[@ \begin{array}{lcr} \R{exp}^{(1)} (x) & = & \exp(x) \end{array} @]@
4.4.2.6.e: Example
The file 4.4.2.6.1: exp.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/std_math_98.hpp
4.4.2.6.1: The AD exp Function: Example and Test

# include <cppad/cppad.hpp>
# include <cmath>

bool exp(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0]     = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = CppAD::exp(ax[0]);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // check value
     double check = std::exp(x0);
     ok &= NearEqual(ay[0], check,  eps, eps);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], check, eps, eps);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], check, eps, eps);

     // use a VecAD<Base>::reference object with exp
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero]           = x0;
     AD<double> result = CppAD::exp(v[zero]);
     ok   &= NearEqual(result, check, eps, eps);

     return ok;
}

Input File: example/general/exp.cpp
4.4.2.7: The Natural Logarithm Function: log

4.4.2.7.a: Syntax
y = log(x)

4.4.2.7.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.7.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.7.d: Derivative
@[@ \begin{array}{lcr} \R{log}^{(1)} (x) & = & \frac{1}{x} \end{array} @]@
4.4.2.7.e: Example
The file 4.4.2.7.1: log.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/std_math_98.hpp
4.4.2.7.1: The AD log Function: Example and Test

# include <cppad/cppad.hpp>

bool log(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // a temporary value
     AD<double> exp_of_x0 = CppAD::exp(x[0]);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::log(exp_of_x0);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , x0, eps99, eps99);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 1., eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 1., eps99, eps99);

     // use a VecAD<Base>::reference object with log
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero]           = exp_of_x0;
     AD<double> result = CppAD::log(v[zero]);
     ok   &= NearEqual(result, x0, eps99, eps99);

     return ok;
}

Input File: example/general/log.cpp
4.4.2.8: The Base 10 Logarithm Function: log10

4.4.2.8.a: Syntax
y = log10(x)

4.4.2.8.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.8.c: Method
CppAD uses the representation @[@ \begin{array}{lcr} {\rm log10} (x) & = & \log(x) / \log(10) \end{array} @]@
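
It follows from this representation that
@[@ \begin{array}{lcr} \R{log10}^{(1)} (x) & = & \frac{1}{x \log (10)} \end{array} @]@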

4.4.2.8.d: Example
The file 4.4.2.8.1: log10.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/std_math_98.hpp
4.4.2.8.1: The AD log10 Function: Example and Test

# include <cppad/cppad.hpp>

bool log10(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // ten raised to the x0 power
     AD<double> ten = 10.;
     AD<double> pow_10_x0 = CppAD::pow(ten, x[0]);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::log10(pow_10_x0);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , x0, eps99, eps99);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 1., eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 1., eps99, eps99);

     // use a VecAD<Base>::reference object with log10
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero]           = pow_10_x0;
     AD<double> result = CppAD::log10(v[zero]);
     ok   &= NearEqual(result, x0, eps99, eps99);

     return ok;
}

Input File: example/general/log10.cpp
4.4.2.9: The Sine Function: sin

4.4.2.9.a: Syntax
y = sin(x)

4.4.2.9.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.9.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.9.d: Derivative
@[@ \begin{array}{lcr} \R{sin}^{(1)} (x) & = & \cos(x) \end{array} @]@
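
Because sin is atomic, forward mode of any order applies to a tape that contains it. The following minimal sketch (the test name sin_order_two is hypothetical; it is not one of the distributed examples) recovers the second derivative @(@ - \sin ( x_0 ) @)@ from the order two Taylor coefficient returned by Forward:

# include <cppad/cppad.hpp>
# include <cmath>
# include <limits>

bool sin_order_two(void) // hypothetical test name
{    bool ok = true;
     using CppAD::AD;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // record f(x) = sin(x)
     size_t n  = 1;
     size_t m  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) ax(n), ay(m);
     ax[0] = x0;
     CppAD::Independent(ax);
     ay[0] = CppAD::sin(ax[0]);
     CppAD::ADFun<double> f(ax, ay);

     // order one forward mode with Taylor coefficient x1 = 1
     CPPAD_TESTVECTOR(double) x1(n), y1(m), x2(n), y2(m);
     x1[0] = 1.;
     y1    = f.Forward(1, x1);

     // order two: with x1 = 1 and x2 = 0, y2[0] = f''(x0) / 2
     x2[0] = 0.;
     y2    = f.Forward(2, x2);
     ok   &= CppAD::NearEqual(2. * y2[0], - std::sin(x0), eps99, eps99);

     return ok;
}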
4.4.2.9.e: Example
The file 4.4.2.9.1: sin.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/std_math_98.hpp
4.4.2.9.1: The AD sin Function: Example and Test

# include <cppad/cppad.hpp>
# include <cmath>

bool Sin(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::sin(x[0]);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     double check = std::sin(x0);
     ok &= NearEqual(y[0] , check, eps99, eps99);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     check = std::cos(x0);
     ok   &= NearEqual(dy[0], check, eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], check, eps99, eps99);

     // use a VecAD<Base>::reference object with sin
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero]           = x0;
     AD<double> result = CppAD::sin(v[zero]);
     check = std::sin(x0);
     ok   &= NearEqual(result, check, eps99, eps99);

     return ok;
}

Input File: example/general/sin.cpp
4.4.2.10: The Hyperbolic Sine Function: sinh

4.4.2.10.a: Syntax
y = sinh(x)

4.4.2.10.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.10.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.10.d: Derivative
@[@ \begin{array}{lcr} \R{sinh}^{(1)} (x) & = & \cosh(x) \end{array} @]@
4.4.2.10.e: Example
The file 4.4.2.10.1: sinh.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/std_math_98.hpp
4.4.2.10.1: The AD sinh Function: Example and Test

# include <cppad/cppad.hpp>
# include <cmath>

bool Sinh(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::sinh(x[0]);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     double check = std::sinh(x0);
     ok &= NearEqual(y[0] , check, eps99, eps99);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     check = std::cosh(x0);
     ok   &= NearEqual(dy[0], check, eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], check, eps99, eps99);

     // use a VecAD<Base>::reference object with sinh
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero]           = x0;
     AD<double> result = CppAD::sinh(v[zero]);
     check = std::sinh(x0);
     ok   &= NearEqual(result, check, eps99, eps99);

     return ok;
}

Input File: example/general/sinh.cpp
4.4.2.11: The Square Root Function: sqrt

4.4.2.11.a: Syntax
y = sqrt(x)

4.4.2.11.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.11.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.11.d: Derivative
@[@ \begin{array}{lcr} \R{sqrt}^{(1)} (x) & = & \frac{1}{2 \R{sqrt} (x) } \end{array} @]@
4.4.2.11.e: Example
The file 4.4.2.11.1: sqrt.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/std_math_98.hpp
4.4.2.11.1: The AD sqrt Function: Example and Test

# include <cppad/cppad.hpp>
# include <cmath>

bool Sqrt(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::sqrt(x[0]);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     double check = std::sqrt(x0);
     ok &= NearEqual(y[0] , check, eps99, eps99);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     check = 1. / (2. * std::sqrt(x0) );
     ok   &= NearEqual(dy[0], check, eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], check, eps99, eps99);

     // use a VecAD<Base>::reference object with sqrt
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero]           = x0;
     AD<double> result = CppAD::sqrt(v[zero]);
     check = std::sqrt(x0);
     ok   &= NearEqual(result, check, eps99, eps99);

     return ok;
}

Input File: example/general/sqrt.cpp
4.4.2.12: The Tangent Function: tan

4.4.2.12.a: Syntax
y = tan(x)

4.4.2.12.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.12.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.12.d: Derivative
@[@ \begin{array}{lcr} \R{tan}^{(1)} (x) & = & 1 + \tan (x)^2 \end{array} @]@
4.4.2.12.e: Example
The file 4.4.2.12.1: tan.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/std_math_98.hpp
4.4.2.12.1: The AD tan Function: Example and Test

# include <cppad/cppad.hpp>
# include <cmath>
# include <limits>

bool Tan(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::tan(x[0]);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     double check = std::tan(x0);
     ok &= NearEqual(y[0] , check,  eps, eps);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     check = 1. + std::tan(x0) * std::tan(x0);
     ok   &= NearEqual(dy[0], check, eps, eps);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], check, eps, eps);

     // use a VecAD<Base>::reference object with tan
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero]           = x0;
     AD<double> result = CppAD::tan(v[zero]);
     check = std::tan(x0);
     ok   &= NearEqual(result, check, eps, eps);

     return ok;
}

Input File: example/general/tan.cpp
4.4.2.13: The Hyperbolic Tangent Function: tanh

4.4.2.13.a: Syntax
y = tanh(x)

4.4.2.13.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.13.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.13.d: Derivative
@[@ \begin{array}{lcr} \R{tanh}^{(1)} (x) & = & 1 - \tanh (x)^2 \end{array} @]@
4.4.2.13.e: Example
The file 4.4.2.13.1: tanh.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/std_math_98.hpp
4.4.2.13.1: The AD tanh Function: Example and Test

# include <cppad/cppad.hpp>
# include <cmath>
# include <limits>

bool Tanh(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::tanh(x[0]);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     double check = std::tanh(x0);
     ok &= NearEqual(y[0] , check,  eps, eps);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     check = 1. - std::tanh(x0) * std::tanh(x0);
     ok   &= NearEqual(dy[0], check, eps, eps);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], check, eps, eps);

     // use a VecAD<Base>::reference object with tanh
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero]           = x0;
     AD<double> result = CppAD::tanh(v[zero]);
     check = std::tanh(x0);
     ok   &= NearEqual(result, check, eps, eps);

     return ok;
}

Input File: example/general/tanh.cpp
4.4.2.14: AD Absolute Value Functions: abs, fabs

4.4.2.14.a: Syntax
y = abs(x)
y = fabs(x)

4.4.2.14.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.14.c: Atomic
In the case where x is an AD type, this is an 12.4.g.a: atomic operation .

4.4.2.14.d: Complex Types
The functions abs and fabs are not defined for the base types std::complex<float> or std::complex<double> because the complex abs function is not complex differentiable (see 12.1.d: complex types faq ).

4.4.2.14.e: Derivative
CppAD defines the derivative of the abs function to be the 4.4.2.21: sign function; i.e., @[@ {\rm abs}^{(1)} ( x ) = {\rm sign} (x ) = \left\{ \begin{array}{rl} +1 & {\rm if} \; x > 0 \\ 0 & {\rm if} \; x = 0 \\ -1 & {\rm if} \; x < 0 \end{array} \right. @]@ In previous versions of CppAD, the result for x == 0 was a directional derivative.

4.4.2.14.f: Example
The file 4.4.2.14.1: fabs.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/abs.hpp
4.4.2.14.1: AD Absolute Value Function: Example and Test

# include <cppad/cppad.hpp>

bool fabs(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n = 1;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]     = 0.;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 6;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0]     = fabs(x[0] - 1.);
     y[1]     = fabs(x[0]);
     y[2]     = fabs(x[0] + 1.);
     //
     y[3]     = abs(x[0] - 1.);
     y[4]     = abs(x[0]);
     y[5]     = abs(x[0] + 1.);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check values
     ok &= (y[0] == 1.);
     ok &= (y[1] == 0.);
     ok &= (y[2] == 1.);
     //
     ok &= (y[3] == 1.);
     ok &= (y[4] == 0.);
     ok &= (y[5] == 1.);

     // forward computation of partials w.r.t. a positive x[0] direction
     size_t p = 1;
     CPPAD_TESTVECTOR(double) dx(n), dy(m);
     dx[0] = 1.;
     dy    = f.Forward(p, dx);
     ok  &= (dy[0] == - dx[0]);
     ok  &= (dy[1] ==   0.   ); // used to be (dy[1] == + dx[0]);
     ok  &= (dy[2] == + dx[0]);
     //
     ok  &= (dy[3] == - dx[0]);
     ok  &= (dy[4] ==   0.   ); // used to be (dy[4] == + dx[0]);
     ok  &= (dy[5] == + dx[0]);

     // forward computation of partials w.r.t. a negative x[0] direction
     dx[0] = -1.;
     dy    = f.Forward(p, dx);
     ok  &= (dy[0] == - dx[0]);
     ok  &= (dy[1] ==   0.   ); // used to be (dy[1] == - dx[0]);
     ok  &= (dy[2] == + dx[0]);
     //
     ok  &= (dy[3] == - dx[0]);
     ok  &= (dy[4] ==   0.   ); // used to be (dy[4] == - dx[0]);
     ok  &= (dy[5] == + dx[0]);

     // reverse computation of derivative of y[0]
     p    = 1;
     CPPAD_TESTVECTOR(double)  w(m), dw(n);
     w[0] = 1.; w[1] = 0.; w[2] = 0.; w[3] = 0.; w[4] = 0.; w[5] = 0.;
     dw   = f.Reverse(p, w);
     ok  &= (dw[0] == -1.);

     // reverse computation of derivative of y[1]
     w[0] = 0.; w[1] = 1.;
     dw   = f.Reverse(p, w);
     ok  &= (dw[0] == 0.);

     // reverse computation of derivative of y[5]
     w[1] = 0.; w[5] = 1.;
     dw   = f.Reverse(p, w);
     ok  &= (dw[0] == 1.);

     // use a VecAD<Base>::reference object with abs and fabs
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero]           = -1;
     AD<double> result = fabs(v[zero]);
     ok    &= NearEqual(result, 1., eps99, eps99);
     result = abs(v[zero]);
     ok    &= NearEqual(result, 1., eps99, eps99);

     return ok;
}

Input File: example/general/fabs.cpp
4.4.2.15: The Inverse Hyperbolic Cosine Function: acosh

4.4.2.15.a: Syntax
y = acosh(x)

4.4.2.15.b: Description
The inverse hyperbolic cosine function is defined by x == cosh(y) .

4.4.2.15.c: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.15.d: CPPAD_USE_CPLUSPLUS_2011

4.4.2.15.d.a: true
If this preprocessor symbol is true (1), and x is an AD type, this is an 12.4.g.a: atomic operation .

4.4.2.15.d.b: false
If this preprocessor symbol is false (0), CppAD uses the representation @[@ \R{acosh} (x) = \log \left( x + \sqrt{ x^2 - 1 } \right) @]@ to compute this function.
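
Either way, differentiating the defining relation x == cosh(y) shows that, for @(@ x > 1 @)@,
@[@ \R{acosh}^{(1)} (x) = \frac{1}{ \sqrt{ x^2 - 1 } } @]@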

4.4.2.15.e: Example
The file 4.4.2.15.1: acosh.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/acosh.hpp
4.4.2.15.1: The AD acosh Function: Example and Test

# include <cppad/cppad.hpp>

bool acosh(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;

     // 10 times machine epsilon
     double eps = 10. * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // a temporary value
     AD<double> cosh_of_x0 = CppAD::cosh(x[0]);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::acosh(cosh_of_x0);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , x0,  eps, eps);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 1., eps, eps);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 1., eps, eps);

     // use a VecAD<Base>::reference object with acosh
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero] = cosh_of_x0;
     AD<double> result = CppAD::acosh(v[zero]);
     ok     &= NearEqual(result, x0, eps, eps);

     return ok;
}

Input File: example/general/acosh.cpp
4.4.2.16: The Inverse Hyperbolic Sine Function: asinh

4.4.2.16.a: Syntax
y = asinh(x)

4.4.2.16.b: Description
The inverse hyperbolic sine function is defined by x == sinh(y) .

4.4.2.16.c: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.16.d: CPPAD_USE_CPLUSPLUS_2011

4.4.2.16.d.a: true
If this preprocessor symbol is true (1), and x is an AD type, this is an 12.4.g.a: atomic operation .

4.4.2.16.d.b: false
If this preprocessor symbol is false (0), CppAD uses the representation @[@ \R{asinh} (x) = \log \left( x + \sqrt{ 1 + x^2 } \right) @]@ to compute this function.
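
Either way, the derivative is
@[@ \R{asinh}^{(1)} (x) = \frac{1}{ \sqrt{ 1 + x^2 } } @]@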

4.4.2.16.e: Example
The file 4.4.2.16.1: asinh.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/asinh.hpp
4.4.2.16.1: The AD asinh Function: Example and Test

# include <cppad/cppad.hpp>

bool asinh(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;

     // 10 times machine epsilon
     double eps = 10. * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // a temporary value
     AD<double> sinh_of_x0 = CppAD::sinh(x[0]);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::asinh(sinh_of_x0);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , x0,  eps, eps);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 1., eps, eps);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 1., eps, eps);

     // use a VecAD<Base>::reference object with asinh
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero] = sinh_of_x0;
     AD<double> result = CppAD::asinh(v[zero]);
     ok     &= NearEqual(result, x0, eps, eps);

     return ok;
}

Input File: example/general/asinh.cpp
4.4.2.17: The Inverse Hyperbolic Tangent Function: atanh

4.4.2.17.a: Syntax
y = atanh(x)

4.4.2.17.b: Description
The inverse hyperbolic tangent function is defined by x == tanh(y) .

4.4.2.17.c: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.17.d: CPPAD_USE_CPLUSPLUS_2011

4.4.2.17.d.a: true
If this preprocessor symbol is true (1), and x is an AD type, this is an 12.4.g.a: atomic operation .

4.4.2.17.d.b: false
If this preprocessor symbol is false (0), CppAD uses the representation @[@ \R{atanh} (x) = \frac{1}{2} \log \left( \frac{1 + x}{1 - x} \right) @]@ to compute this function.
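
Either way, the derivative is, for @(@ |x| < 1 @)@,
@[@ \R{atanh}^{(1)} (x) = \frac{1}{ 1 - x^2 } @]@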

4.4.2.17.e: Example
The file 4.4.2.17.1: atanh.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/atanh.hpp
4.4.2.17.1: The AD atanh Function: Example and Test

# include <cppad/cppad.hpp>

bool atanh(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;

     // 10 times machine epsilon
     double eps = 10. * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // a temporary value
     AD<double> tanh_of_x0 = CppAD::tanh(x[0]);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::atanh(tanh_of_x0);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , x0,  eps, eps);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 1., eps, eps);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 1., eps, eps);

     // use a VecAD<Base>::reference object with atanh
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero] = tanh_of_x0;
     AD<double> result = CppAD::atanh(v[zero]);
     ok     &= NearEqual(result, x0, eps, eps);

     return ok;
}

Input File: example/general/atanh.cpp
4.4.2.18: The Error Function

4.4.2.18.a: Syntax
y = erf(x)

4.4.2.18.b: Description
Returns the value of the error function, which is defined by @[@ {\rm erf} (x) = \frac{2}{ \sqrt{\pi} } \int_0^x \exp( - t^2 ) \; {\bf d} t @]@
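
Differentiating under the integral sign gives
@[@ \R{erf}^{(1)} (x) = \frac{2}{ \sqrt{\pi} } \exp ( - x^2 ) @]@
which is the check value used by the example below.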

4.4.2.18.c: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.18.d: CPPAD_USE_CPLUSPLUS_2011

4.4.2.18.d.a: true
If this preprocessor symbol is true (1), and x is an AD type, this is an 12.4.g.a: atomic operation .

4.4.2.18.d.b: false
If this preprocessor symbol is false (0), CppAD uses a fast approximation (few numerical operations) with relative error bound @(@ 4 \times 10^{-4} @)@; see Vedder, J.D., Simple approximations for the error function and its inverse, American Journal of Physics, v 55, n 8, 1987, pp. 762-763.

4.4.2.18.e: Example
The file 4.4.2.18.1: erf.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/erf.hpp
4.4.2.18.1: The AD erf Function: Example and Test

# include <cppad/cppad.hpp>
# include <cmath>
# include <limits>

bool Erf(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0]     = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = CppAD::erf(ax[0]);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // check relative error
     double erf_x0 = 0.5204998778130465;
     ok &= NearEqual(ay[0] , erf_x0,  0.,    4e-4);
# if CPPAD_USE_CPLUSPLUS_2011
     double tmp = std::max(1e-15, eps);
     ok &= NearEqual(ay[0] , erf_x0,  0.,    tmp);
# endif

     // value of derivative of erf at x0
     double pi     = 4. * std::atan(1.);
     double factor = 2. / std::sqrt(pi);
     double check  = factor * std::exp(-x0 * x0);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], check,  0.,  1e-3);
# if CPPAD_USE_CPLUSPLUS_2011
     ok   &= NearEqual(dy[0], check,  0.,  eps);
# endif

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], check,  0., 1e-1);
# if CPPAD_USE_CPLUSPLUS_2011
     ok   &= NearEqual(dw[0], check,  0., eps);
# endif

     // use a VecAD<Base>::reference object with erf
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero]           = x0;
     AD<double> result = CppAD::erf(v[zero]);
     ok   &= NearEqual(result, ay[0], eps, eps);

     // use a double with erf
     ok   &= NearEqual(CppAD::erf(x0), ay[0], eps, eps);

     return ok;
}

Input File: example/general/erf.cpp
4.4.2.19: The Exponential Function Minus One: expm1

4.4.2.19.a: Syntax
y = expm1(x)

4.4.2.19.b: Description
Returns the value of the exponential function minus one, which is defined by y == exp(x) - 1 .

4.4.2.19.c: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.19.d: CPPAD_USE_CPLUSPLUS_2011

4.4.2.19.d.a: true
If this preprocessor symbol is true (1), and x is an AD type, this is an 12.4.g.a: atomic operation .

4.4.2.19.d.b: false
If this preprocessor symbol is false (0), CppAD uses the representation @[@ \R{expm1} (x) = \exp(x) - 1 @]@ to compute this function.
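
The advantage of expm1 over this direct representation is numerical: for @(@ |x| @)@ near zero, exp(x) rounds to a value near one and the subtraction cancels most of the significant digits, while expm1 avoids the cancellation. The following minimal sketch in plain double precision illustrates the difference (it uses only std::exp and std::expm1 from <cmath>, C++11, and is not one of the distributed examples):

# include <cmath>
# include <cstdio>

int main(void)
{    double x = 1e-12;
     // exp(x) rounds to a double very close to 1 + x, so the
     // subtraction below cancels almost every digit of the result
     double naive = std::exp(x) - 1.0;
     // std::expm1 computes exp(x) - 1 without forming exp(x) first
     double good  = std::expm1(x);
     // naive retains only a few correct digits; good is accurate
     // to machine precision
     std::printf("naive = %.17g\nexpm1 = %.17g\n", naive, good);
     return 0;
}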

4.4.2.19.e: Example
The file 4.4.2.19.1: expm1.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/expm1.hpp
4.4.2.19.1: The AD expm1 Function: Example and Test

# include <cppad/cppad.hpp>
# include <cmath>

bool expm1(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0]     = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = CppAD::expm1(ax[0]);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // expx0 value
     double expx0 = std::exp(x0);
     ok &= NearEqual(ay[0], expx0-1.0,  eps, eps);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], expx0, eps, eps);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], expx0, eps, eps);

     // use a VecAD<Base>::reference object with expm1
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero]           = x0;
     AD<double> result = CppAD::expm1(v[zero]);
     ok   &= NearEqual(result, expx0-1.0, eps, eps);

     return ok;
}

Input File: example/general/expm1.cpp
4.4.2.20: The Logarithm of One Plus Argument: log1p

4.4.2.20.a: Syntax
y = log1p(x)

4.4.2.20.b: Description
Returns the value of the logarithm of one plus the argument, which is defined by y == log(1 + x) .

4.4.2.20.c: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.20.d: CPPAD_USE_CPLUSPLUS_2011

4.4.2.20.d.a: true
If this preprocessor symbol is true (1), and x is an AD type, this is an 12.4.g.a: atomic operation .

4.4.2.20.d.b: false
If this preprocessor symbol is false (0), CppAD uses the representation @[@ \R{log1p} (x) = \log(1 + x) @]@ to compute this function.
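
The advantage of log1p over this direct representation is numerical: when @(@ |x| @)@ is near machine epsilon, forming @(@ 1 + x @)@ in floating point discards most of the digits of x, while log1p does not. Either way, the derivative is
@[@ \R{log1p}^{(1)} (x) = \frac{1}{ 1 + x } @]@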

4.4.2.20.e: Example
The file 4.4.2.20.1: log1p.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/log1p.hpp
4.4.2.20.1: The AD log1p Function: Example and Test

# include <cppad/cppad.hpp>

bool log1p(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;

     // 10 times machine epsilon
     double eps = 10. * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // a temporary value
     AD<double> expm1_of_x0 = CppAD::expm1(x[0]);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::log1p(expm1_of_x0);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , x0,  eps, eps);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 1., eps, eps);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 1., eps, eps);

     // use a VecAD<Base>::reference object with log1p
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero] = expm1_of_x0;
     AD<double> result = CppAD::log1p(v[zero]);
     ok     &= NearEqual(result, x0, eps, eps);

     return ok;
}

Input File: example/general/log1p.cpp
4.4.2.21: The Sign: sign

4.4.2.21.a: Syntax
y = sign(x)

4.4.2.21.b: Description
Evaluates the sign function which is defined by @[@ {\rm sign} (x) = \left\{ \begin{array}{rl} +1 & {\rm if} \; x > 0 \\ 0 & {\rm if} \; x = 0 \\ -1 & {\rm if} \; x < 0 \end{array} \right. @]@

4.4.2.21.c: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.21.d: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.21.e: Derivative
CppAD computes the derivative of the sign function as zero for all argument values x . The correct mathematical derivative is different and is given by @[@ {\rm sign}^{(1)} (x) = 2 \delta (x) @]@ where @(@ \delta (x) @)@ is the Dirac Delta function.

4.4.2.21.f: Example
The file 4.4.2.21.1: sign.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/sign.hpp
4.4.2.21.1: Sign Function: Example and Test

# include <cppad/cppad.hpp>

bool sign(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;

     // create f: x -> y where f(x) = sign(x)
     size_t n = 1;
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ax(n), ay(m);
     ax[0]     = 0.;
     CppAD::Independent(ax);
     ay[0]     = sign(ax[0]);
     CppAD::ADFun<double> f(ax, ay);

     // check value during recording
     ok &= (ay[0] == 0.);

     // use f(x) to evaluate the sign function and its derivatives
     CPPAD_TESTVECTOR(double) x(n), y(m), dx(n), dy(m), w(m), dw(n);
     dx[0] = 1.;
     w[0] = 1.;
     //
     x[0]  = 2.;
     y     = f.Forward(0, x);
     ok   &= (y[0] == 1.);
     dy    = f.Forward(1, dx);
     ok   &= (dy[0] == 0.);
     dw   = f.Reverse(1, w);
     ok  &= (dw[0] == 0.);
     //
     x[0]  = 0.;
     y     = f.Forward(0, x);
     ok   &= (y[0] == 0.);
     dy    = f.Forward(1, dx);
     ok   &= (dy[0] == 0.);
     dw   = f.Reverse(1, w);
     ok  &= (dw[0] == 0.);
     //
     x[0]  = -2.;
     y     = f.Forward(0, x);
     ok   &= (y[0] == -1.);
     dy    = f.Forward(1, dx);
     ok   &= (dy[0] == 0.);
     dw   = f.Reverse(1, w);
     ok  &= (dw[0] == 0.);

     // use a VecAD<Base>::reference object with sign
     CppAD::VecAD<double> v(1);
     AD<double> zero(0);
     v[zero]           = 2.;
     AD<double> result = sign(v[zero]);
     ok   &= (result == 1.);

     return ok;
}

Input File: example/general/sign.cpp
4.4.3: The Binary Math Functions

4.4.3.a: Contents
atan2: 4.4.3.1: AD Two Argument Inverse Tangent Function
pow: 4.4.3.2: The AD Power Function
azmul: 4.4.3.3: Absolute Zero Multiplication

Input File: cppad/core/standard_math.hpp
4.4.3.1: AD Two Argument Inverse Tangent Function

4.4.3.1.a: Syntax
theta = atan2(y, x)

4.4.3.1.b: Purpose
Determines an angle @(@ \theta \in [ - \pi , + \pi ] @)@ such that @[@ \begin{array}{rcl} \sin ( \theta ) & = & y / \sqrt{ x^2 + y^2 } \\ \cos ( \theta ) & = & x / \sqrt{ x^2 + y^2 } \end{array} @]@
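
When @(@ x^2 + y^2 > 0 @)@, differentiating these relations gives the partial derivatives
@[@ \begin{array}{rcl} \D{\theta}{x} & = & - y / ( x^2 + y^2 ) \\ \D{\theta}{y} & = & x / ( x^2 + y^2 ) \end{array} @]@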

4.4.3.1.c: y
The argument y has one of the following prototypes
     const AD<Base>               &y
     const VecAD<Base>::reference &y

4.4.3.1.d: x
The argument x has one of the following prototypes
     const AD<Base>               &x
     const VecAD<Base>::reference &x

4.4.3.1.e: theta
The result theta has prototype
     AD<Base> theta

4.4.3.1.f: Operation Sequence
The AD of Base operation sequence used to calculate theta is 12.4.g.d: independent of x and y .

4.4.3.1.g: Example
The file 4.4.3.1.1: atan2.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/atan2.hpp
4.4.3.1.1: The AD atan2 Function: Example and Test

# include <cppad/cppad.hpp>

bool atan2(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // temporary values
     AD<double> sin_of_x0 = CppAD::sin(x[0]);
     AD<double> cos_of_x0 = CppAD::cos(x[0]);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = CppAD::atan2(sin_of_x0, cos_of_x0);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , x0, eps99, eps99);

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 1., eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 1., eps99, eps99);

     // use a VecAD<Base>::reference object with atan2
     CppAD::VecAD<double> v(2);
     AD<double> zero(0);
     AD<double> one(1);
     v[zero]           = sin_of_x0;
     v[one]            = cos_of_x0;
     AD<double> result = CppAD::atan2(v[zero], v[one]);
     ok               &= NearEqual(result, x0, eps99, eps99);

     return ok;
}

Input File: example/general/atan2.cpp
4.4.3.2: The AD Power Function

4.4.3.2.a: Syntax
z = pow(x, y)

4.4.3.2.b: See Also
8.12: pow_int

4.4.3.2.c: Purpose
Determines the value of the power function which is defined by @[@ {\rm pow} (x, y) = x^y @]@ This version of the pow function may use logarithms and exponentiation to compute derivatives. This will not work if x is less than or equal to zero. If the value of y is an integer, the 8.12: pow_int function is used to compute this value using only multiplication (and division if y is negative); this works even if x is less than or equal to zero.
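
As a minimal sketch of the integer exponent case (our illustration, not part of the distributed examples; the routine name pow_int_sketch is hypothetical), the following tapes a power with a negative base and an int exponent:

# include <cppad/cppad.hpp>

bool pow_int_sketch(void)
{    using CppAD::AD;

     // tape z = x^3; the base may be less than or equal to zero here
     CPPAD_TESTVECTOR(AD<double>) ax(1), az(1);
     ax[0] = -2.0;
     CppAD::Independent(ax);
     az[0] = CppAD::pow(ax[0], 3);  // int exponent, so pow_int is used
     CppAD::ADFun<double> f(ax, az);

     // first derivative is 3 * x^2 = 12 at x = -2
     CPPAD_TESTVECTOR(double) dx(1), dz(1);
     dx[0] = 1.0;
     dz    = f.Forward(1, dx);
     return dz[0] == 12.0;
}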

4.4.3.2.d: x
The argument x has one of the following prototypes
     const Base&                    x
     const AD<Base>&                x
     const VecAD<Base>::reference&  x

4.4.3.2.e: y
The argument y has one of the following prototypes
     const Base&                    y
     const AD<Base>&                y
     const VecAD<Base>::reference&  y

4.4.3.2.f: z
If both x and y are Base objects, the result z is also a Base object. Otherwise, it has prototype
     AD<Base> z

4.4.3.2.g: Operation Sequence
This is an AD of Base 12.4.g.a: atomic operation and hence is part of the current AD of Base 12.4.g.b: operation sequence .

4.4.3.2.h: Example
The file 4.4.3.2.1: pow.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/pow.hpp
4.4.3.2.1: The AD Power Function: Example and Test

# include <cppad/cppad.hpp>
# include <cmath>

bool pow(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 2;
     double x = 0.5;
     double y = 2.;
     CPPAD_TESTVECTOR(AD<double>) axy(n);
     axy[0]      = x;
     axy[1]      = y;

     // declare independent variables and start tape recording
     CppAD::Independent(axy);

     // range space vector
     size_t m = 3;
     CPPAD_TESTVECTOR(AD<double>) az(m);
     az[0] = CppAD::pow(axy[0], axy[1]); // pow(variable, variable)
     az[1] = CppAD::pow(axy[0], y);      // pow(variable, parameter)
     az[2] = CppAD::pow(x,     axy[1]);  // pow(parameter, variable)

     // create f: axy -> az and stop tape recording
     CppAD::ADFun<double> f(axy, az);

     // check value
     double check = std::pow(x, y);
     size_t i;
     for(i = 0; i < m; i++)
          ok &= NearEqual(az[i] , check,  eps, eps);

     // forward computation of first partial w.r.t. x
     CPPAD_TESTVECTOR(double) dxy(n);
     CPPAD_TESTVECTOR(double) dz(m);
     dxy[0] = 1.;
     dxy[1] = 0.;
     dz    = f.Forward(1, dxy);
     check = y * std::pow(x, y-1.);
     ok   &= NearEqual(dz[0], check, eps, eps);
     ok   &= NearEqual(dz[1], check, eps, eps);
     ok   &= NearEqual(dz[2],    0., eps, eps);

     // forward computation of first partial w.r.t. y
     dxy[0] = 0.;
     dxy[1] = 1.;
     dz    = f.Forward(1, dxy);
     check = std::log(x) * std::pow(x, y);
     ok   &= NearEqual(dz[0], check, eps, eps);
     ok   &= NearEqual(dz[1],    0., eps, eps);
     ok   &= NearEqual(dz[2], check, eps, eps);

     // reverse computation of derivative of z[0] + z[1] + z[2]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     w[1]  = 1.;
     w[2]  = 1.;
     dw    = f.Reverse(1, w);
     check = y * std::pow(x, y-1.);
     ok   &= NearEqual(dw[0], 2. * check, eps, eps);
     check = std::log(x) * std::pow(x, y);
     ok   &= NearEqual(dw[1], 2. * check, eps, eps);

     // use a VecAD<Base>::reference object with pow
     CppAD::VecAD<double> v(2);
     AD<double> zero(0);
     AD<double> one(1);
     v[zero]           = axy[0];
     v[one]            = axy[1];
     AD<double> result = CppAD::pow(v[zero], v[one]);
     ok               &= NearEqual(result, az[0], eps, eps);

     return ok;
}

Input File: example/general/pow.cpp
4.4.3.3: Absolute Zero Multiplication

4.4.3.3.a: Syntax
z = azmul(x, y)

4.4.3.3.b: Purpose
Evaluates multiplication with an absolute zero for any of the possible types listed below. The result is given by @[@ z = \left\{ \begin{array}{ll} 0 & {\rm if} \; x = 0 \\ x \cdot y & {\rm otherwise} \end{array} \right. @]@ Note that if x is zero and y is infinity, IEEE multiplication would result in not-a-number, whereas z would be zero.
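
A minimal sketch at the Base level (our illustration; the routine name azmul_sketch is hypothetical) contrasting azmul with IEEE multiplication:

# include <cppad/cppad.hpp>
# include <cmath>
# include <limits>

bool azmul_sketch(void)
{    double inf  = std::numeric_limits<double>::infinity();
     double ieee = 0.0 * inf;               // IEEE rule: not a number
     double z    = CppAD::azmul(0.0, inf);  // absolute zero rule: zero
     return std::isnan(ieee) && z == 0.0;
}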

4.4.3.3.c: Base
If Base satisfies the 4.7: base type requirements and arguments x , y have prototypes
     const Base& x
     const Base& y
then the result z has prototype
     Base z

4.4.3.3.d: AD<Base>
If the arguments x , y have prototype
     const AD<Base>& x
     const AD<Base>& y
then the result z has prototype
     AD<Base> z

4.4.3.3.e: VecAD<Base>
If the arguments x , y have prototype
     const VecAD<Base>::reference& x
     const VecAD<Base>::reference& y
then the result z has prototype
     AD<Base> z

4.4.3.3.f: Example
The file 4.4.3.3.1: azmul.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/azmul.hpp
4.4.3.3.1: AD Absolute Zero Multiplication: Example and Test

# include <cppad/cppad.hpp>
# include <cmath>

bool azmul(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double inf = std::numeric_limits<double>::infinity();
     double eps = 10. * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 2;
     double x = 0.5;
     double y = 2.0;
     CPPAD_TESTVECTOR(AD<double>) axy(n);
     axy[0]      = x;
     axy[1]      = y;

     // declare independent variables and start tape recording
     CppAD::Independent(axy);

     // range space vector
     size_t m = 5;
     CPPAD_TESTVECTOR(AD<double>) az(m);
     az[0] = CppAD::azmul(axy[0], axy[1]); // azmul(variable, variable)
     az[1] = CppAD::azmul(axy[0], inf);    // azmul(variable, parameter=inf)
     az[2] = CppAD::azmul(axy[0], 3.0);    // azmul(variable, parameter=3.0)
     az[3] = CppAD::azmul(0.0, axy[1]);    // azmul(parameter=0.0, variable)
     az[4] = CppAD::azmul(4.0, axy[1]);    // azmul(parameter=4.0, variable)

     // create f: axy -> az and stop tape recording
     CppAD::ADFun<double> f(axy, az);

     // check value when x is not zero
     ok &= NearEqual(az[0] , x * y,  eps, eps);
     ok &= az[1] == inf;
     ok &= NearEqual(az[2] , x * 3.0,  eps, eps);
     ok &= az[3] == 0.0;
     ok &= NearEqual(az[4] , 4.0 * y,  eps, eps);

     // check values when x is zero and y is infinity
     CPPAD_TESTVECTOR(double) xy(n), z(m);
     xy[0] = 0.0;
     xy[1] = inf;
     z     = f.Forward(0, xy);
     ok &= z[0] == 0.0;
     ok &= z[1] == 0.0;
     ok &= z[2] == 0.0;
     ok &= z[3] == 0.0;
     ok &= z[4] == inf;

     return ok;
}

Input File: example/general/azmul.cpp
4.4.4: AD Conditional Expressions

4.4.4.a: Syntax
result = CondExpRel(left, right, if_true, if_false)

4.4.4.b: Purpose
Record, as part of an AD of Base 12.4.g.b: operation sequence , the conditional result
     if( left Cop right )
          result = if_true
     else result = if_false
The relational Rel and comparison operator Cop above have the following correspondence:
     Rel   Lt   Le   Eq   Ge   Gt
     Cop    <   <=   ==   >=   >
If f is the 5: ADFun object corresponding to the AD operation sequence, the assignment choice for result in an AD conditional expression is made each time 5.3: f.Forward is used to evaluate the zero order Taylor coefficients with new values for the 12.4.k.c: independent variables . This is in contrast to the 4.5.1: AD comparison operators which are boolean valued and not included in the AD operation sequence.
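
For instance, the following minimal sketch (our illustration; the routine name abs_sketch is hypothetical) tapes |x| once and then evaluates it for an argument with the opposite sign, without retaping:

# include <cppad/cppad.hpp>

bool abs_sketch(void)
{    using CppAD::AD;

     CPPAD_TESTVECTOR(AD<double>) ax(1), ay(1);
     ax[0] = 1.0;
     CppAD::Independent(ax);
     AD<double> zero(0.0);
     // if( ax[0] >= zero ) ay[0] = ax[0]; else ay[0] = -ax[0];
     ay[0] = CppAD::CondExpGe(ax[0], zero, ax[0], -ax[0]);
     CppAD::ADFun<double> f(ax, ay);

     // the conditional choice is re-made during this zero order forward
     CPPAD_TESTVECTOR(double) x(1), y(1);
     x[0] = -3.0;                 // sign differs from the taped value
     y    = f.Forward(0, x);
     return y[0] == 3.0;
}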

4.4.4.c: Rel
In the syntax above, the relation Rel represents one of the following two-character relation names: Lt, Le, Eq, Ge, Gt. As in the table above, Rel determines which comparison operator Cop is used when comparing left and right .

4.4.4.d: Type
These functions are defined in the CppAD namespace for arguments of Type equal to float , double, or any type of the form AD<Base> . (Note that all four arguments must have the same type.)

4.4.4.e: left
The argument left has prototype
     const Type& left
It specifies the value for the left side of the comparison operator.

4.4.4.f: right
The argument right has prototype
     const Type& right
It specifies the value for the right side of the comparison operator.

4.4.4.g: if_true
The argument if_true has prototype
     const Type& if_true
It specifies the return value if the result of the comparison is true.

4.4.4.h: if_false
The argument if_false has prototype
     const Type& if_false
It specifies the return value if the result of the comparison is false.

4.4.4.i: result
The result has prototype
     Type result

4.4.4.j: Optimize
The 5.7: optimize method will optimize conditional expressions in the following way: During 5.3.1: zero order forward mode , once the values of the left and right have been determined, it is known if the true or false case is required. From this point on, values corresponding to the case that is not required are not computed. This optimization is done for the rest of zero order forward mode as well as forward and reverse derivative calculations.
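
A minimal sketch of this behavior (our illustration; optimize_sketch is a hypothetical routine name; see 5.7.5: optimize_conditional_skip.cpp for the full test):

# include <cppad/cppad.hpp>

bool optimize_sketch(void)
{    using CppAD::AD;

     CPPAD_TESTVECTOR(AD<double>) ax(1), ay(1);
     ax[0] = 1.0;
     CppAD::Independent(ax);
     AD<double> zero(0.0);
     // the exp(ax[0]) branch is only needed when ax[0] > 0
     ay[0] = CppAD::CondExpGt(ax[0], zero, exp(ax[0]), zero);
     CppAD::ADFun<double> f(ax, ay);
     f.optimize();                // enables conditional skipping

     // when x[0] <= 0, values for the exp branch are not computed
     CPPAD_TESTVECTOR(double) x(1), y(1);
     x[0] = -1.0;
     y    = f.Forward(0, x);
     return y[0] == 0.0;
}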

4.4.4.k: Deprecate 2005-08-07
Previous versions of CppAD used
     CondExp(flag, if_true, if_false)
for the same meaning as
     CondExpGt(flag, Type(0), if_true, if_false)
Use of CondExp is deprecated, but continues to be supported.

4.4.4.l: Operation Sequence
This is an AD of Base 12.4.g.a: atomic operation and hence is part of the current AD of Base 12.4.g.b: operation sequence .

4.4.4.m: Example

4.4.4.n: Test
The file 4.4.4.1: cond_exp.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.

4.4.4.o: Atan2
The following implementation of the AD 4.4.3.1: atan2 function is a more complex example of using conditional expressions:
template <class Base>
AD<Base> atan2 (const AD<Base> &y, const AD<Base> &x)
{     AD<Base> alpha;
     AD<Base> beta;
     AD<Base> theta;

     AD<Base> zero(0.);
     AD<Base> pi2(2. * atan(1.));
     AD<Base> pi(2. * pi2);

     AD<Base> ax = fabs(x);
     AD<Base> ay = fabs(y);

     // if( ax > ay )
     //     theta = atan(ay / ax);
     // else     theta = pi2 - atan(ax / ay);
     alpha = atan(ay / ax);
     beta  = pi2 - atan(ax / ay);
     theta = CondExpGt(ax, ay, alpha, beta);         // use of CondExp

     // if( x <= 0 )
     //     theta = pi - theta;
     theta = CondExpLe(x, zero, pi - theta, theta);  // use of CondExp

     // if( y <= 0 )
     //     theta = - theta;
     theta = CondExpLe(y, zero, -theta, theta);      // use of CondExp

     return theta;
}

Input File: cppad/core/cond_exp.hpp
4.4.4.1: Conditional Expressions: Example and Test

4.4.4.1.a: See Also
5.7.5: optimize_conditional_skip.cpp

4.4.4.1.b: Description
Use CondExp to compute @[@ f(x) = \sum_{j=0}^{m-1} x_j \log( x_j ) @]@ and its derivative at various argument values (where @(@ x_j \geq 0 @)@) without having to re-tape; i.e., using only one 5: ADFun object. Note that @(@ x_j \log ( x_j ) \rightarrow 0 @)@ as @(@ x_j \downarrow 0 @)@ and we need to handle the case @(@ x_j = 0 @)@ in a special way to avoid multiplying zero by infinity.

# include <cppad/cppad.hpp>
# include <limits>

bool CondExp(void)
{     bool ok = true;

     using CppAD::isnan;
     using CppAD::AD;
     using CppAD::NearEqual;
     using CppAD::log;
     double eps  = 100. * CppAD::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n = 5;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     size_t j;
     for(j = 0; j < n; j++)
          ax[j] = 1.;

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     AD<double> asum  = 0.;
     AD<double> azero = 0.;
     for(j = 0; j < n; j++)
     {     // if x_j > 0, add x_j * log( x_j ) to the sum
          asum += CppAD::CondExpGt(ax[j], azero, ax[j] * log(ax[j]), azero);
     }

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = asum;

     // create f: x -> ay and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // vectors for arguments to the function object f
     CPPAD_TESTVECTOR(double) x(n);   // argument values
     CPPAD_TESTVECTOR(double) y(m);   // function values
     CPPAD_TESTVECTOR(double) w(m);   // function weights
     CPPAD_TESTVECTOR(double) dw(n);  // derivative of weighted function

     // a case where x[j] > 0 for all j
     double check  = 0.;
     for(j = 0; j < n; j++)
     {     x[j]   = double(j + 1);
          check += x[j] * log( x[j] );
     }

     // function value
     y  = f.Forward(0, x);
     ok &= NearEqual(y[0], check, eps, eps);

     // compute derivative of y[0]
     w[0] = 1.;
     dw   = f.Reverse(1, w);
     for(j = 0; j < n; j++)
          ok &= NearEqual(dw[j], log(x[j]) + 1., eps, eps);

     // a case where x[3] is equal to zero
     check -= x[3] * log( x[3] );
     x[3]   = 0.;

     // function value
     y   = f.Forward(0, x);
     ok &= NearEqual(y[0], check, eps, eps);

     // check derivative of y[0]
     f.check_for_nan(false);
     w[0] = 1.;
     dw   = f.Reverse(1, w);
     for(j = 0; j < n; j++)
     {     if( x[j] > 0 )
               ok &= NearEqual(dw[j], log(x[j]) + 1., eps, eps);
          else
          {     // Note that in the case where dw has type AD<double> and is
               // a variable, this dw[j] can be nan (zero times nan is not zero).
               ok &= NearEqual(dw[j], 0.0, eps, eps);
          }
     }

     return ok;
}

Input File: example/general/cond_exp.cpp
4.4.5: Discrete AD Functions

4.4.5.a: Syntax
CPPAD_DISCRETE_FUNCTION(Base, name)
y  = name(x)
ay = name(ax)

4.4.5.b: Purpose
Record the evaluation of a discrete function as part of an AD<Base> 12.4.g.b: operation sequence . The value of a discrete function can depend on the 12.4.k.c: independent variables , but its derivative is identically zero. For example, suppose that the integer part of a 12.4.m: variable x is the index into an array of values.

4.4.5.c: Base
This is the 4.7: base type corresponding to the operations sequence; i.e., use of the name with arguments of type AD<Base> can be recorded in an operation sequence.

4.4.5.d: name
This is the name of the function (as it is used in the source code). The user must provide a version of name where the argument has type Base . CppAD uses this to create a version of name where the argument has type AD<Base> .

4.4.5.e: x
The argument x has prototype
     const Base& x
It is the value at which the user provided version of name is to be evaluated.

4.4.5.f: y
The result y has prototype
     Base y
It is the return value for the user provided version of name .

4.4.5.g: ax
The argument ax has prototype
     const AD<Base>& ax
It is the value at which the CppAD provided version of name is to be evaluated.

4.4.5.h: ay
The result ay has prototype
     AD<Base> ay
It is the return value for the CppAD provided version of name .

4.4.5.i: Create AD Version
The preprocessor macro invocation
     CPPAD_DISCRETE_FUNCTION(Base, name)
defines the AD<Base> version of name . This can be within a namespace (not the CppAD namespace) but must be outside of any routine.
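
A minimal sketch of the macro placement (our illustration; floor_double and discrete_sketch are hypothetical names):

# include <cppad/cppad.hpp>
# include <cmath>

namespace { // in a namespace, outside of any routine
     double floor_double(const double& x)
     {    return std::floor(x); }
     CPPAD_DISCRETE_FUNCTION(double, floor_double)
}

bool discrete_sketch(void)
{    using CppAD::AD;

     CPPAD_TESTVECTOR(AD<double>) ax(1), ay(1);
     ax[0] = 2.5;
     CppAD::Independent(ax);
     ay[0] = floor_double(ax[0]);  // AD<double> version made by the macro
     CppAD::ADFun<double> f(ax, ay);

     CPPAD_TESTVECTOR(double) x(1), y(1);
     x[0] = 3.5;                   // uses the double version of floor_double
     y    = f.Forward(0, x);
     return y[0] == 3.0;
}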

4.4.5.j: Operation Sequence
This is an AD of Base 12.4.g.a: atomic operation and hence is part of the current AD of Base 12.4.g.b: operation sequence .

4.4.5.k: Derivatives
During a zero order 5.3: Forward operation, an 5: ADFun object will compute the value of name using the user provided Base version of this routine. All the derivatives of name will be evaluated as zero.

4.4.5.l: Parallel Mode
The first call to
     ay = name(ax)
must not be in 8.23.4: parallel execution mode.

4.4.5.m: Example
The file 4.4.5.1: tape_index.cpp contains an example and test that uses a discrete function to vary an array index during 5.3: Forward mode calculations. The file 4.4.5.2: interp_onetape.cpp contains an example and test that uses discrete functions to avoid retaping a calculation that requires interpolation. (The file 4.4.5.3: interp_retape.cpp shows how interpolation can be done with retaping.)

4.4.5.n: CppADCreateDiscrete Deprecated 2007-07-28
The preprocessor symbol CppADCreateDiscrete is defined to be the same as CPPAD_DISCRETE_FUNCTION but its use is deprecated.
Input File: cppad/core/discrete.hpp
4.4.5.1: Taping Array Index Operation: Example and Test
# include <cppad/cppad.hpp>

namespace {
     double Array(const double &index)
     {     static double array[] = {
               5.,
               4.,
               3.,
               2.,
               1.
          };
          static size_t number = sizeof(array) / sizeof(array[0]);
          if( index < 0. )
               return array[0];

          size_t i = static_cast<size_t>(index);
          if( i >= number )
               return array[number-1];

          return array[i];
     }
     // in empty namespace and outside any other routine
     CPPAD_DISCRETE_FUNCTION(double, Array)
}

bool TapeIndex(void)
{     bool ok = true;
     using CppAD::AD;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) X(n);
     X[0] = 2.;   // array index value
     X[1] = 3.;   // multiplier of array index value

     // declare independent variables and start tape recording
     CppAD::Independent(X);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) Y(m);
     Y[0] = X[1] * Array( X[0] );

     // create f: X -> Y and stop tape recording
     CppAD::ADFun<double> f(X, Y);

     // vectors for arguments to the function object f
     CPPAD_TESTVECTOR(double) x(n);   // argument values
     CPPAD_TESTVECTOR(double) y(m);   // function values
     CPPAD_TESTVECTOR(double) w(m);   // function weights
     CPPAD_TESTVECTOR(double) dw(n);  // derivative of weighted function

     // check function value
     x[0] = Value(X[0]);
     x[1] = Value(X[1]);
     y[0] = Value(Y[0]);
     ok  &= y[0] == x[1] * Array(x[0]);

     // evaluate f where x has different values
     x[0] = x[0] + 1.;  // new array index value
     x[1] = x[1] + 1.;  // new multiplier value
     y    = f.Forward(0, x);
     ok  &= y[0] == x[1] * Array(x[0]);

     // evaluate derivative of y[0]
     w[0] = 1.;
     dw   = f.Reverse(1, w);
     ok   &= dw[0] == 0.;              // partial w.r.t array index
     ok   &= dw[1] == Array(x[0]);     // partial w.r.t multiplier

     return ok;
}

Input File: example/general/tape_index.cpp
4.4.5.2: Interpolation Without Retaping: Example and Test

4.4.5.2.a: See Also
4.4.5.3: interp_retape.cpp

# include <cppad/cppad.hpp>
# include <cassert>
# include <cmath>

namespace {
     double ArgumentValue[] = {
          .0 ,
          .2 ,
          .4 ,
          .8 ,
          1.
     };
     double FunctionValue[] = {
          std::sin( ArgumentValue[0] ) ,
          std::sin( ArgumentValue[1] ) ,
          std::sin( ArgumentValue[2] ) ,
          std::sin( ArgumentValue[3] ) ,
          std::sin( ArgumentValue[4] )
     };
     size_t TableLength = 5;

     size_t Index(const double &x)
     {     // determine the index j such that x is between
          // ArgumentValue[j] and ArgumentValue[j+1]
          static size_t j = 0;
          while ( x < ArgumentValue[j] && j > 0 )
               j--;
          while ( x > ArgumentValue[j+1] && j < TableLength - 2)
               j++;
          // assert conditions that must be true given logic above
          assert( j >= 0 && j < TableLength - 1 );
          return j;
     }

     double Argument(const double &x)
     {     size_t j = Index(x);
          return ArgumentValue[j];
     }
     double Function(const double &x)
     {     size_t j = Index(x);
          return FunctionValue[j];
     }

     double Slope(const double &x)
     {     size_t j  = Index(x);
          double dx = ArgumentValue[j+1] - ArgumentValue[j];
          double dy = FunctionValue[j+1] - FunctionValue[j];
          return dy / dx;
     }
     CPPAD_DISCRETE_FUNCTION(double, Argument)
     CPPAD_DISCRETE_FUNCTION(double, Function)
     CPPAD_DISCRETE_FUNCTION(double, Slope)
}


bool interp_onetape(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n = 1;
     CPPAD_TESTVECTOR(AD<double>) X(n);
     X[0] = .4 * ArgumentValue[1] + .6 * ArgumentValue[2];

     // declare independent variables and start tape recording
     CppAD::Independent(X);

     // evaluate piecewise linear interpolant at X[0]
     AD<double> A = Argument(X[0]);
     AD<double> F = Function(X[0]);
     AD<double> S = Slope(X[0]);
     AD<double> I = F + (X[0] - A) * S;

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) Y(m);
     Y[0] = I;

     // create f: X -> Y and stop tape recording
     CppAD::ADFun<double> f(X, Y);

     // vectors for arguments to the function object f
     CPPAD_TESTVECTOR(double) x(n);   // argument values
     CPPAD_TESTVECTOR(double) y(m);   // function values
     CPPAD_TESTVECTOR(double) dx(n);  // differentials in x space
     CPPAD_TESTVECTOR(double) dy(m);  // differentials in y space

     // to check function value we use the fact that X[0] is between
     // ArgumentValue[1] and ArgumentValue[2]
     x[0]          = Value(X[0]);
     double delta  = ArgumentValue[2] - ArgumentValue[1];
     double check  = FunctionValue[2] * (x[0] - ArgumentValue[1]) / delta
                   + FunctionValue[1] * (ArgumentValue[2] - x[0]) / delta;
     ok  &= NearEqual(Y[0], check, eps99, eps99);

     // evaluate f where x has different value
     x[0]   = .7 * ArgumentValue[2] + .3 * ArgumentValue[3];
     y      = f.Forward(0, x);

     // check function value
     delta  = ArgumentValue[3] - ArgumentValue[2];
     check  = FunctionValue[3] * (x[0] - ArgumentValue[2]) / delta
                   + FunctionValue[2] * (ArgumentValue[3] - x[0]) / delta;
     ok  &= NearEqual(y[0], check, eps99, eps99);

     // evaluate partials w.r.t. x[0]
     dx[0] = 1.;
     dy    = f.Forward(1, dx);

     // check that the derivative is the slope
     check = (FunctionValue[3] - FunctionValue[2])
           / (ArgumentValue[3] - ArgumentValue[2]);
     ok   &= NearEqual(dy[0], check, eps99, eps99);

     return ok;
}

Input File: example/general/interp_onetape.cpp
4.4.5.3: Interpolation With Retaping: Example and Test

4.4.5.3.a: See Also
4.4.5.2: interp_onetape.cpp

# include <cppad/cppad.hpp>
# include <cassert>
# include <cmath>

namespace {
     double ArgumentValue[] = {
          .0 ,
          .2 ,
          .4 ,
          .8 ,
          1.
     };
     double FunctionValue[] = {
          std::sin( ArgumentValue[0] ) ,
          std::sin( ArgumentValue[1] ) ,
          std::sin( ArgumentValue[2] ) ,
          std::sin( ArgumentValue[3] ) ,
          std::sin( ArgumentValue[4] )
     };
     size_t TableLength = 5;

     size_t Index(const CppAD::AD<double> &x)
     {     // determine the index j such that x is between
          // ArgumentValue[j] and ArgumentValue[j+1]
          static size_t j = 0;
          while ( x < ArgumentValue[j] && j > 0 )
               j--;
          while ( x > ArgumentValue[j+1] && j < TableLength - 2)
               j++;
          // assert conditions that must be true given logic above
          assert( j >= 0 && j < TableLength - 1 );
          return j;
     }
     double Argument(const CppAD::AD<double> &x)
     {     size_t j = Index(x);
          return ArgumentValue[j];
     }
     double Function(const CppAD::AD<double> &x)
     {     size_t j = Index(x);
          return FunctionValue[j];
     }
     double Slope(const CppAD::AD<double> &x)
     {     size_t j  = Index(x);
          double dx = ArgumentValue[j+1] - ArgumentValue[j];
          double dy = FunctionValue[j+1] - FunctionValue[j];
          return dy / dx;
     }
}

bool interp_retape(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n = 1;
     CPPAD_TESTVECTOR(AD<double>) X(n);

     // loop over argument values
     size_t k;
     for(k = 0; k < TableLength - 1; k++)
     {
          X[0] = .4 * ArgumentValue[k] + .6 * ArgumentValue[k+1];

          // declare independent variables and start tape recording
          // (use a different tape for each argument value)
          CppAD::Independent(X);

          // evaluate piecewise linear interpolant at X[0]
          AD<double> A = Argument(X[0]);
          AD<double> F = Function(X[0]);
          AD<double> S = Slope(X[0]);
          AD<double> I = F + (X[0] - A) * S;

          // range space vector
          size_t m = 1;
          CPPAD_TESTVECTOR(AD<double>) Y(m);
          Y[0] = I;

          // create f: X -> Y and stop tape recording
          CppAD::ADFun<double> f(X, Y);

          // vectors for arguments to the function object f
          CPPAD_TESTVECTOR(double) x(n);   // argument values
          CPPAD_TESTVECTOR(double) y(m);   // function values
          CPPAD_TESTVECTOR(double) dx(n);  // differentials in x space
          CPPAD_TESTVECTOR(double) dy(m);  // differentials in y space

          // to check function value we use the fact that X[0] is between
          // ArgumentValue[k] and ArgumentValue[k+1]
          double delta, check;
          x[0]   = Value(X[0]);
          delta  = ArgumentValue[k+1] - ArgumentValue[k];
          check  = FunctionValue[k+1] * (x[0]-ArgumentValue[k]) / delta
                    + FunctionValue[k] * (ArgumentValue[k+1]-x[0]) / delta;
          ok    &= NearEqual(Y[0], check, eps99, eps99);

          // evaluate partials w.r.t. x[0]
          dx[0] = 1.;
          dy    = f.Forward(1, dx);

          // check that the derivative is the slope
          check = (FunctionValue[k+1] - FunctionValue[k])
                / (ArgumentValue[k+1] - ArgumentValue[k]);
          ok   &= NearEqual(dy[0], check, eps99, eps99);
     }
     return ok;
}

Input File: example/general/interp_retape.cpp
4.4.6: Numeric Limits For an AD and Base Types

4.4.6.a: Syntax
eps = numeric_limits<Float>::epsilon()
min = numeric_limits<Float>::min()
max = numeric_limits<Float>::max()
nan = numeric_limits<Float>::quiet_NaN()
numeric_limits<Float>::digits10

4.4.6.b: CppAD::numeric_limits
These functions have the prototype
     static Float CppAD::numeric_limits<Float>::fun(void)
where fun is epsilon, min, max, or quiet_NaN. (Note that digits10 is a member variable and not a function.)

4.4.6.c: std::numeric_limits
CppAD does not use a specialization of std::numeric_limits because this would be too restrictive. The C++ standard specifies that non-fundamental standard types, such as 4.7.9.6: std::complex<double> , shall not have specializations of std::numeric_limits; see Section 18.2 of ISO/IEC 14882:1998(E). In addition, since C++11, only literal types can have a specialization of std::numeric_limits.

4.4.6.d: Float
These functions are defined for all AD<Base> , and for all corresponding Base types; see Base type 4.7.6: base_limits .
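
A minimal sketch of the syntax above for Float equal to AD<double> (our illustration; limits_sketch is a hypothetical routine name):

# include <cppad/cppad.hpp>

bool limits_sketch(void)
{    typedef CppAD::AD<double> Float;

     Float eps = CppAD::numeric_limits<Float>::epsilon();
     Float nan = CppAD::numeric_limits<Float>::quiet_NaN();
     int   d10 = CppAD::numeric_limits<Float>::digits10;

     bool ok = true;
     ok &= Float(1.0) + eps != Float(1.0);  // epsilon is not negligible
     ok &= nan != nan;                      // nan compares unequal to itself
     ok &= d10 > 0;
     return ok;
}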

4.4.6.e: epsilon
The result eps is equal to machine epsilon and has prototype
     
Float eps
The file 4.4.6.1: num_limits.cpp tests the value eps by checking that the following are true
     1 != 1 + eps
     1 == 1 + eps / 2
where all the values, and calculations, are done with the precision corresponding to Float .

4.4.6.f: min
The result min is equal to the minimum positive normalized value and has prototype
     Float min
The file 4.4.6.1: num_limits.cpp tests the value min by checking that the following are true
     abs( ((min / 100) * 100) / min - 1 ) > 3 * eps
     abs( ((min * 100) / 100) / min - 1 ) < 3 * eps
where all the values, and calculations, are done with the precision corresponding to Float .

4.4.6.g: max
The result max is equal to the maximum finite value and has prototype
     Float max
The file 4.4.6.1: num_limits.cpp tests the value max by checking that the following are true
     abs( ((max * 100) / 100) / max - 1 ) > 3 * eps
     abs( ((max / 100) * 100) / max - 1 ) < 3 * eps
where all the values, and calculations, are done with the precision corresponding to Float .

4.4.6.h: quiet_NaN
The result nan is not a number and has prototype
     Float nan
The file 4.4.6.1: num_limits.cpp tests the value nan by checking that the following is true
     nan != nan

4.4.6.i: digits10
The member variable digits10 has prototype
     static const int numeric_limits<Float>::digits10
It is the number of decimal digits that can be represented by a Float value. A number with this many decimal digits can be converted to Float and back to a string, without change due to rounding or overflow.

4.4.6.j: Example
The file 4.4.6.1: num_limits.cpp contains an example and test of these functions.
Input File: cppad/core/numeric_limits.hpp
4.4.6.1: Numeric Limits: Example and Test

# ifdef _MSC_VER
// Suppress Microsoft compiler warning about possible loss of precision,
// in the constructors (when converting to std::complex<float>)
//     Float one = 1
//     Float two = 2
// 1 and 2 are small enough so no loss of precision when converting to float.
# pragma warning(disable:4244)
# endif

# include <cppad/cppad.hpp>
# include <complex>

namespace {
     typedef CppAD::AD<double> Float;
     //
     // -----------------------------------------------------------------
     bool check_epsilon(void)
     {     bool ok    = true;
          Float eps   = CppAD::numeric_limits<Float>::epsilon();
          Float eps2  = eps / 2.0;
          Float check = 1.0 + eps;
          ok         &= 1.0 !=  check;
          check       = 1.0 + eps2;
          ok         &= 1.0 == check;
          return ok;
     }
     // -----------------------------------------------------------------
     bool check_min(void)
     {     bool ok     = true;
          Float min   = CppAD::numeric_limits<Float>::min();
          Float eps   = CppAD::numeric_limits<Float>::epsilon();
          //
          Float match = (min / 100.) * 100.;
          ok         &= fabs(match / min - 1.0)  > 3.0 * eps;
          //
          match       = (min * 100.) / 100.;
          ok         &= fabs(match / min - 1.0)  < 3.0 * eps;
          return ok;
     }
     // -----------------------------------------------------------------
     bool check_max(void)
     {     bool ok     = true;
          Float max   = CppAD::numeric_limits<Float>::max();
          Float eps   = CppAD::numeric_limits<Float>::epsilon();
          //
          Float match = (max * 100.) / 100.;
          ok         &= fabs(match / max - 1.0) > 3.0 * eps;
          //
          match       = (max / 100.) * 100.;
          ok         &= fabs(match / max - 1.0) < 3.0 * eps;
          return ok;
     }
     // -----------------------------------------------------------------
     bool check_nan(void)
     {     bool ok     = true;
          Float nan   = CppAD::numeric_limits<Float>::quiet_NaN();
          ok         &= nan != nan;
          return ok;
     }
     // -----------------------------------------------------------------
     bool check_digits10(void)
     {     bool ok     = true;
          Float neg_log_eps =
               - log10( CppAD::numeric_limits<Float>::epsilon() );
          int ceil_neg_log_eps =
               Integer( neg_log_eps );
          ok &= ceil_neg_log_eps == CppAD::numeric_limits<Float>::digits10;
          return ok;
     }
}

bool num_limits(void)
{     bool ok = true;

     ok &= check_epsilon();
     ok &= check_min();
     ok &= check_max();
     ok &= check_nan();
     ok &= check_digits10();

     return ok;
}

Input File: example/general/num_limits.cpp
4.4.7: Atomic AD Functions

4.4.7.a: Contents
checkpoint: 4.4.7.1: Checkpointing Functions
atomic_base: 4.4.7.2: User Defined Atomic AD Functions

Input File: omh/atomic.omh
4.4.7.1: Checkpointing Functions

4.4.7.1.a: Syntax
checkpoint<Base> atom_fun(
     name, algo, ax, ay, sparsity, optimize
)
sv = atom_fun.size_var()
atom_fun.option(option_value)
algo(ax, ay)
atom_fun(ax, ay)
checkpoint<Base>::clear()


4.4.7.1.b: See Also
5.4.3.2: reverse_checkpoint.cpp

4.4.7.1.c: Purpose

4.4.7.1.c.a: Reduce Memory
You can reduce the size of the tape and memory required for AD by checkpointing functions of the form @(@ y = f(x) @)@ where @(@ f : B^n \rightarrow B^m @)@.

4.4.7.1.c.b: Faster Recording
It may also reduce the time required to record the same function for different values of the independent variables. Note that the operation sequence for a recording that uses @(@ f(x) @)@ may depend on its independent variables.

4.4.7.1.c.c: Repeating Forward
Normally, CppAD stores 5.3: forward mode results until they are freed using 5.3.8: capacity_order or the corresponding 5: ADFun object is deleted. This is not true for checkpoint functions because a checkpoint function may be used repeatedly with different arguments in the same tape. Thus, forward mode results are recomputed each time a checkpoint function is used during a forward or reverse mode sweep.

4.4.7.1.c.d: Restriction
The 12.4.g.b: operation sequence representing @(@ f(x) @)@ cannot depend on the value of @(@ x @)@. The approach in the 5.4.3.2: reverse_checkpoint.cpp example can be applied when the operation sequence depends on @(@ x @)@.

4.4.7.1.c.e: Multiple Level AD
If Base is an AD type, it is possible to record Base operations. Note that atom_fun will treat algo as an atomic operation while recording AD<Base> operations, but not while recording Base operations. See the 4.4.7.1.2: atomic_mul_level.cpp example.

4.4.7.1.d: Method
The checkpoint class is derived from atomic_base and automates the process of defining an atomic operation. It implements all the atomic_base 4.4.7.2.c: virtual functions and hence its source code cppad/core/checkpoint.hpp provides an example implementation of 4.4.7.2: atomic_base . The difference is that checkpoint.hpp uses AD instead of user provided derivatives.

4.4.7.1.e: constructor
The syntax for the checkpoint constructor is
     checkpoint<Base> atom_fun(name, algo, ax, ay)
  1. This constructor cannot be called in 8.23.4: parallel mode.
  2. You cannot currently be recording AD<Base> operations when the constructor is called.
  3. This object atom_fun must not be destructed for as long as any ADFun<Base> object uses its atomic operation.
  4. This class is implemented as a derived class of 4.4.7.2.1.c: atomic_base and hence some of its error messages will refer to atomic_base.


4.4.7.1.f: Base
The type Base specifies the base type for AD operations.

4.4.7.1.g: ADVector
The type ADVector must be a 8.9: simple vector class with elements of type AD<Base> .

4.4.7.1.h: name
This checkpoint constructor argument has prototype
     const char* name
It is the name used for error reporting. The suggested value for name is atom_fun ; i.e., the same name as used for the object being constructed.

4.4.7.1.i: ax
This argument has prototype
     const ADVector& ax
and its size must be equal to n . It specifies the vector @(@ x \in B^n @)@ at which an AD<Base> version of @(@ y = f(x) @)@ is to be evaluated.

4.4.7.1.j: ay
This argument has prototype
     ADVector& ay
Its input size must be equal to m and does not change. The input values of its elements do not matter. Upon return, it is an AD<Base> version of @(@ y = f(x) @)@.

4.4.7.1.k: sparsity
This argument has prototype
     atomic_base<Base>::option_enum sparsity
It specifies 4.4.7.2.1.c.d: sparsity in the atomic_base constructor and must be either atomic_base<Base>::pack_sparsity_enum , atomic_base<Base>::bool_sparsity_enum , or atomic_base<Base>::set_sparsity_enum . This argument is optional and its default value is unspecified.

4.4.7.1.l: optimize
This argument has prototype
     bool optimize
It specifies if the recording corresponding to the atomic function should be 5.7: optimized . One expects to use a checkpoint function many times, so it should be worth the time to optimize its operation sequence. For debugging purposes, it may be useful to use the original operation sequence (before optimization) because it corresponds more closely to algo . This argument is optional and its default value is true.

4.4.7.1.m: size_var
This size_var member function return value has prototype
     size_t sv
It is the 5.1.5.g: size_var for the ADFun<Base> object that is used to store the operation sequence corresponding to algo .

4.4.7.1.n: option
The option syntax can be used to set the type of sparsity pattern used by atom_fun . This is an atomic_base<Base> function and its documentation can be found at 4.4.7.2.2: atomic_option .

4.4.7.1.o: algo
The type of algo is arbitrary, except for the fact that the syntax
     algo(ax, ay)
must evaluate the function @(@ y = f(x) @)@ using AD<Base> operations. In addition, we assume that the 12.4.g.b: operation sequence does not depend on the value of ax .
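
A minimal sketch of an algo and its checkpoint version (our illustration; square_algo, atom_square, and checkpoint_sketch are hypothetical names; 4.4.7.1.1: checkpoint.cpp is the full test):

# include <cppad/cppad.hpp>

namespace {
     typedef CPPAD_TESTVECTOR( CppAD::AD<double> ) ADVector;

     // operation sequence does not depend on the value of ax
     void square_algo(const ADVector& ax, ADVector& ay)
     {    ay[0] = ax[0] * ax[0]; }
}

bool checkpoint_sketch(void)
{    using CppAD::AD;

     // constructor is called before any recording starts
     ADVector ax(1), ay(1);
     ax[0] = 2.0;
     CppAD::checkpoint<double> atom_square("atom_square", square_algo, ax, ay);

     // the call to atom_square is recorded as one atomic operation
     CppAD::Independent(ax);
     atom_square(ax, ay);
     CppAD::ADFun<double> f(ax, ay);

     CPPAD_TESTVECTOR(double) x(1), y(1);
     x[0] = 3.0;
     y    = f.Forward(0, x);
     return y[0] == 9.0;
}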

4.4.7.1.p: atom_fun
Given ax it computes the corresponding value of ay using the operation sequence corresponding to algo . If AD<Base> operations are being recorded, it enters the computation as a single operation in the recording; see 5.1.1.c: start recording . (Currently each use of atom_fun actually corresponds to m+n+2 operations and creates m new variables, but this is not part of the CppAD specifications and may change.)

4.4.7.1.q: clear
The atomic_base class holds onto static work space in order to increase speed by avoiding system memory allocation calls. This call makes the work space 8.23.11: available for other uses by the same thread. This should be called when you are done using the user atomic functions for a specific value of Base .

4.4.7.1.q.a: Restriction
The clear routine cannot be called while in 8.23.4: parallel execution mode.

4.4.7.1.r: Example
The file 4.4.7.1.1: checkpoint.cpp contains an example and test of these operations. It returns true if it succeeds and false if it fails.
Input File: cppad/core/checkpoint.hpp
4.4.7.1.1: Simple Checkpointing: Example and Test

4.4.7.1.1.a: Purpose
Break a large computation into pieces and only store values at the interface of the pieces. In actual applications, there may be many functions, but for this example there are only two. The functions @(@ F : \B{R}^2 \rightarrow \B{R}^2 @)@ and @(@ G : \B{R}^2 \rightarrow \B{R}^2 @)@ are defined by @[@ F(y) = \left( \begin{array}{c} y_0 + y_0 + y_0 \\ y_1 + y_1 + y_1 \end{array} \right) \; , \; G(x) = \left( \begin{array}{c} x_0 \cdot x_0 \cdot x_0 \\ x_1 \cdot x_1 \cdot x_1 \end{array} \right) @]@

# include <cppad/cppad.hpp>

namespace {
     using CppAD::AD;
     typedef CPPAD_TESTVECTOR(AD<double>)            ADVector;
     typedef CppAD::atomic_base<double>::option_enum option_enum;

     void f_algo(const ADVector& y, ADVector& z)
     {     z[0] = 0.0;
          z[1] = 0.0;
          for(size_t k = 0; k < 3; k++)
          {     z[0] += y[0];
               z[1] += y[1];
          }
          return;
     }
     void g_algo(const ADVector& x, ADVector& y)
     {     y[0] = 1.0;
          y[1] = 1.0;
          for(size_t k = 0; k < 3; k++)
          {     y[0] *= x[0];
               y[1] *= x[1];
          }
          return;
     }
     bool test_case(
          option_enum f_sparsity, option_enum g_sparsity, bool optimize )
     {     bool ok = true;
          using CppAD::checkpoint;
          using CppAD::ADFun;
          using CppAD::NearEqual;
          size_t i, j, k, n = 2, m = n;
          double eps = 10. * std::numeric_limits<double>::epsilon();

          // checkpoint version of the function F(x)
          ADVector ax(n), ay(n), az(m);
          for(j = 0; j < n; j++)
               ax[j] = double(j + 1);
          // could also use bool_sparsity_enum or set_sparsity_enum
          checkpoint<double> atom_f("atom_f", f_algo, ax, ay, f_sparsity);
          checkpoint<double> atom_g("atom_g", g_algo, ay, az, g_sparsity);

          // Record a version of z = g[f(x)] without checkpointing
          Independent(ax);
          f_algo(ax, ay);
          g_algo(ay, az);
          ADFun<double> check_not(ax, az);

          // Record a version of z = g[f(x)] with checkpointing
          Independent(ax);
          atom_f(ax, ay);
          atom_g(ay, az);
          ADFun<double> check_yes(ax, az);

          // checkpointing should use fewer operations
          ok &= check_yes.size_var() < check_not.size_var();

          // this does not really save space because f and g are only used once
          ok &= check_not.size_var() <=
               check_yes.size_var() + atom_f.size_var() + atom_g.size_var();

          // compare forward mode results for orders 0, 1, 2
          size_t q = 2;
          CPPAD_TESTVECTOR(double) x_q(n*(q+1)), z_not(m*(q+1)), z_yes(m*(q+1));
          for(j = 0; j < n; j++)
          {     for(k = 0; k <= q; k++)
                    x_q[ j * (q+1) + k ] = 1.0 / double(q + 1 - k);
          }
          z_not = check_not.Forward(q, x_q);
          z_yes = check_yes.Forward(q, x_q);
          for(i = 0; i < m; i++)
          {     for(k = 0; k <= q; k++)
               {     double zik_not = z_not[ i * (q+1) + k];
                    double zik_yes = z_yes[ i * (q+1) + k];
                    ok &= NearEqual(zik_not, zik_yes, eps, eps);
               }
          }

          // compare reverse mode results
          CPPAD_TESTVECTOR(double) w(m*(q+1)), dw_not(n*(q+1)), dw_yes(n*(q+1));
          for(i = 0; i < m * (q + 1); i++)
               w[i] = 1.0 / double(i + 1);
          dw_not = check_not.Reverse(q+1, w);
          dw_yes = check_yes.Reverse(q+1, w);
          for(j = 0; j < n; j++)
          {     for(k = 0; k <= q; k++)
               {     double dwjk_not = dw_not[ j * (q+1) + k];
                    double dwjk_yes = dw_yes[ j * (q+1) + k];
                    ok &= NearEqual(dwjk_not, dwjk_yes, eps, eps);
               }
          }

          // compare forward mode Jacobian sparsity patterns
          CppAD::vector< std::set<size_t> > r(n), s_not(m), s_yes(m);
          for(j = 0; j < n; j++)
               r[j].insert(j);
          s_not = check_not.ForSparseJac(n, r);
          s_yes = check_yes.ForSparseJac(n, r);
          for(i = 0; i < m; i++)
               ok &= s_not[i] == s_yes[i];

          // compare reverse mode Jacobian sparsity patterns
          CppAD::vector< std::set<size_t> > s(m), r_not(m), r_yes(m);
          for(i = 0; i < m; i++)
               s[i].insert(i);
          r_not = check_not.RevSparseJac(m, s);
          r_yes = check_yes.RevSparseJac(m, s);
          for(i = 0; i < m; i++)
               ok &= r_not[i] == r_yes[i];


          // compare reverse mode Hessian sparsity patterns
          CppAD::vector< std::set<size_t> > s_one(1), h_not(n), h_yes(n);
          for(i = 0; i < m; i++)
               s_one[0].insert(i);
          h_not = check_not.RevSparseHes(n, s_one);
          h_yes = check_yes.RevSparseHes(n, s_one);
          for(i = 0; i < n; i++)
               ok &= h_not[i] == h_yes[i];

          return ok;
     }
}

bool checkpoint(void)
{     bool ok = true;

     // different types of sparsity
     option_enum pack_sparsity = CppAD::atomic_base<double>::pack_sparsity_enum;
     option_enum bool_sparsity = CppAD::atomic_base<double>::bool_sparsity_enum;
     option_enum set_sparsity  = CppAD::atomic_base<double>::set_sparsity_enum;

     // test some different cases
     ok &= test_case(pack_sparsity, pack_sparsity, true);
     ok &= test_case(pack_sparsity, bool_sparsity, false);
     ok &= test_case(bool_sparsity, set_sparsity,  true);
     ok &= test_case(set_sparsity,  set_sparsity,  false);

     return ok;
}

Input File: example/atomic/checkpoint.cpp
4.4.7.1.2: Atomic Operations and Multiple-Levels of AD: Example and Test

4.4.7.1.2.a: Discussion
One can use 4.4.7.1: checkpoint or 4.4.7.2: atomic_base to code an AD<Base> operation as atomic. This means that derivative computations that use the type Base will call the corresponding atomic_base member functions. On the other hand, if Base is AD<Other> the operations recorded at the Base level will not be atomic. This is demonstrated in this example.

# include <cppad/cppad.hpp>

namespace {
     using CppAD::AD;
     typedef AD<double>                      a1double;
     typedef AD<a1double>                    a2double;
     typedef CPPAD_TESTVECTOR(a1double)      a1vector;
     typedef CPPAD_TESTVECTOR(a2double)      a2vector;

     void f_algo(const a2vector& x, a2vector& y)
     {     size_t n = x.size();
          y[0] = 0.0;
          for(size_t j = 1; j < n; j++)
               y[0] += x[j-1] * x[j];
          return;
     }
}
//
bool mul_level(void)
{     bool ok = true;
     using CppAD::checkpoint;
     using CppAD::ADFun;
     using CppAD::Independent;

     // domain dimension for this problem
     size_t n = 10;
     size_t m = 1;

     // checkpoint version of the function F(x)
     a2vector a2x(n), a2y(m);
     for(size_t j = 0; j < n; j++)
          a2x[j] = a2double(j + 1);
     //
     // could also use bool_sparsity_enum or set_sparsity_enum
     checkpoint<a1double> atom_f("atom_f", f_algo, a2x, a2y);
     //
     // Record a version of y = f(x) without checkpointing
     Independent(a2x);
     f_algo(a2x, a2y);
     ADFun<a1double> check_not(a2x, a2y);
     //
     // number of variables in a tape of f_algo that does not use checkpointing
     size_t size_not = check_not.size_var();
     //
     // Record a version of y = f(x) with checkpointing
     Independent(a2x);
     atom_f(a2x, a2y);
     ADFun<a1double> check_yes(a2x, a2y);
     //
     // f_algo is represented by one atomic operation in this tape
     ok &= check_yes.size_var() < size_not;
     //
     // now record operations at a1double level
     a1vector a1x(n), a1y(m);
     for(size_t j = 0; j < n; j++)
          a1x[j] = a1double(j + 1);
     //
     // without checkpointing
     Independent(a1x);
     a1y = check_not.Forward(0, a1x);
     ADFun<double> with_not(a1x, a1y);
     //
     // should have the same size
     ok &= with_not.size_var() == size_not;
     //
     // with checkpointing
     Independent(a1x);
     a1y = check_yes.Forward(0, a1x);
     ADFun<double> with_yes(a1x, a1y);
     //
     // f_algo is no longer represented by one atomic operation in this tape
     ok &= with_yes.size_var() == size_not;
     //
     return ok;
}

Input File: example/atomic/mul_level.cpp
4.4.7.1.3: Checkpointing an ODE Solver: Example and Test

4.4.7.1.3.a: See Also
4.4.7.1.4: checkpoint_extended_ode.cpp ,

4.4.7.1.3.b: Purpose
In this example we 4.4.7.1: checkpoint one step of an ODE solver.

4.4.7.1.3.c: Problem
We consider the initial value problem with parameter @(@ x @)@ defined by, @(@ z(0, x) = z_0 (x) @)@, @[@ \partial_t z(t, x ) = h [ x , z(t, x) ] @]@ Note that if @(@ t @)@ needs to be in the equation, one can define the first component of @(@ z(t, x) @)@ to be equal to @(@ t @)@.

4.4.7.1.3.d: ODE Solver
For this example, we consider the fourth order Runge-Kutta ODE solver. Given an approximate solution at time @(@ t_k @)@ denoted by @(@ \tilde{z}_k (x) @)@, and @(@ \Delta t = t_{k+1} - t_k @)@, it defines the approximate solution @(@ \tilde{z}_{k+1} (x) @)@ at time @(@ t_{k+1} @)@ by @[@ \begin{array}{rcl} h_1 & = & h [ x , \tilde{z}_k (x) ] \\ h_2 & = & h [ x , \tilde{z}_k (x) + \Delta t \; h_1 / 2 ] \\ h_3 & = & h [ x , \tilde{z}_k (x) + \Delta t \; h_2 / 2 ] \\ h_4 & = & h [ x , \tilde{z}_k (x) + \Delta t \; h_3 ] \\ \tilde{z}_{k+1} (x) & = & \tilde{z}_k (x) + \Delta t \; ( h_1 + 2 h_2 + 2 h_3 + h_4 ) / 6 \end{array} @]@ If @(@ \tilde{z}_k (x) = z_k (x) @)@, @(@ \tilde{z}_{k+1} (x) = z_{k+1} (x) + O( \Delta t^5 ) @)@. Other ODE solvers can use a similar method to the one used below.

4.4.7.1.3.e: ODE
For this example the ODE is defined by @(@ z(0, x) = 0 @)@ and @[@ h[ x, z(t, x) ] = \left( \begin{array}{c} x_0 \\ x_1 z_0 (t, x) \\ \vdots \\ x_{n-1} z_{n-2} (t, x) \end{array} \right) = \left( \begin{array}{c} \partial_t z_0 (t , x) \\ \partial_t z_1 (t , x) \\ \vdots \\ \partial_t z_{n-1} (t , x) \end{array} \right) @]@

4.4.7.1.3.f: Solution
The solution of the ODE for this example, which is used to check the results, can be calculated by starting with the first row and then using the solution for the first row to solve the second and so on. Doing this we obtain @[@ z(t, x) = \left( \begin{array}{c} x_0 t \\ x_1 x_0 t^2 / 2 \\ \vdots \\ x_{n-1} x_{n-2} \ldots x_0 t^n / n ! \end{array} \right) @]@

# include <cppad/cppad.hpp>

namespace {
     using CppAD::AD;
     typedef AD<double>                     a1double;
     typedef AD<a1double>                   a2double;
     //
     typedef CPPAD_TESTVECTOR(   double )   a0vector;
     typedef CPPAD_TESTVECTOR( a1double )   a1vector;
     typedef CPPAD_TESTVECTOR( a2double )   a2vector;
     //
     // set once by main and kept that way
     double delta_t_ = std::numeric_limits<double>::quiet_NaN();
     size_t n_       = 0;
     //
     // The function h( x , y)
     template <class FloatVector>
     FloatVector h(const FloatVector& x, const FloatVector& y)
     {     assert( size_t( x.size() ) == n_ );
          assert( size_t( y.size() ) == n_ );
          FloatVector result(n_);
          result[0] = x[0];
          for(size_t i = 1; i < n_; i++)
               result[i] = x[i] * y[i-1];
          return result;
     }

     // The 4-th Order Runge-Kutta Step
     template <class FloatVector>
     FloatVector Runge4(const FloatVector& x, const FloatVector& z0
     )
     {     assert( size_t( x.size() ) == n_ );
          assert( size_t( z0.size() ) == n_ );
          //
          typedef typename FloatVector::value_type Float;
          //
          Float  dt = Float(delta_t_);
          size_t m  = z0.size();
          //
          FloatVector h1(m), h2(m), h3(m), h4(m), result(m);
          h1 = h( x, z0 );
          //
          for(size_t i = 0; i < m; i++)
               h2[i] = z0[i] + dt * h1[i] / 2.0;
          h2 = h( x, h2 );
          //
          for(size_t i = 0; i < m; i++)
               h3[i] = z0[i] + dt * h2[i] / 2.0;
          h3 = h( x, h3 );
          //
          for(size_t i = 0; i < m; i++)
               h4[i] = z0[i] + dt * h3[i];
          h4 = h( x, h4 );
          //
          for(size_t i = 0; i < m; i++)
          {     Float dz = dt * ( h1[i] + 2.0*h2[i] + 2.0*h3[i] + h4[i] ) / 6.0;
               result[i] = z0[i] + dz;
          }
          return result;
     }

     // pack x and z into an ode_info vector
     template <class FloatVector>
     void pack(
          FloatVector&         ode_info ,
          const FloatVector&   x        ,
          const FloatVector&   z        )
     {     assert( size_t( ode_info.size() ) == n_ + n_ );
          assert( size_t( x.size()        ) == n_      );
          assert( size_t( z.size()        ) == n_      );
          //
          size_t offset = 0;
          for(size_t i = 0; i < n_; i++)
               ode_info[offset + i] = x[i];
          offset += n_;
          for(size_t i = 0; i < n_; i++)
               ode_info[offset + i] = z[i];
     }

     // unpack an ode_info vector
     template <class FloatVector>
     void unpack(
          const FloatVector&         ode_info ,
          FloatVector&               x        ,
          FloatVector&               z        )
     {     assert( size_t( ode_info.size() ) == n_ + n_ );
          assert( size_t( x.size()        ) == n_      );
          assert( size_t( z.size()        ) == n_      );
          //
          size_t offset = 0;
          for(size_t i = 0; i < n_; i++)
               x[i] = ode_info[offset + i];
          offset += n_;
          for(size_t i = 0; i < n_; i++)
               z[i] = ode_info[offset + i];
     }

     // Algorithm that advances z(t, x)
     void ode_algo(const a1vector& ode_info_in, a1vector& ode_info_out)
     {     assert( size_t( ode_info_in.size()  ) == n_ + n_ );
          assert( size_t( ode_info_out.size() ) == n_ + n_ );
          //
          // initial ode information
          a1vector x(n_), z0(n_);
          unpack(ode_info_in, x, z0);
          //
          // advance z(t, x)
          a1vector z1 = Runge4(x, z0);
          //
          // final ode information
          pack(ode_info_out, x, z1);
          //
          return;
     }
}
//
bool ode(void)
{     bool ok = true;
     using CppAD::NearEqual;
     double eps = std::numeric_limits<double>::epsilon();
     //
     // number of terms in the differential equation
     n_ = 6;
     //
     // step size for the differential equation
     size_t n_step = 10;
     double T      = 1.0;
     delta_t_ = T / double(n_step);
     //
     // set parameter value and initial value of the ode
     a1vector ax(n_), az0(n_);
     for(size_t i = 0; i < n_; i++)
     {     ax[i]  = a1double(i + 1);
          az0[i] = a1double(0);
     }
     //
     // pack ode information input vector
     a1vector ode_info_in(2 * n_);
     pack(ode_info_in, ax, az0);
     //
     // create checkpoint version of the algorithm
     a1vector ode_info_out(2 * n_);
     CppAD::checkpoint<double> ode_check(
          "ode", ode_algo, ode_info_in, ode_info_out
     );
     //
     // set the independent variables for recording
     CppAD::Independent( ax );
     //
     // repack to get dependence on ax
     pack(ode_info_in, ax, az0);
     //
     // Now run the checkpoint algorithm n_step times
     for(size_t k = 0; k < n_step; k++)
     {     ode_check(ode_info_in, ode_info_out);
          ode_info_in = ode_info_out;
     }
     //
     // Unpack the results (must use ax1 so we do not overwrite ax)
     a1vector ax1(n_), az1(n_);
     unpack(ode_info_out, ax1, az1);
     //
     // We could record a complicated function of x and z(T, x) in f,
     // but to make this example simpler, we record x -> z(T, x).
     CppAD::ADFun<double> f(ax, az1);
     //
     // check function values
     a0vector x(n_), z1(n_);
     for(size_t j = 0; j < n_; j++)
          x[j] = double(j + 1);
     z1 = f.Forward(0, x);
     //
     // separate calculation of z(t, x)
     a0vector check_z1(n_);
     check_z1[0] = x[0] * T;
     for(size_t i = 1; i < n_; i++)
          check_z1[i] = x[i] * T * check_z1[i-1] / double(i+1);
     //
     // expected accuracy for each component of z(t, x)
     a0vector acc(n_);
     for(size_t i = 0; i < n_; i++)
     {     if( i < 4 )
     {     // Runge-Kutta method is exact for this case
               acc[i] = 10. * eps;
          }
          else
          {     acc[i] = 1.0;
               for(size_t k = 0; k < 5; k++)
                         acc[i] *= x[k] * delta_t_;
          }
     }
     // check z1(T, x)
     for(size_t i = 0; i < n_; i++)
          ok &= NearEqual(z1[i] , check_z1[i], acc[i], acc[i]);
     //
     // Now use f to compute a derivative. For this 'simple' example it is
     // the derivative of z_{n-1} (T, x) with respect to x.
     a0vector w(n_), dw(n_);
     for(size_t i = 0; i < n_; i++)
     {     w[i] = 0.0;
          if( i == n_ - 1 )
               w[i] = 1.0;
     }
     dw = f.Reverse(1, w);
     for(size_t j = 0; j < n_; j++)
     {     double check = z1[n_ - 1] / x[j];
          ok &= NearEqual(dw[j] , check, 100.*eps, 100.*eps);
     }
     //
     return ok;
}

Input File: example/atomic/ode.cpp
4.4.7.1.4: Checkpointing an Extended ODE Solver: Example and Test

4.4.7.1.4.a: See Also
4.4.7.1.3: checkpoint_ode.cpp , 4.4.7.1.2: atomic_mul_level.cpp .

4.4.7.1.4.b: Discussion
Suppose that we wish to extend an ODE to include derivatives with respect to some parameter in the ODE. In addition, suppose we wish to differentiate a function that depends on these derivatives. Applying checkpointing at the second level of AD would not work; see 4.4.7.1.2: atomic_mul_level.cpp . In this example we show how one can do this by checkpointing an extended ODE solver.

4.4.7.1.4.c: Problem
We consider the initial value problem with parameter @(@ x @)@ defined by, @(@ z(0, x) = z_0 (x) @)@, @[@ \partial_t z(t, x ) = h [ x , z(t, x) ] @]@ Note that if @(@ t @)@ needs to be in the equation, one can define the first component of @(@ z(t, x) @)@ to be equal to @(@ t @)@.

4.4.7.1.4.d: ODE Solver
For this example, we consider the fourth-order Runge-Kutta ODE solver. Given an approximate solution at time @(@ t_k @)@ denoted by @(@ \tilde{z}_k (x) @)@, and @(@ \Delta t = t_{k+1} - t_k @)@, it defines the approximate solution @(@ \tilde{z}_{k+1} (x) @)@ at time @(@ t_{k+1} @)@ by @[@ \begin{array}{rcl} h_1 & = & h [ x , \tilde{z}_k (x) ] \\ h_2 & = & h [ x , \tilde{z}_k (x) + \Delta t \; h_1 / 2 ] \\ h_3 & = & h [ x , \tilde{z}_k (x) + \Delta t \; h_2 / 2 ] \\ h_4 & = & h [ x , \tilde{z}_k (x) + \Delta t \; h_3 ] \\ \tilde{z}_{k+1} (x) & = & \tilde{z}_k (x) + \Delta t \; ( h_1 + 2 h_2 + 2 h_3 + h_4 ) / 6 \end{array} @]@ If @(@ \tilde{z}_k (x) = z_k (x) @)@, then @(@ \tilde{z}_{k+1} (x) = z_{k+1} (x) + O( \Delta t^5 ) @)@. Other ODE solvers can use a method similar to the one used below.

4.4.7.1.4.e: ODE
For this example the ODE is defined by @(@ z(0, x) = 0 @)@ and @[@ h[ x, z(t, x) ] = \left( \begin{array}{c} x_0 \\ x_1 z_0 (t, x) \\ \vdots \\ x_{n-1} z_{n-2} (t, x) \end{array} \right) = \left( \begin{array}{c} \partial_t z_0 (t , x) \\ \partial_t z_1 (t , x) \\ \vdots \\ \partial_t z_{n-1} (t , x) \end{array} \right) @]@

4.4.7.1.4.f: Solution
The solution of the ODE for this example, which is used to check the results, can be calculated by starting with the first row and then using the solution for the first row to solve the second and so on. Doing this we obtain @[@ z(t, x) = \left( \begin{array}{c} x_0 t \\ x_1 x_0 t^2 / 2 \\ \vdots \\ x_{n-1} x_{n-2} \ldots x_0 t^n / n ! \end{array} \right) @]@

# include <cppad/cppad.hpp>

namespace {
     using CppAD::AD;
     typedef AD<double>                     a1double;
     typedef AD<a1double>                   a2double;
     //
     typedef CPPAD_TESTVECTOR(   double )   a0vector;
     typedef CPPAD_TESTVECTOR( a1double )   a1vector;
     typedef CPPAD_TESTVECTOR( a2double )   a2vector;
     //
     // set once by main and kept that way
     double delta_t_ = std::numeric_limits<double>::quiet_NaN();
     size_t n_       = 0;
     //
     // The function h( x , y)
     template <class FloatVector>
     FloatVector h(const FloatVector& x, const FloatVector& y)
     {     assert( size_t( x.size() ) == n_ );
          assert( size_t( y.size() ) == n_ );
          FloatVector result(n_);
          result[0] = x[0];
          for(size_t i = 1; i < n_; i++)
               result[i] = x[i] * y[i-1];
          return result;
     }

     // The 4-th Order Runge-Kutta Step
     template <class FloatVector>
     FloatVector Runge4(const FloatVector& x, const FloatVector& z0
     )
     {     assert( size_t( x.size() ) == n_ );
          assert( size_t( z0.size() ) == n_ );
          //
          typedef typename FloatVector::value_type Float;
          //
          Float  dt = Float(delta_t_);
          size_t m  = z0.size();
          //
          FloatVector h1(m), h2(m), h3(m), h4(m), result(m);
          h1 = h( x, z0 );
          //
          for(size_t i = 0; i < m; i++)
               h2[i] = z0[i] + dt * h1[i] / 2.0;
          h2 = h( x, h2 );
          //
          for(size_t i = 0; i < m; i++)
               h3[i] = z0[i] + dt * h2[i] / 2.0;
          h3 = h( x, h3 );
          //
          for(size_t i = 0; i < m; i++)
               h4[i] = z0[i] + dt * h3[i];
          h4 = h( x, h4 );
          //
          for(size_t i = 0; i < m; i++)
          {     Float dz = dt * ( h1[i] + 2.0*h2[i] + 2.0*h3[i] + h4[i] ) / 6.0;
               result[i] = z0[i] + dz;
          }
          return result;
     }

     // Derivative of 4-th Order Runge-Kutta Step w.r.t x
     a1vector Runge4_x(const a1vector& x, const a1vector& z0)
     {     assert( size_t( x.size() ) == n_ );
          assert( size_t( z0.size() ) == n_ );
          //
          a2vector ax(n_);
          for(size_t j = 0; j < n_; j++)
               ax[j] = x[j];
          //
          a2vector az0(n_);
          for(size_t i = 0; i < n_; i++)
               az0[i] = z0[i];
          //
          CppAD::Independent(ax);
          a2vector az(n_);
          az = Runge4(ax, az0);
          CppAD::ADFun<a1double> f(ax, az);
          //
          a1vector result =  f.Jacobian(x);
          //
          return result;
     }

     // Derivative of 4-th Order Runge-Kutta Step w.r.t z0
     a1vector Runge4_z0(const a1vector& x, const a1vector& z0)
     {     assert( size_t( x.size()  ) == n_ );
          assert( size_t( z0.size() ) == n_ );
          //
          a2vector ax(n_);
          for(size_t j = 0; j < n_; j++)
               ax[j] = x[j];
          //
          a2vector az0(n_);
          for(size_t i = 0; i < n_; i++)
               az0[i] = z0[i];
          //
          CppAD::Independent(az0);
          a2vector az(n_);
          az = Runge4(ax, az0);
          CppAD::ADFun<a1double> f(az0, az);
          //
          a1vector result =  f.Jacobian(z0);
          //
          return result;
     }

     // pack an extended ode vector
     template <class FloatVector>
     void pack(
          FloatVector&         extended_ode ,
          const FloatVector&   x            ,
          const FloatVector&   z            ,
          const FloatVector&   z_x          )
     {     assert( size_t( extended_ode.size() ) == n_ + n_ + n_ * n_ );
          assert( size_t( x.size()            ) == n_                );
          assert( size_t( z.size()            ) == n_                );
          assert( size_t( z_x.size()          ) == n_ * n_           );
          //
          size_t offset = 0;
          for(size_t i = 0; i < n_; i++)
               extended_ode[offset + i] = x[i];
          offset += n_;
          for(size_t i = 0; i < n_; i++)
               extended_ode[offset + i] = z[i];
          offset += n_;
          for(size_t i = 0; i < n_; i++)
          {     for(size_t j = 0; j < n_; j++)
               {     // partial of z_i (t , x ) w.r.t x_j
                    extended_ode[offset + i * n_ + j] = z_x[i * n_ + j];
               }
          }
     }

     // unpack an extended ode vector
     template <class FloatVector>
     void unpack(
          const FloatVector&         extended_ode ,
          FloatVector&               x            ,
          FloatVector&               z            ,
          FloatVector&               z_x          )
     {     assert( size_t( extended_ode.size() ) == n_ + n_ + n_ * n_ );
          assert( size_t( x.size()            ) == n_                );
          assert( size_t( z.size()            ) == n_                );
          assert( size_t( z_x.size()          ) == n_ * n_           );
          //
          size_t offset = 0;
          for(size_t i = 0; i < n_; i++)
               x[i] = extended_ode[offset + i];
          offset += n_;
          for(size_t i = 0; i < n_; i++)
               z[i] = extended_ode[offset + i];
          offset += n_;
          for(size_t i = 0; i < n_; i++)
          {     for(size_t j = 0; j < n_; j++)
               {     // partial of z_i (t , x ) w.r.t x_j
                    z_x[i * n_ + j] = extended_ode[offset + i * n_ + j];
               }
          }
     }

     // Algorithm that advances the partial of z(t, x) w.r.t x
     void ext_ode_algo(const a1vector& ext_ode_in, a1vector& ext_ode_out)
     {     assert( size_t( ext_ode_in.size()  ) == n_ + n_ + n_ * n_ );
          assert( size_t( ext_ode_out.size() ) == n_ + n_ + n_ * n_ );
          //
          // initial extended ode information
          a1vector x(n_), z0(n_), z0_x(n_ * n_);
          unpack(ext_ode_in, x, z0, z0_x);
          //
          // advance z(t, x)
          a1vector z1 = Runge4(x, z0);
          //
          // partial of z1 w.r.t. x
          a1vector z1_x = Runge4_x(x, z0);
          //
          // partial of z1 w.r.t. z0
          a1vector z1_z0 = Runge4_z0(x, z0);
          //
          // total derivative of z1 w.r.t x
          for(size_t i = 0; i < n_; i++)
          {     for(size_t j = 0; j < n_; j++)
               {     a1double sum = 0.0;
                    for(size_t k = 0; k < n_; k++)
                         sum += z1_z0 [ i * n_ + k ] * z0_x [ k * n_ + j ];
                    z1_x[ i * n_ + j] += sum;
               }
          }
          //
          // final extended ode information
          pack(ext_ode_out, x, z1, z1_x);
          //
          return;
     }
}
//
bool extended_ode(void)
{     bool ok = true;
     using CppAD::NearEqual;
     double eps = std::numeric_limits<double>::epsilon();
     //
     // number of terms in the differential equation
     n_ = 6;
     //
     // step size for the differential equation
     size_t n_step = 10;
     double T      = 1.0;
     delta_t_ = T / double(n_step);
     //
     // set parameter value and initial value of the extended ode
     a1vector ax(n_), az0(n_), az0_x(n_ * n_);
     for(size_t i = 0; i < n_; i++)
     {     ax[i]  = a1double(i + 1);
          az0[i] = a1double(0);
          for(size_t j = 0; j < n_; j++)
               az0_x[ i * n_ + j ] = 0.0;
     }
     //
     // pack into extended ode information input vector
     size_t n_ext = n_ + n_ + n_ * n_;
     a1vector aext_ode_in(n_ext);
     pack(aext_ode_in, ax, az0, az0_x);
     //
     // create checkpoint version of the algorithm
     a1vector aext_ode_out(n_ext);
     CppAD::checkpoint<double> ext_ode_check(
          "ext_ode", ext_ode_algo, aext_ode_in, aext_ode_out
     );
     //
     // set the independent variables for recording
     CppAD::Independent( ax );
     //
     // repack to get dependence on ax
     pack(aext_ode_in, ax, az0, az0_x);
     //
     // Now run the checkpoint algorithm n_step times
     for(size_t k = 0; k < n_step; k++)
     {     ext_ode_check(aext_ode_in, aext_ode_out);
          aext_ode_in = aext_ode_out;
     }
     //
     // Unpack the results (must use ax1 so we do not overwrite ax)
     a1vector ax1(n_), az1(n_), az1_x(n_ * n_);
     unpack(aext_ode_out, ax1, az1, az1_x);
     //
     // We could record a complicated function of x and z_x(T, x) in f,
     // but to make this example simpler, we record x -> z_x(T, x).
     CppAD::ADFun<double> f(ax, az1_x);
     //
     // check function values
     a0vector x(n_), z1(n_), z1_x(n_ * n_);
     for(size_t j = 0; j < n_; j++)
          x[j] = double(j + 1);
     z1_x = f.Forward(0, x);
     //
     // use z(t, x) for checking solution
     z1[0] = x[0] * T;
     for(size_t i = 1; i < n_; i++)
          z1[i] = x[i] * T * z1[i-1] / double(i+1);
     //
     // expected accuracy for each component of z(t, x)
     a0vector acc(n_);
     for(size_t i = 0; i < n_; i++)
     {     if( i < 4 )
     {     // Runge-Kutta method is exact for this case
               acc[i] = 10. * eps;
          }
          else
          {     acc[i] = 1.0;
               for(size_t k = 0; k < 5; k++)
                         acc[i] *= x[k] * delta_t_;
          }
     }
     // check z1(T, x)
     for(size_t i = 0; i < n_; i++)
     {     for(size_t j = 0; j < n_; j++)
          {     // check partial of z1_i w.r.t x_j
               double check = 0.0;
               if( j <= i )
                    check = z1[i] / x[j];
               ok &= NearEqual(z1_x[ i * n_ + j ] , check, acc[i], acc[i]);
          }
     }
     //
     // Now use f to compute a derivative. For this 'simple' example it is
     // the derivative with respect to x of the
     // partial of z_{n-1} (T, x) with respect to x_{n-1}
     a0vector w(n_ * n_), dw(n_);
     for(size_t i = 0; i < n_; i++)
     {     for(size_t j = 0; j < n_; j++)
          {     w[ i * n_ + j ] = 0.0;
               if( i == n_ - 1 && j == n_ - 1 )
                    w[ i * n_ + j ] = 1.0;
          }
     }
     dw = f.Reverse(1, w);
     for(size_t j = 0; j < n_; j++)
     {     double check = 0.0;
          if( j < n_ - 1 )
               check = z1[n_ - 1] / ( x[n_ - 1] * x[j] );
          ok &= NearEqual(dw[j] , check, acc[n_-1], acc[n_-1]);
     }
     //
     return ok;
}

Input File: example/atomic/extended_ode.cpp
4.4.7.2: User Defined Atomic AD Functions

4.4.7.2.a: Syntax

atomic_user afun(ctor_arg_list)
afun(ax, ay)
ok = afun.forward(p, q, vx, vy, tx, ty)
ok = afun.reverse(q, tx, ty, px, py)
ok = afun.for_sparse_jac(q, r, s)
ok = afun.rev_sparse_jac(q, r, s)
ok = afun.for_sparse_hes(vx, r, s, h)
ok = afun.rev_sparse_hes(vx, s, t, q, r, u, v)
atomic_base<Base>::clear()


4.4.7.2.b: Purpose
In some cases, the user knows how to compute derivatives of a function @[@ y = f(x) \; {\rm where} \; f : B^n \rightarrow B^m @]@ more efficiently than by coding it using AD<Base> 12.4.g.a: atomic operations and letting CppAD do the rest. In this case atomic_base<Base> can use the user code for @(@ f(x) @)@, and its derivatives, as AD<Base> atomic operations.

4.4.7.2.c: Virtual Functions
User defined derivatives are implemented by defining the following virtual functions in the atomic_base class: 4.4.7.2.4: forward , 4.4.7.2.5: reverse , 4.4.7.2.6: for_sparse_jac , 4.4.7.2.7: rev_sparse_jac , and 4.4.7.2.9: rev_sparse_hes . These virtual functions have a default implementation that returns ok == false . The forward function, for the case q == 0 , must be implemented. Otherwise, only those functions required by your calculations need to be implemented. For example, forward for the case q == 2 can just return ok == false unless you require forward mode calculation of second derivatives.
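For instance, the following minimal sketch shows the shape of such a class. The class name atomic_square and the scalar function @(@ y = x \cdot x @)@ are hypothetical, chosen only for illustration, and only the required q == 0 case of forward is implemented:

class atomic_square : public CppAD::atomic_base<double> {
public:
     atomic_square(void) : CppAD::atomic_base<double>("atomic_square")
     { }
private:
     // forward mode routine called by CppAD
     virtual bool forward(
          size_t                          p  ,
          size_t                          q  ,
          const CppAD::vector<bool>&      vx ,
          CppAD::vector<bool>&            vy ,
          const CppAD::vector<double>&    tx ,
          CppAD::vector<double>&          ty )
     {     // orders beyond those implemented just return false
          if( q > 0 )
               return false;
          // defining variable information (only during recording)
          if( vx.size() > 0 )
               vy[0] = vx[0];
          // zero order: y^0 = f( x^0 ) = x^0 * x^0
          ty[0] = tx[0] * tx[0];
          return true;
     }
};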

4.4.7.2.d: Contents
atomic_ctor: 4.4.7.2.1: Atomic Function Constructor
atomic_option: 4.4.7.2.2: Set Atomic Function Options
atomic_afun: 4.4.7.2.3: Using AD Version of Atomic Function
atomic_forward: 4.4.7.2.4: Atomic Forward Mode
atomic_reverse: 4.4.7.2.5: Atomic Reverse Mode
atomic_for_sparse_jac: 4.4.7.2.6: Atomic Forward Jacobian Sparsity Patterns
atomic_rev_sparse_jac: 4.4.7.2.7: Atomic Reverse Jacobian Sparsity Patterns
atomic_for_sparse_hes: 4.4.7.2.8: Atomic Forward Hessian Sparsity Patterns
atomic_rev_sparse_hes: 4.4.7.2.9: Atomic Reverse Hessian Sparsity Patterns
atomic_base_clear: 4.4.7.2.10: Free Static Variables
atomic_get_started.cpp: 4.4.7.2.11: Getting Started with Atomic Operations: Example and Test
atomic_norm_sq.cpp: 4.4.7.2.12: Atomic Euclidean Norm Squared: Example and Test
atomic_reciprocal.cpp: 4.4.7.2.13: Reciprocal as an Atomic Operation: Example and Test
atomic_set_sparsity.cpp: 4.4.7.2.14: Atomic Sparsity with Set Patterns: Example and Test
atomic_tangent.cpp: 4.4.7.2.15: Tan and Tanh as User Atomic Operations: Example and Test
atomic_eigen_mat_mul.cpp: 4.4.7.2.16: Atomic Eigen Matrix Multiply: Example and Test
atomic_eigen_mat_inv.cpp: 4.4.7.2.17: Atomic Eigen Matrix Inverse: Example and Test
atomic_eigen_cholesky.cpp: 4.4.7.2.18: Atomic Eigen Cholesky Factorization: Example and Test
atomic_mat_mul.cpp: 4.4.7.2.19: User Atomic Matrix Multiply: Example and Test

4.4.7.2.e: Examples

4.4.7.2.e.a: Getting Started
The file 4.4.7.2.11: atomic_get_started.cpp contains an example and test that shows the minimal amount of information required to create a user defined atomic operation.

4.4.7.2.e.b: Scalar Function
The file 4.4.7.2.13: atomic_reciprocal.cpp contains an example and test where the user provides the code for computing derivatives. This example is simple because the domain and range are scalars.

4.4.7.2.e.c: Vector Range
The file 4.4.7.2.15: atomic_tangent.cpp contains another example where the user provides the code for computing derivatives. This example is more complex because the range has two components.

4.4.7.2.e.d: Hessian Sparsity Patterns
The file 4.4.7.2.9.1: atomic_rev_sparse_hes.cpp contains a minimal example where the user provides the code for computing Hessian sparsity patterns.

4.4.7.2.f: General Case
The file 4.4.7.2.19: atomic_mat_mul.cpp contains a more general example where the user provides the code for computing derivatives. This example is more complex because both the domain and range dimensions are arbitrary.
Input File: omh/atomic_base.omh
4.4.7.2.1: Atomic Function Constructor

4.4.7.2.1.a: Syntax
atomic_user afun(ctor_arg_list)
atomic_base<Base>(name, sparsity)

4.4.7.2.1.b: atomic_user

4.4.7.2.1.b.a: ctor_arg_list
Is a list of arguments for the atomic_user constructor.

4.4.7.2.1.b.b: afun
The object afun must stay in scope for as long as the corresponding atomic function is used. This includes use by any 5: ADFun<Base> that has this atomic_user operation in its 12.4.g.b: operation sequence .

4.4.7.2.1.b.c: Implementation
The user defined atomic_user class is a publicly derived class of atomic_base<Base> . It should be declared as follows:
     class atomic_user : public CppAD::atomic_base<Base> {
     public:
          atomic_user(ctor_arg_list) : atomic_base<Base>(name, sparsity)
          ...
     };
where ... denotes the rest of the implementation of the derived class. This includes completing the constructor and all the virtual functions that have their atomic_base implementations replaced by atomic_user implementations.

4.4.7.2.1.c: atomic_base

4.4.7.2.1.c.a: Restrictions
The atomic_base constructor cannot be called in 8.23.4: parallel mode.

4.4.7.2.1.c.b: Base
The template parameter determines the Base type for this AD<Base> atomic operation.

4.4.7.2.1.c.c: name
This atomic_base constructor argument has the following prototype
     const std::string& name
It is the name for this atomic function and is used for error reporting. The suggested value for name is afun or atomic_user , i.e., the name of the corresponding atomic object or class.

4.4.7.2.1.c.d: sparsity
This atomic_base constructor argument has prototype
     atomic_base<Base>::option_enum sparsity
The current sparsity for an atomic_base object determines which type of sparsity patterns it uses and its value is one of the following:
sparsity                                  sparsity patterns
atomic_base<Base>::pack_sparsity_enum     8.22.m: vectorBool
atomic_base<Base>::bool_sparsity_enum     8.22: vector<bool>
atomic_base<Base>::set_sparsity_enum      8.22: vector< std::set<std::size_t> >
If sparsity is not included in the constructor call, a default value is used; it may be either the bool or set option.
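As a sketch, assuming Base is double and using the placeholder class name atomic_user from this section, a derived class constructor that selects set sparsity patterns could be written as:

     atomic_user(void) : CppAD::atomic_base<double>(
          "atomic_user", CppAD::atomic_base<double>::set_sparsity_enum
     )
     { }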

4.4.7.2.1.d: Example

4.4.7.2.1.d.a: Define Constructor
The following is an example of a user atomic function constructor definition: 4.4.7.2.11.c: get_started.cpp .

4.4.7.2.1.d.b: Use Constructor
The following is an example using a user atomic function constructor: 4.4.7.2.11.f.a: get_started.cpp .
Input File: cppad/core/atomic_base.hpp
4.4.7.2.2: Set Atomic Function Options

4.4.7.2.2.a: Syntax
afun.option(option_value)

These settings do not apply to individual afun calls, but rather to all subsequent uses of the corresponding atomic operation in an 5: ADFun object.
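For example, assuming afun is derived from atomic_base<double> , the following call would switch it to bool sparsity patterns:

     afun.option( CppAD::atomic_base<double>::bool_sparsity_enum );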

4.4.7.2.2.b: atomic_sparsity
Note that, if you use 5.7: optimize , these sparsity patterns are used to determine the 5.5.9: dependency relationship between argument and result variables.

4.4.7.2.2.b.a: pack_sparsity_enum
If option_value is atomic_base<Base>::pack_sparsity_enum , then the type used by afun for 12.4.j: sparsity patterns (after the option is set) will be
     typedef CppAD::vectorBool atomic_sparsity
If r is a sparsity pattern for a matrix @(@ R \in B^{p \times q} @)@: r.size() == p * q .

4.4.7.2.2.b.b: bool_sparsity_enum
If option_value is atomic_base<Base>::bool_sparsity_enum , then the type used by afun for 12.4.j: sparsity patterns (after the option is set) will be
     typedef CppAD::vector<bool> atomic_sparsity
If r is a sparsity pattern for a matrix @(@ R \in B^{p \times q} @)@: r.size() == p * q .

4.4.7.2.2.b.c: set_sparsity_enum
If option_value is atomic_base<Base>::set_sparsity_enum , then the type used by afun for 12.4.j: sparsity patterns (after the option is set) will be
     typedef CppAD::vector< std::set<size_t> > atomic_sparsity
If r is a sparsity pattern for a matrix @(@ R \in B^{p \times q} @)@: r.size() == p , and for @(@ i = 0 , \ldots , p-1 @)@, the elements of r[i] are between zero and @(@ q-1 @)@ inclusive.
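As an illustration (the dimensions and non-zeros below are hypothetical), suppose @(@ p = 2 @)@, @(@ q = 3 @)@, and the only possibly non-zero entries of @(@ R @)@ are @(@ R_{0,1} @)@ and @(@ R_{1,2} @)@. The three representations of this pattern are:

     // pack and bool options: r.size() == p * q = 6, stored row-major
     // r = { false, true, false,   false, false, true }
     //
     // set option: r.size() == p = 2, r[i] holds the column indices of row i
     // r[0] = { 1 } ,  r[1] = { 2 }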
Input File: cppad/core/atomic_base.hpp
4.4.7.2.3: Using AD Version of Atomic Function

4.4.7.2.3.a: Syntax
afun(ax, ay)

4.4.7.2.3.b: Purpose
Given ax , this call computes the corresponding value of ay . If AD<Base> operations are being recorded, it enters the computation as an atomic operation in the recording; see 5.1.1.c: start recording .

4.4.7.2.3.c: ADVector
The type ADVector must be a 8.9: simple vector class with elements of type AD<Base> ; see 4.4.7.2.1.c.b: Base .

4.4.7.2.3.d: afun
is a 4.4.7.2.1.b: atomic_user object and this afun function call is implemented by the 4.4.7.2.1.c: atomic_base class.

4.4.7.2.3.e: ax
This argument has prototype
     const ADVector& ax
and its size must be equal to n . It specifies the vector @(@ x \in B^n @)@ at which an AD<Base> version of @(@ y = f(x) @)@ is to be evaluated; see 4.4.7.2.1.c.b: Base .

4.4.7.2.3.f: ay
This argument has prototype
     ADVector& ay
and its size must be equal to m . The input values of its elements are not specified (must not matter). Upon return, it is an AD<Base> version of @(@ y = f(x) @)@.
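Putting the pieces together, a minimal recording sketch (assuming afun has already been constructed and computes some @(@ f : B^2 \rightarrow B^1 @)@) looks like:

     CPPAD_TESTVECTOR( CppAD::AD<double> ) ax(2), ay(1);
     ax[0] = 1.0;
     ax[1] = 2.0;
     CppAD::Independent(ax);          // start recording
     afun(ax, ay);                    // records y = f(x) as one atomic operation
     CppAD::ADFun<double> f(ax, ay);  // stop recording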

4.4.7.2.3.g: Examples
The following files contain example uses of the AD version of atomic functions during recording: 4.4.7.2.11.f.b: get_started.cpp , 4.4.7.2.12.k.b: norm_sq.cpp , 4.4.7.2.13.k.b: reciprocal.cpp , 4.4.7.2.15.k.b: tangent.cpp , 4.4.7.2.19.c.b: mat_mul.cpp .
Input File: cppad/core/atomic_base.hpp
4.4.7.2.4: Atomic Forward Mode

4.4.7.2.4.a: Syntax
ok = afun.forward(p, q, vx, vy, tx, ty)

4.4.7.2.4.b: Purpose
This virtual function is used by 4.4.7.2.3: atomic_afun to evaluate function values. It is also used by 5.3: forward to compute function values and derivatives.

4.4.7.2.4.c: Implementation
This virtual function must be defined by the 4.4.7.2.1.b: atomic_user class. It can just return ok == false (and not compute anything) for values of q that are greater than those used by your 5.3: forward mode calculations.

4.4.7.2.4.d: p
The argument p has prototype
     size_t p
It specifies the lowest order Taylor coefficient that we are evaluating. During calls to 4.4.7.2.3: atomic_afun , p == 0 .

4.4.7.2.4.e: q
The argument q has prototype
     size_t q
It specifies the highest order Taylor coefficient that we are evaluating. During calls to 4.4.7.2.3: atomic_afun , q == 0 .

4.4.7.2.4.f: vx
The forward argument vx has prototype
     const CppAD::vector<bool>& vx
The case vx.size() > 0 only occurs while evaluating a call to 4.4.7.2.3: atomic_afun . In this case, p == q == 0 , vx.size() == n , and for @(@ j = 0 , \ldots , n-1 @)@, vx[j] is true if and only if ax[j] is a 12.4.m: variable in the corresponding call to
     afun(ax, ay)
If vx.size() == 0 , then vy.size() == 0 and neither of these vectors should be used.

4.4.7.2.4.g: vy
The forward argument vy has prototype
     CppAD::vector<bool>& vy
If vy.size() == 0 , it should not be used. Otherwise, q == 0 and vy.size() == m . The input values of the elements of vy are not specified (must not matter). Upon return, for @(@ i = 0 , \ldots , m-1 @)@, vy[i] is true if and only if ay[i] is a variable (CppAD uses vy to reduce the necessary computations).

4.4.7.2.4.h: tx
The argument tx has prototype
     const CppAD::vector<Base>& tx
and tx.size() == (q+1)*n . For @(@ j = 0 , \ldots , n-1 @)@ and @(@ k = 0 , \ldots , q @)@, we use the Taylor coefficient notation @[@ \begin{array}{rcl} x_j^k & = & tx [ j * ( q + 1 ) + k ] \\ X_j (t) & = & x_j^0 + x_j^1 t^1 + \cdots + x_j^q t^q \end{array} @]@ Note that superscripts represent an index for @(@ x_j^k @)@ and an exponent for @(@ t^k @)@. Also note that the Taylor coefficients for @(@ X(t) @)@ correspond to the derivatives of @(@ X(t) @)@ at @(@ t = 0 @)@ in the following way: @[@ x_j^k = \frac{1}{ k ! } X_j^{(k)} (0) @]@
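For example, if @(@ n = 2 @)@ and @(@ q = 1 @)@, then tx.size() == 4 and the coefficients are stored as @[@ tx = ( x_0^0 , x_0^1 , x_1^0 , x_1^1 ) @]@ i.e., all the Taylor coefficients for @(@ X_0 (t) @)@ come before those for @(@ X_1 (t) @)@.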

4.4.7.2.4.i: ty
The argument ty has prototype
     CppAD::vector<Base>& ty
and ty.size() == (q+1)*m . Upon return, for @(@ i = 0 , \ldots , m-1 @)@ and @(@ k = 0 , \ldots , q @)@, @[@ \begin{array}{rcl} Y_i (t) & = & f_i [ X(t) ] \\ Y_i (t) & = & y_i^0 + y_i^1 t^1 + \cdots + y_i^q t^q + o ( t^q ) \\ ty [ i * ( q + 1 ) + k ] & = & y_i^k \end{array} @]@ where @(@ o( t^q ) / t^q \rightarrow 0 @)@ as @(@ t \rightarrow 0 @)@. Note that superscripts represent an index for @(@ y_j^k @)@ and an exponent for @(@ t^k @)@. Also note that the Taylor coefficients for @(@ Y(t) @)@ correspond to the derivatives of @(@ Y(t) @)@ at @(@ t = 0 @)@ in the following way: @[@ y_j^k = \frac{1}{ k ! } Y_j^{(k)} (0) @]@ If @(@ p > 0 @)@, for @(@ i = 0 , \ldots , m-1 @)@ and @(@ k = 0 , \ldots , p-1 @)@, the input of ty satisfies @[@ ty [ i * ( q + 1 ) + k ] = y_i^k @]@ and hence the corresponding elements need not be recalculated.

4.4.7.2.4.j: ok
If the required results are calculated, ok should be true. Otherwise, it should be false.

4.4.7.2.4.k: Discussion
For example, suppose that q == 2 , and you know how to compute the function @(@ f(x) @)@, its first derivative @(@ f^{(1)} (x) @)@, and its component-wise Hessians @(@ f_i^{(2)} (x) @)@. Then you can compute ty using the following formulas: @[@ \begin{array}{rcl} y_i^0 & = & Y_i (0) = f_i ( x^0 ) \\ y_i^1 & = & Y_i^{(1)} ( 0 ) = f_i^{(1)} ( x^0 ) X^{(1)} ( 0 ) = f_i^{(1)} ( x^0 ) x^1 \\ y_i^2 & = & \frac{1}{2 !} Y_i^{(2)} (0) \\ & = & \frac{1}{2} X^{(1)} (0)^\R{T} f_i^{(2)} ( x^0 ) X^{(1)} ( 0 ) + \frac{1}{2} f_i^{(1)} ( x^0 ) X^{(2)} ( 0 ) \\ & = & \frac{1}{2} (x^1)^\R{T} f_i^{(2)} ( x^0 ) x^1 + f_i^{(1)} ( x^0 ) x^2 \end{array} @]@ For @(@ i = 0 , \ldots , m-1 @)@, and @(@ k = 0 , 1 , 2 @)@, @[@ ty [ i * (q + 1) + k ] = y_i^k @]@

4.4.7.2.4.l: Examples
The file 4.4.7.2.4.1: atomic_forward.cpp contains an example and test that uses this routine. It returns true if the test passes and false if it fails.
Input File: cppad/core/atomic_base.hpp
4.4.7.2.4.1: Atomic Forward: Example and Test

4.4.7.2.4.1.a: Purpose
This example demonstrates forward mode derivative calculation using an atomic operation.

4.4.7.2.4.1.b: function
For this example, the atomic function @(@ f : \B{R}^3 \rightarrow \B{R}^2 @)@ is defined by @[@ f(x) = \left( \begin{array}{c} x_2 * x_2 \\ x_0 * x_1 \end{array} \right) @]@ The corresponding Jacobian is @[@ f^{(1)} (x) = \left( \begin{array}{ccc} 0 & 0 & 2 x_2 \\ x_1 & x_0 & 0 \end{array} \right) @]@ The Hessians of the component functions are @[@ f_0^{(2)} ( x ) = \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 2 \end{array} \right) \W{,} f_1^{(2)} ( x ) = \left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right) @]@

4.4.7.2.4.1.c: Start Class Definition
# include <cppad/cppad.hpp>
namespace {          // isolate items below to this file
using CppAD::vector; // abbreviate as vector
//
class atomic_forward : public CppAD::atomic_base<double> {

4.4.7.2.4.1.d: Constructor
public:
     // constructor (could use const char* for name)
     atomic_forward(const std::string& name) :
     // this example does not use sparsity patterns
     CppAD::atomic_base<double>(name)
     { }
private:

4.4.7.2.4.1.e: forward
     // forward mode routine called by CppAD
     virtual bool forward(
          size_t                    p ,
          size_t                    q ,
          const vector<bool>&      vx ,
          vector<bool>&            vy ,
          const vector<double>&    tx ,
          vector<double>&          ty
     )
     {
          size_t q1 = q + 1;
# ifndef NDEBUG
          size_t n = tx.size() / q1;
          size_t m = ty.size() / q1;
# endif
          assert( n == 3 );
          assert( m == 2 );
          assert( p <= q );

          // this example only implements up to second order forward mode
          bool ok = q <= 2;
          if( ! ok )
               return ok;

          // check for defining variable information
          // This case must always be implemented
          if( vx.size() > 0 )
          {     vy[0] = vx[2];
               vy[1] = vx[0] || vx[1];
          }
          // ------------------------------------------------------------------
          // Zero forward mode.
          // This case must always be implemented
          // f(x) = [ x_2 * x_2 ]
          //        [ x_0 * x_1 ]
          // y^0  = f( x^0 )
          if( p <= 0 )
          {     // y_0^0 = x_2^0 * x_2^0
               ty[0 * q1 + 0] = tx[2 * q1 + 0] * tx[2 * q1 + 0];
               // y_1^0 = x_0^0 * x_1^0
               ty[1 * q1 + 0] = tx[0 * q1 + 0] * tx[1 * q1 + 0];
          }
          if( q <= 0 )
               return ok;
          // ------------------------------------------------------------------
          // First order one forward mode.
          // This case is needed if first order forward mode is used.
          // f'(x) = [   0,   0, 2 * x_2 ]
          //         [ x_1, x_0,       0 ]
          // y^1 =  f'(x^0) * x^1
          if( p <= 1 )
          {     // y_0^1 = 2 * x_2^0 * x_2^1
               ty[0 * q1 + 1] = 2.0 * tx[2 * q1 + 0] * tx[2 * q1 + 1];
               // y_1^1 = x_1^0 * x_0^1 + x_0^0 * x_1^1
               ty[1 * q1 + 1]  = tx[1 * q1 + 0] * tx[0 * q1 + 1];
               ty[1 * q1 + 1] += tx[0 * q1 + 0] * tx[1 * q1 + 1];
          }
          if( q <= 1 )
               return ok;
          // ------------------------------------------------------------------
          // Second order forward mode.
          // This case is needed if second order forward mode is used.
          // f'(x) = [   0,   0, 2 x_2 ]
          //         [ x_1, x_0,     0 ]
          //
          //            [ 0 , 0 , 0 ]                  [ 0 , 1 , 0 ]
          // f_0''(x) = [ 0 , 0 , 0 ]  f_1^{(2)} (x) = [ 1 , 0 , 0 ]
          //            [ 0 , 0 , 2 ]                  [ 0 , 0 , 0 ]
          //
          //  y_0^2 = x^1 * f_0''( x^0 ) x^1 / 2! + f_0'( x^0 ) x^2
          //        = ( x_2^1 * 2.0 * x_2^1 ) / 2!
          //        + 2.0 * x_2^0 * x_2^2
          ty[0 * q1 + 2]  = tx[2 * q1 + 1] * tx[2 * q1 + 1];
          ty[0 * q1 + 2] += 2.0 * tx[2 * q1 + 0] * tx[2 * q1 + 2];
          //
          //  y_1^2 = x^1 * f_1''( x^0 ) x^1 / 2! + f_1'( x^0 ) x^2
          //        = ( x_1^1 * x_0^1 + x_0^1 * x_1^1) / 2
          //        + x_1^0 * x_0^2 + x_0^0 * x_1^2
          ty[1 * q1 + 2]  = tx[1 * q1 + 1] * tx[0 * q1 + 1];
          ty[1 * q1 + 2] += tx[1 * q1 + 0] * tx[0 * q1 + 2];
          ty[1 * q1 + 2] += tx[0 * q1 + 0] * tx[1 * q1 + 2];
          // ------------------------------------------------------------------
          return ok;
     }
};
}  // End empty namespace

4.4.7.2.4.1.f: Use Atomic Function
bool forward(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();
     //
     // Create the atomic_forward object
     atomic_forward afun("atomic_forward");
     //
     // Create the function f(u)
     //
     // domain space vector
     size_t n  = 3;
     double x_0 = 1.00;
     double x_1 = 2.00;
     double x_2 = 3.00;
     vector< AD<double> > au(n);
     au[0] = x_0;
     au[1] = x_1;
     au[2] = x_2;

     // declare independent variables and start tape recording
     CppAD::Independent(au);

     // range space vector
     size_t m = 2;
     vector< AD<double> > ay(m);

     // call user function
     vector< AD<double> > ax = au;
     afun(ax, ay);

     // create f: u -> y and stop tape recording
     CppAD::ADFun<double> f;
     f.Dependent (au, ay);  // y = f(u)
     //
     // check function value
     double check = x_2 * x_2;
     ok &= NearEqual( Value(ay[0]) , check,  eps, eps);
     check = x_0 * x_1;
     ok &= NearEqual( Value(ay[1]) , check,  eps, eps);

     // --------------------------------------------------------------------
     // zero order forward
     //
     vector<double> x0(n), y0(m);
     x0[0] = x_0;
     x0[1] = x_1;
     x0[2] = x_2;
     y0   = f.Forward(0, x0);
     check = x_2 * x_2;
     ok &= NearEqual(y0[0] , check,  eps, eps);
     check = x_0 * x_1;
     ok &= NearEqual(y0[1] , check,  eps, eps);
     // --------------------------------------------------------------------
     // first order forward
     //
     // value of Jacobian of f
     double check_jac[] = {
          0.0, 0.0, 2.0 * x_2,
          x_1, x_0,       0.0
     };
     vector<double> x1(n), y1(m);
     // check first order forward mode
     for(size_t j = 0; j < n; j++)
          x1[j] = 0.0;
     for(size_t j = 0; j < n; j++)
     {     // compute partial in j-th component direction
          x1[j] = 1.0;
          y1    = f.Forward(1, x1);
          x1[j] = 0.0;
          // check this direction
          for(size_t i = 0; i < m; i++)
               ok &= NearEqual(y1[i], check_jac[i * n + j], eps, eps);
     }
     // --------------------------------------------------------------------
     // second order forward
     //
     // value of Hessian of f_0
     double check_hes_0[] = {
          0.0, 0.0, 0.0,
          0.0, 0.0, 0.0,
          0.0, 0.0, 2.0
     };
     //
     // value of Hessian of f_1
     double check_hes_1[] = {
          0.0, 1.0, 0.0,
          1.0, 0.0, 0.0,
          0.0, 0.0, 0.0
     };
     vector<double> x2(n), y2(m);
     for(size_t j = 0; j < n; j++)
          x2[j] = 0.0;
     // compute diagonal elements of the Hessian
     for(size_t j = 0; j < n; j++)
     {     // first order forward in j-th direction
          x1[j] = 1.0;
          f.Forward(1, x1);
          y2 = f.Forward(2, x2);
          // check this element of Hessian diagonal
          ok &= NearEqual(y2[0], check_hes_0[j * n + j] / 2.0, eps, eps);
          ok &= NearEqual(y2[1], check_hes_1[j * n + j] / 2.0, eps, eps);
          //
          for(size_t k = 0; k < n; k++) if( k != j )
          {     x1[k] = 1.0;
               f.Forward(1, x1);
               y2 = f.Forward(2, x2);
               //
               // y2 = (H_jj + H_kk + H_jk + H_kj) / 2.0
               // y2 = (H_jj + H_kk) / 2.0 + H_jk
               //
               double H_jj = check_hes_0[j * n + j];
               double H_kk = check_hes_0[k * n + k];
               double H_jk = y2[0] - (H_kk + H_jj) / 2.0;
               ok &= NearEqual(H_jk, check_hes_0[j * n + k], eps, eps);
               //
               H_jj = check_hes_1[j * n + j];
               H_kk = check_hes_1[k * n + k];
               H_jk = y2[1] - (H_kk + H_jj) / 2.0;
               ok &= NearEqual(H_jk, check_hes_1[j * n + k], eps, eps);
               //
               x1[k] = 0.0;
          }
          x1[j] = 0.0;
     }
     // --------------------------------------------------------------------
     return ok;
}

Input File: example/atomic/forward.cpp
4.4.7.2.5: Atomic Reverse Mode

4.4.7.2.5.a: Syntax
ok = afun.reverse(q, tx, ty, px, py)

4.4.7.2.5.b: Purpose
This function is used by 5.4: reverse to compute derivatives.

4.4.7.2.5.c: Implementation
If you are using 5.4: reverse mode, this virtual function must be defined by the 4.4.7.2.1.b: atomic_user class. It can just return ok == false (and not compute anything) for values of q that are greater than those used by your 5.4: reverse mode calculations.

4.4.7.2.5.d: q
The argument q has prototype
     size_t q
It specifies the highest order Taylor coefficient that we are computing the derivative of.

4.4.7.2.5.e: tx
The argument tx has prototype
     const CppAD::vector<Base>& tx
and tx.size() == (q+1)*n . For @(@ j = 0 , \ldots , n-1 @)@ and @(@ k = 0 , \ldots , q @)@, we use the Taylor coefficient notation @[@ \begin{array}{rcl} x_j^k & = & tx [ j * ( q + 1 ) + k ] \\ X_j (t) & = & x_j^0 + x_j^1 t^1 + \cdots + x_j^q t^q \end{array} @]@ Note that superscripts represent an index for @(@ x_j^k @)@ and an exponent for @(@ t^k @)@. Also note that the Taylor coefficients for @(@ X(t) @)@ correspond to the derivatives of @(@ X(t) @)@ at @(@ t = 0 @)@ in the following way: @[@ x_j^k = \frac{1}{ k ! } X_j^{(k)} (0) @]@

4.4.7.2.5.f: ty
The argument ty has prototype
     const CppAD::vector<Base>& ty
and ty.size() == (q+1)*m . For @(@ i = 0 , \ldots , m-1 @)@ and @(@ k = 0 , \ldots , q @)@, we use the Taylor coefficient notation @[@ \begin{array}{rcl} Y_i (t) & = & f_i [ X(t) ] \\ Y_i (t) & = & y_i^0 + y_i^1 t^1 + \cdots + y_i^q t^q + o ( t^q ) \\ y_i^k & = & ty [ i * ( q + 1 ) + k ] \end{array} @]@ where @(@ o( t^q ) / t^q \rightarrow 0 @)@ as @(@ t \rightarrow 0 @)@. Note that superscripts represent an index for @(@ y_j^k @)@ and an exponent for @(@ t^k @)@. Also note that the Taylor coefficients for @(@ Y(t) @)@ correspond to the derivatives of @(@ Y(t) @)@ at @(@ t = 0 @)@ in the following way: @[@ y_j^k = \frac{1}{ k ! } Y_j^{(k)} (0) @]@

4.4.7.2.5.g: F
We use the notation @(@ \{ x_j^k \} \in B^{n \times (q+1)} @)@ for @[@ \{ x_j^k \W{:} j = 0 , \ldots , n-1, k = 0 , \ldots , q \} @]@ We use the notation @(@ \{ y_i^k \} \in B^{m \times (q+1)} @)@ for @[@ \{ y_i^k \W{:} i = 0 , \ldots , m-1, k = 0 , \ldots , q \} @]@ We define the function @(@ F : B^{n \times (q+1)} \rightarrow B^{m \times (q+1)} @)@ by @[@ y_i^k = F_i^k [ \{ x_j^k \} ] @]@ Note that @[@ F_i^0 ( \{ x_j^k \} ) = f_i ( X(0) ) = f_i ( x^0 ) @]@ We also note that @(@ F_i^\ell ( \{ x_j^k \} ) @)@ is a function of @(@ x^0 , \ldots , x^\ell @)@ and is determined by the derivatives of @(@ f_i (x) @)@ up to order @(@ \ell @)@.

4.4.7.2.5.h: G, H
We use @(@ G : B^{m \times (q+1)} \rightarrow B @)@ to denote an arbitrary scalar valued function of @(@ \{ y_i^k \} @)@. We use @(@ H : B^{n \times (q+1)} \rightarrow B @)@ defined by @[@ H ( \{ x_j^k \} ) = G[ F( \{ x_j^k \} ) ] @]@

4.4.7.2.5.i: py
The argument py has prototype
     const CppAD::vector<Base>& py
and py.size() == m * (q+1) . For @(@ i = 0 , \ldots , m-1 @)@, @(@ k = 0 , \ldots , q @)@, @[@ py[ i * (q + 1 ) + k ] = \partial G / \partial y_i^k @]@

4.4.7.2.5.i.a: px
The argument px has prototype
     CppAD::vector<Base>& px
and px.size() == n * (q+1) . The input values of the elements of px are not specified (must not matter). Upon return, for @(@ j = 0 , \ldots , n-1 @)@ and @(@ \ell = 0 , \ldots , q @)@, @[@ \begin{array}{rcl} px [ j * (q + 1) + \ell ] & = & \partial H / \partial x_j^\ell \\ & = & ( \partial G / \partial \{ y_i^k \} ) \cdot ( \partial \{ y_i^k \} / \partial x_j^\ell ) \\ & = & \sum_{k=0}^q \sum_{i=0}^{m-1} ( \partial G / \partial y_i^k ) ( \partial y_i^k / \partial x_j^\ell ) \\ & = & \sum_{k=\ell}^q \sum_{i=0}^{m-1} py[ i * (q + 1 ) + k ] ( \partial F_i^k / \partial x_j^\ell ) \end{array} @]@ Note that we have used the fact that for @(@ k < \ell @)@, @(@ \partial F_i^k / \partial x_j^\ell = 0 @)@.
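In the special case @(@ q = 0 @)@, the formula above reduces to the familiar transposed Jacobian product computed by first order reverse mode: @[@ px [ j ] = \sum_{i=0}^{m-1} py [ i ] \; \D{ f_i }{ x_j } ( x^0 ) @]@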

4.4.7.2.5.j: ok
The return value ok has prototype
     bool ok
If it is true, the corresponding evaluation succeeded, otherwise it failed.

4.4.7.2.5.k: Examples
The file 4.4.7.2.5.1: atomic_reverse.cpp contains an example and test that uses this routine. It returns true if the test passes and false if it fails.
Input File: cppad/core/atomic_base.hpp
4.4.7.2.5.1: Atomic Reverse: Example and Test

4.4.7.2.5.1.a: Purpose
This example demonstrates reverse mode derivative calculation using an atomic operation.

4.4.7.2.5.1.b: function
For this example, the atomic function @(@ f : \B{R}^3 \rightarrow \B{R}^2 @)@ is defined by @[@ f(x) = \left( \begin{array}{c} x_2 * x_2 \\ x_0 * x_1 \end{array} \right) @]@ The corresponding Jacobian is @[@ f^{(1)} (x) = \left( \begin{array}{ccc} 0 & 0 & 2 x_2 \\ x_1 & x_0 & 0 \end{array} \right) @]@ The Hessians of the component functions are @[@ f_0^{(2)} ( x ) = \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 2 \end{array} \right) \W{,} f_1^{(2)} ( x ) = \left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right) @]@

4.4.7.2.5.1.c: Start Class Definition
# include <cppad/cppad.hpp>
namespace {          // isolate items below to this file
using CppAD::vector; // abbreviate as vector
//
class atomic_reverse : public CppAD::atomic_base<double> {

4.4.7.2.5.1.d: Constructor
public:
     // constructor (could use const char* for name)
     atomic_reverse(const std::string& name) :
     // this example does not use sparsity patterns
     CppAD::atomic_base<double>(name)
     { }
private:

4.4.7.2.5.1.e: forward
     // forward mode routine called by CppAD
     virtual bool forward(
          size_t                    p ,
          size_t                    q ,
          const vector<bool>&      vx ,
          vector<bool>&            vy ,
          const vector<double>&    tx ,
          vector<double>&          ty
     )
     {
          size_t q1 = q + 1;
# ifndef NDEBUG
          size_t n = tx.size() / q1;
          size_t m = ty.size() / q1;
# endif
          assert( n == 3 );
          assert( m == 2 );
          assert( p <= q );

          // this example only implements up to first order forward mode
          bool ok = q <= 1;
          if( ! ok )
               return ok;

          // check for defining variable information
          // This case must always be implemented
          if( vx.size() > 0 )
          {     vy[0] = vx[2];
               vy[1] = vx[0] || vx[1];
          }
          // ------------------------------------------------------------------
          // Zero forward mode.
          // This case must always be implemented
          // f(x) = [ x_2 * x_2 ]
          //        [ x_0 * x_1 ]
          // y^0  = f( x^0 )
          if( p <= 0 )
          {     // y_0^0 = x_2^0 * x_2^0
               ty[0 * q1 + 0] = tx[2 * q1 + 0] * tx[2 * q1 + 0];
               // y_1^0 = x_0^0 * x_1^0
               ty[1 * q1 + 0] = tx[0 * q1 + 0] * tx[1 * q1 + 0];
          }
          if( q <= 0 )
               return ok;
          // ------------------------------------------------------------------
          // First order one forward mode.
          // This case is needed if first order forward mode is used.
          // f'(x) = [   0,   0, 2 * x_2 ]
          //         [ x_1, x_0,       0 ]
          // y^1 =  f'(x^0) * x^1
          if( p <= 1 )
          {     // y_0^1 = 2 * x_2^0 * x_2^1
               ty[0 * q1 + 1] = 2.0 * tx[2 * q1 + 0] * tx[2 * q1 + 1];

               // y_1^1 = x_1^0 * x_0^1 + x_0^0 * x_1^1
               ty[1 * q1 + 1]  = tx[1 * q1 + 0] * tx[0 * q1 + 1];
               ty[1 * q1 + 1] += tx[0 * q1 + 0] * tx[1 * q1 + 1];
          }
          return ok;
     }

4.4.7.2.5.1.f: reverse
     // reverse mode routine called by CppAD
     virtual bool reverse(
          size_t                   q ,
          const vector<double>&    tx ,
          const vector<double>&    ty ,
          vector<double>&          px ,
          const vector<double>&    py
     )
     {
          size_t q1 = q + 1;
          size_t n = tx.size() / q1;
# ifndef NDEBUG
          size_t m = ty.size() / q1;
# endif
          assert( n == 3 );
          assert( m == 2 );

          // this example only implements up to second order reverse mode
          bool ok = q1 <= 2;
          if( ! ok )
               return ok;
          //
          // initialize summation as zero
          for(size_t j = 0; j < n; j++)
               for(size_t k = 0; k < q1; k++)
                    px[j * q1 + k] = 0.0;
          //
          if( q1 == 2 )
          {     // --------------------------------------------------------------
               // Second order reverse first compute partials of first order
               // We use the notation pf_ij^k for partial of F_i^1 w.r.t. x_j^k
               //
               // y_0^1    = 2 * x_2^0 * x_2^1
               // pf_02^0  = 2 * x_2^1
               // pf_02^1  = 2 * x_2^0
               //
               // y_1^1    = x_1^0 * x_0^1 + x_0^0 * x_1^1
               // pf_10^0  = x_1^1
               // pf_11^0  = x_0^1
               // pf_10^1  = x_1^0
               // pf_11^1  = x_0^0
               //
               // px_0^0 += py_0^1 * pf_00^0 + py_1^1 * pf_10^0
               //        += py_1^1 * x_1^1
               px[0 * q1 + 0] += py[1 * q1 + 1] * tx[1 * q1 + 1];
               //
               // px_0^1 += py_0^1 * pf_00^1 + py_1^1 * pf_10^1
               //        += py_1^1 * x_1^0
               px[0 * q1 + 1] += py[1 * q1 + 1] * tx[1 * q1 + 0];
               //
               // px_1^0 += py_0^1 * pf_01^0 + py_1^1 * pf_11^0
               //        += py_1^1 * x_0^1
               px[1 * q1 + 0] += py[1 * q1 + 1] * tx[0 * q1 + 1];
               //
               // px_1^1 += py_0^1 * pf_01^1 + py_1^1 * pf_11^1
               //        += py_1^1 * x_0^0
               px[1 * q1 + 1] += py[1 * q1 + 1] * tx[0 * q1 + 0];
               //
               // px_2^0 += py_0^1 * pf_02^0 + py_1^1 * pf_12^0
               //        += py_0^1 * 2 * x_2^1
               px[2 * q1 + 0] += py[0 * q1 + 1] * 2.0 * tx[2 * q1 + 1];
               //
               // px_2^1 += py_0^1 * pf_02^1 + py_1^1 * pf_12^1
               //        += py_0^1 * 2 * x_2^0
               px[2 * q1 + 1] += py[0 * q1 + 1] * 2.0 * tx[2 * q1 + 0];
          }
          // --------------------------------------------------------------
          // First order reverse computes partials of zero order coefficients
          // We use the notation pf_ij for partial of F_i^0 w.r.t. x_j^0
          //
          // y_0^0 = x_2^0 * x_2^0
          // pf_00 = 0,     pf_01 = 0,  pf_02 = 2 * x_2^0
          //
          // y_1^0 = x_0^0 * x_1^0
          // pf_10 = x_1^0, pf_11 = x_0^0,  pf_12 = 0
          //
          // px_0^0 += py_0^0 * pf_00 + py_1^0 * pf_10
          //        += py_1^0 * x_1^0
          px[0 * q1 + 0] += py[1 * q1 + 0] * tx[1 * q1 + 0];
          //
          // px_1^0 += py_0^0 * pf_01 + py_1^0 * pf_11
          //        += py_1^0 * x_0^0
          px[1 * q1 + 0] += py[1 * q1 + 0] * tx[0 * q1 + 0];
          //
          // px_2^0 += py_0^0 * pf_02 + py_1^0 * pf_12
          //        += py_0^0 * 2.0 * x_2^0
          px[2 * q1 + 0] += py[0 * q1 + 0] * 2.0 * tx[2 * q1 + 0];
          // --------------------------------------------------------------
          return ok;
     }
};
}  // End empty namespace

4.4.7.2.5.1.g: Use Atomic Function
bool reverse(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();
     //
     // Create the atomic_reverse object
     atomic_reverse afun("atomic_reverse");
     //
     // Create the function f(u)
     //
     // domain space vector
     size_t n  = 3;
     double x_0 = 1.00;
     double x_1 = 2.00;
     double x_2 = 3.00;
     vector< AD<double> > au(n);
     au[0] = x_0;
     au[1] = x_1;
     au[2] = x_2;

     // declare independent variables and start tape recording
     CppAD::Independent(au);

     // range space vector
     size_t m = 2;
     vector< AD<double> > ay(m);

     // call user function
     vector< AD<double> > ax = au;
     afun(ax, ay);

     // create f: u -> y and stop tape recording
     CppAD::ADFun<double> f;
     f.Dependent (au, ay);  // y = f(u)
     //
     // check function value
     double check = x_2 * x_2;
     ok &= NearEqual( Value(ay[0]) , check,  eps, eps);
     check = x_0 * x_1;
     ok &= NearEqual( Value(ay[1]) , check,  eps, eps);

     // --------------------------------------------------------------------
     // zero order forward
     //
     vector<double> x0(n), y0(m);
     x0[0] = x_0;
     x0[1] = x_1;
     x0[2] = x_2;
     y0   = f.Forward(0, x0);
     check = x_2 * x_2;
     ok &= NearEqual(y0[0] , check,  eps, eps);
     check = x_0 * x_1;
     ok &= NearEqual(y0[1] , check,  eps, eps);
     // --------------------------------------------------------------------
     // first order reverse
     //
     // value of Jacobian of f
     double check_jac[] = {
          0.0, 0.0, 2.0 * x_2,
          x_1, x_0,       0.0
     };
     vector<double> w(m), dw(n);
     //
     // check derivative of f_0 (x)
     for(size_t i = 0; i < m; i++)
     {     w[i]   = 1.0;
          w[1-i] = 0.0;
          dw = f.Reverse(1, w);
          for(size_t j = 0; j < n; j++)
          {     // compute partial in j-th component direction
               ok &= NearEqual(dw[j], check_jac[i * n + j], eps, eps);
          }
     }
     // --------------------------------------------------------------------
     // second order reverse
     //
     // value of Hessian of f_0
     double check_hes_0[] = {
          0.0, 0.0, 0.0,
          0.0, 0.0, 0.0,
          0.0, 0.0, 2.0
     };
     //
     // value of Hessian of f_1
     double check_hes_1[] = {
          0.0, 1.0, 0.0,
          1.0, 0.0, 0.0,
          0.0, 0.0, 0.0
     };
     vector<double> x1(n), dw2( 2 * n );
     for(size_t j = 0; j < n; j++)
     {     for(size_t j1 = 0; j1 < n; j1++)
               x1[j1] = 0.0;
          x1[j] = 1.0;
          // first order forward
          f.Forward(1, x1);
          w[0] = 1.0;
          w[1] = 0.0;
          dw2  = f.Reverse(2, w);
          for(size_t i = 0; i < n; i++)
               ok &= NearEqual(dw2[i * 2 + 1], check_hes_0[i * n + j], eps, eps);
          w[0] = 0.0;
          w[1] = 1.0;
          dw2  = f.Reverse(2, w);
          for(size_t i = 0; i < n; i++)
               ok &= NearEqual(dw2[i * 2 + 1], check_hes_1[i * n + j], eps, eps);
     }
     // --------------------------------------------------------------------
     return ok;
}

Input File: example/atomic/reverse.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.4.7.2.6: Atomic Forward Jacobian Sparsity Patterns

4.4.7.2.6.a: Syntax
ok = afun.for_sparse_jac(q, r, s, x)

4.4.7.2.6.b: Deprecated 2016-06-27
ok = afun.for_sparse_jac(q, r, s)

4.4.7.2.6.c: Purpose
This function is used by 5.5.2: ForSparseJac to compute Jacobian sparsity patterns. For a fixed matrix @(@ R \in B^{n \times q} @)@, the Jacobian of @(@ f( x + R * u) @)@ with respect to @(@ u \in B^q @)@ is @[@ S(x) = f^{(1)} (x) * R @]@ Given a 12.4.j: sparsity pattern for @(@ R @)@, for_sparse_jac computes a sparsity pattern for @(@ S(x) @)@.
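For intuition (using the function from the example below), suppose @(@ f : \B{R}^3 \rightarrow \B{R}^2 @)@ has Jacobian @[@ f^{(1)} (x) = \left( \begin{array}{ccc} 0 & 0 & 2 x_2 \\ x_1 & x_0 & 0 \end{array} \right) @]@ If the pattern for @(@ R @)@ is the @(@ 3 \times 3 @)@ identity, the pattern for @(@ S(x) = f^{(1)} (x) * R @)@ is @[@ \left( \begin{array}{ccc} F & F & T \\ T & T & F \end{array} \right) @]@ where T (F) marks an entry that is possibly non-zero (identically zero).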

4.4.7.2.6.d: Implementation
If you are using 5.5.2: ForSparseJac , 5.5.8: ForSparseHes , or 5.5.6: RevSparseHes , one of the versions of this virtual function must be defined by the 4.4.7.2.1.b: atomic_user class.

4.4.7.2.6.d.a: q
The argument q has prototype
     size_t q
It specifies the number of columns in @(@ R \in B^{n \times q} @)@ and the Jacobian @(@ S(x) \in B^{m \times q} @)@.

4.4.7.2.6.d.b: r
This argument has prototype
     const atomic_sparsity& r
and is a 4.4.7.2.2.b: atomic_sparsity pattern for @(@ R \in B^{n \times q} @)@.

4.4.7.2.6.d.c: s
This argument has prototype
     atomic_sparsity& s
The input values of its elements are not specified (must not matter). Upon return, s is a 4.4.7.2.2.b: atomic_sparsity pattern for @(@ S(x) \in B^{m \times q} @)@.

4.4.7.2.6.d.d: x
The argument x has prototype
     const CppAD::vector<Base>& x
and its size is equal to n . This is the 4.3.1: Value corresponding to the parameters in the vector 4.4.7.2.3.e: ax (when the atomic function was called). To be specific,
     if( Parameter( ax[i] ) == true )
          x[i] = Value( ax[i] );
     else
          x[i] = CppAD::numeric_limits<Base>::quiet_NaN();
The version of this function without the x argument is deprecated; i.e., you should include the argument even if you do not use it.

4.4.7.2.6.e: ok
The return value ok has prototype
     bool ok
If it is true, the corresponding evaluation succeeded; otherwise it failed.

4.4.7.2.6.f: Examples
The file 4.4.7.2.6.1: atomic_for_sparse_jac.cpp contains an example and test that uses this routine. It returns true if the test passes and false if it fails.
Input File: cppad/core/atomic_base.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.4.7.2.6.1: Atomic Forward Jacobian Sparsity: Example and Test

4.4.7.2.6.1.a: Purpose
This example demonstrates calculation of the forward Jacobian sparsity pattern for an atomic operation.

4.4.7.2.6.1.b: function
For this example, the atomic function @(@ f : \B{R}^3 \rightarrow \B{R}^2 @)@ is defined by @[@ f(x) = \left( \begin{array}{c} x_2 * x_2 \\ x_0 * x_1 \end{array} \right) @]@ The corresponding Jacobian is @[@ f^{(1)} (x) = \left( \begin{array}{ccc} 0 & 0 & 2 x_2 \\ x_1 & x_0 & 0 \end{array} \right) @]@

4.4.7.2.6.1.c: Start Class Definition
# include <cppad/cppad.hpp>
namespace {          // isolate items below to this file
using CppAD::vector; // abbreviate as vector
//
class atomic_for_sparse_jac : public CppAD::atomic_base<double> {

4.4.7.2.6.1.d: Constructor
public:
     // constructor (could use const char* for name)
     atomic_for_sparse_jac(const std::string& name) :
     // this example only uses pack sparsity patterns
     CppAD::atomic_base<double>(name, pack_sparsity_enum)
     { }
private:

4.4.7.2.6.1.e: forward
     // forward mode routine called by CppAD
     virtual bool forward(
          size_t                    p ,
          size_t                    q ,
          const vector<bool>&      vx ,
          vector<bool>&            vy ,
          const vector<double>&    tx ,
          vector<double>&          ty
     )
     {
# ifndef NDEBUG
          size_t n = tx.size() / (q + 1);
          size_t m = ty.size() / (q + 1);
# endif
          assert( n == 3 );
          assert( m == 2 );

          // return flag
          bool ok = q == 0;
          if( ! ok )
               return ok;

          // check for defining variable information
          // This case must always be implemented
          if( vx.size() > 0 )
          {     vy[0] = vx[2];
               vy[1] = vx[0] || vx[1];
          }

          // Order zero forward mode.
          // This case must always be implemented
          // f(x) = [ x_2 * x_2 ]
          //        [ x_0 * x_1 ]
          assert( p <= 0 );
          if( p <= 0 )
          {     ty[0] = tx[2] * tx[2];
               ty[1] = tx[0] * tx[1];
          }
          return ok;
     }

4.4.7.2.6.1.f: for_sparse_jac
     // forward Jacobian sparsity routine called by CppAD
     virtual bool for_sparse_jac(
          size_t                     q ,
          const CppAD::vectorBool&   r ,
          CppAD::vectorBool&         s ,
          const vector<double>&      x )
     {     // This function needed because we are using ForSparseJac
          // with afun.option( CppAD::atomic_base<double>::pack_sparsity_enum )
# ifndef NDEBUG
          size_t n = r.size() / q;
          size_t m = s.size() / q;
# endif
          assert( x.size() == n );
          assert( n == 3 );
          assert( m == 2 );

          // f'(x) = [   0,   0, 2 x_2 ]
          //         [ x_1, x_0,     0 ]

          // sparsity for first row of S(x) = f'(x) * R
          size_t i = 0;
          for(size_t j = 0; j < q; j++)
               s[ i * q + j ] = r[ 2 * q + j ];

          // sparsity for second row of S(x) = f'(x) * R
          i = 1;
          for(size_t j = 0; j < q; j++)
               s[ i * q + j ] = r[ 0 * q + j ] | r[ 1 * q + j];

          return true;
     }
}; // End of atomic_for_sparse_jac class
4.4.7.2.6.1.g: Use Atomic Function
bool use_atomic_for_sparse_jac(bool x_1_variable)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();
     //
     // Create the atomic for_sparse_jac object
     atomic_for_sparse_jac afun("atomic_for_sparse_jac");
     //
     // Create the function f(u)
     //
     // domain space vector
     size_t n  = 3;
     double x_0 = 1.00;
     double x_1 = 2.00;
     double x_2 = 3.00;
     vector< AD<double> > au(n);
     au[0] = x_0;
     au[1] = x_1;
     au[2] = x_2;

     // declare independent variables and start tape recording
     CppAD::Independent(au);

     // range space vector
     size_t m = 2;
     vector< AD<double> > ay(m);

     // call user function
     vector< AD<double> > ax(n);
     ax[0] = au[0];
     ax[2] = au[2];
     if( x_1_variable )
          ax[1] = au[1];
     else
          ax[1] = x_1;
     afun(ax, ay);          // y = [ x_2 * x_2 ,  x_0 * x_1 ]^T

     // create f: u -> y and stop tape recording
     CppAD::ADFun<double> f;
     f.Dependent (au, ay);  // f(u) = y
     //
     // check function value
     double check = x_2 * x_2;
     ok &= NearEqual( Value(ay[0]) , check,  eps, eps);
     check = x_0 * x_1;
     ok &= NearEqual( Value(ay[1]) , check,  eps, eps);

     // check zero order forward mode
     size_t q;
     vector<double> xq(n), yq(m);
     q     = 0;
     xq[0] = x_0;
     xq[1] = x_1;
     xq[2] = x_2;
     yq    = f.Forward(q, xq);
     check = x_2 * x_2;
     ok &= NearEqual(yq[0] , check,  eps, eps);
     check = x_0 * x_1;
     ok &= NearEqual(yq[1] , check,  eps, eps);

     // forward sparse Jacobian
     CppAD::vectorBool r(n * n), s(m * n);
     // r = identity matrix
     for(size_t i = 0; i < n; i++)
          for(size_t j = 0; j < n; j++)
               r[ i * n + j] = i == j;
     s = f.ForSparseJac(n, r);

     // check result
     CppAD::vectorBool check_s(m * n);
     check_s[ 0 * n + 0 ] = false;
     check_s[ 0 * n + 1 ] = false;
     check_s[ 0 * n + 2 ] = true;
     check_s[ 1 * n + 0 ] = true;
     check_s[ 1 * n + 1 ] = x_1_variable;
     check_s[ 1 * n + 2 ] = false;
     //
     for(size_t i = 0; i < m * n; i++)
          ok &= s[ i ] == check_s[ i ];
     //
     return ok;
}
}  // End empty namespace

4.4.7.2.6.1.h: Test with x_1 Both a Variable and a Parameter
bool for_sparse_jac(void)
{     bool ok = true;
     // test with x_1 a variable
     ok     &= use_atomic_for_sparse_jac(true);
     // test with x_1 a parameter
     ok     &= use_atomic_for_sparse_jac(false);
     return ok;
}

Input File: example/atomic/for_sparse_jac.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.4.7.2.7: Atomic Reverse Jacobian Sparsity Patterns

4.4.7.2.7.a: Syntax
ok = afun.rev_sparse_jac(q, rt, st, x)

4.4.7.2.7.b: Deprecated 2016-06-27
ok = afun.rev_sparse_jac(q, rt, st)

4.4.7.2.7.c: Purpose
This function is used by 5.5.4: RevSparseJac to compute Jacobian sparsity patterns. If you are using 5.5.4: RevSparseJac , one of the versions of this virtual function must be defined by the 4.4.7.2.1.b: atomic_user class.

For a fixed matrix @(@ R \in B^{q \times m} @)@, the Jacobian of @(@ R * f( x ) @)@ with respect to @(@ x \in B^n @)@ is @[@ S(x) = R * f^{(1)} (x) @]@ Given a 12.4.j: sparsity pattern for @(@ R @)@, rev_sparse_jac computes a sparsity pattern for @(@ S(x) @)@.
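For intuition (using the function from the example below), the transposed Jacobian is @[@ f^{(1)} (x)^\R{T} = \left( \begin{array}{cc} 0 & x_1 \\ 0 & x_0 \\ 2 x_2 & 0 \end{array} \right) @]@ If the pattern for @(@ R^\R{T} @)@ is the @(@ m \times m @)@ identity, the pattern for @(@ S(x)^\R{T} = f^{(1)} (x)^\R{T} * R^\R{T} @)@ has the same structure as @(@ f^{(1)} (x)^\R{T} @)@; this transposed pattern is what rev_sparse_jac returns in st.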

4.4.7.2.7.d: Implementation
If you are using 5.5.4: RevSparseJac or 5.5.8: ForSparseHes , this virtual function must be defined by the 4.4.7.2.1.b: atomic_user class.

4.4.7.2.7.d.a: q
The argument q has prototype
     size_t q
It specifies the number of rows in @(@ R \in B^{q \times m} @)@ and the Jacobian @(@ S(x) \in B^{q \times n} @)@.

4.4.7.2.7.d.b: rt
This argument has prototype
     const atomic_sparsity& rt
and is a 4.4.7.2.2.b: atomic_sparsity pattern for @(@ R^\R{T} \in B^{m \times q} @)@.

4.4.7.2.7.d.c: st
This argument has prototype
     atomic_sparsity& st
The input values of its elements are not specified (must not matter). Upon return, st is a 4.4.7.2.2.b: atomic_sparsity pattern for @(@ S(x)^\R{T} \in B^{n \times q} @)@.

4.4.7.2.7.d.d: x
The argument x has prototype
     const CppAD::vector<Base>& x
and its size is equal to n . This is the 4.3.1: Value corresponding to the parameters in the vector 4.4.7.2.3.e: ax (when the atomic function was called). To be specific,
     if( Parameter( ax[i] ) == true )
          x[i] = Value( ax[i] );
     else
          x[i] = CppAD::numeric_limits<Base>::quiet_NaN();
The version of this function without the x argument is deprecated; i.e., you should include the argument even if you do not use it.

4.4.7.2.7.e: ok
The return value ok has prototype
     bool ok
If it is true, the corresponding evaluation succeeded; otherwise it failed.

4.4.7.2.7.f: Examples
The file 4.4.7.2.7.1: atomic_rev_sparse_jac.cpp contains an example and test that uses this routine. It returns true if the test passes and false if it fails.
Input File: cppad/core/atomic_base.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.4.7.2.7.1: Atomic Reverse Jacobian Sparsity: Example and Test

4.4.7.2.7.1.a: Purpose
This example demonstrates calculation of the reverse Jacobian sparsity pattern for an atomic operation.

4.4.7.2.7.1.b: function
For this example, the atomic function @(@ f : \B{R}^3 \rightarrow \B{R}^2 @)@ is defined by @[@ f( x ) = \left( \begin{array}{c} x_2 * x_2 \\ x_0 * x_1 \end{array} \right) @]@ The corresponding Jacobian is @[@ f^{(1)} (x) = \left( \begin{array}{ccc} 0 & 0 & 2 x_2 \\ x_1 & x_0 & 0 \end{array} \right) @]@

4.4.7.2.7.1.c: Start Class Definition
# include <cppad/cppad.hpp>
namespace {          // isolate items below to this file
using CppAD::vector; // abbreviate as vector
//
class atomic_rev_sparse_jac : public CppAD::atomic_base<double> {

4.4.7.2.7.1.d: Constructor
public:
     // constructor (could use const char* for name)
     atomic_rev_sparse_jac(const std::string& name) :
     // this example only uses pack sparsity patterns
     CppAD::atomic_base<double>(name, pack_sparsity_enum)
     { }
private:

4.4.7.2.7.1.e: forward
     // forward mode routine called by CppAD
     virtual bool forward(
          size_t                    p ,
          size_t                    q ,
          const vector<bool>&      vx ,
          vector<bool>&            vy ,
          const vector<double>&    tx ,
          vector<double>&          ty
     )
     {
# ifndef NDEBUG
          size_t n = tx.size() / (q + 1);
          size_t m = ty.size() / (q + 1);
# endif
          assert( n == 3 );
          assert( m == 2 );

          // return flag
          bool ok = q == 0;
          if( ! ok )
               return ok;

          // check for defining variable information
          // This case must always be implemented
          if( vx.size() > 0 )
          {     vy[0] = vx[2];
               vy[1] = vx[0] || vx[1];
          }

          // Order zero forward mode.
          // This case must always be implemented
          // f(x) = [ x_2 * x_2 ]
          //        [ x_0 * x_1 ]
          assert( p <= 0 );
          if( p <= 0 )
          {     ty[0] = tx[2] * tx[2];
               ty[1] = tx[0] * tx[1];
          }
          return ok;
     }

4.4.7.2.7.1.f: rev_sparse_jac
     // reverse Jacobian sparsity routine called by CppAD
     virtual bool rev_sparse_jac(
          size_t                     q  ,
          const CppAD::vectorBool&   rt ,
          CppAD::vectorBool&         st ,
          const vector<double>&      x  )
     {     // This function is needed because we are using RevSparseJac
          // with afun.option( CppAD::atomic_base<double>::pack_sparsity_enum )
# ifndef NDEBUG
          size_t m = rt.size() / q;
          size_t n = st.size() / q;
# endif
          assert( n == x.size() );
          assert( n == 3 );
          assert( m == 2 );

          //           [     0,  x_1 ]
          // f'(x)^T = [     0,  x_0 ]
          //           [ 2 x_2,    0 ]

          // sparsity for first row of S(x)^T = f'(x)^T * R^T
          size_t i = 0;
          for(size_t j = 0; j < q; j++)
               st[ i * q + j ] = rt[ 1 * q + j ];

          // sparsity for second row of S(x)^T = f'(x)^T * R^T
          i = 1;
          for(size_t j = 0; j < q; j++)
               st[ i * q + j ] = rt[ 1 * q + j ];

          // sparsity for third row of S(x)^T = f'(x)^T * R^T
          i = 2;
          for(size_t j = 0; j < q; j++)
               st[ i * q + j ] = rt[ 0 * q + j ];

          return true;
     }
}; // End of atomic_rev_sparse_jac class
4.4.7.2.7.1.g: Use Atomic Function
bool use_atomic_rev_sparse_jac(bool x_1_variable)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();
     //
     // Create the atomic rev_sparse_jac object
     atomic_rev_sparse_jac afun("atomic_rev_sparse_jac");
     //
     // Create the function f(u)
     //
     // domain space vector
     size_t n  = 3;
     double x_0 = 1.00;
     double x_1 = 2.00;
     double x_2 = 3.00;
     vector< AD<double> > au(n);
     au[0] = x_0;
     au[1] = x_1;
     au[2] = x_2;

     // declare independent variables and start tape recording
     CppAD::Independent(au);

     // range space vector
     size_t m = 2;
     vector< AD<double> > ay(m);

     // call user function
     vector< AD<double> > ax(n);
     ax[0] = au[0];
     ax[2] = au[2];
     if( x_1_variable )
          ax[1] = au[1];
     else
          ax[1] = x_1;
     afun(ax, ay);          // y = [ x_2 * x_2 ,  x_0 * x_1 ]^T

     // create f: u -> y and stop tape recording
     CppAD::ADFun<double> f;
     f.Dependent (au, ay);  // f(u) = y
     //
     // check function value
     double check = x_2 * x_2;
     ok &= NearEqual( Value(ay[0]) , check,  eps, eps);
     check = x_0 * x_1;
     ok &= NearEqual( Value(ay[1]) , check,  eps, eps);

     // check zero order forward mode
     size_t q;
     vector<double> xq(n), yq(m);
     q     = 0;
     xq[0] = x_0;
     xq[1] = x_1;
     xq[2] = x_2;
     yq    = f.Forward(q, xq);
     check = x_2 * x_2;
     ok &= NearEqual(yq[0] , check,  eps, eps);
     check = x_0 * x_1;
     ok &= NearEqual(yq[1] , check,  eps, eps);

     // reverse sparse Jacobian
     CppAD::vectorBool r(m * m), s(m * n);
     // r = identity matrix
     for(size_t i = 0; i < m; i++)
          for(size_t j = 0; j < m; j++)
               r[ i * m + j] = i == j;
     s = f.RevSparseJac(m, r);

     // check result
     CppAD::vectorBool check_s(m * n);
     check_s[ 0 * n + 0 ] = false;
     check_s[ 0 * n + 1 ] = false;
     check_s[ 0 * n + 2 ] = true;
     check_s[ 1 * n + 0 ] = true;
     check_s[ 1 * n + 1 ] = x_1_variable;
     check_s[ 1 * n + 2 ] = false;
     //
     for(size_t i = 0; i < m * n; i++)
          ok &= s[ i ] == check_s[ i ];
     //
     return ok;
}
}  // End empty namespace

4.4.7.2.7.1.h: Test with x_1 Both a Variable and a Parameter
bool rev_sparse_jac(void)
{     bool ok = true;
     // test with x_1 a variable
     ok     &= use_atomic_rev_sparse_jac(true);
     // test with x_1 a parameter
     ok     &= use_atomic_rev_sparse_jac(false);
     return ok;
}

Input File: example/atomic/rev_sparse_jac.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.4.7.2.8: Atomic Forward Hessian Sparsity Patterns

4.4.7.2.8.a: Syntax
ok = afun.for_sparse_hes(vx, r, s, h, x)

4.4.7.2.8.b: Deprecated 2016-06-27
ok = afun.for_sparse_hes(vx, r, s, h)

4.4.7.2.8.c: Purpose
This function is used by 5.5.8: ForSparseHes to compute Hessian sparsity patterns. If you are using 5.5.8: ForSparseHes , one of the versions of this virtual function must be defined by the 4.4.7.2.1.b: atomic_user class.

Given a 12.4.j: sparsity pattern for a diagonal matrix @(@ R \in B^{n \times n} @)@, and a row vector @(@ S \in B^{1 \times m} @)@, this routine computes the sparsity pattern for @[@ H(x) = R^\R{T} \cdot (S \cdot f)^{(2)}( x ) \cdot R @]@
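For intuition (using the function from the example below), take @(@ R @)@ to be the identity and @(@ S = ( 1 , 1 ) @)@. Then @(@ (S \cdot f)(x) = x_2 * x_2 + x_0 * x_1 @)@, its Hessian is @[@ (S \cdot f)^{(2)} (x) = \left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 2 \end{array} \right) @]@ and the pattern for @(@ H(x) @)@ has possibly non-zero entries at @(@ (0,1) @)@, @(@ (1,0) @)@, and @(@ (2,2) @)@ only.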

4.4.7.2.8.d: Implementation
If you are using 5.5.8: ForSparseHes , this virtual function must be defined by the 4.4.7.2.1.b: atomic_user class.

4.4.7.2.8.d.a: vx
The argument vx has prototype
     const CppAD::vector<bool>& vx
vx.size() == n , and for @(@ j = 0 , \ldots , n-1 @)@, vx[j] is true if and only if ax[j] is a 12.4.m: variable in the corresponding call to
     afun(ax, ay)

4.4.7.2.8.d.b: r
This argument has prototype
     const CppAD::vector<bool>& r
and is a 4.4.7.2.2.b: atomic_sparsity pattern for the diagonal of @(@ R \in B^{n \times n} @)@.

4.4.7.2.8.d.c: s
The argument s has prototype
     const CppAD::vector<bool>& s
and its size is m . It is a sparsity pattern for @(@ S \in B^{1 \times m} @)@.

4.4.7.2.8.d.d: h
This argument has prototype
     atomic_sparsity& h
The input values of its elements are not specified (must not matter). Upon return, h is a 4.4.7.2.2.b: atomic_sparsity pattern for @(@ H(x) \in B^{n \times n} @)@ which is defined above.

4.4.7.2.8.d.e: x
The argument x has prototype
     const CppAD::vector<Base>& x
and its size is equal to n . This is the 4.3.1: Value corresponding to the parameters in the vector 4.4.7.2.3.e: ax (when the atomic function was called). To be specific,
     if( Parameter( ax[i] ) == true )
          x[i] = Value( ax[i] );
     else
          x[i] = CppAD::numeric_limits<Base>::quiet_NaN();
The version of this function without the x argument is deprecated; i.e., you should include the argument even if you do not use it.

4.4.7.2.8.e: Examples
The file 4.4.7.2.8.1: atomic_for_sparse_hes.cpp contains an example and test that uses this routine. It returns true if the test passes and false if it fails.
Input File: cppad/core/atomic_base.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.4.7.2.8.1: Atomic Forward Hessian Sparsity: Example and Test

4.4.7.2.8.1.a: Purpose
This example demonstrates calculation of the forward Hessian sparsity pattern for an atomic operation.

4.4.7.2.8.1.b: function
For this example, the atomic function @(@ f : \B{R}^3 \rightarrow \B{R}^2 @)@ is defined by @[@ f( x ) = \left( \begin{array}{c} x_2 * x_2 \\ x_0 * x_1 \end{array} \right) @]@ The Hessians of the component functions are @[@ f_0^{(2)} ( x ) = \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 2 \end{array} \right) \W{,} f_1^{(2)} ( x ) = \left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right) @]@

4.4.7.2.8.1.c: Start Class Definition
# include <cppad/cppad.hpp>
namespace {          // isolate items below to this file
using CppAD::vector; // abbreviate as vector
//
class atomic_for_sparse_hes : public CppAD::atomic_base<double> {

4.4.7.2.8.1.d: Constructor
public:
     // constructor (could use const char* for name)
     atomic_for_sparse_hes(const std::string& name) :
     // this example only uses pack sparsity patterns
     CppAD::atomic_base<double>(name, pack_sparsity_enum)
     { }
private:

4.4.7.2.8.1.e: forward
     // forward mode routine called by CppAD
     virtual bool forward(
          size_t                    p ,
          size_t                    q ,
          const vector<bool>&      vx ,
          vector<bool>&            vy ,
          const vector<double>&    tx ,
          vector<double>&          ty
     )
     {
# ifndef NDEBUG
          size_t n = tx.size() / (q + 1);
          size_t m = ty.size() / (q + 1);
# endif
          assert( n == 3 );
          assert( m == 2 );

          // return flag
          bool ok = q == 0;
          if( ! ok )
               return ok;

          // check for defining variable information
          // This case must always be implemented
          if( vx.size() > 0 )
          {     vy[0] = vx[2];
               vy[1] = vx[0] || vx[1];
          }

          // Order zero forward mode.
          // This case must always be implemented
          // f(x) = [ x_2 * x_2 ]
          //        [ x_0 * x_1 ]
          assert( p <= 0 );
          if( p <= 0 )
          {     ty[0] = tx[2] * tx[2];
               ty[1] = tx[0] * tx[1];
          }
          return ok;
     }

4.4.7.2.8.1.f: for_sparse_jac
     // forward Jacobian sparsity routine called by CppAD
     virtual bool for_sparse_jac(
          size_t                     q ,
          const CppAD::vectorBool&   r ,
          CppAD::vectorBool&         s ,
          const vector<double>&      x )
     {     // This function needed because we are using ForSparseHes
          // with afun.option( CppAD::atomic_base<double>::pack_sparsity_enum )
# ifndef NDEBUG
          size_t n = r.size() / q;
          size_t m = s.size() / q;
# endif
          assert( x.size() == n );
          assert( n == 3 );
          assert( m == 2 );


          // f'(x) = [   0,   0, 2 x_2 ]
          //         [ x_1, x_0,     0 ]

          // sparsity for first row of S(x) = f'(x) * R
          size_t i = 0;
          for(size_t j = 0; j < q; j++)
               s[ i * q + j ] = r[ 2 * q + j ];

          // sparsity for second row of S(x) = f'(x) * R
          i = 1;
          for(size_t j = 0; j < q; j++)
               s[ i * q + j ] = r[ 0 * q + j ] | r[ 1 * q + j];

          return true;
     }

4.4.7.2.8.1.g: rev_sparse_jac
     // reverse Jacobian sparsity routine called by CppAD
     virtual bool rev_sparse_jac(
          size_t                     q  ,
          const CppAD::vectorBool&   rt ,
          CppAD::vectorBool&         st ,
          const vector<double>&      x  )
     {     // This function needed because we are using ForSparseHes
          // with afun.option( CppAD::atomic_base<double>::pack_sparsity_enum )
# ifndef NDEBUG
          size_t m = rt.size() / q;
          size_t n = st.size() / q;
# endif
          assert( x.size() == n );
          assert( n == 3 );
          assert( m == 2 );

          //           [     0,  x_1 ]
          // f'(x)^T = [     0,  x_0 ]
          //           [ 2 x_2,    0 ]

          // sparsity for first row of S(x)^T = f'(x)^T * R^T
          size_t i = 0;
          for(size_t j = 0; j < q; j++)
               st[ i * q + j ] = rt[ 1 * q + j ];

          // sparsity for second row of S(x)^T = f'(x)^T * R^T
          i = 1;
          for(size_t j = 0; j < q; j++)
               st[ i * q + j ] = rt[ 1 * q + j ];

          // sparsity for third row of S(x)^T = f'(x)^T * R^T
          i = 2;
          for(size_t j = 0; j < q; j++)
               st[ i * q + j ] = rt[ 0 * q + j ];

          return true;
     }

4.4.7.2.8.1.h: for_sparse_hes
     // forward Hessian sparsity routine called by CppAD
     virtual bool for_sparse_hes(
          const vector<bool>&   vx,
          const vector<bool>&   r ,
          const vector<bool>&   s ,
          CppAD::vectorBool&    h ,
          const vector<double>& x )
     {     // This function is needed because we are using ForSparseHes
          // with afun.option( CppAD::atomic_base<double>::pack_sparsity_enum )
          size_t n = r.size();
# ifndef NDEBUG
          size_t m = s.size();
# endif
          assert( x.size() == n );
          assert( n == 3 );
          assert( m == 2 );
          assert( h.size() == n * n );

          //            [ 0 , 0 , 0 ]                  [ 0 , 1 , 0 ]
          // f_0''(x) = [ 0 , 0 , 0 ]  f_1^{(2)} (x) = [ 1 , 0 , 0 ]
          //            [ 0 , 0 , 2 ]                  [ 0 , 0 , 0 ]

          // initialize entire matrix as false
          for(size_t i = 0; i < n * n; i++)
               h[i] = false;

          // component (2, 2)
          h[ 2 * n + 2 ] = s[0] & r[2];

          // components (1, 0) and (0, 1)
          h[ 1 * n + 0 ] = s[1] & r[0] & r[1];
          h[ 0 * n + 1 ] = s[1] & r[0] & r[1];

          return true;
     }
}; // End of atomic_for_sparse_hes class
4.4.7.2.8.1.i: Use Atomic Function
bool use_atomic_for_sparse_hes(bool x_1_variable)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();
     //
     // Create the atomic for_sparse_hes object
     atomic_for_sparse_hes afun("atomic_for_sparse_hes");
     //
     // Create the function f(u)
     //
     // domain space vector
     size_t n  = 3;
     double x_0 = 1.00;
     double x_1 = 2.00;
     double x_2 = 3.00;
     vector< AD<double> > au(n);
     au[0] = x_0;
     au[1] = x_1;
     au[2] = x_2;

     // declare independent variables and start tape recording
     CppAD::Independent(au);

     // range space vector
     size_t m = 2;
     vector< AD<double> > ay(m);

     // call user function
     vector< AD<double> > ax(n);
     ax[0] = au[0];
     ax[2] = au[2];
     if( x_1_variable )
          ax[1] = au[1];
     else
          ax[1] = x_1;
     afun(ax, ay);          // y = [ x_2 * x_2 ,  x_0 * x_1 ]^T

     // create f: u -> y and stop tape recording
     CppAD::ADFun<double> f;
     f.Dependent (au, ay);  // f(u) = y
     //
     // check function value
     double check = x_2 * x_2;
     ok &= NearEqual( Value(ay[0]) , check,  eps, eps);
     check = x_0 * x_1;
     ok &= NearEqual( Value(ay[1]) , check,  eps, eps);

     // check zero order forward mode
     size_t q;
     vector<double> xq(n), yq(m);
     q     = 0;
     xq[0] = x_0;
     xq[1] = x_1;
     xq[2] = x_2;
     yq    = f.Forward(q, xq);
     check = x_2 * x_2;
     ok &= NearEqual(yq[0] , check,  eps, eps);
     check = x_0 * x_1;
     ok &= NearEqual(yq[1] , check,  eps, eps);

     // forward sparse Hessian
     CppAD::vectorBool r(n), s(m), h(n * n);
     for(size_t j = 0; j < n; j++)
          r[j] = true;
     for(size_t i = 0; i < m; i++)
          s[i] = true;
     h = f.ForSparseHes(r, s);

     // check result
     CppAD::vectorBool check_h(n * n);
     for(size_t i = 0; i < n * n; i++)
          check_h[i] = false;
     check_h[ 2 * n + 2 ] = true;
     if( x_1_variable )
     {     check_h[0 * n + 1] = true;
          check_h[1 * n + 0] = true;
     }
     for(size_t i = 0; i < n * n; i++)
          ok &= h[ i ] == check_h[ i ];
     //
     return ok;
}
}  // End empty namespace

4.4.7.2.8.1.j: Test with x_1 Both a Variable and a Parameter
bool for_sparse_hes(void)
{     bool ok = true;
     // test with x_1 a variable
     ok     &= use_atomic_for_sparse_hes(true);
     // test with x_1 a parameter
     ok     &= use_atomic_for_sparse_hes(false);
     return ok;
}

Input File: example/atomic/for_sparse_hes.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.4.7.2.9: Atomic Reverse Hessian Sparsity Patterns

4.4.7.2.9.a: Syntax
ok = afun.rev_sparse_hes(vx, s, t, q, r, u, v, x)

4.4.7.2.9.b: Deprecated 2016-06-27
ok = afun.rev_sparse_hes(vx, s, t, q, r, u, v)

4.4.7.2.9.c: Purpose
This function is used by 5.5.6: RevSparseHes to compute Hessian sparsity patterns. If you are using 5.5.6: RevSparseHes , one of the versions of this virtual function must be defined by the 4.4.7.2.1.b: atomic_user class.

There is an unspecified scalar valued function @(@ g : B^m \rightarrow B @)@. Given a 12.4.j: sparsity pattern for @(@ R \in B^{n \times q} @)@, and information about the function @(@ z = g(y) @)@, this routine computes the sparsity pattern for @[@ V(x) = (g \circ f)^{(2)}( x ) R @]@

4.4.7.2.9.d: Implementation
If you are using 5.5.6: RevSparseHes , this virtual function must be defined by the 4.4.7.2.1.b: atomic_user class.

4.4.7.2.9.d.a: vx
The argument vx has prototype
     const CppAD::vector<bool>& vx
vx.size() == n , and for @(@ j = 0 , \ldots , n-1 @)@, vx[j] is true if and only if ax[j] is a 12.4.m: variable in the corresponding call to
     afun(ax, ay)

4.4.7.2.9.d.b: s
The argument s has prototype
     const CppAD::vector<bool>& s
and its size is m . It is a sparsity pattern for @(@ S(x) = g^{(1)} [ f(x) ] \in B^{1 \times m} @)@.

4.4.7.2.9.d.c: t
This argument has prototype
     CppAD::vector<bool>& t
and its size is n . The input values of its elements are not specified (must not matter). Upon return, t is a sparsity pattern for @(@ T(x) \in B^{1 \times n} @)@ where @[@ T(x) = (g \circ f)^{(1)} (x) = S(x) * f^{(1)} (x) @]@

4.4.7.2.9.d.d: q
The argument q has prototype
     size_t q
It specifies the number of columns in @(@ R \in B^{n \times q} @)@, @(@ U(x) \in B^{m \times q} @)@, and @(@ V(x) \in B^{n \times q} @)@.

4.4.7.2.9.d.e: r
This argument has prototype
     const atomic_sparsity& r
and is a 4.4.7.2.2.b: atomic_sparsity pattern for @(@ R \in B^{n \times q} @)@.

4.4.7.2.9.e: u
This argument has prototype
     const atomic_sparsity& u
and is a 4.4.7.2.2.b: atomic_sparsity pattern for @(@ U(x) \in B^{m \times q} @)@ which is defined by @[@ \begin{array}{rcl} U(x) & = & \{ \partial_u \{ \partial_y g[ y + f^{(1)} (x) R u ] \}_{y=f(x)} \}_{u=0} \\ & = & \partial_u \{ g^{(1)} [ f(x) + f^{(1)} (x) R u ] \}_{u=0} \\ & = & g^{(2)} [ f(x) ] f^{(1)} (x) R \end{array} @]@

4.4.7.2.9.e.a: v
This argument has prototype
     atomic_sparsity& v
The input values of its elements are not specified (must not matter). Upon return, v is a 4.4.7.2.2.b: atomic_sparsity pattern for @(@ V(x) \in B^{n \times q} @)@ which is defined by @[@ \begin{array}{rcl} V(x) & = & \partial_u [ \partial_x (g \circ f) ( x + R u ) ]_{u=0} \\ & = & \partial_u [ (g \circ f)^{(1)}( x + R u ) ]_{u=0} \\ & = & (g \circ f)^{(2)}( x ) R \\ & = & f^{(1)} (x)^\R{T} g^{(2)} [ f(x) ] f^{(1)} (x) R + \sum_{i=1}^m g_i^{(1)} [ f(x) ] \; f_i^{(2)} (x) R \\ & = & f^{(1)} (x)^\R{T} U(x) + \sum_{i=1}^m S_i (x) \; f_i^{(2)} (x) R \end{array} @]@
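For intuition (using the function from the example below with @(@ R @)@ the identity), the term @(@ \sum_i S_i (x) \; f_i^{(2)} (x) R @)@ contributes entry @(@ (2,2) @)@ when @(@ S_0 (x) @)@ is possibly non-zero, and entries @(@ (0,1) @)@ and @(@ (1,0) @)@ when @(@ S_1 (x) @)@ is possibly non-zero; the term @(@ f^{(1)} (x)^\R{T} U(x) @)@ propagates the pattern of @(@ U(x) @)@ through the transposed Jacobian, exactly as in rev_sparse_jac.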

4.4.7.2.9.e.b: x
The argument x has prototype
     const CppAD::vector<Base>& x
and its size is equal to n . This is the 4.3.1: Value corresponding to the parameters in the vector 4.4.7.2.3.e: ax (when the atomic function was called). To be specific,
     if( Parameter( ax[i] ) == true )
          x[i] = Value( ax[i] );
     else
          x[i] = CppAD::numeric_limits<Base>::quiet_NaN();
The version of this function without the x argument is deprecated; i.e., you should include the argument even if you do not use it.

4.4.7.2.9.f: Examples
The file 4.4.7.2.9.1: atomic_rev_sparse_hes.cpp contains an example and test that uses this routine. It returns true if the test passes and false if it fails.
Input File: cppad/core/atomic_base.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.4.7.2.9.1: Atomic Reverse Hessian Sparsity: Example and Test

4.4.7.2.9.1.a: Purpose
This example demonstrates calculation of the reverse Hessian sparsity pattern for an atomic operation.

4.4.7.2.9.1.b: function
For this example, the atomic function @(@ f : \B{R}^3 \rightarrow \B{R}^2 @)@ is defined by @[@ f( x ) = \left( \begin{array}{c} x_2 * x_2 \\ x_0 * x_1 \end{array} \right) @]@ The Hessians of the component functions are @[@ f_0^{(2)} ( x ) = \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 2 \end{array} \right) \W{,} f_1^{(2)} ( x ) = \left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right) @]@

4.4.7.2.9.1.c: Start Class Definition
# include <cppad/cppad.hpp>
namespace {          // isolate items below to this file
using CppAD::vector; // abbreviate as vector
//
class atomic_rev_sparse_hes : public CppAD::atomic_base<double> {

4.4.7.2.9.1.d: Constructor
public:
     // constructor (could use const char* for name)
     atomic_rev_sparse_hes(const std::string& name) :
     // this example only uses pack sparsity patterns
     CppAD::atomic_base<double>(name, pack_sparsity_enum)
     { }
private:

4.4.7.2.9.1.e: forward
     // forward mode routine called by CppAD
     virtual bool forward(
          size_t                    p ,
          size_t                    q ,
          const vector<bool>&      vx ,
          vector<bool>&            vy ,
          const vector<double>&    tx ,
          vector<double>&          ty
     )
     {
# ifndef NDEBUG
          size_t n = tx.size() / (q + 1);
          size_t m = ty.size() / (q + 1);
# endif
          assert( n == 3 );
          assert( m == 2 );

          // return flag
          bool ok = q == 0;
          if( ! ok )
               return ok;

          // check for defining variable information
          // This case must always be implemented
          if( vx.size() > 0 )
          {     vy[0] = vx[2];
               vy[1] = vx[0] || vx[1];
          }

          // Order zero forward mode.
          // This case must always be implemented
          // f(x) = [ x_2 * x_2 ]
          //        [ x_0 * x_1 ]
          assert( p <= 0 );
          if( p <= 0 )
          {     ty[0] = tx[2] * tx[2];
               ty[1] = tx[0] * tx[1];
          }
          return ok;
     }

4.4.7.2.9.1.f: for_sparse_jac
     // forward Jacobian sparsity routine called by CppAD
     virtual bool for_sparse_jac(
          size_t                     q ,
          const CppAD::vectorBool&   r ,
          CppAD::vectorBool&         s ,
          const vector<double>&      x )
     {     // This function is needed because we call ForSparseJac before RevSparseHes
          // with afun.option( CppAD::atomic_base<double>::pack_sparsity_enum )
# ifndef NDEBUG
          size_t n = r.size() / q;
          size_t m = s.size() / q;
# endif
          assert( n == x.size() );
          assert( n == 3 );
          assert( m == 2 );


          // f'(x) = [   0,   0, 2 x_2 ]
          //         [ x_1, x_0,     0 ]

          // sparsity for first row of S(x) = f'(x) * R
          size_t i = 0;
          for(size_t j = 0; j < q; j++)
               s[ i * q + j ] = r[ 2 * q + j ];

          // sparsity for second row of S(x) = f'(x) * R
          i = 1;
          for(size_t j = 0; j < q; j++)
               s[ i * q + j ] = r[ 0 * q + j ] | r[ 1 * q + j];

          return true;
     }

4.4.7.2.9.1.g: rev_sparse_jac
     // reverse Jacobian sparsity routine called by CppAD
     virtual bool rev_sparse_jac(
          size_t                     q  ,
          const CppAD::vectorBool&   rt ,
          CppAD::vectorBool&         st ,
          const vector<double>&      x  )
     {     // This function is needed because we are using RevSparseHes
          // with afun.option( CppAD::atomic_base<double>::pack_sparsity_enum )
# ifndef NDEBUG
          size_t m = rt.size() / q;
          size_t n = st.size() / q;
# endif
          assert( n == x.size() );
          assert( n == 3 );
          assert( m == 2 );

          //           [     0,  x_1 ]
          // f'(x)^T = [     0,  x_0 ]
          //           [ 2 x_2,    0 ]

          // sparsity for first row of S(x)^T = f'(x)^T * R^T
          size_t i = 0;
          for(size_t j = 0; j < q; j++)
               st[ i * q + j ] = rt[ 1 * q + j ];

          // sparsity for second row of S(x)^T = f'(x)^T * R^T
          i = 1;
          for(size_t j = 0; j < q; j++)
               st[ i * q + j ] = rt[ 1 * q + j ];

          // sparsity for third row of S(x)^T = f'(x)^T * R^T
          i = 2;
          for(size_t j = 0; j < q; j++)
               st[ i * q + j ] = rt[ 0 * q + j ];

          return true;
     }

4.4.7.2.9.1.h: rev_sparse_hes
     // reverse Hessian sparsity routine called by CppAD
     virtual bool rev_sparse_hes(
          const vector<bool>&                   vx,
          const vector<bool>&                   s ,
          vector<bool>&                         t ,
          size_t                                q ,
          const CppAD::vectorBool&              r ,
          const CppAD::vectorBool&              u ,
          CppAD::vectorBool&                    v ,
          const vector<double>&                 x )
     {     // This function needed because we are using RevSparseHes
          // with afun.option( CppAD::atomic_base<double>::pack_sparsity_enum )
# ifndef NDEBUG
          size_t m = s.size();
          size_t n = t.size();
# endif
          assert( x.size() == n );
          assert( r.size() == n * q );
          assert( u.size() == m * q );
          assert( v.size() == n * q );
          assert( n == 3 );
          assert( m == 2 );
          //
          // f'(x) = [   0,   0, 2 x_2 ]
          //         [ x_1, x_0,     0 ]
          //
          //            [ 0 , 0 , 0 ]                  [ 0 , 1 , 0 ]
          // f_0''(x) = [ 0 , 0 , 0 ]  f_1^{(2)} (x) = [ 1 , 0 , 0 ]
          //            [ 0 , 0 , 2 ]                  [ 0 , 0 , 0 ]
          // ------------------------------------------------------------------
          // sparsity pattern for row vector T(x) = S(x) * f'(x)
          t[0] = s[1];
          t[1] = s[1];
          t[2] = s[0];
          // ------------------------------------------------------------------
          // sparsity pattern for W(x) = f'(x)^T * U(x)
          for(size_t j = 0; j < q; j++)
          {     v[ 0 * q + j ] = u[ 1 * q + j ];
               v[ 1 * q + j ] = u[ 1 * q + j ];
               v[ 2 * q + j ] = u[ 0 * q + j ];
          }
          // ------------------------------------------------------------------
          // sparsity pattern for Q(x) = W(x) + S_0 (x) * f_0^{(2)} (x) * R
          if( s[0] )
          {     for(size_t j = 0; j < q; j++)
               {     // cannot use |= with vectorBool
                    v[ 2 * q + j ] = bool(v[ 2 * q + j ]) | bool(r[ 2 * q + j ]);
               }
          }
          // ------------------------------------------------------------------
          // sparsity pattern for V(x) = Q(x) + S_1 (x) * f_1^{(2)} (x) * R
          if( s[1] )
          {     for(size_t j = 0; j < q; j++)
               {     // cannot use |= with vectorBool
                    v[ 0 * q + j ] = bool(v[ 0 * q + j ]) | bool(r[ 1 * q + j ]);
                    v[ 1 * q + j ] = bool(v[ 1 * q + j ]) | bool(r[ 0 * q + j ]);
               }
          }
          return true;
     }
}; // End of atomic_rev_sparse_hes class
4.4.7.2.9.1.i: Use Atomic Function
bool use_atomic_rev_sparse_hes(bool x_1_variable)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();
     //
     // Create the atomic rev_sparse_hes object
     atomic_rev_sparse_hes afun("atomic_rev_sparse_hes");
     //
     // Create the function f(u)
     //
     // domain space vector
     size_t n  = 3;
     double x_0 = 1.00;
     double x_1 = 2.00;
     double x_2 = 3.00;
     vector< AD<double> > au(n);
     au[0] = x_0;
     au[1] = x_1;
     au[2] = x_2;

     // declare independent variables and start tape recording
     CppAD::Independent(au);

     // range space vector
     size_t m = 2;
     vector< AD<double> > ay(m);

     // call user function
     vector< AD<double> > ax(n);
     ax[0] = au[0];
     ax[2] = au[2];
     if( x_1_variable )
          ax[1] = au[1];
     else
          ax[1] = x_1;
     afun(ax, ay);          // y = [ x_2 * x_2 ,  x_0 * x_1 ]^T

     // create f: u -> y and stop tape recording
     CppAD::ADFun<double> f;
     f.Dependent (au, ay);  // f(u) = y
     //
     // check function value
     double check = x_2 * x_2;
     ok &= NearEqual( Value(ay[0]) , check,  eps, eps);
     check = x_0 * x_1;
     ok &= NearEqual( Value(ay[1]) , check,  eps, eps);

     // check zero order forward mode
     size_t q;
     vector<double> xq(n), yq(m);
     q     = 0;
     xq[0] = x_0;
     xq[1] = x_1;
     xq[2] = x_2;
     yq    = f.Forward(q, xq);
     check = x_2 * x_2;
     ok &= NearEqual(yq[0] , check,  eps, eps);
     check = x_0 * x_1;
     ok &= NearEqual(yq[1] , check,  eps, eps);

     // reverse sparse Hessian
     CppAD::vectorBool r(n * n), s(m), h(n * n);
     for(size_t i = 0; i < n; i++)
     {     for(size_t j = 0; j < n; j++)
               r[i * n + j] = i == j;
     }
     for(size_t i = 0; i < m; i++)
          s[i] = true;
     f.ForSparseJac(n, r);
     h = f.RevSparseHes(n, s);

     // check result
     CppAD::vectorBool check_h(n * n);
     for(size_t i = 0; i < n * n; i++)
          check_h[i] = false;
     check_h[ 2 * n + 2 ] = true;
     if( x_1_variable )
     {     check_h[0 * n + 1] = true;
          check_h[1 * n + 0] = true;
     }
     for(size_t i = 0; i < n * n; i++)
          ok &= h[ i ] == check_h[ i ];
     //
     return ok;
}
}  // End empty namespace

4.4.7.2.9.1.j: Test with x_1 Both a Variable and a Parameter
bool rev_sparse_hes(void)
{     bool ok = true;
     // test with x_1 a variable
     ok     &= use_atomic_rev_sparse_hes(true);
     // test with x_1 a parameter
     ok     &= use_atomic_rev_sparse_hes(false);
     return ok;
}

Input File: example/atomic/rev_sparse_hes.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.4.7.2.10: Free Static Variables

4.4.7.2.10.a: Syntax
atomic_base<Base>::clear()

4.4.7.2.10.b: Purpose
Each atomic_base object holds onto work space in order to avoid repeated memory allocation calls and thereby increase speed (until it is deleted). If an atomic_base object is global or static, it does not get deleted. This is a problem when using thread_alloc 8.23.14: free_all to check that all allocated memory has been freed. Calling this clear function will free all the memory currently being held onto by the atomic_base<Base> class.

4.4.7.2.10.c: Future Use
If there is future use of an atomic_base object, after a call to clear, the work space will be reallocated and held onto.

4.4.7.2.10.d: Restriction
This routine cannot be called while in 8.23.4: parallel execution mode.
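As a minimal sketch (the helper name check_all_memory_freed is hypothetical, not part of CppAD), a test driver might combine clear with 8.23.14: free_all as follows:

# include <cppad/cppad.hpp>

// minimal sketch: verify all thread_alloc memory is freed at program end
bool check_all_memory_freed(void)
{    // free work space held by static or global atomic_base<double> objects
     // (must not be called while in parallel execution mode)
     CppAD::atomic_base<double>::clear();
     // returns true if all memory allocated by thread_alloc has been freed
     return CppAD::thread_alloc::free_all();
}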
Input File: cppad/core/atomic_base.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.4.7.2.11: Getting Started with Atomic Operations: Example and Test

4.4.7.2.11.a: Purpose
This example demonstrates the minimal amount of information necessary for a 4.4.7.2: atomic_base operation.

4.4.7.2.11.b: Start Class Definition
# include <cppad/cppad.hpp>
namespace {          // isolate items below to this file
using CppAD::vector; // abbreviate as vector
class atomic_get_started : public CppAD::atomic_base<double> {

4.4.7.2.11.c: Constructor
public:
     // constructor (could use const char* for name)
     atomic_get_started(const std::string& name) :
     // this example does not use any sparsity patterns
     CppAD::atomic_base<double>(name)
     { }
private:

4.4.7.2.11.d: forward
     // forward mode routine called by CppAD
     virtual bool forward(
          size_t                    p ,
          size_t                    q ,
          const vector<bool>&      vx ,
                vector<bool>&      vy ,
          const vector<double>&    tx ,
                vector<double>&    ty
     )
     {
# ifndef NDEBUG
          size_t n = tx.size() / (q + 1);
          size_t m = ty.size() / (q + 1);
# endif
          assert( n == 1 );
          assert( m == 1 );

          // return flag
          bool ok = q == 0;
          if( ! ok )
               return ok;

          // check for defining variable information
          // This case must always be implemented
          if( vx.size() > 0 )
               vy[0] = vx[0];

          // Order zero forward mode.
          // This case must always be implemented
          // y^0 = f( x^0 ) = 1 / x^0
          double f = 1. / tx[0];
          if( p <= 0 )
               ty[0] = f;
          return ok;
     }

4.4.7.2.11.e: End Class Definition

}; // End of atomic_get_started class
}  // End empty namespace

4.4.7.2.11.f: Use Atomic Function
bool get_started(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

4.4.7.2.11.f.a: Constructor

     // Create the atomic get_started object
     atomic_get_started afun("atomic_get_started");

4.4.7.2.11.f.b: Recording
     // Create the function f(x)
     //
     // domain space vector
     size_t n  = 1;
     double  x0 = 0.5;
     vector< AD<double> > ax(n);
     ax[0]     = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     vector< AD<double> > ay(m);

     // call user function and store get_started(x) in au[0]
     vector< AD<double> > au(m);
     afun(ax, au);        // u = 1 / x

     // now use AD division to invert the operation
     ay[0] = 1.0 / au[0]; // y = 1 / u = x

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f;
     f.Dependent (ax, ay);  // f(x) = x

4.4.7.2.11.f.c: forward
     // check function value
     double check = x0;
     ok &= NearEqual( Value(ay[0]) , check,  eps, eps);

     // check zero order forward mode
     size_t q;
     vector<double> x_q(n), y_q(m);
     q      = 0;
     x_q[0] = x0;
     y_q    = f.Forward(q, x_q);
     ok &= NearEqual(y_q[0] , check,  eps, eps);

     return ok;
}

Input File: example/atomic/get_started.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.4.7.2.12: Atomic Euclidean Norm Squared: Example and Test

4.4.7.2.12.a: Theory
This example demonstrates using 4.4.7.2: atomic_base to define the operation @(@ f : \B{R}^n \rightarrow \B{R}^m @)@ where @(@ n = 2 @)@, @(@ m = 1 @)@, where @[@ f(x) = x_0^2 + x_1^2 @]@
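Its Jacobian, used by the forward and reverse routines below, is @[@ f^{(1)} (x) = ( 2 x_0 , 2 x_1 ) @]@ and first order reverse mode computes @(@ px = py * f^{(1)} (x) @)@.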

4.4.7.2.12.b: sparsity
This example only uses bool sparsity patterns.

4.4.7.2.12.c: Start Class Definition
# include <cppad/cppad.hpp>
namespace {           // isolate items below to this file
using CppAD::vector;  // abbreviate as vector
//
class atomic_norm_sq : public CppAD::atomic_base<double> {

4.4.7.2.12.d: Constructor
public:
     // constructor (could use const char* for name)
     atomic_norm_sq(const std::string& name) :
     // this example only uses boolean sparsity patterns
     CppAD::atomic_base<double>(name, atomic_base<double>::bool_sparsity_enum)
     { }
private:

4.4.7.2.12.e: forward
     // forward mode routine called by CppAD
     virtual bool forward(
          size_t                    p ,
          size_t                    q ,
          const vector<bool>&      vx ,
                vector<bool>&      vy ,
          const vector<double>&    tx ,
                vector<double>&    ty
     )
     {
# ifndef NDEBUG
          size_t n = tx.size() / (q+1);
          size_t m = ty.size() / (q+1);
# endif
          assert( n == 2 );
          assert( m == 1 );
          assert( p <= q );

          // return flag
          bool ok = q <= 1;

          // Variable information must always be implemented.
          // y_0 is a variable if and only if x_0 or x_1 is a variable.
          if( vx.size() > 0 )
               vy[0] = vx[0] | vx[1];

          // Order zero forward mode must always be implemented.
          // y^0 = f( x^0 )
          double x_00 = tx[ 0*(q+1) + 0];        // x_0^0
          double x_10 = tx[ 1*(q+1) + 0];        // x_1^0
          double f = x_00 * x_00 + x_10 * x_10;  // f( x^0 )
          if( p <= 0 )
               ty[0] = f;   // y_0^0
          if( q <= 0 )
               return ok;
          assert( vx.size() == 0 );

          // Order one forward mode.
          // This case needed if first order forward mode is used.
          // y^1 = f'( x^0 ) x^1
          double x_01 = tx[ 0*(q+1) + 1];   // x_0^1
          double x_11 = tx[ 1*(q+1) + 1];   // x_1^1
          double fp_0 = 2.0 * x_00;         // partial f w.r.t x_0^0
          double fp_1 = 2.0 * x_10;         // partial f w.r.t x_1^0
          if( p <= 1 )
               ty[1] = fp_0 * x_01 + fp_1 * x_11; // f'( x^0 ) * x^1
          if( q <= 1 )
               return ok;

          // Assume we are not using forward mode with order > 1
          assert( ! ok );
          return ok;
     }

4.4.7.2.12.f: reverse
     // reverse mode routine called by CppAD
     virtual bool reverse(
          size_t                    q ,
          const vector<double>&    tx ,
          const vector<double>&    ty ,
                vector<double>&    px ,
          const vector<double>&    py
     )
     {
# ifndef NDEBUG
          size_t n = tx.size() / (q+1);
          size_t m = ty.size() / (q+1);
# endif
          assert( px.size() == n * (q+1) );
          assert( py.size() == m * (q+1) );
          assert( n == 2 );
          assert( m == 1 );
          bool ok = q <= 1;

          double fp_0, fp_1;
          switch(q)
          {     case 0:
               // This case needed if first order reverse mode is used
               // F ( {x} ) = f( x^0 ) = y^0
               fp_0  =  2.0 * tx[0];  // partial F w.r.t. x_0^0
               fp_1  =  2.0 * tx[1];  // partial F w.r.t. x_1^0
               px[0] = py[0] * fp_0;  // partial G w.r.t. x_0^0
               px[1] = py[0] * fp_1;  // partial G w.r.t. x_1^0
               assert(ok);
               break;

               default:
               // Assume we are not using reverse with order > 1 (q > 0)
               assert(!ok);
          }
          return ok;
     }

4.4.7.2.12.g: for_sparse_jac
     // forward Jacobian bool sparsity routine called by CppAD
     virtual bool for_sparse_jac(
          size_t                                p ,
          const vector<bool>&                   r ,
                vector<bool>&                   s ,
          const vector<double>&                 x )
     {     // This function needed if using f.ForSparseJac
          size_t n = r.size() / p;
# ifndef NDEBUG
          size_t m = s.size() / p;
# endif
          assert( n == x.size() );
          assert( n == 2 );
          assert( m == 1 );

          // sparsity for S(x) = f'(x) * R
          // where f'(x) = 2 * [ x_0, x_1 ]
          for(size_t j = 0; j < p; j++)
          {     s[j] = false;
               for(size_t i = 0; i < n; i++)
               {     // Visual Studio 2013 generates warning without bool below
                    s[j] |= bool( r[i * p + j] );
               }
          }
          return true;
     }

4.4.7.2.12.h: rev_sparse_jac
     // reverse Jacobian bool sparsity routine called by CppAD
     virtual bool rev_sparse_jac(
          size_t                                p  ,
          const vector<bool>&                   rt ,
                vector<bool>&                   st ,
          const vector<double>&                 x  )
     {     // This function needed if using RevSparseJac or optimize
          size_t n = st.size() / p;
# ifndef NDEBUG
          size_t m = rt.size() / p;
# endif
          assert( n == x.size() );
          assert( n == 2 );
          assert( m == 1 );

          // sparsity for S(x)^T = f'(x)^T * R^T
          // where f'(x)^T = 2 * [ x_0, x_1]^T
          for(size_t j = 0; j < p; j++)
               for(size_t i = 0; i < n; i++)
                    st[i * p + j] = rt[j];

          return true;
     }

4.4.7.2.12.i: rev_sparse_hes
     // reverse Hessian bool sparsity routine called by CppAD
     virtual bool rev_sparse_hes(
          const vector<bool>&                   vx,
          const vector<bool>&                   s ,
                vector<bool>&                   t ,
          size_t                                p ,
          const vector<bool>&                   r ,
          const vector<bool>&                   u ,
                vector<bool>&                   v ,
          const vector<double>&                 x )
     {     // This function needed if using RevSparseHes
# ifndef NDEBUG
          size_t m = s.size();
# endif
          size_t n = t.size();
          assert( x.size() == n );
          assert( r.size() == n * p );
          assert( u.size() == m * p );
          assert( v.size() == n * p );
          assert( n == 2 );
          assert( m == 1 );

          // There are no cross term second derivatives for this case,
          // so it is not necessary to use vx.

          // sparsity for T(x) = S(x) * f'(x)
          t[0] = s[0];
          t[1] = s[0];

          // V(x) = f'(x)^T * g''(y) * f'(x) * R  +  g'(y) * f''(x) * R
          // U(x) = g''(y) * f'(x) * R
          // S(x) = g'(y)

          // back propagate the sparsity for U
          size_t j;
          for(j = 0; j < p; j++)
               for(size_t i = 0; i < n; i++)
                    v[ i * p + j] = u[j];

          // include forward Jacobian sparsity in Hessian sparsity
          // sparsity for g'(y) * f''(x) * R  (Note f''(x) has same sparsity
          // as the identity matrix)
          if( s[0] )
          {     for(j = 0; j < p; j++)
                    for(size_t i = 0; i < n; i++)
                    {     // Visual Studio 2013 generates warning without bool below
                         v[ i * p + j] |= bool( r[ i * p + j] );
                    }
          }

          return true;
     }

4.4.7.2.12.j: End Class Definition

}; // End of atomic_norm_sq class
}  // End empty namespace

4.4.7.2.12.k: Use Atomic Function
bool norm_sq(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

4.4.7.2.12.k.a: Constructor

     // --------------------------------------------------------------------
     // Create the atomic norm_sq object
     atomic_norm_sq afun("atomic_norm_sq");

4.4.7.2.12.k.b: Recording
     // Create the function f(x)
     //
     // domain space vector
     size_t  n  = 2;
     double  x0 = 0.25;
     double  x1 = 0.75;
     vector< AD<double> > ax(n);
     ax[0]      = x0;
     ax[1]      = x1;

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     vector< AD<double> > ay(m);

     // call user function and store norm_sq(x) in ay[0]
     afun(ax, ay);        // y_0 = x_0 * x_0 + x_1 * x_1

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f;
     f.Dependent (ax, ay);

4.4.7.2.12.k.c: forward
     // check function value
     double check = x0 * x0 + x1 * x1;
     ok &= NearEqual( Value(ay[0]) , check,  eps, eps);

     // check zero order forward mode
     size_t q;
     vector<double> x_q(n), y_q(m);
     q      = 0;
     x_q[0] = x0;
     x_q[1] = x1;
     y_q    = f.Forward(q, x_q);
     ok &= NearEqual(y_q[0] , check,  eps, eps);

     // check first order forward mode
     q      = 1;
     x_q[0] = 0.3;
     x_q[1] = 0.7;
     y_q    = f.Forward(q, x_q);
     check  = 2.0 * x0 * x_q[0] + 2.0 * x1 * x_q[1];
     ok &= NearEqual(y_q[0] , check,  eps, eps);
4.4.7.2.12.k.d: reverse
     // first order reverse mode
     q     = 1;
     vector<double> w(m), dw(n * q);
     w[0]  = 1.;
     dw    = f.Reverse(q, w);
     check = 2.0 * x0;
     ok &= NearEqual(dw[0] , check,  eps, eps);
     check = 2.0 * x1;
     ok &= NearEqual(dw[1] , check,  eps, eps);

4.4.7.2.12.k.e: for_sparse_jac
     // forward mode sparsity pattern
     size_t p = n;
     CppAD::vectorBool r1(n * p), s1(m * p);
     r1[0] = true;  r1[1] = false; // sparsity pattern identity
     r1[2] = false; r1[3] = true;
     //
     s1    = f.ForSparseJac(p, r1);
     ok  &= s1[0] == true;  // f[0] depends on x[0]
     ok  &= s1[1] == true;  // f[0] depends on x[1]

4.4.7.2.12.k.f: rev_sparse_jac
     // reverse mode sparsity pattern
     q = m;
     CppAD::vectorBool s2(q * m), r2(q * n);
     s2[0] = true;          // compute sparsity pattern for f[0]
     //
     r2    = f.RevSparseJac(q, s2);
     ok  &= r2[0] == true;  // f[0] depends on x[0]
     ok  &= r2[1] == true;  // f[0] depends on x[1]

4.4.7.2.12.k.g: rev_sparse_hes
     // Hessian sparsity (using previous ForSparseJac call)
     CppAD::vectorBool s3(m), h(p * n);
     s3[0] = true;        // compute sparsity pattern for f[0]
     //
     h     = f.RevSparseHes(p, s3);
     ok  &= h[0] == true;  // partial of f[0] w.r.t. x[0],x[0] is non-zero
     ok  &= h[1] == false; // partial of f[0] w.r.t. x[0],x[1] is zero
     ok  &= h[2] == false; // partial of f[0] w.r.t. x[1],x[0] is zero
     ok  &= h[3] == true;  // partial of f[0] w.r.t. x[1],x[1] is non-zero
     //
     return ok;
}

Input File: example/atomic/norm_sq.cpp
4.4.7.2.13: Reciprocal as an Atomic Operation: Example and Test

4.4.7.2.13.a: Theory
This example demonstrates using 4.4.7.2: atomic_base to define the operation @(@ f : \B{R}^n \rightarrow \B{R}^m @)@ where @(@ n = 1 @)@, @(@ m = 1 @)@, and @(@ f(x) = 1 / x @)@.
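The implementation below uses the recurrence @[@ f^{(1)} (x) = - f(x) / x \W{,} f^{(2)} (x) = - 2 f^{(1)} (x) / x \W{,} f^{(3)} (x) = - 3 f^{(2)} (x) / x @]@ which follows from @(@ f^{(k)} (x) = (-1)^k \, k ! \, x^{-(k+1)} @)@; these are the values fp, fpp, and fppp computed by the forward and reverse routines.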

4.4.7.2.13.b: sparsity
This example only uses set sparsity patterns.

4.4.7.2.13.c: Start Class Definition
# include <cppad/cppad.hpp>
namespace {           // isolate items below to this file
using CppAD::vector;  // abbreviate as vector
//
// a utility to compute the union of two sets.
using CppAD::set_union;
//
class atomic_reciprocal : public CppAD::atomic_base<double> {

4.4.7.2.13.d: Constructor
public:
     // constructor (could use const char* for name)
     atomic_reciprocal(const std::string& name) :
     // this example only uses set sparsity patterns
     CppAD::atomic_base<double>(name, atomic_base<double>::set_sparsity_enum)
     { }
private:

4.4.7.2.13.e: forward
     // forward mode routine called by CppAD
     virtual bool forward(
          size_t                    p ,
          size_t                    q ,
          const vector<bool>&      vx ,
                vector<bool>&      vy ,
          const vector<double>&    tx ,
                vector<double>&    ty
     )
     {
# ifndef NDEBUG
          size_t n = tx.size() / (q + 1);
          size_t m = ty.size() / (q + 1);
# endif
          assert( n == 1 );
          assert( m == 1 );
          assert( p <= q );

          // return flag
          bool ok = q <= 2;

          // check for defining variable information
          // This case must always be implemented
          if( vx.size() > 0 )
               vy[0] = vx[0];

          // Order zero forward mode.
          // This case must always be implemented
          // y^0 = f( x^0 ) = 1 / x^0
          double f = 1. / tx[0];
          if( p <= 0 )
               ty[0] = f;
          if( q <= 0 )
               return ok;
          assert( vx.size() == 0 );

          // Order one forward mode.
          // This case needed if first order forward mode is used.
          // y^1 = f'( x^0 ) x^1
          double fp = - f / tx[0];
          if( p <= 1 )
               ty[1] = fp * tx[1];
          if( q <= 1 )
               return ok;

          // Order two forward mode.
          // This case needed if second order forward mode is used.
          // Y''(t) = X'(t)^\R{T} f''[X(t)] X'(t) + f'[X(t)] X''(t)
          // 2 y^2  = x^1 * f''( x^0 ) x^1 + 2 f'( x^0 ) x^2
          double fpp  = - 2.0 * fp / tx[0];
          ty[2] = tx[1] * fpp * tx[1] / 2.0 + fp * tx[2];
          if( q <= 2 )
               return ok;

          // Assume we are not using forward mode with order > 2
          assert( ! ok );
          return ok;
     }

4.4.7.2.13.f: reverse
     // reverse mode routine called by CppAD
     virtual bool reverse(
          size_t                    q ,
          const vector<double>&    tx ,
          const vector<double>&    ty ,
                vector<double>&    px ,
          const vector<double>&    py
     )
     {
# ifndef NDEBUG
          size_t n = tx.size() / (q + 1);
          size_t m = ty.size() / (q + 1);
# endif
          assert( px.size() == n * (q + 1) );
          assert( py.size() == m * (q + 1) );
          assert( n == 1 );
          assert( m == 1 );
          bool ok = q <= 2;

          double f, fp, fpp, fppp;
          switch(q)
          {     case 0:
               // This case needed if first order reverse mode is used
               // reverse: F^0 ( tx ) = y^0 = f( x^0 )
               f     = ty[0];
               fp    = - f / tx[0];
               px[0] = py[0] * fp;
               assert(ok);
               break;

               case 1:
               // This case needed if second order reverse mode is used
               // reverse: F^1 ( tx ) = y^1 = f'( x^0 ) x^1
               f      = ty[0];
               fp     = - f / tx[0];
               fpp    = - 2.0 * fp / tx[0];
               px[1]  = py[1] * fp;
               px[0]  = py[1] * fpp * tx[1];
               // reverse: F^0 ( tx ) = y^0 = f( x^0 );
               px[0] += py[0] * fp;
               assert(ok);
               break;

               case 2:
               // This needed if third order reverse mode is used
               // reverse: F^2 ( tx ) = y^2 =
               //          = x^1 * f''( x^0 ) x^1 / 2 + f'( x^0 ) x^2
               f      = ty[0];
               fp     = - f / tx[0];
               fpp    = - 2.0 * fp / tx[0];
               fppp   = - 3.0 * fpp / tx[0];
               px[2]  = py[2] * fp;
               px[1]  = py[2] * fpp * tx[1];
               px[0]  = py[2] * (tx[1] * fppp * tx[1] / 2.0 + fpp * tx[2]);
               // reverse: F^1 ( tx ) = y^1 = f'( x^0 ) x^1
               px[1] += py[1] * fp;
               px[0] += py[1] * fpp * tx[1];
               // reverse: F^0 ( tx ) = y^0 = f( x^0 );
               px[0] += py[0] * fp;
               assert(ok);
               break;

               default:
               assert(!ok);
          }
          return ok;
     }

4.4.7.2.13.g: for_sparse_jac
     // forward Jacobian set sparsity routine called by CppAD
     virtual bool for_sparse_jac(
          size_t                                p ,
          const vector< std::set<size_t> >&     r ,
                vector< std::set<size_t> >&     s ,
          const vector<double>&                 x )
     {     // This function needed if using f.ForSparseJac
# ifndef NDEBUG
          size_t n = r.size();
          size_t m = s.size();
# endif
          assert( n == x.size() );
          assert( n == 1 );
          assert( m == 1 );

          // sparsity for S(x) = f'(x) * R is same as sparsity for R
          s[0] = r[0];

          return true;
     }

4.4.7.2.13.h: rev_sparse_jac
     // reverse Jacobian set sparsity routine called by CppAD
     virtual bool rev_sparse_jac(
          size_t                                p  ,
          const vector< std::set<size_t> >&     rt ,
                vector< std::set<size_t> >&     st ,
          const vector<double>&                 x  )
     {     // This function needed if using RevSparseJac or optimize
# ifndef NDEBUG
          size_t n = st.size();
          size_t m = rt.size();
# endif
          assert( n == x.size() );
          assert( n == 1 );
          assert( m == 1 );

          // sparsity for S(x)^T = f'(x)^T * R^T is same as sparsity for R^T
          st[0] = rt[0];

          return true;
     }

4.4.7.2.13.i: rev_sparse_hes
     // reverse Hessian set sparsity routine called by CppAD
     virtual bool rev_sparse_hes(
          const vector<bool>&                   vx,
          const vector<bool>&                   s ,
                vector<bool>&                   t ,
          size_t                                p ,
          const vector< std::set<size_t> >&     r ,
          const vector< std::set<size_t> >&     u ,
                vector< std::set<size_t> >&     v ,
          const vector<double>&                 x )
     {     // This function needed if using RevSparseHes
# ifndef NDEBUG
          size_t n = vx.size();
          size_t m = s.size();
# endif
          assert( x.size() == n );
          assert( t.size() == n );
          assert( r.size() == n );
          assert( u.size() == m );
          assert( v.size() == n );
          assert( n == 1 );
          assert( m == 1 );

          // There are no cross term second derivatives for this case,
          // so it is not necessary to use vx.

          // sparsity for T(x) = S(x) * f'(x) is same as sparsity for S
          t[0] = s[0];

          // V(x) = f'(x)^T * g''(y) * f'(x) * R  +  g'(y) * f''(x) * R
          // U(x) = g''(y) * f'(x) * R
          // S(x) = g'(y)

          // back propagate the sparsity for U, note f'(x) may be non-zero;
          v[0] = u[0];

          // include forward Jacobian sparsity in Hessian sparsity
          // (note sparsity for f''(x) * R same as for R)
          if( s[0] )
               v[0] = set_union(v[0], r[0] );

          return true;
     }

4.4.7.2.13.j: End Class Definition

}; // End of atomic_reciprocal class
}  // End empty namespace

4.4.7.2.13.k: Use Atomic Function
bool reciprocal(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

4.4.7.2.13.k.a: Constructor

     // --------------------------------------------------------------------
     // Create the atomic reciprocal object
     atomic_reciprocal afun("atomic_reciprocal");

4.4.7.2.13.k.b: Recording
     // Create the function f(x)
     //
     // domain space vector
     size_t n  = 1;
     double  x0 = 0.5;
     vector< AD<double> > ax(n);
     ax[0]     = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     vector< AD<double> > ay(m);

     // call user function and store reciprocal(x) in au[0]
     vector< AD<double> > au(m);
     afun(ax, au);        // u = 1 / x

     // now use AD division to invert the operation
     ay[0] = 1.0 / au[0]; // y = 1 / u = x

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f;
     f.Dependent (ax, ay);  // f(x) = x

4.4.7.2.13.k.c: forward
     // check function value
     double check = x0;
     ok &= NearEqual( Value(ay[0]) , check,  eps, eps);

     // check zero order forward mode
     size_t q;
     vector<double> x_q(n), y_q(m);
     q      = 0;
     x_q[0] = x0;
     y_q    = f.Forward(q, x_q);
     ok &= NearEqual(y_q[0] , check,  eps, eps);

     // check first order forward mode
     q      = 1;
     x_q[0] = 1;
     y_q    = f.Forward(q, x_q);
     check  = 1.;
     ok &= NearEqual(y_q[0] , check,  eps, eps);

     // check second order forward mode
     q      = 2;
     x_q[0] = 0;
     y_q    = f.Forward(q, x_q);
     check  = 0.;
     ok &= NearEqual(y_q[0] , check,  eps, eps);

4.4.7.2.13.k.d: reverse
     // third order reverse mode
     q     = 3;
     vector<double> w(m), dw(n * q);
     w[0]  = 1.;
     dw    = f.Reverse(q, w);
     check = 1.;
     ok &= NearEqual(dw[0] , check,  eps, eps);
     check = 0.;
     ok &= NearEqual(dw[1] , check,  eps, eps);
     ok &= NearEqual(dw[2] , check,  eps, eps);

4.4.7.2.13.k.e: for_sparse_jac
     // forward mode sparsity pattern
     size_t p = n;
     CppAD::vectorBool r1(n * p), s1(m * p);
     r1[0] = true;          // compute sparsity pattern for x[0]
     //
     s1    = f.ForSparseJac(p, r1);
     ok  &= s1[0] == true;  // f[0] depends on x[0]

4.4.7.2.13.k.f: rev_sparse_jac
     // reverse mode sparsity pattern
     q = m;
     CppAD::vectorBool s2(q * m), r2(q * n);
     s2[0] = true;          // compute sparsity pattern for f[0]
     //
     r2    = f.RevSparseJac(q, s2);
     ok  &= r2[0] == true;  // f[0] depends on x[0]

4.4.7.2.13.k.g: rev_sparse_hes
     // Hessian sparsity (using previous ForSparseJac call)
     CppAD::vectorBool s3(m), h(p * n);
     s3[0] = true;        // compute sparsity pattern for f[0]
     //
     h     = f.RevSparseHes(p, s3);
     ok  &= h[0] == true; // second partial of f[0] w.r.t. x[0] may be non-zero
     //
     return ok;
}

Input File: example/atomic/reciprocal.cpp
4.4.7.2.14: Atomic Sparsity with Set Patterns: Example and Test

4.4.7.2.14.a: function
For this example, the atomic function @(@ f : \B{R}^3 \rightarrow \B{R}^2 @)@ is defined by @[@ f( x ) = \left( \begin{array}{c} x_2 \\ x_0 * x_1 \end{array} \right) @]@
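It follows that the Jacobian, and the only non-zero Hessian, are @[@ f^{(1)} (x) = \left( \begin{array}{ccc} 0 & 0 & 1 \\ x_1 & x_0 & 0 \end{array} \right) \W{,} f_1^{(2)} (x) = \left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right) @]@ These matrices determine all the sparsity patterns computed and checked below.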

4.4.7.2.14.b: set_sparsity_enum
This example only uses set sparsity patterns.

4.4.7.2.14.c: Start Class Definition
# include <cppad/cppad.hpp>
namespace {   // isolate items below to this file
using   CppAD::vector;                          // vector
typedef vector< std::set<size_t> > set_vector;  // atomic_sparsity
//
// a utility to compute the union of two sets.
using CppAD::set_union;
//
class atomic_set_sparsity : public CppAD::atomic_base<double> {

4.4.7.2.14.d: Constructor
public:
     // constructor
     atomic_set_sparsity(const std::string& name) :
     // this example only uses set sparsity patterns
     CppAD::atomic_base<double>(name, set_sparsity_enum )
     { }
private:

4.4.7.2.14.e: forward
     // forward
     virtual bool forward(
          size_t                    p ,
          size_t                    q ,
          const vector<bool>&      vx ,
          vector<bool>&            vy ,
          const vector<double>&    tx ,
          vector<double>&          ty
     )
     {
          size_t n = tx.size() / (q + 1);
# ifndef NDEBUG
          size_t m = ty.size() / (q + 1);
# endif
          assert( n == 3 );
          assert( m == 2 );

          // only order zero
          bool ok = q == 0;
          if( ! ok )
               return ok;

          // check for defining variable information
          if( vx.size() > 0 )
          {     ok   &= vx.size() == n;
               vy[0] = vx[2];
               vy[1] = vx[0] || vx[1];
          }

          // Order zero forward mode.
          // y[0] = x[2], y[1] = x[0] * x[1]
          if( p <= 0 )
          {     ty[0] = tx[2];
               ty[1] = tx[0] * tx[1];
          }
          return ok;
     }

4.4.7.2.14.f: for_sparse_jac
     // for_sparse_jac
     virtual bool for_sparse_jac(
          size_t                          p ,
          const set_vector&               r ,
          set_vector&                     s ,
          const vector<double>&           x )
     {     // This function needed if using f.ForSparseJac
# ifndef NDEBUG
          size_t n = r.size();
          size_t m = s.size();
# endif
          assert( n == x.size() );
          assert( n == 3 );
          assert( m == 2 );

          // sparsity for S(x) = f'(x) * R  = [ 0,   0, 1 ] * R
          s[0] = r[2];
          // s[1] = union(r[0], r[1])
          s[1] = set_union(r[0], r[1]);
          //
          return true;
     }

4.4.7.2.14.g: rev_sparse_jac
     virtual bool rev_sparse_jac(
          size_t                                p  ,
          const set_vector&                     rt ,
          set_vector&                           st ,
          const vector<double>&                 x  )
     {     // This function needed if using RevSparseJac or optimize
# ifndef NDEBUG
          size_t n = st.size();
          size_t m = rt.size();
# endif
          assert( n == x.size() );
          assert( n == 3 );
          assert( m == 2 );

          //                                       [ 0, x1 ]
          // sparsity for S(x)^T = f'(x)^T * R^T = [ 0, x0 ] * R^T
          //                                       [ 1, 0  ]
          st[0] = rt[1];
          st[1] = rt[1];
          st[2] = rt[0];
          return true;
     }

4.4.7.2.14.h: for_sparse_hes
     virtual bool for_sparse_hes(
          const vector<bool>&                   vx,
          const vector<bool>&                   r ,
          const vector<bool>&                   s ,
          set_vector&                           h ,
          const vector<double>&                 x )
     {
          size_t n = r.size();
# ifndef NDEBUG
          size_t m = s.size();
# endif
          assert( x.size() == n );
          assert( h.size() == n );
          assert( n == 3 );
          assert( m == 2 );

          // initialize h as empty
          for(size_t i = 0; i < n; i++)
               h[i].clear();

          // only f_1 has a non-zero hessian
          if( ! s[1] )
               return true;

          // only the cross term between x[0] and x[1] is non-zero
          if( ! ( r[0] & r[1] ) )
               return true;

          // set the possibly non-zero terms in the hessian
          h[0].insert(1);
          h[1].insert(0);

          return true;
     }

4.4.7.2.14.i: rev_sparse_hes
     virtual bool rev_sparse_hes(
          const vector<bool>&                   vx,
          const vector<bool>&                   s ,
          vector<bool>&                         t ,
          size_t                                p ,
          const set_vector&                     r ,
          const set_vector&                     u ,
          set_vector&                           v ,
          const vector<double>&                 x )
     {     // This function needed if using RevSparseHes
# ifndef NDEBUG
          size_t m = s.size();
          size_t n = t.size();
# endif
          assert( x.size() == n );
          assert( r.size() == n );
          assert( u.size() == m );
          assert( v.size() == n );
          assert( n == 3 );
          assert( m == 2 );

          // sparsity for T(x) = S(x) * f'(x) = S(x) * [  0,  0,  1 ]
          //                                           [ x1, x0,  0 ]
          t[0] = s[1];
          t[1] = s[1];
          t[2] = s[0];

          // V(x) = f'(x)^T * g''(y) * f'(x) * R  +  g'(y) * f''(x) * R
          // U(x) = g''(y) * f'(x) * R
          // S(x) = g'(y)

          //                                      [ 0, x1 ]
          // sparsity for W(x) = f'(x)^T * U(x) = [ 0, x0 ] * U(x)
          //                                      [ 1, 0  ]
          v[0] = u[1];
          v[1] = u[1];
          v[2] = u[0];
          //
          //                                      [ 0, 1, 0 ]
          // sparsity for V(x) = W(x) + S_1 (x) * [ 1, 0, 0 ] * R
          //                                      [ 0, 0, 0 ]
          if( s[1] )
          {     // v[0] = union( v[0], r[1] )
               v[0] = set_union(v[0], r[1]);
               // v[1] = union( v[1], r[0] )
               v[1] = set_union(v[1], r[0]);
          }
          return true;
     }

4.4.7.2.14.j: End Class Definition

}; // End of atomic_set_sparsity class
}  // End empty namespace

4.4.7.2.14.k: Test Atomic Function
bool set_sparsity(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * std::numeric_limits<double>::epsilon();

4.4.7.2.14.k.a: Constructor

     // Create the atomic set_sparsity object
     atomic_set_sparsity afun("atomic_set_sparsity");

4.4.7.2.14.k.b: Recording
     size_t n = 3;
     size_t m = 2;
     vector< AD<double> > ax(n), ay(m);
     for(size_t j = 0; j < n; j++)
          ax[j] = double(j + 1);

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     // call user function
     afun(ax, ay);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f;
     f.Dependent (ax, ay);  // f(x) = ( x_2, x_0 * x_1 )

     // check function value
     ok &= NearEqual(ay[0] , ax[2],  eps, eps);
     ok &= NearEqual(ay[1] , ax[0] * ax[1],  eps, eps);
4.4.7.2.14.k.c: for_sparse_jac
     // correct Jacobian result
     set_vector check_s(m);
     check_s[0].insert(2);
     check_s[1].insert(0);
     check_s[1].insert(1);
     // compute and test forward mode
     {     set_vector r(n), s(m);
          for(size_t i = 0; i < n; i++)
               r[i].insert(i);
          s = f.ForSparseJac(n, r);
          for(size_t i = 0; i < m; i++)
               ok &= s[i] == check_s[i];
     }

4.4.7.2.14.k.d: rev_sparse_jac
     // compute and test reverse mode
     {     set_vector r(m), s(m);
          for(size_t i = 0; i < m; i++)
               r[i].insert(i);
          s = f.RevSparseJac(m, r);
          for(size_t i = 0; i < m; i++)
               ok &= s[i] == check_s[i];
     }

4.4.7.2.14.k.e: for_sparse_hes
     // correct Hessian result
     set_vector check_h(n);
     check_h[0].insert(1);
     check_h[1].insert(0);
     // compute and test forward mode
     {     set_vector r(1), s(1), h(n);
          for(size_t i = 0; i < m; i++)
               s[0].insert(i);
          for(size_t j = 0; j < n; j++)
               r[0].insert(j);
          h = f.ForSparseHes(r, s);
          for(size_t i = 0; i < n; i++)
               ok &= h[i] == check_h[i];
     }

4.4.7.2.14.k.f: rev_sparse_hes
Note that the previous call to ForSparseJac above stored its results in f.
     // compute and test reverse mode
     {     set_vector s(1), h(n);
          for(size_t i = 0; i < m; i++)
               s[0].insert(i);
          h = f.RevSparseHes(n, s);
          for(size_t i = 0; i < n; i++)
               ok &= h[i] == check_h[i];
     }

4.4.7.2.14.k.g: Test Result

     return ok;
}

Input File: example/atomic/set_sparsity.cpp
4.4.7.2.15: Tan and Tanh as User Atomic Operations: Example and Test

4.4.7.2.15.a: Theory
The code below uses the 12.3.1.8: tan_forward and 12.3.2.8: tan_reverse to implement the tangent and hyperbolic tangent functions as user atomic operations.
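In brief, with @(@ z = \tan (x) @)@ or @(@ z = \tanh (x) @)@ and the auxiliary output @(@ y = z * z @)@, the Taylor coefficient recurrences implemented below are @[@ z^{(j)} = x^{(j)} \pm \frac{1}{j} \sum_{k=1}^{j} k \, x^{(k)} y^{(j-k)} \W{,} y^{(j)} = \sum_{k=0}^{j} z^{(k)} z^{(j-k)} @]@ where the plus sign corresponds to @(@ \tan @)@ and the minus sign to @(@ \tanh @)@.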

4.4.7.2.15.b: sparsity
This atomic operation can use both set and bool sparsity patterns.

4.4.7.2.15.c: Start Class Definition
# include <cppad/cppad.hpp>
namespace { // Begin empty namespace
using CppAD::vector;
//
// a utility to compute the union of two sets.
using CppAD::set_union;
//
class atomic_tangent : public CppAD::atomic_base<float> {

4.4.7.2.15.d: Constructor
private:
     const bool hyperbolic_; // is this hyperbolic tangent
public:
     // constructor
     atomic_tangent(const char* name, bool hyperbolic)
     : CppAD::atomic_base<float>(name),
     hyperbolic_(hyperbolic)
     { }
private:

4.4.7.2.15.e: forward
     // forward mode routine called by CppAD
     bool forward(
          size_t                    p ,
          size_t                    q ,
          const vector<bool>&      vx ,
                vector<bool>&     vzy ,
          const vector<float>&     tx ,
                vector<float>&    tzy
     )
     {     size_t q1 = q + 1;
# ifndef NDEBUG
          size_t n  = tx.size()  / q1;
          size_t m  = tzy.size() / q1;
# endif
          assert( n == 1 );
          assert( m == 2 );
          assert( p <= q );
          size_t j, k;

          // check if this is during a call to my_tan(ax, az) or my_tanh(ax, az)
          if( vx.size() > 0 )
          {     // set variable flag for both y and z
               vzy[0] = vx[0];
               vzy[1] = vx[0];
          }

          if( p == 0 )
          {     // z^{(0)} = tan( x^{(0)} ) or tanh( x^{(0)} )
               if( hyperbolic_ )
                    tzy[0] = float( tanh( tx[0] ) );
               else     tzy[0] = float( tan( tx[0] ) );

               // y^{(0)} = z^{(0)} * z^{(0)}
               tzy[q1 + 0] = tzy[0] * tzy[0];

               p++;
          }
          for(j = p; j <= q; j++)
          {     float j_inv = 1.f / float(j);
               if( hyperbolic_ )
                    j_inv = - j_inv;

               // z^{(j)} = x^{(j)} +- sum_{k=1}^j k x^{(k)} y^{(j-k)} / j
               tzy[j] = tx[j];
               for(k = 1; k <= j; k++)
                    tzy[j] += tx[k] * tzy[q1 + j-k] * float(k) * j_inv;

               // y^{(j)} = sum_{k=0}^j z^{(k)} z^{(j-k)}
               tzy[q1 + j] = 0.;
               for(k = 0; k <= j; k++)
                    tzy[q1 + j] += tzy[k] * tzy[j-k];
          }

          // All orders are implemented and there are no possible errors
          return true;
     }

4.4.7.2.15.f: reverse
     // reverse mode routine called by CppAD
     virtual bool reverse(
          size_t                    q ,
          const vector<float>&     tx ,
          const vector<float>&    tzy ,
                vector<float>&     px ,
          const vector<float>&    pzy
     )
     {     size_t q1 = q + 1;
# ifndef NDEBUG
          size_t n  = tx.size()  / q1;
          size_t m  = tzy.size() / q1;
# endif
          assert( px.size()  == n * q1 );
          assert( pzy.size() == m * q1 );
          assert( n == 1 );
          assert( m == 2 );

          size_t j, k;

          // copy because partials w.r.t. y and z need to change
          vector<float> qzy = pzy;

          // initialize accumultion of reverse mode partials
          for(k = 0; k < q1; k++)
               px[k] = 0.;

          // eliminate positive orders
          for(j = q; j > 0; j--)
          {     float j_inv = 1.f / float(j);
               if( hyperbolic_ )
                    j_inv = - j_inv;

               // H_{x^{(k)}} += delta(j-k) +- H_{z^{(j)}} y^{(j-k)} * k / j
               px[j] += qzy[j];
               for(k = 1; k <= j; k++)
                    px[k] += qzy[j] * tzy[q1 + j-k] * float(k) * j_inv;

               // H_{y^{(j-k)}} += +- H_{z^{(j)}} x^{(k)} * k / j
               for(k = 1; k <= j; k++)
                    qzy[q1 + j-k] += qzy[j] * tx[k] * float(k) * j_inv;

               // H_{z^{(k)}} += H_{y^{(j-1)}} * z^{(j-k-1)} * 2.
               for(k = 0; k < j; k++)
                    qzy[k] += qzy[q1 + j-1] * tzy[j-k-1] * 2.f;
          }

          // eliminate order zero
          if( hyperbolic_ )
               px[0] += qzy[0] * (1.f - tzy[q1 + 0]);
          else
               px[0] += qzy[0] * (1.f + tzy[q1 + 0]);

          return true;
     }

4.4.7.2.15.g: for_sparse_jac
     // forward Jacobian sparsity routine called by CppAD
     virtual bool for_sparse_jac(
          size_t                                p ,
          const vector<bool>&                   r ,
                vector<bool>&                   s ,
          const vector<float>&                  x )
     {
# ifndef NDEBUG
          size_t n = r.size() / p;
          size_t m = s.size() / p;
# endif
          assert( n == x.size() );
          assert( n == 1 );
          assert( m == 2 );

          // sparsity for S(x) = f'(x) * R
          for(size_t j = 0; j < p; j++)
          {     s[0 * p + j] = r[j];
               s[1 * p + j] = r[j];
          }

          return true;
     }
     // forward Jacobian sparsity routine called by CppAD
     virtual bool for_sparse_jac(
          size_t                                p ,
          const vector< std::set<size_t> >&     r ,
                vector< std::set<size_t> >&     s ,
          const vector<float>&                  x )
     {
# ifndef NDEBUG
          size_t n = r.size();
          size_t m = s.size();
# endif
          assert( n == x.size() );
          assert( n == 1 );
          assert( m == 2 );

          // sparsity for S(x) = f'(x) * R
          s[0] = r[0];
          s[1] = r[0];

          return true;
     }

4.4.7.2.15.h: rev_sparse_jac
     // reverse Jacobian sparsity routine called by CppAD
     virtual bool rev_sparse_jac(
          size_t                                p ,
          const vector<bool>&                  rt ,
                vector<bool>&                  st ,
          const vector<float>&                  x )
     {
# ifndef NDEBUG
          size_t n = st.size() / p;
          size_t m = rt.size() / p;
# endif
          assert( n == 1 );
          assert( m == 2 );
          assert( n == x.size() );

          // sparsity for S(x)^T = f'(x)^T * R^T
          for(size_t j = 0; j < p; j++)
               st[j] = rt[0 * p + j] | rt[1 * p + j];

          return true;
     }
     // reverse Jacobian sparsity routine called by CppAD
     virtual bool rev_sparse_jac(
          size_t                                p ,
          const vector< std::set<size_t> >&    rt ,
                vector< std::set<size_t> >&    st ,
          const vector<float>&                  x )
     {
# ifndef NDEBUG
          size_t n = st.size();
          size_t m = rt.size();
# endif
          assert( n == 1 );
          assert( m == 2 );
          assert( n == x.size() );

          // sparsity for S(x)^T = f'(x)^T * R^T
          st[0] = set_union(rt[0], rt[1]);
          return true;
     }

4.4.7.2.15.i: rev_sparse_hes
     // reverse Hessian sparsity routine called by CppAD
     virtual bool rev_sparse_hes(
          const vector<bool>&                   vx,
          const vector<bool>&                   s ,
                vector<bool>&                   t ,
          size_t                                p ,
          const vector<bool>&                   r ,
          const vector<bool>&                   u ,
                vector<bool>&                   v ,
          const vector<float>&                  x )
     {
# ifndef NDEBUG
          size_t m = s.size();
          size_t n = t.size();
# endif
          assert( x.size() == n );
          assert( r.size() == n * p );
          assert( u.size() == m * p );
          assert( v.size() == n * p );
          assert( n == 1 );
          assert( m == 2 );

          // There are no cross term second derivatives for this case,
          // so it is not necessary to use vx.

          // sparsity for T(x) = S(x) * f'(x)
          t[0] =  s[0] | s[1];

          // V(x) = f'(x)^T * g''(y) * f'(x) * R  +  g'(y) * f''(x) * R
          // U(x) = g''(y) * f'(x) * R
          // S(x) = g'(y)

          // back propagate the sparsity for U, note both components
          // of f'(x) may be non-zero;
          size_t j;
          for(j = 0; j < p; j++)
               v[j] = u[ 0 * p + j ] | u[ 1 * p + j ];

          // include forward Jacobian sparsity in Hessian sparsity
          // (note sparsity for f''(x) * R same as for R)
          if( s[0] | s[1] )
          {     for(j = 0; j < p; j++)
               {     // Visual Studio 2013 generates warning without bool below
                    v[j] |= bool( r[j] );
               }
          }

          return true;
     }
     // reverse Hessian sparsity routine called by CppAD
     virtual bool rev_sparse_hes(
          const vector<bool>&                   vx,
          const vector<bool>&                   s ,
                vector<bool>&                   t ,
          size_t                                p ,
          const vector< std::set<size_t> >&     r ,
          const vector< std::set<size_t> >&     u ,
                vector< std::set<size_t> >&     v ,
          const vector<float>&                  x )
     {
# ifndef NDEBUG
          size_t m = s.size();
          size_t n = t.size();
# endif
          assert( x.size() == n );
          assert( r.size() == n );
          assert( u.size() == m );
          assert( v.size() == n );
          assert( n == 1 );
          assert( m == 2 );

          // There are no cross term second derivatives for this case,
          // so it is not necessary to use vx.

          // sparsity for T(x) = S(x) * f'(x)
          t[0] =  s[0] | s[1];

          // V(x) = f'(x)^T * g''(y) * f'(x) * R  +  g'(y) * f''(x) * R
          // U(x) = g''(y) * f'(x) * R
          // S(x) = g'(y)

          // back propagate the sparsity for U, note both components
          // of f'(x) may be non-zero;
          v[0] = set_union(u[0], u[1]);

          // include forward Jacobian sparsity in Hessian sparsity
          // (note sparsity for f''(x) * R same as for R)
          if( s[0] | s[1] )
               v[0] = set_union(v[0], r[0]);

          return true;
     }

4.4.7.2.15.j: End Class Definition

}; // End of atomic_tangent class
}  // End empty namespace

4.4.7.2.15.k: Use Atomic Function
bool tangent(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     float eps = 10.f * CppAD::numeric_limits<float>::epsilon();

4.4.7.2.15.k.a: Constructor

     // --------------------------------------------------------------------
     // Create a tan and a tanh object
     atomic_tangent my_tan("my_tan", false), my_tanh("my_tanh", true);

4.4.7.2.15.k.b: Recording
     // domain space vector
     size_t n  = 1;
     float  x0 = 0.5;
     CppAD::vector< AD<float> > ax(n);
     ax[0]     = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 3;
     CppAD::vector< AD<float> > af(m);

     // temporary vector for computations
     // (my_tan and my_tanh compute tan or tanh and its square)
     CppAD::vector< AD<float> > az(2);

     // call atomic tan function and store tan(x) in f[0] (ignore tan(x)^2)
     my_tan(ax, az);
     af[0] = az[0];

     // call atomic tanh function and store tanh(x) in f[1] (ignore tanh(x)^2)
     my_tanh(ax, az);
     af[1] = az[0];

     // put a constant in f[2] = tanh(1.) (for sparsity pattern testing)
     CppAD::vector< AD<float> > one(1);
     one[0] = 1.;
     my_tanh(one, az);
     af[2] = az[0];

     // create f: x -> f and stop tape recording
     CppAD::ADFun<float> F;
     F.Dependent(ax, af);

4.4.7.2.15.k.c: forward
     // check function value
     float tan = std::tan(x0);
     ok &= NearEqual(af[0] , tan,  eps, eps);
     float tanh = std::tanh(x0);
     ok &= NearEqual(af[1] , tanh,  eps, eps);

     // check zero order forward
     CppAD::vector<float> x(n), f(m);
     x[0] = x0;
     f    = F.Forward(0, x);
     ok &= NearEqual(f[0] , tan,  eps, eps);
     ok &= NearEqual(f[1] , tanh,  eps, eps);

     // compute first partial of f w.r.t. x[0] using forward mode
     CppAD::vector<float> dx(n), df(m);
     dx[0] = 1.;
     df    = F.Forward(1, dx);

4.4.7.2.15.k.d: reverse
     // compute derivative of tan - tanh using reverse mode
     CppAD::vector<float> w(m), dw(n);
     w[0]  = 1.;
     w[1]  = 1.;
     w[2]  = 0.;
     dw    = F.Reverse(1, w);

     // tan'(x)   = 1 + tan(x)  * tan(x)
     // tanh'(x)  = 1 - tanh(x) * tanh(x)
     float tanp  = 1.f + tan * tan;
     float tanhp = 1.f - tanh * tanh;
     ok   &= NearEqual(df[0], tanp, eps, eps);
     ok   &= NearEqual(df[1], tanhp, eps, eps);
     ok   &= NearEqual(dw[0], w[0]*tanp + w[1]*tanhp, eps, eps);

     // compute second partial of f w.r.t. x[0] using forward mode
     CppAD::vector<float> ddx(n), ddf(m);
     ddx[0] = 0.;
     ddf    = F.Forward(2, ddx);

     // compute second derivative of tan - tanh using reverse mode
     CppAD::vector<float> ddw(2);
     ddw   = F.Reverse(2, w);

     // tan''(x)   = 2 *  tan(x) * tan'(x)
     // tanh''(x)  = - 2 * tanh(x) * tanh'(x)
     // Note that a second order Taylor coefficient is one half
     // the corresponding second derivative.
     float two    = 2;
     float tanpp  =   two * tan * tanp;
     float tanhpp = - two * tanh * tanhp;
     ok   &= NearEqual(two * ddf[0], tanpp, eps, eps);
     ok   &= NearEqual(two * ddf[1], tanhpp, eps, eps);
     ok   &= NearEqual(ddw[0], w[0]*tanp  + w[1]*tanhp , eps, eps);
     ok   &= NearEqual(ddw[1], w[0]*tanpp + w[1]*tanhpp, eps, eps);

4.4.7.2.15.k.e: for_sparse_jac
     // Forward mode computation of sparsity pattern for F.
     size_t p = n;
     // use vectorBool because m and n are small
     CppAD::vectorBool r1(p), s1(m * p);
     r1[0] = true;            // propagate sparsity for x[0]
     s1    = F.ForSparseJac(p, r1);
     ok  &= (s1[0] == true);  // f[0] depends on x[0]
     ok  &= (s1[1] == true);  // f[1] depends on x[0]
     ok  &= (s1[2] == false); // f[2] does not depend on x[0]

4.4.7.2.15.k.f: rev_sparse_jac
     // Reverse mode computation of sparsity pattern for F.
     size_t q = m;
     CppAD::vectorBool s2(q * m), r2(q * n);
     // Sparsity pattern for identity matrix
     size_t i, j;
     for(i = 0; i < q; i++)
     {     for(j = 0; j < m; j++)
               s2[i * q + j] = (i == j);
     }
     r2   = F.RevSparseJac(q, s2);
     ok  &= (r2[0] == true);  // f[0] depends on x[0]
     ok  &= (r2[1] == true);  // f[1] depends on x[0]
     ok  &= (r2[2] == false); // f[2] does not depend on x[0]

4.4.7.2.15.k.g: rev_sparse_hes
     // Hessian sparsity for f[0]
     CppAD::vectorBool s3(m), h(p * n);
     s3[0] = true;
     s3[1] = false;
     s3[2] = false;
     h    = F.RevSparseHes(p, s3);
     ok  &= (h[0] == true);  // Hessian is non-zero

     // Hessian sparsity for f[2]
     s3[0] = false;
     s3[2] = true;
     h    = F.RevSparseHes(p, s3);
     ok  &= (h[0] == false);  // Hessian is zero

4.4.7.2.15.k.h: Large x Values
     // check tanh results for a large value of x
     x[0]  = std::numeric_limits<float>::max() / two;
     f     = F.Forward(0, x);
     tanh  = 1.;
     ok   &= NearEqual(f[1], tanh, eps, eps);
     df    = F.Forward(1, dx);
     tanhp = 0.;
     ok   &= NearEqual(df[1], tanhp, eps, eps);

     return ok;
}

Input File: example/atomic/tangent.cpp
4.4.7.2.16: Atomic Eigen Matrix Multiply: Example and Test

4.4.7.2.16.a: Description
The 5: ADFun function object f for this example is @[@ f(x) = \left( \begin{array}{cc} 0 & 0 \\ 1 & 2 \\ x_0 & x_1 \end{array} \right) \left( \begin{array}{c} x_0 \\ x_1 \end{array} \right) = \left( \begin{array}{c} 0 \\ x_0 + 2 x_1 \\ x_0 x_0 + x_1 x_1 \end{array} \right) @]@
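The checks below use the Jacobian @[@ f^{(1)} (x) = \left( \begin{array}{cc} 0 & 0 \\ 1 & 2 \\ 2 x_0 & 2 x_1 \end{array} \right) @]@ and the fact that @(@ f_2 @)@ is the only component with a non-zero Hessian, @(@ f_2^{(2)} (x) = 2 I @)@.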

4.4.7.2.16.b: Class Definition
This example uses the file 4.4.7.2.16.1: atomic_eigen_mat_mul.hpp which defines matrix multiply as a 4.4.7.2: atomic_base operation.

4.4.7.2.16.c: Use Atomic Function
# include <cppad/cppad.hpp>
# include <cppad/example/eigen_mat_mul.hpp>

bool eigen_mat_mul(void)
{     //
     typedef double                                            scalar;
     typedef CppAD::AD<scalar>                                 ad_scalar;
     typedef typename atomic_eigen_mat_mul<scalar>::ad_matrix  ad_matrix;
     //
     bool ok    = true;
     scalar eps = 10. * std::numeric_limits<scalar>::epsilon();
     using CppAD::NearEqual;
     //

4.4.7.2.16.c.a: Constructor
     // -------------------------------------------------------------------
     // object that multiplies arbitrary matrices
     atomic_eigen_mat_mul<scalar> mat_mul;
     // -------------------------------------------------------------------
     // declare independent variable vector x
     size_t n = 2;
     CPPAD_TESTVECTOR(ad_scalar) ad_x(n);
     for(size_t j = 0; j < n; j++)
          ad_x[j] = ad_scalar(j);
     CppAD::Independent(ad_x);
     // -------------------------------------------------------------------
     //        [ 0     0    ]
     // left = [ 1     2    ]
     //        [ x[0]  x[1] ]
     size_t nr_left  = 3;
     size_t n_middle   = 2;
     ad_matrix ad_left(nr_left, n_middle);
     ad_left(0, 0) = ad_scalar(0.0);
     ad_left(0, 1) = ad_scalar(0.0);
     ad_left(1, 0) = ad_scalar(1.0);
     ad_left(1, 1) = ad_scalar(2.0);
     ad_left(2, 0) = ad_x[0];
     ad_left(2, 1) = ad_x[1];
     // -------------------------------------------------------------------
     // right = [ x[0] , x[1] ]^T
     size_t nc_right = 1;
     ad_matrix ad_right(n_middle, nc_right);
     ad_right(0, 0) = ad_x[0];
     ad_right(1, 0) = ad_x[1];
     // -------------------------------------------------------------------
     // use atomic operation to multiply left * right
     ad_matrix ad_result = mat_mul.op(ad_left, ad_right);
     // -------------------------------------------------------------------
     // check that first component of result is a parameter
     // and the other components are variables.
     ok &= Parameter( ad_result(0, 0) );
     ok &= Variable(  ad_result(1, 0) );
     ok &= Variable(  ad_result(2, 0) );
     // -------------------------------------------------------------------
     // declare the dependent variable vector y
     size_t m = 3;
     CPPAD_TESTVECTOR(ad_scalar) ad_y(m);
     for(size_t i = 0; i < m; i++)
          ad_y[i] = ad_result(i, 0);
     CppAD::ADFun<scalar> f(ad_x, ad_y);
     // -------------------------------------------------------------------
     // check zero order forward mode
     CPPAD_TESTVECTOR(scalar) x(n), y(m);
     for(size_t i = 0; i < n; i++)
          x[i] = scalar(i + 2);
     y   = f.Forward(0, x);
     ok &= NearEqual(y[0], 0.0,                       eps, eps);
     ok &= NearEqual(y[1], x[0] + 2.0 * x[1],         eps, eps);
     ok &= NearEqual(y[2], x[0] * x[0] + x[1] * x[1], eps, eps);
     // -------------------------------------------------------------------
     // check first order forward mode
     CPPAD_TESTVECTOR(scalar) x1(n), y1(m);
     x1[0] = 1.0;
     x1[1] = 0.0;
     y1    = f.Forward(1, x1);
     ok   &= NearEqual(y1[0], 0.0,        eps, eps);
     ok   &= NearEqual(y1[1], 1.0,        eps, eps);
     ok   &= NearEqual(y1[2], 2.0 * x[0], eps, eps);
     x1[0] = 0.0;
     x1[1] = 1.0;
     y1    = f.Forward(1, x1);
     ok   &= NearEqual(y1[0], 0.0,        eps, eps);
     ok   &= NearEqual(y1[1], 2.0,        eps, eps);
     ok   &= NearEqual(y1[2], 2.0 * x[1], eps, eps);
     // -------------------------------------------------------------------
     // check second order forward mode
     CPPAD_TESTVECTOR(scalar) x2(n), y2(m);
     x2[0] = 0.0;
     x2[1] = 0.0;
     y2    = f.Forward(2, x2);
     ok   &= NearEqual(y2[0], 0.0, eps, eps);
     ok   &= NearEqual(y2[1], 0.0, eps, eps);
     ok   &= NearEqual(y2[2], 1.0, eps, eps); // 1/2 * f_2''(x)
     // -------------------------------------------------------------------
     // check first order reverse mode
     CPPAD_TESTVECTOR(scalar) w(m), d1w(n);
     w[0]  = 0.0;
     w[1]  = 1.0;
     w[2]  = 0.0;
     d1w   = f.Reverse(1, w);
     ok   &= NearEqual(d1w[0], 1.0, eps, eps);
     ok   &= NearEqual(d1w[1], 2.0, eps, eps);
     w[0]  = 0.0;
     w[1]  = 0.0;
     w[2]  = 1.0;
     d1w   = f.Reverse(1, w);
     ok   &= NearEqual(d1w[0], 2.0 * x[0], eps, eps);
     ok   &= NearEqual(d1w[1], 2.0 * x[1], eps, eps);
     // -------------------------------------------------------------------
     // check second order reverse mode
     CPPAD_TESTVECTOR(scalar) d2w(2 * n);
     d2w   = f.Reverse(2, w);
     // partial f_2 w.r.t. x_0
     ok   &= NearEqual(d2w[0 * 2 + 0], 2.0 * x[0], eps, eps);
     // partial f_2 w.r.t  x_1
     ok   &= NearEqual(d2w[1 * 2 + 0], 2.0 * x[1], eps, eps);
     // partial f_2 w.r.t x_1, x_0
     ok   &= NearEqual(d2w[0 * 2 + 1], 0.0,        eps, eps);
     // partial f_2 w.r.t x_1, x_1
     ok   &= NearEqual(d2w[1 * 2 + 1], 2.0,        eps, eps);
     // -------------------------------------------------------------------
     // check forward Jacobian sparsity
     CPPAD_TESTVECTOR( std::set<size_t> ) r(n), s(m);
     std::set<size_t> check_set;
     for(size_t j = 0; j < n; j++)
          r[j].insert(j);
     s      = f.ForSparseJac(n, r);
     check_set.clear();
     ok    &= s[0] == check_set;
     check_set.insert(0);
     check_set.insert(1);
     ok    &= s[1] == check_set;
     ok    &= s[2] == check_set;
     // -------------------------------------------------------------------
     // check reverse Jacobian sparsity
     r.resize(m);
     for(size_t i = 0; i < m; i++)
          r[i].insert(i);
     s  = f.RevSparseJac(m, r);
     check_set.clear();
     ok    &= s[0] == check_set;
     check_set.insert(0);
     check_set.insert(1);
     ok    &= s[1] == check_set;
     ok    &= s[2] == check_set;
     // -------------------------------------------------------------------
     // check forward Hessian sparsity for f_2 (x)
     CPPAD_TESTVECTOR( std::set<size_t> ) r2(1), s2(1), h(n);
     for(size_t j = 0; j < n; j++)
          r2[0].insert(j);
     s2[0].clear();
     s2[0].insert(2);
     h = f.ForSparseHes(r2, s2);
     check_set.clear();
     check_set.insert(0);
     ok &= h[0] == check_set;
     check_set.clear();
     check_set.insert(1);
     ok &= h[1] == check_set;
     // -------------------------------------------------------------------
     // check reverse Hessian sparsity for f_2 (x)
     CPPAD_TESTVECTOR( std::set<size_t> ) s3(1);
     s3[0].clear();
     s3[0].insert(2);
     h = f.RevSparseHes(n, s3);
     check_set.clear();
     check_set.insert(0);
     ok &= h[0] == check_set;
     check_set.clear();
     check_set.insert(1);
     ok &= h[1] == check_set;
     // -------------------------------------------------------------------
     return ok;
}

Input File: example/atomic/eigen_mat_mul.cpp
4.4.7.2.16.1: Atomic Eigen Matrix Multiply Class

4.4.7.2.16.1.a: See Also
4.4.7.2.19.1: atomic_mat_mul.hpp

4.4.7.2.16.1.b: Purpose
Construct an atomic operation that computes the matrix product, @(@ R = A \times B @)@ for any positive integers @(@ r @)@, @(@ m @)@, @(@ c @)@, and any @(@ A \in \B{R}^{r \times m} @)@, @(@ B \in \B{R}^{m \times c} @)@.

4.4.7.2.16.1.c: Matrix Dimensions
This example puts the matrix dimensions in the atomic function arguments, instead of the 4.4.7.2.1: constructor , so that they can be different for different calls to the atomic function (the packed argument layout is sketched below). These dimensions are:
nr_left: number of rows in the left matrix; i.e., @(@ r @)@
n_middle: number of columns in the left matrix and rows in the right matrix; i.e., @(@ m @)@
nc_right: number of columns in the right matrix; i.e., @(@ c @)@
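As a sketch of this encoding (see op below): each call passes a single argument vector of length @(@ 3 + (r + c) m @)@ with the layout @[@ x = ( r \W{,} m \W{,} c \W{,} \R{vec} (A) \W{,} \R{vec} (B) ) @]@ where @(@ \R{vec} @)@ denotes the row-major entry vector of a matrix, and receives back the @(@ r c @)@ row-major entries of the result @(@ R @)@.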

4.4.7.2.16.1.d: Theory

4.4.7.2.16.1.d.a: Forward
For @(@ k = 0 , \ldots @)@, the k-th order Taylor coefficient @(@ R_k @)@ is given by @[@ R_k = \sum_{\ell = 0}^{k} A_\ell B_{k-\ell} @]@
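For example, the first two coefficients are @[@ R_0 = A_0 B_0 \W{,} R_1 = A_0 B_1 + A_1 B_0 @]@ i.e., order zero is the matrix product itself and order one is the product rule.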

4.4.7.2.16.1.d.b: Product of Two Matrices
Suppose @(@ \bar{E} @)@ is the derivative of the scalar value function @(@ s(E) @)@ with respect to @(@ E @)@; i.e., @[@ \bar{E}_{i,j} = \frac{ \partial s } { \partial E_{i,j} } @]@ Also suppose that @(@ t @)@ is a scalar valued argument and @[@ E(t) = C(t) D(t) @]@ It follows that @[@ E'(t) = C'(t) D(t) + C(t) D'(t) @]@ @[@ (s \circ E)'(t) = \R{tr} [ \bar{E}^\R{T} E'(t) ] @]@ @[@ = \R{tr} [ \bar{E}^\R{T} C'(t) D(t) ] + \R{tr} [ \bar{E}^\R{T} C(t) D'(t) ] @]@ @[@ = \R{tr} [ D(t) \bar{E}^\R{T} C'(t) ] + \R{tr} [ \bar{E}^\R{T} C(t) D'(t) ] @]@ @[@ \bar{C} = \bar{E} D^\R{T} \W{,} \bar{D} = C^\R{T} \bar{E} @]@

4.4.7.2.16.1.d.c: Reverse
Reverse mode eliminates @(@ R_k @)@ as follows: for @(@ \ell = 0, \ldots , k-1 @)@, @[@ \bar{A}_\ell = \bar{A}_\ell + \bar{R}_k B_{k-\ell}^\R{T} @]@ @[@ \bar{B}_{k-\ell} = \bar{B}_{k-\ell} + A_\ell^\R{T} \bar{R}_k @]@

4.4.7.2.16.1.e: Start Class Definition
# include <cppad/cppad.hpp>
# include <Eigen/Core>

4.4.7.2.16.1.f: Public

4.4.7.2.16.1.f.a: Types
namespace { // BEGIN_EMPTY_NAMESPACE

template <class Base>
class atomic_eigen_mat_mul : public CppAD::atomic_base<Base> {
public:
     // -----------------------------------------------------------
     // type of elements during calculation of derivatives
     typedef Base              scalar;
     // type of elements during taping
     typedef CppAD::AD<scalar> ad_scalar;
     // type of matrix during calculation of derivatives
     typedef Eigen::Matrix<
          scalar, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>     matrix;
     // type of matrix during taping
     typedef Eigen::Matrix<
          ad_scalar, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor > ad_matrix;

4.4.7.2.16.1.f.b: Constructor
     // constructor
     atomic_eigen_mat_mul(void) : CppAD::atomic_base<Base>(
          "atom_eigen_mat_mul"                             ,
          CppAD::atomic_base<Base>::set_sparsity_enum
     )
     { }

4.4.7.2.16.1.f.c: op
     // use atomic operation to multiply two AD matrices
     ad_matrix op(
          const ad_matrix&              left    ,
          const ad_matrix&              right   )
     {     size_t  nr_left   = size_t( left.rows() );
          size_t  n_middle  = size_t( left.cols() );
          size_t  nc_right  = size_t( right.cols() );
          assert( n_middle  == size_t( right.rows() )  );
          size_t  nx      = 3 + (nr_left + nc_right) * n_middle;
          size_t  ny      = nr_left * nc_right;
          size_t n_left   = nr_left * n_middle;
          size_t n_right  = n_middle * nc_right;
          size_t n_result = nr_left * nc_right;
          //
          assert( 3 + n_left + n_right == nx );
          assert( n_result == ny );
          // -----------------------------------------------------------------
          // packed version of left and right
          CPPAD_TESTVECTOR(ad_scalar) packed_arg(nx);
          //
          packed_arg[0] = ad_scalar( nr_left );
          packed_arg[1] = ad_scalar( n_middle );
          packed_arg[2] = ad_scalar( nc_right );
          for(size_t i = 0; i < n_left; i++)
               packed_arg[3 + i] = left.data()[i];
          for(size_t i = 0; i < n_right; i++)
               packed_arg[ 3 + n_left + i ] = right.data()[i];
          // ------------------------------------------------------------------
          // Packed version of result = left * right.
          // This is an atomic_base function call that CppAD uses
          // to store the atomic operation on the tape.
          CPPAD_TESTVECTOR(ad_scalar) packed_result(ny);
          (*this)(packed_arg, packed_result);
          // ------------------------------------------------------------------
          // unpack result matrix
          ad_matrix result(nr_left, nc_right);
          for(size_t i = 0; i < n_result; i++)
               result.data()[i] = packed_result[ i ];
          //
          return result;
     }

4.4.7.2.16.1.g: Private

4.4.7.2.16.1.g.a: Variables
private:
     // -------------------------------------------------------------
     // one forward mode vector of matrices for left, right, and result
     CppAD::vector<matrix> f_left_, f_right_, f_result_;
     // one reverse mode vector of matrices for left, right, and result
     CppAD::vector<matrix> r_left_, r_right_, r_result_;
     // -------------------------------------------------------------

4.4.7.2.16.1.g.b: forward
     // forward mode routine called by CppAD
     virtual bool forward(
          // lowest order Taylor coefficient we are evaluating
          size_t                          p ,
          // highest order Taylor coefficient we are evaluating
          size_t                          q ,
          // which components of x are variables
          const CppAD::vector<bool>&      vx ,
          // which components of y are variables
          CppAD::vector<bool>&            vy ,
          // tx [ j * (q+1) + k ] is x_j^k
          const CppAD::vector<scalar>&    tx ,
          // ty [ i * (q+1) + k ] is y_i^k
          CppAD::vector<scalar>&          ty
     )
     {     size_t n_order  = q + 1;
          size_t nr_left  = size_t( CppAD::Integer( tx[ 0 * n_order + 0 ] ) );
          size_t n_middle = size_t( CppAD::Integer( tx[ 1 * n_order + 0 ] ) );
          size_t nc_right = size_t( CppAD::Integer( tx[ 2 * n_order + 0 ] ) );
# ifndef NDEBUG
          size_t  nx        = 3 + (nr_left + nc_right) * n_middle;
          size_t  ny        = nr_left * nc_right;
# endif
          //
          assert( vx.size() == 0 || nx == vx.size() );
          assert( vx.size() == 0 || ny == vy.size() );
          assert( nx * n_order == tx.size() );
          assert( ny * n_order == ty.size() );
          //
          size_t n_left   = nr_left * n_middle;
          size_t n_right  = n_middle * nc_right;
          size_t n_result = nr_left * nc_right;
          assert( 3 + n_left + n_right == nx );
          assert( n_result == ny );
          //
          // -------------------------------------------------------------------
          // make sure f_left_, f_right_, and f_result_ are large enough
          assert( f_left_.size() == f_right_.size() );
          assert( f_left_.size() == f_result_.size() );
          if( f_left_.size() < n_order )
          {     f_left_.resize(n_order);
               f_right_.resize(n_order);
               f_result_.resize(n_order);
               //
               for(size_t k = 0; k < n_order; k++)
               {     f_left_[k].resize(nr_left, n_middle);
                    f_right_[k].resize(n_middle, nc_right);
                    f_result_[k].resize(nr_left, nc_right);
               }
          }
          // -------------------------------------------------------------------
          // unpack tx into f_left and f_right
          for(size_t k = 0; k < n_order; k++)
          {     // unpack left values for this order
               for(size_t i = 0; i < n_left; i++)
                    f_left_[k].data()[i] = tx[ (3 + i) * n_order + k ];
               //
               // unpack right values for this order
               for(size_t i = 0; i < n_right; i++)
                    f_right_[k].data()[i] = tx[ ( 3 + n_left + i) * n_order + k ];
          }
          // -------------------------------------------------------------------
          // result for each order
          // (we could avoid recalculating f_result_[k] for k=0,...,p-1)
          for(size_t k = 0; k < n_order; k++)
          {     // result[k] = sum_ell left[ell] * right[k-ell]
               f_result_[k] = matrix::Zero(nr_left, nc_right);
               for(size_t ell = 0; ell <= k; ell++)
                    f_result_[k] += f_left_[ell] * f_right_[k-ell];
          }
          // -------------------------------------------------------------------
          // pack result_ into ty
          for(size_t k = 0; k < n_order; k++)
          {     for(size_t i = 0; i < n_result; i++)
                    ty[ i * n_order + k ] = f_result_[k].data()[i];
          }
          // ------------------------------------------------------------------
          // check if we are computing vy
          if( vx.size() == 0 )
               return true;
          // ------------------------------------------------------------------
          // compute variable information for y; i.e., vy
          // (note that the constant zero times a variable is a constant)
          scalar zero(0.0);
          assert( n_order == 1 );
          for(size_t i = 0; i < nr_left; i++)
          {     for(size_t j = 0; j < nc_right; j++)
               {     bool var = false;
                    for(size_t ell = 0; ell < n_middle; ell++)
                    {     // left information
                         size_t index   = 3 + i * n_middle + ell;
                         bool var_left  = vx[index];
                         bool nz_left   = var_left | (f_left_[0](i, ell) != zero);
                         // right information
                         index          = 3 + n_left + ell * nc_right + j;
                         bool var_right = vx[index];
                         bool nz_right  = var_right | (f_right_[0](ell, j) != zero);
                         // effect of result
                         var |= var_left & nz_right;
                         var |= nz_left  & var_right;
                    }
                    size_t index = i * nc_right + j;
                    vy[index]    = var;
               }
          }
          return true;
     }

4.4.7.2.16.1.g.c: reverse
     // reverse mode routine called by CppAD
     virtual bool reverse(
          // highest order Taylor coefficient that we are computing derivative of
          size_t                     q ,
          // forward mode Taylor coefficients for x variables
          const CppAD::vector<scalar>&     tx ,
          // forward mode Taylor coefficients for y variables
          const CppAD::vector<scalar>&     ty ,
          // upon return, derivative of G[ F[ {x_j^k} ] ] w.r.t {x_j^k}
          CppAD::vector<scalar>&           px ,
          // derivative of G[ {y_i^k} ] w.r.t. {y_i^k}
          const CppAD::vector<scalar>&     py
     )
     {     size_t n_order  = q + 1;
          size_t nr_left  = size_t( CppAD::Integer( tx[ 0 * n_order + 0 ] ) );
          size_t n_middle = size_t( CppAD::Integer( tx[ 1 * n_order + 0 ] ) );
          size_t nc_right = size_t( CppAD::Integer( tx[ 2 * n_order + 0 ] ) );
# ifndef NDEBUG
          size_t  nx        = 3 + (nr_left + nc_right) * n_middle;
          size_t  ny        = nr_left * nc_right;
# endif
          //
          assert( nx * n_order == tx.size() );
          assert( ny * n_order == ty.size() );
          assert( px.size() == tx.size() );
          assert( py.size() == ty.size() );
          //
          size_t n_left   = nr_left * n_middle;
          size_t n_right  = n_middle * nc_right;
          size_t n_result = nr_left * nc_right;
          assert( 3 + n_left + n_right == nx );
          assert( n_result == ny );
          // -------------------------------------------------------------------
          // make sure f_left_, f_right_ are large enough
          assert( f_left_.size() == f_right_.size() );
          assert( f_left_.size() == f_result_.size() );
          // must have previously run forward with order >= n_order
          assert( f_left_.size() >= n_order );
          // -------------------------------------------------------------------
          // make sure r_left_, r_right_, and r_result_ are large enough
          assert( r_left_.size() == r_right_.size() );
          assert( r_left_.size() == r_result_.size() );
          if( r_left_.size() < n_order )
          {     r_left_.resize(n_order);
               r_right_.resize(n_order);
               r_result_.resize(n_order);
               //
               for(size_t k = 0; k < n_order; k++)
               {     r_left_[k].resize(nr_left, n_middle);
                    r_right_[k].resize(n_middle, nc_right);
                    r_result_[k].resize(nr_left, nc_right);
               }
          }
          // -------------------------------------------------------------------
          // unpack tx into f_left and f_right
          for(size_t k = 0; k < n_order; k++)
          {     // unpack left values for this order
               for(size_t i = 0; i < n_left; i++)
                    f_left_[k].data()[i] = tx[ (3 + i) * n_order + k ];
               //
               // unpack right values for this order
               for(size_t i = 0; i < n_right; i++)
                    f_right_[k].data()[i] = tx[ (3 + n_left + i) * n_order + k ];
          }
          // -------------------------------------------------------------------
          // unpack py into r_result_
          for(size_t k = 0; k < n_order; k++)
          {     for(size_t i = 0; i < n_result; i++)
                    r_result_[k].data()[i] = py[ i * n_order + k ];
          }
          // -------------------------------------------------------------------
          // initialize r_left_ and r_right_ as zero
          for(size_t k = 0; k < n_order; k++)
          {     r_left_[k]   = matrix::Zero(nr_left, n_middle);
               r_right_[k]  = matrix::Zero(n_middle, nc_right);
          }
          // -------------------------------------------------------------------
          // matrix reverse mode calculation
          for(size_t k1 = n_order; k1 > 0; k1--)
          {     size_t k = k1 - 1;
               for(size_t ell = 0; ell <= k; ell++)
               {     // nr x nm       = nr x nc      * nc * nm
                    r_left_[ell]    += r_result_[k] * f_right_[k-ell].transpose();
                    // nm x nc       = nm x nr * nr * nc
                    r_right_[k-ell] += f_left_[ell].transpose() * r_result_[k];
               }
          }
          // -------------------------------------------------------------------
          // pack r_left and r_right into px
          for(size_t k = 0; k < n_order; k++)
          {     // dimensions are integer constants
               px[ 0 * n_order + k ] = 0.0;
               px[ 1 * n_order + k ] = 0.0;
               px[ 2 * n_order + k ] = 0.0;
               //
               // pack left values for this order
               for(size_t i = 0; i < n_left; i++)
                    px[ (3 + i) * n_order + k ] = r_left_[k].data()[i];
               //
               // pack right values for this order
               for(size_t i = 0; i < n_right; i++)
                    px[ (3 + i + n_left) * n_order + k] = r_right_[k].data()[i];
          }
          //
          return true;
     }

4.4.7.2.16.1.g.d: for_sparse_jac
     // forward Jacobian sparsity routine called by CppAD
     virtual bool for_sparse_jac(
          // number of columns in the matrix R
          size_t                                       q ,
          // sparsity pattern for the matrix R
          const CppAD::vector< std::set<size_t> >&     r ,
          // sparsity pattern for the matrix S = f'(x) * R
          CppAD::vector< std::set<size_t> >&           s ,
          const CppAD::vector<Base>&                   x )
     {
          size_t nr_left  = size_t( CppAD::Integer( x[0] ) );
          size_t n_middle = size_t( CppAD::Integer( x[1] ) );
          size_t nc_right = size_t( CppAD::Integer( x[2] ) );
# ifndef NDEBUG
          size_t  nx        = 3 + (nr_left + nc_right) * n_middle;
          size_t  ny        = nr_left * nc_right;
# endif
          //
          assert( nx == r.size() );
          assert( ny == s.size() );
          //
          size_t n_left = nr_left * n_middle;
          for(size_t i = 0; i < nr_left; i++)
          {     for(size_t j = 0; j < nc_right; j++)
               {     // pack index for entry (i, j) in result
                    size_t i_result = i * nc_right + j;
                    s[i_result].clear();
                    for(size_t ell = 0; ell < n_middle; ell++)
                    {     // pack index for entry (i, ell) in left
                         size_t i_left  = 3 + i * n_middle + ell;
                         // pack index for entry (ell, j) in right
                         size_t i_right = 3 + n_left + ell * nc_right + j;
                          // check if the result of this product is always zero
                          // note that x is nan for components that are variables
                          bool zero = x[i_left] == Base(0) || x[i_right] == Base(0);
                         if( ! zero )
                         {     s[i_result] =
                                   CppAD::set_union(s[i_result], r[i_left] );
                              s[i_result] =
                                   CppAD::set_union(s[i_result], r[i_right] );
                         }
                    }
               }
          }
          return true;
     }

4.4.7.2.16.1.g.e: rev_sparse_jac
     // reverse Jacobian sparsity routine called by CppAD
     virtual bool rev_sparse_jac(
          // number of columns in the matrix R^T
          size_t                                      q  ,
          // sparsity pattern for the matrix R^T
          const CppAD::vector< std::set<size_t> >&    rt ,
          // sparsity pattern for the matrix S^T = f'(x)^T * R^T
          CppAD::vector< std::set<size_t> >&          st ,
          const CppAD::vector<Base>&                   x )
     {
          size_t nr_left  = size_t( CppAD::Integer( x[0] ) );
          size_t n_middle = size_t( CppAD::Integer( x[1] ) );
          size_t nc_right = size_t( CppAD::Integer( x[2] ) );
          size_t  nx        = 3 + (nr_left + nc_right) * n_middle;
# ifndef NDEBUG
          size_t  ny        = nr_left * nc_right;
# endif
          //
          assert( nx == st.size() );
          assert( ny == rt.size() );
          //
          // initialize S^T as empty
          for(size_t i = 0; i < nx; i++)
               st[i].clear();

          // sparsity for S(x)^T = f'(x)^T * R^T
          size_t n_left = nr_left * n_middle;
          for(size_t i = 0; i < nr_left; i++)
          {     for(size_t j = 0; j < nc_right; j++)
               {     // pack index for entry (i, j) in result
                    size_t i_result = i * nc_right + j;
                    for(size_t ell = 0; ell < n_middle; ell++)
                    {     // pack index for entry (i, ell) in left
                         size_t i_left  = 3 + i * n_middle + ell;
                         // pack index for entry (ell, j) in right
                         size_t i_right = 3 + n_left + ell * nc_right + j;
                         //
                         st[i_left]  = CppAD::set_union(st[i_left],  rt[i_result]);
                         st[i_right] = CppAD::set_union(st[i_right], rt[i_result]);
                    }
               }
          }
          return true;
     }

4.4.7.2.16.1.g.f: for_sparse_hes
     virtual bool for_sparse_hes(
          // which components of x are variables for this call
          const CppAD::vector<bool>&                   vx,
          // sparsity pattern for the diagonal of R
          const CppAD::vector<bool>&                   r ,
          // sparsity pattern for the vector S
          const CppAD::vector<bool>&                   s ,
          // sparsity pattern for the Hessian H(x)
          CppAD::vector< std::set<size_t> >&           h ,
          const CppAD::vector<Base>&                   x )
     {
          size_t nr_left  = size_t( CppAD::Integer( x[0] ) );
          size_t n_middle = size_t( CppAD::Integer( x[1] ) );
          size_t nc_right = size_t( CppAD::Integer( x[2] ) );
          size_t  nx        = 3 + (nr_left + nc_right) * n_middle;
# ifndef NDEBUG
          size_t  ny        = nr_left * nc_right;
# endif
          //
          assert( vx.size() == nx );
          assert( r.size()  == nx );
          assert( s.size()  == ny );
          assert( h.size()  == nx );
          //
          // initialize h as empty
          for(size_t i = 0; i < nx; i++)
               h[i].clear();
          //
          size_t n_left = nr_left * n_middle;
          for(size_t i = 0; i < nr_left; i++)
          {     for(size_t j = 0; j < nc_right; j++)
               {     // pack index for entry (i, j) in result
                    size_t i_result = i * nc_right + j;
                    if( s[i_result] )
                    {     for(size_t ell = 0; ell < n_middle; ell++)
                         {     // pack index for entry (i, ell) in left
                              size_t i_left  = 3 + i * n_middle + ell;
                              // pack index for entry (ell, j) in right
                              size_t i_right = 3 + n_left + ell * nc_right + j;
                              if( r[i_left] & r[i_right] )
                              {     h[i_left].insert(i_right);
                                   h[i_right].insert(i_left);
                              }
                         }
                    }
               }
          }
          return true;
     }

4.4.7.2.16.1.g.g: rev_sparse_hes
     // reverse Hessian sparsity routine called by CppAD
     virtual bool rev_sparse_hes(
          // which components of x are variables for this call
          const CppAD::vector<bool>&                   vx,
          // sparsity pattern for S(x) = g'[f(x)]
          const CppAD::vector<bool>&                   s ,
          // sparsity pattern for d/dx g[f(x)] = S(x) * f'(x)
          CppAD::vector<bool>&                         t ,
          // number of columns in R, U(x), and V(x)
          size_t                                       q ,
          // sparsity pattern for R
          const CppAD::vector< std::set<size_t> >&     r ,
          // sparsity pattern for U(x) = g^{(2)} [ f(x) ] * f'(x) * R
          const CppAD::vector< std::set<size_t> >&     u ,
          // sparsity pattern for
          // V(x) = f'(x)^T * U(x) + sum_{i=0}^{m-1} S_i(x) f_i^{(2)} (x) * R
          CppAD::vector< std::set<size_t> >&           v ,
          // parameters as integers
          const CppAD::vector<Base>&                   x )
     {
          size_t nr_left  = size_t( CppAD::Integer( x[0] ) );
          size_t n_middle = size_t( CppAD::Integer( x[1] ) );
          size_t nc_right = size_t( CppAD::Integer( x[2] ) );
          size_t  nx        = 3 + (nr_left + nc_right) * n_middle;
# ifndef NDEBUG
          size_t  ny        = nr_left * nc_right;
# endif
          //
          assert( vx.size() == nx );
          assert( s.size()  == ny );
          assert( t.size()  == nx );
          assert( r.size()  == nx );
          assert( v.size()  == nx );
          //
          // initialize return sparsity patterns as false
          for(size_t j = 0; j < nx; j++)
          {     t[j] = false;
               v[j].clear();
          }
          //
          size_t n_left = nr_left * n_middle;
          for(size_t i = 0; i < nr_left; i++)
          {     for(size_t j = 0; j < nc_right; j++)
               {     // pack index for entry (i, j) in result
                    size_t i_result = i * nc_right + j;
                    for(size_t ell = 0; ell < n_middle; ell++)
                    {     // pack index for entry (i, ell) in left
                         size_t i_left  = 3 + i * n_middle + ell;
                         // pack index for entry (ell, j) in right
                         size_t i_right = 3 + n_left + ell * nc_right + j;
                         //
                         // back propagate T(x) = S(x) * f'(x).
                         t[i_left]  |= bool( s[i_result] );
                         t[i_right] |= bool( s[i_result] );
                         //
                         // V(x) = f'(x)^T * U(x) +  sum_i S_i(x) * f_i''(x) * R
                         // U(x)   = g''[ f(x) ] * f'(x) * R
                         // S_i(x) = g_i'[ f(x) ]
                         //
                         // back propagate f'(x)^T * U(x)
                         v[i_left]  = CppAD::set_union(v[i_left],  u[i_result] );
                         v[i_right] = CppAD::set_union(v[i_right], u[i_result] );
                         //
                         // back propagate S_i(x) * f_i''(x) * R
                         // (here is where we use vx to check for cross terms)
                         if( s[i_result] & vx[i_left] & vx[i_right] )
                         {     v[i_left]  = CppAD::set_union(v[i_left],  r[i_right] );
                              v[i_right] = CppAD::set_union(v[i_right], r[i_left]  );
                         }
                    }
               }
          }
          return true;
     }

4.4.7.2.16.1.h: End Class Definition

}; // End of atomic_eigen_mat_mul class

}  // END_EMPTY_NAMESPACE

Input File: cppad/example/eigen_mat_mul.hpp
4.4.7.2.17: Atomic Eigen Matrix Inverse: Example and Test

4.4.7.2.17.a: Description
The 5: ADFun function object f for this example is @[@ f(x) = \left( \begin{array}{cc} x_0 & 0 \\ 0 & x_1 \end{array} \right)^{-1} \left( \begin{array}{c} 0 \\ x_2 \end{array} \right) = \left( \begin{array}{c} 0 \\ x_2 / x_1 \end{array} \right) @]@
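
The derivative values checked by the test below follow directly from this closed form; writing @(@ f_1 (x) = x_2 / x_1 @)@ for the second component, @[@ \D{f_1}{x_1} = - \frac{x_2}{x_1^2} \W{,} \D{f_1}{x_2} = \frac{1}{x_1} \W{,} \Dpow{2}{x_1} f_1 = \frac{2 x_2}{x_1^3} @]@ and the second order forward mode result is one half of the second derivative.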

4.4.7.2.17.b: Class Definition
This example uses the file 4.4.7.2.17.1: atomic_eigen_mat_inv.hpp which defines matrix inversion as an 4.4.7.2: atomic_base operation; it also uses the matrix multiply class defined in atomic_eigen_mat_mul.hpp .

4.4.7.2.17.c: Use Atomic Function
# include <cppad/cppad.hpp>
# include <cppad/example/eigen_mat_inv.hpp>
# include <cppad/example/eigen_mat_mul.hpp>


bool eigen_mat_inv(void)
{
     typedef double                                            scalar;
     typedef CppAD::AD<scalar>                                 ad_scalar;
     typedef typename atomic_eigen_mat_inv<scalar>::ad_matrix  ad_matrix;
     //
     bool ok    = true;
     scalar eps = 10. * std::numeric_limits<scalar>::epsilon();
     using CppAD::NearEqual;
     //

4.4.7.2.17.c.a: Constructor
     // -------------------------------------------------------------------
     // object that multiplies matrices
     atomic_eigen_mat_mul<scalar> mat_mul;
     // -------------------------------------------------------------------
     // object that computes inverse of a square matrix
     atomic_eigen_mat_inv<scalar> mat_inv;
     // -------------------------------------------------------------------
     // declare independent variable vector x
     size_t n = 3;
     CPPAD_TESTVECTOR(ad_scalar) ad_x(n);
     for(size_t j = 0; j < n; j++)
          ad_x[j] = ad_scalar(j + 1);
     CppAD::Independent(ad_x);
     // -------------------------------------------------------------------
     // left = [ x[0]  0    ]
     //        [ 0     x[1] ]
     size_t nr_left  = 2;
     ad_matrix ad_left(nr_left, nr_left);
     ad_left(0, 0) = ad_x[0];
     ad_left(0, 1) = ad_scalar(0.0);
     ad_left(1, 0) = ad_scalar(0.0);
     ad_left(1, 1) = ad_x[1];
     // -------------------------------------------------------------------
     // right = [ 0 , x[2] ]^T
     size_t nc_right = 1;
     ad_matrix ad_right(nr_left, nc_right);
     ad_right(0, 0) = ad_scalar(0.0);
     ad_right(1, 0) = ad_x[2];
     // -------------------------------------------------------------------
     // use atomic operation to compute left^{-1}
     ad_matrix ad_left_inv = mat_inv.op(ad_left);
     // use atomic operation to multiply left^{-1} * right
     ad_matrix ad_result   = mat_mul.op(ad_left_inv, ad_right);
     // -------------------------------------------------------------------
     // declare the dependent variable vector y
     size_t m = 2;
     CPPAD_TESTVECTOR(ad_scalar) ad_y(2);
     for(size_t i = 0; i < m; i++)
          ad_y[i] = ad_result(i, 0);
     CppAD::ADFun<scalar> f(ad_x, ad_y);
     // -------------------------------------------------------------------
     // check zero order forward mode
     CPPAD_TESTVECTOR(scalar) x(n), y(m);
     for(size_t i = 0; i < n; i++)
          x[i] = scalar(i + 2);
     y   = f.Forward(0, x);
     ok &= NearEqual(y[0], 0.0,          eps, eps);
     ok &= NearEqual(y[1], x[2] / x[1],  eps, eps);
     // -------------------------------------------------------------------
     // check first order forward mode
     CPPAD_TESTVECTOR(scalar) x1(n), y1(m);
     x1[0] = 1.0;
     x1[1] = 0.0;
     x1[2] = 0.0;
     y1    = f.Forward(1, x1);
     ok   &= NearEqual(y1[0], 0.0,        eps, eps);
     ok   &= NearEqual(y1[1], 0.0,        eps, eps);
     x1[0] = 0.0;
     x1[1] = 0.0;
     x1[2] = 1.0;
     y1    = f.Forward(1, x1);
     ok   &= NearEqual(y1[0], 0.0,        eps, eps);
     ok   &= NearEqual(y1[1], 1.0 / x[1], eps, eps);
     x1[0] = 0.0;
     x1[1] = 1.0;
     x1[2] = 0.0;
     y1    = f.Forward(1, x1);
     ok   &= NearEqual(y1[0], 0.0,                  eps, eps);
     ok   &= NearEqual(y1[1], - x[2] / (x[1]*x[1]), eps, eps);
     // -------------------------------------------------------------------
     // check second order forward mode
     CPPAD_TESTVECTOR(scalar) x2(n), y2(m);
     x2[0] = 0.0;
     x2[1] = 0.0;
     x2[2] = 0.0;
     scalar  f1_x1_x1 = 2.0 * x[2] / (x[1] * x[1] * x[1] );
     y2    = f.Forward(2, x2);
     ok   &= NearEqual(y2[0], 0.0,            eps, eps);
     ok   &= NearEqual(y2[1], f1_x1_x1 / 2.0, eps, eps);
     // -------------------------------------------------------------------
     // check first order reverse
     CPPAD_TESTVECTOR(scalar) w(m), d1w(n);
     w[0] = 1.0;
     w[1] = 0.0;
     d1w  = f.Reverse(1, w);
     ok  &= NearEqual(d1w[0], 0.0, eps, eps);
     ok  &= NearEqual(d1w[1], 0.0, eps, eps);
     ok  &= NearEqual(d1w[2], 0.0, eps, eps);
     w[0] = 0.0;
     w[1] = 1.0;
     d1w  = f.Reverse(1, w);
     ok  &= NearEqual(d1w[0], 0.0,                  eps, eps);
     ok  &= NearEqual(d1w[1], - x[2] / (x[1]*x[1]), eps, eps);
     ok  &= NearEqual(d1w[2], 1.0 / x[1],           eps, eps);
     // -------------------------------------------------------------------
     // check second order reverse
     CPPAD_TESTVECTOR(scalar) d2w(2 * n);
     d2w  = f.Reverse(2, w);
     // partial f_1 w.r.t x_0
     ok  &= NearEqual(d2w[0 * 2 + 0], 0.0,                  eps, eps);
     // partial f_1 w.r.t x_1
     ok  &= NearEqual(d2w[1 * 2 + 0], - x[2] / (x[1]*x[1]), eps, eps);
     // partial f_1 w.r.t x_2
     ok  &= NearEqual(d2w[2 * 2 + 0], 1.0 / x[1],           eps, eps);
     // partial f_1 w.r.t x_1, x_0
     ok  &= NearEqual(d2w[0 * 2 + 1], 0.0,                  eps, eps);
     // partial f_1 w.r.t x_1, x_1
     ok  &= NearEqual(d2w[1 * 2 + 1], f1_x1_x1,             eps, eps);
     // partial f_1 w.r.t x_1, x_2
     ok  &= NearEqual(d2w[2 * 2 + 1], - 1.0 / (x[1]*x[1]),  eps, eps);
     // -------------------------------------------------------------------
     return ok;
}

Input File: example/atomic/eigen_mat_inv.cpp
4.4.7.2.17.1: Atomic Eigen Matrix Inversion Class

4.4.7.2.17.1.a: Purpose
Construct an atomic operation that computes the matrix inverse @(@ R = A^{-1} @)@ for any positive integer @(@ p @)@ and invertible matrix @(@ A \in \B{R}^{p \times p} @)@.

4.4.7.2.17.1.b: Matrix Dimensions
This example puts the matrix dimension @(@ p @)@ in the atomic function arguments, instead of the 4.4.7.2.1: constructor , so it can be different for different calls to the atomic function.
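
For example (the sizes here are for illustration only), inverting a @(@ 2 \times 2 @)@ matrix corresponds to an atomic call with @(@ nx = 1 + 2 \times 2 = 5 @)@ argument components, the dimension @(@ p = 2 @)@ followed by the four matrix entries in row major order, and @(@ ny = 4 @)@ result components; see the routine op below.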

4.4.7.2.17.1.c: Theory

4.4.7.2.17.1.c.a: Forward
The zero order forward mode Taylor coefficient is given by @[@ R_0 = A_0^{-1} @]@ For @(@ k = 1 , \ldots @)@, the k-th order Taylor coefficient of @(@ A R @)@ is given by @[@ 0 = \sum_{\ell=0}^k A_\ell R_{k-\ell} @]@ Solving for @(@ R_k @)@ in terms of the coefficients for @(@ A @)@ and the lower order coefficients for @(@ R @)@ we have @[@ R_k = - R_0 \left( \sum_{\ell=1}^k A_\ell R_{k-\ell} \right) @]@ Furthermore, once we have @(@ R_k @)@ we can compute the sum using @[@ A_0 R_k = - \left( \sum_{\ell=1}^k A_\ell R_{k-\ell} \right) @]@
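
For example, the first order coefficient is @[@ R_1 = - R_0 A_1 R_0 @]@ which is the matrix analogue of the scalar rule @(@ \partial_t \, a(t)^{-1} = - a^{-1} \dot{a} a^{-1} @)@.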

4.4.7.2.17.1.c.b: Product of Three Matrices
Suppose @(@ \bar{E} @)@ is the derivative of the scalar value function @(@ s(E) @)@ with respect to @(@ E @)@; i.e., @[@ \bar{E}_{i,j} = \frac{ \partial s } { \partial E_{i,j} } @]@ Also suppose that @(@ t @)@ is a scalar valued argument and @[@ E(t) = B(t) C(t) D(t) @]@ It follows that @[@ E'(t) = B'(t) C(t) D(t) + B(t) C'(t) D(t) + B(t) C(t) D'(t) @]@ @[@ (s \circ E)'(t) = \R{tr} [ \bar{E}^\R{T} E'(t) ] @]@ @[@ = \R{tr} [ \bar{E}^\R{T} B'(t) C(t) D(t) ] + \R{tr} [ \bar{E}^\R{T} B(t) C'(t) D(t) ] + \R{tr} [ \bar{E}^\R{T} B(t) C(t) D'(t) ] @]@ @[@ = \R{tr} [ B(t) D(t) \bar{E}^\R{T} B'(t) ] + \R{tr} [ D(t) \bar{E}^\R{T} B(t) C'(t) ] + \R{tr} [ \bar{E}^\R{T} B(t) C(t) D'(t) ] @]@ @[@ \bar{B} = \bar{E} (C D)^\R{T} \W{,} \bar{C} = B^\R{T} \bar{E} D^\R{T} \W{,} \bar{D} = (B C)^\R{T} \bar{E} @]@
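
In particular, taking @(@ D(t) @)@ equal to the identity gives the two matrix case @(@ E = B C @)@ with @[@ \bar{B} = \bar{E} C^\R{T} \W{,} \bar{C} = B^\R{T} \bar{E} @]@ which is exactly the update used by the reverse routine in the matrix multiply class above.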

4.4.7.2.17.1.c.c: Reverse
For @(@ k > 0 @)@, reverse mode eliminates @(@ R_k @)@ and expresses the function values @(@ s @)@ in terms of the coefficients of @(@ A @)@ and the lower order coefficients of @(@ R @)@. The effect on @(@ \bar{R}_0 @)@ (of eliminating @(@ R_k @)@) is @[@ \bar{R}_0 = \bar{R}_0 - \bar{R}_k \left( \sum_{\ell=1}^k A_\ell R_{k-\ell} \right)^\R{T} = \bar{R}_0 + \bar{R}_k ( A_0 R_k )^\R{T} @]@ For @(@ \ell = 1 , \ldots , k @)@, the effect on @(@ \bar{R}_{k-\ell} @)@ and @(@ A_\ell @)@ (of eliminating @(@ R_k @)@) is @[@ \bar{A}_\ell = \bar{A}_\ell - R_0^\R{T} \bar{R}_k R_{k-\ell}^\R{T} @]@ @[@ \bar{R}_{k-\ell} = \bar{R}_{k-\ell} - ( R_0 A_\ell )^\R{T} \bar{R}_k @]@ We note that @[@ R_0 '(t) A_0 (t) + R_0 (t) A_0 '(t) = 0 @]@ @[@ R_0 '(t) = - R_0 (t) A_0 '(t) R_0 (t) @]@ The reverse mode formula that eliminates @(@ R_0 @)@ is @[@ \bar{A}_0 = \bar{A}_0 - R_0^\R{T} \bar{R}_0 R_0^\R{T} @]@
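
Note that the last formula also follows from the 4.4.7.2.17.1.c.b: product of three matrices result: applying it to @(@ R_0 '(t) = - R_0 A_0 '(t) R_0 @)@ with @(@ B = - R_0 @)@, @(@ C = A_0 @)@, and @(@ D = R_0 @)@, the middle factor rule @(@ \bar{C} = B^\R{T} \bar{E} D^\R{T} @)@ yields @(@ - R_0^\R{T} \bar{R}_0 R_0^\R{T} @)@ as the update to @(@ \bar{A}_0 @)@.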

4.4.7.2.17.1.d: Start Class Definition
# include <cppad/cppad.hpp>
# include <Eigen/Core>
# include <Eigen/LU>


4.4.7.2.17.1.e: Public

4.4.7.2.17.1.e.a: Types
namespace { // BEGIN_EMPTY_NAMESPACE

template <class Base>
class atomic_eigen_mat_inv : public CppAD::atomic_base<Base> {
public:
     // -----------------------------------------------------------
     // type of elements during calculation of derivatives
     typedef Base              scalar;
     // type of elements during taping
     typedef CppAD::AD<scalar> ad_scalar;
     // type of matrix during calculation of derivatives
     typedef Eigen::Matrix<
          scalar, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>     matrix;
     // type of matrix during taping
     typedef Eigen::Matrix<
          ad_scalar, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor > ad_matrix;

4.4.7.2.17.1.e.b: Constructor
     // constructor
     atomic_eigen_mat_inv(void) : CppAD::atomic_base<Base>(
          "atom_eigen_mat_inv"                             ,
          CppAD::atomic_base<Base>::set_sparsity_enum
     )
     { }

4.4.7.2.17.1.e.c: op
     // use atomic operation to invert an AD matrix
     ad_matrix op(const ad_matrix& arg)
     {     size_t nr = size_t( arg.rows() );
          size_t ny = nr * nr;
          size_t nx = 1 + ny;
          assert( nr == size_t( arg.cols() ) );
          // -------------------------------------------------------------------
          // packed version of arg
          CPPAD_TESTVECTOR(ad_scalar) packed_arg(nx);
          packed_arg[0] = ad_scalar( nr );
          for(size_t i = 0; i < ny; i++)
               packed_arg[1 + i] = arg.data()[i];
          // -------------------------------------------------------------------
          // packed version of result = arg^{-1}.
          // This is an atomic_base function call that CppAD uses to
          // store the atomic operation on the tape.
          CPPAD_TESTVECTOR(ad_scalar) packed_result(ny);
          (*this)(packed_arg, packed_result);
          // -------------------------------------------------------------------
          // unpack result matrix
          ad_matrix result(nr, nr);
          for(size_t i = 0; i < ny; i++)
               result.data()[i] = packed_result[i];
          return result;
     }

4.4.7.2.17.1.f: Private

4.4.7.2.17.1.f.a: Variables
private:
     // -------------------------------------------------------------
     // one forward mode vector of matrices for argument and result
     CppAD::vector<matrix> f_arg_, f_result_;
     // one reverse mode vector of matrices for argument and result
     CppAD::vector<matrix> r_arg_, r_result_;
     // -------------------------------------------------------------

4.4.7.2.17.1.f.b: forward
     // forward mode routine called by CppAD
     virtual bool forward(
          // lowest order Taylor coefficient we are evaluating
          size_t                          p ,
          // highest order Taylor coefficient we are evaluating
          size_t                          q ,
          // which components of x are variables
          const CppAD::vector<bool>&      vx ,
          // which components of y are variables
          CppAD::vector<bool>&            vy ,
          // tx [ j * (q+1) + k ] is x_j^k
          const CppAD::vector<scalar>&    tx ,
          // ty [ i * (q+1) + k ] is y_i^k
          CppAD::vector<scalar>&          ty
     )
     {     size_t n_order = q + 1;
          size_t nr      = size_t( CppAD::Integer( tx[ 0 * n_order + 0 ] ) );
          size_t ny      = nr * nr;
# ifndef NDEBUG
          size_t nx      = 1 + ny;
# endif
          assert( vx.size() == 0 || nx == vx.size() );
          assert( vx.size() == 0 || ny == vy.size() );
          assert( nx * n_order == tx.size() );
          assert( ny * n_order == ty.size() );
          //
          // -------------------------------------------------------------------
          // make sure f_arg_ and f_result_ are large enough
          assert( f_arg_.size() == f_result_.size() );
          if( f_arg_.size() < n_order )
          {     f_arg_.resize(n_order);
               f_result_.resize(n_order);
               //
               for(size_t k = 0; k < n_order; k++)
               {     f_arg_[k].resize(nr, nr);
                    f_result_[k].resize(nr, nr);
               }
          }
          // -------------------------------------------------------------------
          // unpack tx into f_arg_
          for(size_t k = 0; k < n_order; k++)
          {     // unpack arg values for this order
               for(size_t i = 0; i < ny; i++)
                    f_arg_[k].data()[i] = tx[ (1 + i) * n_order + k ];
          }
          // -------------------------------------------------------------------
          // result for each order
          // (we could avoid recalculating f_result_[k] for k=0,...,p-1)
          //
          f_result_[0] = f_arg_[0].inverse();
          for(size_t k = 1; k < n_order; k++)
          {     // initialize sum
               matrix f_sum = matrix::Zero(nr, nr);
               // compute sum
               for(size_t ell = 1; ell <= k; ell++)
                    f_sum -= f_arg_[ell] * f_result_[k-ell];
               // f_result_[k] = f_arg_[0]^{-1} * f_sum
               f_result_[k] = f_result_[0] * f_sum;
          }
          // -------------------------------------------------------------------
          // pack result_ into ty
          for(size_t k = 0; k < n_order; k++)
          {     for(size_t i = 0; i < ny; i++)
                    ty[ i * n_order + k ] = f_result_[k].data()[i];
          }
          // -------------------------------------------------------------------
          // check if we are computing vy
          if( vx.size() == 0 )
               return true;
          // ------------------------------------------------------------------
          // This is a very simple overestimate of which elements
          // of the inverse are variables (it is not efficient).
          bool var = false;
          for(size_t i = 0; i < ny; i++)
               var |= vx[1 + i];
          for(size_t i = 0; i < ny; i++)
               vy[i] = var;
          return true;
     }

4.4.7.2.17.1.f.c: reverse
     // reverse mode routine called by CppAD
     virtual bool reverse(
          // highest order Taylor coefficient that we are computing derivative of
          size_t                     q ,
          // forward mode Taylor coefficients for x variables
          const CppAD::vector<scalar>&     tx ,
          // forward mode Taylor coefficients for y variables
          const CppAD::vector<scalar>&     ty ,
          // upon return, derivative of G[ F[ {x_j^k} ] ] w.r.t {x_j^k}
          CppAD::vector<scalar>&           px ,
          // derivative of G[ {y_i^k} ] w.r.t. {y_i^k}
          const CppAD::vector<scalar>&     py
     )
     {     size_t n_order = q + 1;
          size_t nr      = size_t( CppAD::Integer( tx[ 0 * n_order + 0 ] ) );
          size_t ny      = nr * nr;
# ifndef NDEBUG
          size_t nx      = 1 + ny;
# endif
          //
          assert( nx * n_order == tx.size() );
          assert( ny * n_order == ty.size() );
          assert( px.size()    == tx.size() );
          assert( py.size()    == ty.size() );
          // -------------------------------------------------------------------
          // make sure f_arg_ is large enough
          assert( f_arg_.size() == f_result_.size() );
          // must have previously run forward with order >= n_order
          assert( f_arg_.size() >= n_order );
          // -------------------------------------------------------------------
          // make sure r_arg_, r_result_ are large enough
          assert( r_arg_.size() == r_result_.size() );
          if( r_arg_.size() < n_order )
          {     r_arg_.resize(n_order);
               r_result_.resize(n_order);
               //
               for(size_t k = 0; k < n_order; k++)
               {     r_arg_[k].resize(nr, nr);
                    r_result_[k].resize(nr, nr);
               }
          }
          // -------------------------------------------------------------------
          // unpack tx into f_arg_
          for(size_t k = 0; k < n_order; k++)
          {     // unpack arg values for this order
               for(size_t i = 0; i < ny; i++)
                    f_arg_[k].data()[i] = tx[ (1 + i) * n_order + k ];
          }
          // -------------------------------------------------------------------
          // unpack py into r_result_
          for(size_t k = 0; k < n_order; k++)
          {     for(size_t i = 0; i < ny; i++)
                    r_result_[k].data()[i] = py[ i * n_order + k ];
          }
          // -------------------------------------------------------------------
          // initialize r_arg_ as zero
          for(size_t k = 0; k < n_order; k++)
               r_arg_[k]   = matrix::Zero(nr, nr);
          // -------------------------------------------------------------------
          // matrix reverse mode calculation
          //
          for(size_t k1 = n_order; k1 > 1; k1--)
          {     size_t k = k1 - 1;
               // bar{R}_0 = bar{R}_0 + bar{R}_k (A_0 R_k)^T
               r_result_[0] +=
               r_result_[k] * f_result_[k].transpose() * f_arg_[0].transpose();
               //
               for(size_t ell = 1; ell <= k; ell++)
               {     // bar{A}_l = bar{A}_l - R_0^T bar{R}_k R_{k-l}^T
                    r_arg_[ell] -= f_result_[0].transpose()
                         * r_result_[k] * f_result_[k-ell].transpose();
                    // bar{R}_{k-l} = bar{R}_{k-l} - (R_0 A_l)^T bar{R}_k
                    r_result_[k-ell] -= f_arg_[ell].transpose()
                         * f_result_[0].transpose() * r_result_[k];
               }
          }
          r_arg_[0] -=
          f_result_[0].transpose() * r_result_[0] * f_result_[0].transpose();
          // -------------------------------------------------------------------
          // pack r_arg into px
          for(size_t k = 0; k < n_order; k++)
          {     for(size_t i = 0; i < ny; i++)
                    px[ (1 + i) * n_order + k ] = r_arg_[k].data()[i];
          }
          //
          return true;
     }

4.4.7.2.17.1.g: End Class Definition

}; // End of atomic_eigen_mat_inv class

}  // END_EMPTY_NAMESPACE

Input File: cppad/example/eigen_mat_inv.hpp
4.4.7.2.18: Atomic Eigen Cholesky Factorization: Example and Test

4.4.7.2.18.a: Description
The 5: ADFun function object f for this example is @[@ f(x) = \R{chol} \left( \begin{array}{cc} x_0 & x_1 \\ x_1 & x_2 \end{array} \right) = \frac{1}{ \sqrt{x_0} } \left( \begin{array}{cc} x_0 & 0 \\ x_1 & \sqrt{ x_0 x_2 - x_1 x_1 } \end{array} \right) @]@ where the matrix is positive definite; i.e., @(@ x_0 > 0 @)@, @(@ x_2 > 0 @)@ and @(@ x_0 x_2 - x_1 x_1 > 0 @)@.
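
Writing @(@ f_2 (x) = \sqrt{ x_2 - x_1^2 / x_0 } @)@ for the last component of @(@ f @)@, the first order derivative values checked by the test below are @[@ \D{f_2}{x_0} = \frac{x_1^2}{2 f_2 x_0^2} \W{,} \D{f_2}{x_1} = \frac{- x_1}{f_2 x_0} \W{,} \D{f_2}{x_2} = \frac{1}{2 f_2} @]@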

4.4.7.2.18.b: Contents
cholesky_theory: 4.4.7.2.18.1: AD Theory for Cholesky Factorization
atomic_eigen_cholesky.hpp: 4.4.7.2.18.2: Atomic Eigen Cholesky Factorization Class

4.4.7.2.18.c: Use Atomic Function
# include <cppad/cppad.hpp>
# include <cppad/example/eigen_cholesky.hpp>


bool eigen_cholesky(void)
{
     typedef double scalar;
     typedef typename atomic_eigen_cholesky<scalar>::ad_scalar ad_scalar;
     typedef typename atomic_eigen_cholesky<scalar>::ad_matrix ad_matrix;
     //
     bool ok    = true;
     scalar eps = 10. * std::numeric_limits<scalar>::epsilon();
     using CppAD::NearEqual;
     //

4.4.7.2.18.c.a: Constructor
     // -------------------------------------------------------------------
     // object that computes cholesky factor of a matrix
     atomic_eigen_cholesky<scalar> cholesky;
     // -------------------------------------------------------------------
     // declare independent variable vector x
     size_t n = 3;
     CPPAD_TESTVECTOR(ad_scalar) ad_x(n);
     ad_x[0] = 2.0;
     ad_x[1] = 0.5;
     ad_x[2] = 3.0;
     CppAD::Independent(ad_x);
     // -------------------------------------------------------------------
     // A = [ x[0]  x[1] ]
     //     [ x[1]  x[2] ]
     size_t nr  = 2;
     ad_matrix ad_A(nr, nr);
     ad_A(0, 0) = ad_x[0];
     ad_A(1, 0) = ad_x[1];
     ad_A(0, 1) = ad_x[1];
     ad_A(1, 1) = ad_x[2];
     // -------------------------------------------------------------------
     // use atomic operation to compute L such that A = L * L^T
     ad_matrix ad_L = cholesky.op(ad_A);
     // -------------------------------------------------------------------
     // declare the dependent variable vector y
     size_t m = 3;
     CPPAD_TESTVECTOR(ad_scalar) ad_y(m);
     ad_y[0] = ad_L(0, 0);
     ad_y[1] = ad_L(1, 0);
     ad_y[2] = ad_L(1, 1);
     CppAD::ADFun<scalar> f(ad_x, ad_y);
     // -------------------------------------------------------------------
     // check zero order forward mode
     CPPAD_TESTVECTOR(scalar) x(n), y(m);
     x[0] = 2.0;
     x[1] = 0.5;
     x[2] = 5.0;
     y   = f.Forward(0, x);
     scalar check;
     check = std::sqrt( x[0] );
     ok   &= NearEqual(y[0], check, eps, eps);
     check = x[1] / std::sqrt( x[0] );
     ok   &= NearEqual(y[1], check, eps, eps);
     check = std::sqrt( x[2] - x[1] * x[1] / x[0] );
     ok   &= NearEqual(y[2], check, eps, eps);
     // -------------------------------------------------------------------
     // check first order forward mode
     CPPAD_TESTVECTOR(scalar) x1(n), y1(m);
     //
     // partial w.r.t. x[0]
     x1[0] = 1.0;
     x1[1] = 0.0;
     x1[2] = 0.0;
     //
     y1    = f.Forward(1, x1);
     check = 1.0 / (2.0 * std::sqrt( x[0] ) );
     ok   &= NearEqual(y1[0], check, eps, eps);
     //
     check = - x[1] / (2.0 * x[0] * std::sqrt( x[0] ) );
     ok   &= NearEqual(y1[1], check, eps, eps);
     //
     check = std::sqrt( x[2] - x[1] * x[1] / x[0] );
     check = x[1] * x[1] / (x[0] * x[0] * 2.0 * check);
     ok   &= NearEqual(y1[2], check, eps, eps);
     //
     // partial w.r.t. x[1]
     x1[0] = 0.0;
     x1[1] = 1.0;
     x1[2] = 0.0;
     //
     y1    = f.Forward(1, x1);
     ok   &= NearEqual(y1[0], 0.0, eps, eps);
     //
     check = 1.0 / std::sqrt( x[0] );
     ok   &= NearEqual(y1[1], check, eps, eps);
     //
     check = std::sqrt( x[2] - x[1] * x[1] / x[0] );
     check = - 2.0 * x[1] / (2.0 * check * x[0] );
     ok   &= NearEqual(y1[2], check, eps, eps);
     //
     // partial w.r.t. x[2]
     x1[0] = 0.0;
     x1[1] = 0.0;
     x1[2] = 1.0;
     //
     y1    = f.Forward(1, x1);
     ok   &= NearEqual(y1[0], 0.0, eps, eps);
     ok   &= NearEqual(y1[1], 0.0, eps, eps);
     //
     check = std::sqrt( x[2] - x[1] * x[1] / x[0] );
     check = 1.0 / (2.0 * check);
     ok   &= NearEqual(y1[2], check, eps, eps);
     // -------------------------------------------------------------------
     // check second order forward mode
     CPPAD_TESTVECTOR(scalar) x2(n), y2(m);
     //
     // second partial w.r.t x[2]
     x2[0] = 0.0;
     x2[1] = 0.0;
     x2[2] = 0.0;
     y2    = f.Forward(2, x2);
     ok   &= NearEqual(y2[0], 0.0, eps, eps);
     ok   &= NearEqual(y2[1], 0.0, eps, eps);
     //
     check = std::sqrt( x[2] - x[1] * x[1] / x[0] );  // function value
     check = - 1.0 / ( 4.0 * check * check * check ); // second derivative
     check = 0.5 * check;                             // taylor coefficient
     ok   &= NearEqual(y2[2], check, eps, eps);
     // -------------------------------------------------------------------
     // check first order reverse mode
     CPPAD_TESTVECTOR(scalar) w(m), d1w(n);
     w[0] = 0.0;
     w[1] = 0.0;
     w[2] = 1.0;
     d1w  = f.Reverse(1, w);
     //
     // partial of f[2] w.r.t x[0]
     scalar f2    = std::sqrt( x[2] - x[1] * x[1] / x[0] );
     scalar f2_x0 = x[1] * x[1] / (2.0 * f2 * x[0] * x[0] );
     ok          &= NearEqual(d1w[0], f2_x0, eps, eps);
     //
     // partial of f[2] w.r.t x[1]
     scalar f2_x1 = - x[1] / (f2 * x[0] );
     ok          &= NearEqual(d1w[1], f2_x1, eps, eps);
     //
     // partial of f[2] w.r.t x[2]
     scalar f2_x2 = 1.0 / (2.0 * f2 );
     ok          &= NearEqual(d1w[2], f2_x2, eps, eps);
     // -------------------------------------------------------------------
     // check second order reverse mode
     CPPAD_TESTVECTOR(scalar) d2w(2 * n);
     d2w  = f.Reverse(2, w);
     //
     // check first order results
     ok &= NearEqual(d2w[0 * 2 + 0], f2_x0, eps, eps);
     ok &= NearEqual(d2w[1 * 2 + 0], f2_x1, eps, eps);
     ok &= NearEqual(d2w[2 * 2 + 0], f2_x2, eps, eps);
     //
     // check second order results
     scalar f2_x2_x0 = - 0.5 * f2_x0 / (f2 * f2 );
     ok             &= NearEqual(d2w[0 * 2 + 1], f2_x2_x0, eps, eps);
     scalar f2_x2_x1 = - 0.5 * f2_x1 / (f2 * f2 );
     ok             &= NearEqual(d2w[1 * 2 + 1], f2_x2_x1, eps, eps);
     scalar f2_x2_x2 = - 0.5 * f2_x2 / (f2 * f2 );
     ok             &= NearEqual(d2w[2 * 2 + 1], f2_x2_x2, eps, eps);
     // -------------------------------------------------------------------
     // check third order reverse mode
     CPPAD_TESTVECTOR(scalar) d3w(3 * n);
     d3w  = f.Reverse(3, w);
     //
     // check first order results
     ok &= NearEqual(d3w[0 * 3 + 0], f2_x0, eps, eps);
     ok &= NearEqual(d3w[1 * 3 + 0], f2_x1, eps, eps);
     ok &= NearEqual(d3w[2 * 3 + 0], f2_x2, eps, eps);
     //
     // check second order results
     ok             &= NearEqual(d3w[0 * 3 + 1], f2_x2_x0, eps, eps);
     ok             &= NearEqual(d3w[1 * 3 + 1], f2_x2_x1, eps, eps);
     ok             &= NearEqual(d3w[2 * 3 + 1], f2_x2_x2, eps, eps);
     // -------------------------------------------------------------------
     scalar f2_x2_x2_x0 = - 0.5 * f2_x2_x0 / (f2 * f2);
     f2_x2_x2_x0 += f2_x2 * f2_x0 / (f2 * f2 * f2);
     ok          &= NearEqual(d3w[0 * 3 + 2], 0.5 * f2_x2_x2_x0, eps, eps);
     scalar f2_x2_x2_x1 = - 0.5 * f2_x2_x1 / (f2 * f2);
     f2_x2_x2_x1 += f2_x2 * f2_x1 / (f2 * f2 * f2);
     ok          &= NearEqual(d3w[1 * 3 + 2], 0.5 * f2_x2_x2_x1, eps, eps);
     scalar f2_x2_x2_x2 = - 0.5 * f2_x2_x2 / (f2 * f2);
     f2_x2_x2_x2 += f2_x2 * f2_x2 / (f2 * f2 * f2);
     ok          &= NearEqual(d3w[2 * 3 + 2], 0.5 * f2_x2_x2_x2, eps, eps);
     return ok;
}

Input File: example/atomic/eigen_cholesky.cpp
4.4.7.2.18.1: AD Theory for Cholesky Factorization

4.4.7.2.18.1.a: Reference
See section 3.6 of Sebastian F. Walter's Ph.D. thesis, Structured Higher-Order Algorithmic Differentiation in the Forward and Reverse Mode with Application in Optimum Experimental Design, Humboldt-Universität zu Berlin, 2011.

4.4.7.2.18.1.b: Notation

4.4.7.2.18.1.b.a: Cholesky Factor
We are given a positive definite symmetric matrix @(@ A \in \B{R}^{n \times n} @)@ and a Cholesky factorization @[@ A = L L^\R{T} @]@ where @(@ L \in \B{R}^{n \times n} @)@ is lower triangular.

4.4.7.2.18.1.b.b: Taylor Coefficient
The matrix @(@ A @)@ is a function of a scalar argument @(@ t @)@. For @(@ k = 0 , \ldots , K @)@, we use @(@ A_k @)@ for the corresponding Taylor coefficients; i.e., @[@ A(t) = o( t^K ) + \sum_{k = 0}^K A_k t^k @]@ where @(@ o( t^K ) / t^K \rightarrow 0 @)@ as @(@ t \rightarrow 0 @)@. We use a similar notation for @(@ L(t) @)@.

4.4.7.2.18.1.b.c: Lower Triangular Part
For a square matrix @(@ C @)@, @(@ \R{lower} (C) @)@ is the lower triangular part of @(@ C @)@, @(@ \R{diag} (C) @)@ is the diagonal matrix with the same diagonal as @(@ C @)@ and @[@ \R{low} ( C ) = \R{lower} (C) - \frac{1}{2} \R{diag} (C) @]@
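
The following is a minimal sketch (not part of the CppAD example code) of how @(@ \R{low} (C) @)@ could be computed; the typedef matrix here is an assumption matching the Eigen row major type used by the Cholesky class below:

# include <Eigen/Core>
typedef Eigen::Matrix<
     double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> matrix;
// low(C) = lower(C) - diag(C) / 2
matrix low(const matrix& C)
{     // copy the lower triangle of C, setting the strictly upper part to zero
     matrix result = C.triangularView<Eigen::Lower>();
     // subtract one half of the diagonal
     result.diagonal() *= 0.5;
     return result;
}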

4.4.7.2.18.1.c: Forward Mode
For Taylor coefficient order @(@ k = 0 , \ldots , K @)@ the coefficients @(@ A_k \in \B{R}^{n \times n} @)@ satisfy the equation @[@ A_k = \sum_{\ell=0}^k L_\ell L_{k-\ell}^\R{T} @]@ In the case where @(@ k=0 @)@, this reduces to @[@ A_0 = L_0 L_0^\R{T} @]@ The value of @(@ L_0 @)@ can be computed using the Cholesky factorization. In the case where @(@ k > 0 @)@, @[@ A_k = L_k L_0^\R{T} + L_0 L_k^\R{T} + B_k @]@ where @[@ B_k = \sum_{\ell=1}^{k-1} L_\ell L_{k-\ell}^\R{T} @]@ Note that @(@ B_k @)@ is defined in terms of Taylor coefficients of @(@ L(t) @)@ that have order less than @(@ k @)@. We also note that @[@ L_0^{-1} ( A_k - B_k ) L_0^\R{-T} = L_0^{-1} L_k + L_k^\R{T} L_0^\R{-T} @]@ The first matrix on the right hand side is lower triangular, the second is upper triangular, and their diagonals are equal. It follows that @[@ L_0^{-1} L_k = \R{low} [ L_0^{-1} ( A_k - B_k ) L_0^\R{-T} ] @]@ @[@ L_k = L_0 \R{low} [ L_0^{-1} ( A_k - B_k ) L_0^\R{-T} ] @]@ This expresses @(@ L_k @)@ in terms of the Taylor coefficients of @(@ A(t) @)@ and the lower order coefficients of @(@ L(t) @)@.
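
For example, in the first order case @(@ B_1 = 0 @)@ and the formula above reduces to @[@ L_1 = L_0 \R{low} [ L_0^{-1} A_1 L_0^\R{-T} ] @]@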

4.4.7.2.18.1.d: Lemma 1
We use the notation @(@ \dot{C} @)@ for the derivative of a matrix valued function @(@ C(s) @)@ with respect to a scalar argument @(@ s @)@. We use the notation @(@ \bar{S} @)@ and @(@ \bar{L} @)@ for the partial derivative of a scalar valued function @(@ \bar{F}( S, L) @)@ with respect to a symmetric matrix @(@ S @)@ and a lower triangular matrix @(@ L @)@. Define the scalar valued function @[@ \hat{F}( S ) = \bar{F} [ S , \hat{L} (S) ] @]@ We use @(@ \hat{S} @)@ for the total derivative of @(@ \hat{F} @)@ with respect to @(@ S @)@. Suppose that @(@ \hat{L} ( S ) @)@ is such that @[@ \dot{L} = L_0 \R{low} ( L_0^{-1} \dot{S} L_0^\R{-T} ) @]@ for any @(@ S(s) @)@. It follows that @[@ \hat{S} = \bar{S} + \frac{1}{2} ( M + M^\R{T} ) @]@ where @[@ M = L_0^\R{-T} \R{low}( L_0^\R{T} \bar{L} )^\R{T} L_0^{-1} @]@

4.4.7.2.18.1.d.a: Proof
@[@ \partial_s \hat{F} [ S(s) , L(s) ] = \R{tr} ( \bar{S}^\R{T} \dot{S} ) + \R{tr} ( \bar{L}^\R{T} \dot{L} ) @]@ @[@ \R{tr} ( \bar{L}^\R{T} \dot{L} ) = \R{tr} [ \bar{L}^\R{T} L_0 \R{low} ( L_0^{-1} \dot{S} L_0^\R{-T} ) ] @]@ @[@ = \R{tr} [ \R{low} ( L_0^{-1} \dot{S} L_0^\R{-T} )^\R{T} L_0^\R{T} \bar{L} ] @]@ @[@ = \R{tr} [ L_0^{-1} \dot{S} L_0^\R{-T} \R{low}( L_0^\R{T} \bar{L} ) ] @]@ @[@ = \R{tr} [ L_0^\R{-T} \R{low}( L_0^\R{T} \bar{L} ) L_0^{-1} \dot{S} ] @]@ @[@ \partial_s \hat{F} [ S(s) , L(s) ] = \R{tr} ( \bar{S}^\R{T} \dot{S} ) + \R{tr} [ L_0^\R{-T} \R{low}( L_0^\R{T} \bar{L} ) L_0^{-1} \dot{S} ] @]@ We now consider the @(@ (i, j) @)@ component function, for a symmetric matrix @(@ S(s) @)@, defined by @[@ S_{k, \ell} (s) = \left\{ \begin{array}{ll} s & \R{if} \; k = i \; \R{and} \; \ell = j \\ s & \R{if} \; k = j \; \R{and} \; \ell = i \\ 0 & \R{otherwise} \end{array} \right\} @]@ For this choice, @(@ \dot{S} @)@ has ones in the @(@ (i, j) @)@ and @(@ (j, i) @)@ entries and zeros elsewhere, so the trace identity above picks out the @(@ (i, j) @)@ and @(@ (j, i) @)@ components on both sides. This shows that the formula in the lemma is correct for @(@ \hat{S}_{i,j} @)@ and @(@ \hat{S}_{j,i} @)@. This completes the proof because the component @(@ (i, j) @)@ was arbitrary.

4.4.7.2.18.1.e: Lemma 2
We use the same assumptions as in Lemma 1 except that the matrix @(@ S @)@ is lower triangular (instead of symmetric). It follows that @[@ \hat{S} = \bar{S} + \R{lower}(M) @]@ where @[@ M = L_0^\R{-T} \R{low}( L_0^\R{T} \bar{L} )^\R{T} L_0^{-1} @]@ The proof of this lemma is identical to the proof of Lemma 1 except that the component function is defined by @[@ S_{k, \ell} (s) = \left\{ \begin{array}{ll} s & \R{if} \; k = i \; \R{and} \; \ell = j \\ 0 & \R{otherwise} \end{array} \right\} @]@

4.4.7.2.18.1.f: Reverse Mode

4.4.7.2.18.1.f.a: Case k = 0
For the case @(@ k = 0 @)@, @[@ \dot{A}_0 = \dot{L}_0 L_0^\R{T} + L_0 \dot{L}_0^\R{T} @]@ @[@ L_0^{-1} \dot{A}_0 L_0^\R{-T} = L_0^{-1} \dot{L}_0 + \dot{L}_0^\R{T} L_0^\R{-T} @]@ @[@ \R{low} ( L_0^{-1} \dot{A}_0 L_0^\R{-T} ) = L_0^{-1} \dot{L}_0 @]@ @[@ \dot{L}_0 = L_0 \R{low} ( L_0^{-1} \dot{A}_0 L_0^\R{-T} ) @]@ It follows from Lemma 1 that @[@ \bar{A}_0 \stackrel{+}{=} \frac{1}{2} ( M + M^\R{T} ) @]@ where @[@ M = L_0^\R{-T} \R{low} ( L_0^\R{T} \bar{L}_0 )^\R{T} L_0^{-1} @]@ and the notation @(@ \stackrel{+}{=} @)@ means that the @(@ \bar{A}_0 @)@ on the right hand side is the partial before, and the one on the left hand side is the partial after, @(@ L_0 @)@ is removed from the scalar function dependency.

4.4.7.2.18.1.f.b: Case k > 0
In the case where @(@ k > 0 @)@, @[@ A_k = L_k L_0^\R{T} + L_0 L_k^\R{T} + B_k @]@ where @(@ B_k @)@ is defined in terms of Taylor coefficients of @(@ L(t) @)@ that have order less than @(@ k @)@. It follows that @[@ \dot{L}_k L_0^\R{T} + L_0 \dot{L}_k^\R{T} = \dot{A}_k - \dot{B}_k - \dot{L}_0 L_k^\R{T} - L_k \dot{L}_0^\R{T} @]@ @[@ L_0^{-1} \dot{L}_k + \dot{L}_k^\R{T} L_0^\R{-T} = L_0^{-1} ( \dot{A}_k - \dot{B}_k - \dot{L}_0 L_k^\R{T} - L_k \dot{L}_0^\R{T} ) L_0^\R{-T} @]@ @[@ L_0^{-1} \dot{L}_k = \R{low} [ L_0^{-1} ( \dot{A}_k - \dot{B}_k - \dot{L}_0 L_k^\R{T} - L_k \dot{L}_0^\R{T} ) L_0^\R{-T} ] @]@ @[@ \dot{L}_k = L_0 \R{low} [ L_0^{-1} ( \dot{A}_k - \dot{B}_k - \dot{L}_0 L_k^\R{T} - L_k \dot{L}_0^\R{T} ) L_0^\R{-T} ] @]@ The matrix @(@ A_k @)@ is symmetric, it follows that @[@ \bar{A}_k \stackrel{+}{=} \frac{1}{2} ( M_k + M_k^\R{T} ) @]@ where @[@ M_k = L_0^\R{-T} \R{low} ( L_0^\R{T} \bar{L}_k )^\R{T} L_0^{-1} @]@ The matrix @(@ B_k @)@ is also symmetric, hence @[@ \bar{B}_k = - \; \frac{1}{2} ( M_k + M_k^\R{T} ) @]@ We define the symmetric matrix @(@ C_k (s) @)@ by @[@ \dot{C}_k = \dot{L}_0 L_k^\R{T} + L_k \dot{L}_0^\R{T} @]@ and remove the dependency on @(@ C_k @)@ with @[@ \R{tr}( \bar{C}_k^\R{T} \dot{C}_k ) = \R{tr}( \bar{B}_k^\R{T} \dot{C}_k ) = \R{tr}( \bar{B}_k^\R{T} \dot{L}_0 L_k^\R{T} ) + \R{tr}( \bar{B}_k^\R{T} L_k \dot{L}_0^\R{T} ) @]@ @[@ = \R{tr}( L_k^\R{T} \bar{B}_k^\R{T} \dot{L}_0 ) + \R{tr}( L_k^\R{T} \bar{B}_k \dot{L}_0 ) @]@ @[@ = \R{tr}[ L_k^\R{T} ( \bar{B}_k + \bar{B}_k^\R{T} ) \dot{L}_0 ] @]@ Thus, removing @(@ C_k @)@ from the dependency results in the following update to @(@ \bar{L}_0 @)@: @[@ \bar{L}_0 \stackrel{+}{=} \R{lower} [ ( \bar{B}_k + \bar{B}_k^\R{T} ) L_k ] @]@ which is the same as @[@ \bar{L}_0 \stackrel{+}{=} 2 \; \R{lower} [ \bar{B}_k L_k ] @]@ We still need to remove @(@ B_k @)@ from the dependency. It follows from its definition that @[@ \dot{B}_k = \sum_{\ell=1}^{k-1} \dot{L}_\ell L_{k-\ell}^\R{T} + L_\ell \dot{L}_{k-\ell}^\R{T} @]@ @[@ \R{tr}( \bar{B}_k^\R{T} \dot{B}_k ) = \sum_{\ell=1}^{k-1} \R{tr}( \bar{B}_k^\R{T} \dot{L}_\ell L_{k-\ell}^\R{T} ) + \R{tr}( \bar{B}_k^\R{T} L_\ell \dot{L}_{k-\ell}^\R{T} ) @]@ @[@ = \sum_{\ell=1}^{k-1} \R{tr}( L_{k-\ell}^\R{T} \bar{B}_k^\R{T} \dot{L}_\ell ) + \sum_{\ell=1}^{k-1} \R{tr}( L_\ell^\R{T} \bar{B}_k \dot{L}_{k-\ell} ) @]@ We now use the fact that @(@ \bar{B}_k @)@ is symmetric to conclude @[@ \R{tr}( \bar{B}_k^\R{T} \dot{B}_k ) = 2 \sum_{\ell=1}^{k-1} \R{tr}( L_{k-\ell}^\R{T} \bar{B}_k^\R{T} \dot{L}_\ell ) @]@ Each of the @(@ \dot{L}_\ell @)@ matrices is lower triangular. Thus, removing @(@ B_k @)@ from the dependency results in the following update for @(@ \ell = 1 , \ldots , k-1 @)@: @[@ \bar{L}_\ell \stackrel{+}{=} 2 \; \R{lower}( \bar{B}_k L_{k-\ell} ) @]@
Input File: omh/appendix/theory/cholesky.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.4.7.2.18.2: Atomic Eigen Cholesky Factorization Class

4.4.7.2.18.2.a: Purpose
Construct an atomic operation that computes a lower triangular matrix @(@ L @)@ such that @(@ L L^\R{T} = A @)@ for any positive integer @(@ p @)@ and symmetric positive definite matrix @(@ A \in \B{R}^{p \times p} @)@.
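For example (a minimal usage sketch, not part of the class definition below; the names A and L are placeholders), the atomic operation is applied during taping as follows:

     atomic_eigen_cholesky<double> cholesky;
     typedef atomic_eigen_cholesky<double>::ad_matrix ad_matrix;
     ad_matrix A(2, 2);
     // ... set A to a symmetric positive definite matrix of AD values ...
     ad_matrix L = cholesky.op(A); // lower triangular with L * L^T = A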

4.4.7.2.18.2.b: Start Class Definition
# include <cppad/cppad.hpp>
# include <Eigen/Dense>

4.4.7.2.18.2.c: Public

4.4.7.2.18.2.c.a: Types
namespace { // BEGIN_EMPTY_NAMESPACE

template <class Base>
class atomic_eigen_cholesky : public CppAD::atomic_base<Base> {
public:
     // -----------------------------------------------------------
     // type of elements during calculation of derivatives
     typedef Base              scalar;
     // type of elements during taping
     typedef CppAD::AD<scalar> ad_scalar;
     //
     // type of matrix during calculation of derivatives
     typedef Eigen::Matrix<
          scalar, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>        matrix;
     // type of matrix during taping
     typedef Eigen::Matrix<
          ad_scalar, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor > ad_matrix;
     //
     // lower triangular scalar matrix
     typedef Eigen::TriangularView<matrix, Eigen::Lower>             lower_view;

4.4.7.2.18.2.c.b: Constructor
     // constructor
     atomic_eigen_cholesky(void) : CppAD::atomic_base<Base>(
          "atom_eigen_cholesky"                             ,
          CppAD::atomic_base<Base>::set_sparsity_enum
     )
     { }

4.4.7.2.18.2.c.c: op
     // use atomic operation to Cholesky factor an AD matrix
     ad_matrix op(const ad_matrix& arg)
     {     size_t nr = size_t( arg.rows() );
          size_t ny = ( (nr + 1 ) * nr ) / 2;
          size_t nx = 1 + ny;
          assert( nr == size_t( arg.cols() ) );
          // -------------------------------------------------------------------
          // packed version of arg
          CPPAD_TESTVECTOR(ad_scalar) packed_arg(nx);
          size_t index = 0;
          packed_arg[index++] = ad_scalar( nr );
          // lower triangle of symmetric matrix A
          for(size_t i = 0; i < nr; i++)
          {     for(size_t j = 0; j <= i; j++)
                    packed_arg[index++] = arg(i, j);
          }
          assert( index == nx );
          // -------------------------------------------------------------------
          // packed version of the result L, where L * L^T = arg.
          // This is an atomic_base function call that CppAD uses to
          // store the atomic operation on the tape.
          CPPAD_TESTVECTOR(ad_scalar) packed_result(ny);
          (*this)(packed_arg, packed_result);
          // -------------------------------------------------------------------
          // unpack result matrix L
          ad_matrix result = ad_matrix::Zero(nr, nr);
          index = 0;
          for(size_t i = 0; i < nr; i++)
          {     for(size_t j = 0; j <= i; j++)
                    result(i, j) = packed_result[index++];
          }
          return result;
     }
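For example, when @(@ nr = 2 @)@ the packed argument is @(@ ( nr , A_{0,0} , A_{1,0} , A_{1,1} ) @)@ and the packed result is @(@ ( L_{0,0} , L_{1,0} , L_{1,1} ) @)@; i.e., the lower triangles are stored in row major order, preceded (in the argument case) by the matrix size.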

4.4.7.2.18.2.d: Private

4.4.7.2.18.2.d.a: Variables
private:
     // -------------------------------------------------------------
     // one forward mode vector of matrices for argument and result
     CppAD::vector<matrix> f_arg_, f_result_;
     // one reverse mode vector of matrices for argument and result
     CppAD::vector<matrix> r_arg_, r_result_;
     // -------------------------------------------------------------

4.4.7.2.18.2.d.b: forward
     // forward mode routine called by CppAD
     virtual bool forward(
          // lowest order Taylor coefficient we are evaluating
          size_t                          p ,
          // highest order Taylor coefficient we are evaluating
          size_t                          q ,
          // which components of x are variables
          const CppAD::vector<bool>&      vx ,
          // which components of y are variables
          CppAD::vector<bool>&            vy ,
          // tx [ j * (q+1) + k ] is x_j^k
          const CppAD::vector<scalar>&    tx ,
          // ty [ i * (q+1) + k ] is y_i^k
          CppAD::vector<scalar>&          ty
     )
     {     size_t n_order = q + 1;
          size_t nr      = size_t( CppAD::Integer( tx[ 0 * n_order + 0 ] ) );
          size_t ny      = ((nr + 1) * nr) / 2;
# ifndef NDEBUG
          size_t nx      = 1 + ny;
# endif
          assert( vx.size() == 0 || nx == vx.size() );
          assert( vx.size() == 0 || ny == vy.size() );
          assert( nx * n_order == tx.size() );
          assert( ny * n_order == ty.size() );
          //
          // -------------------------------------------------------------------
          // make sure f_arg_ and f_result_ are large enough
          assert( f_arg_.size() == f_result_.size() );
          if( f_arg_.size() < n_order )
          {     f_arg_.resize(n_order);
               f_result_.resize(n_order);
               //
               for(size_t k = 0; k < n_order; k++)
               {     f_arg_[k].resize(nr, nr);
                    f_result_[k].resize(nr, nr);
               }
          }
          // -------------------------------------------------------------------
          // unpack tx into f_arg_
          for(size_t k = 0; k < n_order; k++)
          {     size_t index = 1;
               // unpack arg values for this order
               for(size_t i = 0; i < nr; i++)
               {     for(size_t j = 0; j <= i; j++)
                    {     f_arg_[k](i, j) = tx[ index * n_order + k ];
                         f_arg_[k](j, i) = f_arg_[k](i, j);
                         index++;
                    }
               }
          }
          // -------------------------------------------------------------------
          // result for each order
          // (we could avoid recalculating f_result_[k] for k=0,...,p-1)
          //
          Eigen::LLT<matrix> cholesky(f_arg_[0]);
          f_result_[0]   = cholesky.matrixL();
          lower_view L_0 = f_result_[0].template triangularView<Eigen::Lower>();
          for(size_t k = 1; k < n_order; k++)
          {     // initialize sum as A_k
               matrix f_sum = f_arg_[k];
               // compute A_k - B_k
               for(size_t ell = 1; ell < k; ell++)
                    f_sum -= f_result_[ell] * f_result_[k-ell].transpose();
               // compute L_0^{-1} * (A_k - B_k) * L_0^{-T}
               matrix temp = L_0.template solve<Eigen::OnTheLeft>(f_sum);
               temp   = L_0.transpose().template solve<Eigen::OnTheRight>(temp);
               // divide the diagonal by 2
               for(size_t i = 0; i < nr; i++)
                    temp(i, i) /= scalar(2.0);
               // L_k = L_0 * low[ L_0^{-1} * (A_k - B_k) * L_0^{-T} ]
               lower_view view = temp.template triangularView<Eigen::Lower>();
               f_result_[k] = f_result_[0] * view;
          }
          // -------------------------------------------------------------------
          // pack result_ into ty
          for(size_t k = 0; k < n_order; k++)
          {     size_t index = 0;
               for(size_t i = 0; i < nr; i++)
               {     for(size_t j = 0; j <= i; j++)
                    {     ty[ index * n_order + k ] = f_result_[k](i, j);
                         index++;
                    }
               }
          }
          // -------------------------------------------------------------------
          // check if we are computing vy
          if( vx.size() == 0 )
               return true;
          // ------------------------------------------------------------------
          // This is a very dumb algorithm that overestimates which
          // elements of the result are variables (which is not efficient).
          bool var = false;
          for(size_t i = 0; i < ny; i++)
               var |= vx[1 + i];
          for(size_t i = 0; i < ny; i++)
               vy[i] = var;
          //
          return true;
     }

4.4.7.2.18.2.d.c: reverse
     // reverse mode routine called by CppAD
     virtual bool reverse(
          // highest order Taylor coefficient that we are computing derivative of
          size_t                     q ,
          // forward mode Taylor coefficients for x variables
          const CppAD::vector<scalar>&     tx ,
          // forward mode Taylor coefficients for y variables
          const CppAD::vector<scalar>&     ty ,
          // upon return, derivative of G[ F[ {x_j^k} ] ] w.r.t {x_j^k}
          CppAD::vector<scalar>&           px ,
          // derivative of G[ {y_i^k} ] w.r.t. {y_i^k}
          const CppAD::vector<scalar>&     py
     )
     {     size_t n_order = q + 1;
          size_t nr = size_t( CppAD::Integer( tx[ 0 * n_order + 0 ] ) );
# ifndef NDEBUG
          size_t ny = ( (nr + 1 ) * nr ) / 2;
          size_t nx = 1 + ny;
# endif
          //
          assert( nx * n_order == tx.size() );
          assert( ny * n_order == ty.size() );
          assert( px.size()    == tx.size() );
          assert( py.size()    == ty.size() );
          // -------------------------------------------------------------------
          // make sure f_arg_ is large enough
          assert( f_arg_.size() == f_result_.size() );
          // must have previously run forward with order >= n_order
          assert( f_arg_.size() >= n_order );
          // -------------------------------------------------------------------
          // make sure r_arg_, r_result_ are large enough
          assert( r_arg_.size() == r_result_.size() );
          if( r_arg_.size() < n_order )
          {     r_arg_.resize(n_order);
               r_result_.resize(n_order);
               //
               for(size_t k = 0; k < n_order; k++)
               {     r_arg_[k].resize(nr, nr);
                    r_result_[k].resize(nr, nr);
               }
          }
          // -------------------------------------------------------------------
          // unpack tx into f_arg_
          for(size_t k = 0; k < n_order; k++)
          {     size_t index = 1;
               // unpack arg values for this order
               for(size_t i = 0; i < nr; i++)
               {     for(size_t j = 0; j <= i; j++)
                    {     f_arg_[k](i, j) = tx[ index * n_order + k ];
                         f_arg_[k](j, i) = f_arg_[k](i, j);
                         index++;
                    }
               }
          }
          // -------------------------------------------------------------------
          // unpack py into r_result_
          for(size_t k = 0; k < n_order; k++)
          {     r_result_[k] = matrix::Zero(nr, nr);
               size_t index = 0;
               for(size_t i = 0; i < nr; i++)
               {     for(size_t j = 0; j <= i; j++)
                    {     r_result_[k](i, j) = py[ index * n_order + k ];
                         index++;
                    }
               }
          }
          // -------------------------------------------------------------------
          // initialize r_arg_ as zero
          for(size_t k = 0; k < n_order; k++)
               r_arg_[k]   = matrix::Zero(nr, nr);
          // -------------------------------------------------------------------
          // matrix reverse mode calculation
          lower_view L_0 = f_result_[0].template triangularView<Eigen::Lower>();
          //
          for(size_t k1 = n_order; k1 > 1; k1--)
          {     size_t k = k1 - 1;
               //
               // L_0^T * bar{L}_k
               matrix tmp1 = L_0.transpose() * r_result_[k];
               //
               //low[ L_0^T * bar{L}_k ]
               for(size_t i = 0; i < nr; i++)
                    tmp1(i, i) /= scalar(2.0);
               matrix tmp2 = tmp1.template triangularView<Eigen::Lower>();
               //
               // L_0^{-T} low[ L_0^T * bar{L}_k ]
               tmp1 = L_0.transpose().template solve<Eigen::OnTheLeft>( tmp2 );
               //
               // M_k = L_0^{-T} * low[ L_0^T * bar{L}_k ]^{T} L_0^{-1}
               matrix M_k = L_0.transpose().template
                    solve<Eigen::OnTheLeft>( tmp1.transpose() );
               //
               // remove L_k and compute bar{B}_k
               matrix barB_k = scalar(0.5) * ( M_k + M_k.transpose() );
               r_arg_[k]    += barB_k;
               barB_k        = scalar(-1.0) * barB_k;
               //
               // 2.0 * lower( bar{B}_k L_k )
               matrix temp = scalar(2.0) * barB_k * f_result_[k];
               temp        = temp.template triangularView<Eigen::Lower>();
               //
               // remove C_k
               r_result_[0] += temp;
               //
               // remove B_k
               for(size_t ell = 1; ell < k; ell++)
               {     // bar{L}_ell = 2 * lower( \bar{B}_k * L_{k-ell} )
                    temp = scalar(2.0) * barB_k * f_result_[k-ell];
                    r_result_[ell] += temp.template triangularView<Eigen::Lower>();
               }
          }
          // M_0 = L_0^{-T} * low[ L_0^T * bar{L}_0 ] * L_0^{-1}
          // (the transpose in Lemma 1 is omitted here; it drops out
          // when M_0 is symmetrized below)
          matrix M_0 = L_0.transpose() * r_result_[0];
          for(size_t i = 0; i < nr; i++)
               M_0(i, i) /= scalar(2.0);
          M_0 = M_0.template triangularView<Eigen::Lower>();
          M_0 = L_0.template solve<Eigen::OnTheRight>( M_0 );
          M_0 = L_0.transpose().template solve<Eigen::OnTheLeft>( M_0 );
          // remove L_0
          r_arg_[0] += scalar(0.5) * ( M_0 + M_0.transpose() );
          // -------------------------------------------------------------------
          // pack r_arg into px
          // note that only the lower triangle of barA_k is stored in px
          for(size_t k = 0; k < n_order; k++)
          {     size_t index = 0;
               px[ index * n_order + k ] = 0.0;
               index++;
               for(size_t i = 0; i < nr; i++)
               {     for(size_t j = 0; j < i; j++)
                    {     px[ index * n_order + k ] = 2.0 * r_arg_[k](i, j);
                         index++;
                    }
                    px[ index * n_order + k] = r_arg_[k](i, i);
                    index++;
               }
          }
          // -------------------------------------------------------------------
          return true;
     }

4.4.7.2.18.2.e: End Class Definition

}; // End of atomic_eigen_cholesky class

}  // END_EMPTY_NAMESPACE

Input File: cppad/example/eigen_cholesky.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.4.7.2.19: User Atomic Matrix Multiply: Example and Test

4.4.7.2.19.a: See Also
4.4.7.2.16: atomic_eigen_mat_mul.cpp

4.4.7.2.19.b: Class Definition
This example uses the file 4.4.7.2.19.1: atomic_mat_mul.hpp which defines matrix multiply as a 4.4.7.2: atomic_base operation.

4.4.7.2.19.c: Use Atomic Function
# include <cppad/cppad.hpp>
# include <cppad/example/mat_mul.hpp>

bool mat_mul(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::vector;
     size_t i, j;

4.4.7.2.19.c.a: Constructor

     // -------------------------------------------------------------------
     // object that multiplies  2 x 2  matrices
     atomic_mat_mul afun;

4.4.7.2.19.c.b: Recording
     // start recording with four independent variables
     size_t n = 4;
     vector<double> x(n);
     vector< AD<double> > ax(n);
     for(j = 0; j < n; j++)
          ax[j] = x[j] = double(j + 1);
     CppAD::Independent(ax);

     // ------------------------------------------------------------------
     size_t nr_left = 2;
     size_t n_middle  = 2;
     size_t nc_right = 2;
     vector< AD<double> > atom_x(3 + (nr_left + nc_right) * n_middle );

     // matrix dimensions
     atom_x[0] = AD<double>( nr_left );
     atom_x[1] = AD<double>( n_middle );
     atom_x[2] = AD<double>( nc_right );

     // left matrix
     atom_x[3] = ax[0];  // left[0, 0] = x0
     atom_x[4] = ax[1];  // left[0, 1] = x1
     atom_x[5] = 5.;     // left[1, 0] = 5
     atom_x[6] = 6.;     // left[1, 1] = 6

     // right matrix
     atom_x[7] = ax[2];  // right[0, 0] = x2
     atom_x[8] = 7.;     // right[0, 1] = 7
     atom_x[9] = ax[3];  // right[1, 0] = x3
     atom_x[10] = 8.;     // right[1, 1] = 8
     // ------------------------------------------------------------------
     /*
     [ x0 , x1 ] * [ x2 , 7 ] = [ x0*x2 + x1*x3 , x0*7 + x1*8 ]
     [ 5  , 6  ]   [ x3 , 8 ]   [  5*x2 +  6*x3 ,  5*7 +  6*8 ]
     */
     vector< AD<double> > atom_y(nr_left * nc_right);
     afun(atom_x, atom_y);

     ok &= (atom_y[0] == x[0]*x[2] + x[1]*x[3]) & Variable(atom_y[0]);
     ok &= (atom_y[1] == x[0]*7.   + x[1]*8.  ) & Variable(atom_y[1]);
     ok &= (atom_y[2] ==   5.*x[2] +   6.*x[3]) & Variable(atom_y[2]);
     ok &= (atom_y[3] ==   5.*7.   +   6.*8.  ) & Parameter(atom_y[3]);

     // ------------------------------------------------------------------
     // define the function g : x -> atom_y
     // g(x) = [ x0*x2 + x1*x3 , x0*7 + x1*8 , 5*x2  + 6*x3  , 5*7 + 6*8 ]^T
     CppAD::ADFun<double> g(ax, atom_y);

4.4.7.2.19.c.c: forward
     // Test zero order forward mode evaluation of g(x)
     size_t m = atom_y.size();
     vector<double> y(m);
     for(j = 0; j <  n; j++)
          x[j] = double(j + 2);
     y = g.Forward(0, x);
     ok &= y[0] == x[0] * x[2] + x[1] * x[3];
     ok &= y[1] == x[0] * 7.   + x[1] * 8.;
     ok &= y[2] == 5. * x[2]   + 6. * x[3];
     ok &= y[3] == 5. * 7.     + 6. * 8.;

     //----------------------------------------------------------------------
     // Test first order forward mode evaluation of g'(x) * [1, 2, 3, 4]^T
     // g'(x) = [ x2, x3, x0, x1 ]
     //         [ 7 ,  8,  0, 0  ]
     //         [ 0 ,  0,  5, 6  ]
     //         [ 0 ,  0,  0, 0  ]
     CppAD::vector<double> dx(n), dy(m);
     for(j = 0; j <  n; j++)
          dx[j] = double(j + 1);
     dy = g.Forward(1, dx);
     ok &= dy[0] == 1. * x[2] + 2. * x[3] + 3. * x[0] + 4. * x[1];
     ok &= dy[1] == 1. * 7.   + 2. * 8.   + 3. * 0.   + 4. * 0.;
     ok &= dy[2] == 1. * 0.   + 2. * 0.   + 3. * 5.   + 4. * 6.;
     ok &= dy[3] == 1. * 0.   + 2. * 0.   + 3. * 0.   + 4. * 0.;

     //----------------------------------------------------------------------
     // Test second order forward mode
     // g_0^2 (x) = [ 0, 0, 1, 0 ], g_0^2 (x) * [1] = [3]
     //             [ 0, 0, 0, 1 ]              [2]   [4]
     //             [ 1, 0, 0, 0 ]              [3]   [1]
     //             [ 0, 1, 0, 0 ]              [4]   [2]
     CppAD::vector<double> ddx(n), ddy(m);
     for(j = 0; j <  n; j++)
          ddx[j] = 0.;
     ddy = g.Forward(2, ddx);

     // [1, 2, 3, 4] * g_0^2 (x) * [1, 2, 3, 4]^T = 1*3 + 2*4 + 3*1 + 4*2
     ok &= 2. * ddy[0] == 1. * 3. + 2. * 4. + 3. * 1. + 4. * 2.;

     // for i > 0, [1, 2, 3, 4] * g_i^2 (x) * [1, 2, 3, 4]^T = 0
     ok &= ddy[1] == 0.;
     ok &= ddy[2] == 0.;
     ok &= ddy[3] == 0.;

4.4.7.2.19.c.d: reverse
     // Test second order reverse mode
     CppAD::vector<double> w(m), dw(2 * n);
     for(i = 0; i < m; i++)
          w[i] = 0.;
     w[0] = 1.;
     dw = g.Reverse(2, w);

     // g_0'(x) = [ x2, x3, x0, x1 ]
     ok &= dw[0*2 + 0] == x[2];
     ok &= dw[1*2 + 0] == x[3];
     ok &= dw[2*2 + 0] == x[0];
     ok &= dw[3*2 + 0] == x[1];

     // g_0'(x)   * [1, 2, 3, 4]  = 1 * x2 + 2 * x3 + 3 * x0 + 4 * x1
     // g_0^2 (x) * [1, 2, 3, 4]  = [3, 4, 1, 2]
     ok &= dw[0*2 + 1] == 3.;
     ok &= dw[1*2 + 1] == 4.;
     ok &= dw[2*2 + 1] == 1.;
     ok &= dw[3*2 + 1] == 2.;

4.4.7.2.19.c.e: option
     //----------------------------------------------------------------------
     // Test both the boolean and set sparsity at the atomic level
     for(size_t sparse_index = 0; sparse_index < 2; sparse_index++)
     {     if( sparse_index == 0 )
               afun.option( CppAD::atomic_base<double>::bool_sparsity_enum );
          else     afun.option( CppAD::atomic_base<double>::set_sparsity_enum );

4.4.7.2.19.c.f: for_sparse_jac
     // Test forward Jacobian sparsity pattern
     /*
     g(x) = [ x0*x2 + x1*x3 , x0*7 + x1*8 , 5*x2  + 6*x3  , 5*7 + 6*8 ]^T
     so the sparsity pattern should be
     s[0] = {0, 1, 2, 3}
     s[1] = {0, 1}
     s[2] = {2, 3}
     s[3] = {}
     */
     CppAD::vector< std::set<size_t> > r(n), s(m);
     for(j = 0; j <  n; j++)
     {     assert( r[j].empty() );
          r[j].insert(j);
     }
     s = g.ForSparseJac(n, r);
     for(j = 0; j <  n; j++)
     {     // s[0] = {0, 1, 2, 3}
          ok &= s[0].find(j) != s[0].end();
          // s[1] = {0, 1}
          if( j == 0 || j == 1 )
               ok &= s[1].find(j) != s[1].end();
          else     ok &= s[1].find(j) == s[1].end();
          // s[2] = {2, 3}
          if( j == 2 || j == 3 )
               ok &= s[2].find(j) != s[2].end();
          else     ok &= s[2].find(j) == s[2].end();
     }
     // s[3] == {}
     ok &= s[3].empty();

4.4.7.2.19.c.g: rev_sparse_jac
     // Test reverse Jacobian sparsity pattern
     for(i = 0; i <  m; i++)
     {     s[i].clear();
          s[i].insert(i);
     }
     r = g.RevSparseJac(m, s);
     for(j = 0; j <  n ; j++)
     {     // r[0] = {0, 1, 2, 3}
          ok &= r[0].find(j) != r[0].end();
          // r[1] = {0, 1}
          if( j == 0 || j == 1 )
               ok &= r[1].find(j) != r[1].end();
          else     ok &= r[1].find(j) == r[1].end();
          // r[2] = {2, 3}
          if( j == 2 || j == 3 )
               ok &= r[2].find(j) != r[2].end();
          else     ok &= r[2].find(j) == r[2].end();
     }
     // r[3] == {}
     ok &= r[3].empty();

4.4.7.2.19.c.h: rev_sparse_hes
     /* Test reverse Hessian sparsity pattern
     g_0^2 (x) = [ 0, 0, 1, 0 ] and for i > 0, g_i^2 = 0
                 [ 0, 0, 0, 1 ]
                 [ 1, 0, 0, 0 ]
                 [ 0, 1, 0, 0 ]
     so the sparsity pattern for the first component of g is
     h[0] = {2}
     h[1] = {3}
     h[2] = {0}
     h[3] = {1}
     */
     CppAD::vector< std::set<size_t> > h(n), t(1);
     t[0].clear();
     t[0].insert(0);
     h = g.RevSparseHes(n, t);
     size_t check[] = {2, 3, 0, 1};
     for(j = 0; j <  n; j++)
     {     // h[j] = { check[j] }
          for(i = 0; i < n; i++)
          {     if( i == check[j] )
                    ok &= h[j].find(i) != h[j].end();
               else     ok &= h[j].find(i) == h[j].end();
          }
     }
     t[0].clear();
     for( j = 1; j < n; j++)
               t[0].insert(j);
     h = g.RevSparseHes(n, t);
     for(j = 0; j <  n; j++)
     {     // h[j] = { }
          for(i = 0; i < n; i++)
               ok &= h[j].find(i) == h[j].end();
     }

     //-----------------------------------------------------------------
     } // end for(size_t sparse_index  ...
     //-----------------------------------------------------------------

     return ok;
}

Input File: example/atomic/mat_mul.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.4.7.2.19.1: Matrix Multiply as an Atomic Operation

4.4.7.2.19.1.a: See Also
4.4.7.2.16.1: atomic_eigen_mat_mul.hpp

4.4.7.2.19.1.b: Matrix Dimensions
This example puts the matrix dimensions in the atomic function arguments, instead of the 4.4.7.2.1: constructor , so that they can be different for different calls to the atomic function. These dimensions are:
nr_left number of rows in the left matrix
n_middle number of columns in the left matrix and rows in the right matrix
nc_right number of columns in the right matrix
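Given these dimensions, the argument vector atom_x and result vector atom_y are packed in row major order; this is the layout computed by the index helpers left, right, and result below (shown here for the zero order case nk = 1, k = 0):

     atom_x[0] = nr_left , atom_x[1] = n_middle , atom_x[2] = nc_right
     atom_x[ 3 + i * n_middle + j ]                       // left(i, j)
     atom_x[ 3 + nr_left * n_middle + i * nc_right + j ]  // right(i, j)
     atom_y[ i * nc_right + j ]                           // result(i, j)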

4.4.7.2.19.1.c: Start Class Definition
# include <cppad/cppad.hpp>
namespace { // Begin empty namespace
using CppAD::vector;
//
using CppAD::set_union;
//
// matrix result = left * right
class atomic_mat_mul : public CppAD::atomic_base<double> {

4.4.7.2.19.1.d: Constructor
public:
     // ---------------------------------------------------------------------
     // constructor
     atomic_mat_mul(void) : CppAD::atomic_base<double>("mat_mul")
     { }
private:

4.4.7.2.19.1.e: Left Operand Element Index
Index in the Taylor coefficient matrix tx of a left matrix element.
     size_t left(
          size_t i        , // left matrix row index
          size_t j        , // left matrix column index
          size_t k        , // Taylor coefficient order
          size_t nk       , // number of Taylor coefficients in tx
          size_t nr_left  , // rows in left matrix
          size_t n_middle , // columns in left and rows in right
          size_t nc_right ) // columns in right matrix
     {     assert( i < nr_left );
          assert( j < n_middle );
          return (3 + i * n_middle + j) * nk + k;
     }

4.4.7.2.19.1.f: Right Operand Element Index
Index in the Taylor coefficient matrix tx of a right matrix element.
     size_t right(
          size_t i        , // right matrix row index
          size_t j        , // right matrix column index
          size_t k        , // Taylor coefficient order
          size_t nk       , // number of Taylor coefficients in tx
          size_t nr_left  , // rows in left matrix
          size_t n_middle , // columns in left and rows in right
          size_t nc_right ) // columns in right matrix
     {     assert( i < n_middle );
          assert( j < nc_right );
          size_t offset = 3 + nr_left * n_middle;
          return (offset + i * nc_right + j) * nk + k;
     }

4.4.7.2.19.1.g: Result Element Index
Index in the Taylor coefficient matrix ty of a result matrix element.
     size_t result(
          size_t i        , // result matrix row index
          size_t j        , // result matrix column index
          size_t k        , // Taylor coefficient order
          size_t nk       , // number of Taylor coefficients in ty
          size_t nr_left  , // rows in left matrix
          size_t n_middle , // columns in left and rows in right
          size_t nc_right ) // columns in right matrix
     {     assert( i < nr_left  );
          assert( j < nc_right );
          return (i * nc_right + j) * nk + k;
     }
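As a worked instance of these index computations (with nk = 1 and k = 0, matching the example in the previous section): left(1, 0, 0, 1, 2, 2, 2) = (3 + 1 * 2 + 0) * 1 + 0 = 5 and right(0, 1, 0, 1, 2, 2, 2) = (3 + 2 * 2 + 0 * 2 + 1) * 1 + 0 = 8, which agree with the packing atom_x[5] = left[1, 0] and atom_x[8] = right[0, 1] used there.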

4.4.7.2.19.1.h: Forward Matrix Multiply
Forward mode multiply Taylor coefficients in tx and sum into ty (for one pair of left and right orders)
     void forward_multiply(
          size_t                 k_left   , // order for left coefficients
          size_t                 k_right  , // order for right coefficients
          const vector<double>&  tx       , // domain space Taylor coefficients
                vector<double>&  ty       , // range space Taylor coefficients
          size_t                 nr_left  , // rows in left matrix
          size_t                 n_middle , // columns in left and rows in right
          size_t                 nc_right ) // columns in right matrix
     {
          size_t nx       = 3 + (nr_left + nc_right) * n_middle;
          size_t nk       = tx.size() / nx;
# ifndef NDEBUG
          size_t ny       = nr_left * nc_right;
          assert( nk == ty.size() / ny );
# endif
          //
          size_t k_result = k_left + k_right;
          assert( k_result < nk );
          //
          for(size_t i = 0; i < nr_left; i++)
          {     for(size_t j = 0; j < nc_right; j++)
               {     double sum = 0.0;
                    for(size_t ell = 0; ell < n_middle; ell++)
                    {     size_t i_left  = left(
                              i, ell, k_left, nk, nr_left, n_middle, nc_right
                         );
                         size_t i_right = right(
                              ell, j,  k_right, nk, nr_left, n_middle, nc_right
                         );
                         sum           += tx[i_left] * tx[i_right];
                    }
                    size_t i_result = result(
                         i, j, k_result, nk, nr_left, n_middle, nc_right
                    );
                    ty[i_result]   += sum;
               }
          }
     }
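This routine computes one term of the standard product rule for Taylor coefficients: if @(@ C(t) = A(t) B(t) @)@, then @[@ C_k = \sum_{\ell=0}^{k} A_\ell B_{k-\ell} @]@ A call with orders k_left and k_right adds the @(@ \ell = @)@ k_left term of this sum, for result order @(@ k = @)@ k_left + k_right; the forward routine below loops over all such pairs.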

4.4.7.2.19.1.i: Reverse Matrix Multiply
Reverse mode partials of Taylor coefficients and sum into px (for one pair of left and right orders)
     void reverse_multiply(
          size_t                 k_left  , // order for left coefficients
          size_t                 k_right , // order for right coefficients
          const vector<double>&  tx      , // domain space Taylor coefficients
          const vector<double>&  ty      , // range space Taylor coefficients
                vector<double>&  px      , // partials w.r.t. tx
          const vector<double>&  py      , // partials w.r.t. ty
          size_t                 nr_left  , // rows in left matrix
          size_t                 n_middle , // columns in left and rows in right
          size_t                 nc_right ) // columns in right matrix
     {
          size_t nx       = 3 + (nr_left + nc_right) * n_middle;
          size_t nk       = tx.size() / nx;
# ifndef NDEBUG
          size_t ny       = nr_left * nc_right;
          assert( nk == ty.size() / ny );
# endif
          assert( tx.size() == px.size() );
          assert( ty.size() == py.size() );
          //
          size_t k_result = k_left + k_right;
          assert( k_result < nk );
          //
          for(size_t i = 0; i < nr_left; i++)
          {     for(size_t j = 0; j < nc_right; j++)
               {     size_t i_result = result(
                         i, j, k_result, nk, nr_left, n_middle, nc_right
                    );
                    for(size_t ell = 0; ell < n_middle; ell++)
                    {     size_t i_left  = left(
                              i, ell, k_left, nk, nr_left, n_middle, nc_right
                         );
                         size_t i_right = right(
                              ell, j,  k_right, nk, nr_left, n_middle, nc_right
                         );
                         // sum        += tx[i_left] * tx[i_right];
                         px[i_left]    += tx[i_right] * py[i_result];
                         px[i_right]   += tx[i_left]  * py[i_result];
                    }
               }
          }
          return;
     }

4.4.7.2.19.1.j: forward
Routine called by CppAD during 5.3: Forward mode.
     virtual bool forward(
          size_t                    q ,
          size_t                    p ,
          const vector<bool>&      vx ,
                vector<bool>&      vy ,
          const vector<double>&    tx ,
                vector<double>&    ty
     )
     {     size_t n_order  = p + 1;
          size_t nr_left  = size_t( tx[ 0 * n_order + 0 ] );
          size_t n_middle = size_t( tx[ 1 * n_order + 0 ] );
          size_t nc_right = size_t( tx[ 2 * n_order + 0 ] );
# ifndef NDEBUG
          size_t nx       = 3 + (nr_left + nc_right) * n_middle;
          size_t ny       = nr_left * nc_right;
# endif
          assert( vx.size() == 0 || nx == vx.size() );
          assert( vx.size() == 0 || ny == vy.size() );
          assert( nx * n_order == tx.size() );
          assert( ny * n_order == ty.size() );
          size_t i, j, ell;

          // check if we are computing vy information
          if( vx.size() > 0 )
          {     size_t nk = 1;
               size_t k  = 0;
               for(i = 0; i < nr_left; i++)
               {     for(j = 0; j < nc_right; j++)
                    {     bool var = false;
                         for(ell = 0; ell < n_middle; ell++)
                         {     size_t i_left  = left(
                                   i, ell, k, nk, nr_left, n_middle, nc_right
                              );
                              size_t i_right = right(
                                   ell, j, k, nk, nr_left, n_middle, nc_right
                              );
                              bool   nz_left = vx[i_left] |(tx[i_left]  != 0.);
                              bool  nz_right = vx[i_right]|(tx[i_right] != 0.);
                              // if not multiplying by the constant zero
                              if( nz_left & nz_right )
                                        var |= bool(vx[i_left]) | bool(vx[i_right]);
                         }
                         size_t i_result = result(
                              i, j, k, nk, nr_left, n_middle, nc_right
                         );
                         vy[i_result] = var;
                    }
               }
          }

          // initialize result as zero
          size_t k;
          for(i = 0; i < nr_left; i++)
          {     for(j = 0; j < nc_right; j++)
               {     for(k = q; k <= p; k++)
                    {     size_t i_result = result(
                              i, j, k, n_order, nr_left, n_middle, nc_right
                         );
                         ty[i_result] = 0.0;
                    }
               }
          }
          for(k = q; k <= p; k++)
          {     // sum the products that result in order k
               for(ell = 0; ell <= k; ell++)
                    forward_multiply(
                         ell, k - ell, tx, ty, nr_left, n_middle, nc_right
                    );
          }

          // all orders are implemented, so always return true
          return true;
     }

4.4.7.2.19.1.k: reverse
Routine called by CppAD during 5.4: Reverse mode.
     virtual bool reverse(
          size_t                     p ,
          const vector<double>&     tx ,
          const vector<double>&     ty ,
                vector<double>&     px ,
          const vector<double>&     py
     )
     {     size_t n_order  = p + 1;
          size_t nr_left  = size_t( tx[ 0 * n_order + 0 ] );
          size_t n_middle = size_t( tx[ 1 * n_order + 0 ] );
          size_t nc_right = size_t( tx[ 2 * n_order + 0 ] );
# ifndef NDEBUG
          size_t nx       = 3 + (nr_left + nc_right) * n_middle;
          size_t ny       = nr_left * nc_right;
# endif
          assert( nx * n_order == tx.size() );
          assert( ny * n_order == ty.size() );
          assert( px.size() == tx.size() );
          assert( py.size() == ty.size() );

          // initialize summation
          for(size_t i = 0; i < px.size(); i++)
               px[i] = 0.0;

          // number of orders to differentiate
          size_t k = n_order;
          while(k--)
          {     // differentiate the products that result in order k
               for(size_t ell = 0; ell <= k; ell++)
                    reverse_multiply(
                         ell, k - ell, tx, ty, px, py, nr_left, n_middle, nc_right
                    );
          }

          // all orders are implemented, so always return true
          return true;
     }

4.4.7.2.19.1.l: for_sparse_jac
Routines called by CppAD during 5.5.2: ForSparseJac .
     // boolean sparsity patterns
     virtual bool for_sparse_jac(
          size_t                                q ,
          const vector<bool>&                   r ,
                vector<bool>&                   s ,
          const vector<double>&                 x )
     {
          size_t nr_left  = size_t( CppAD::Integer( x[0] ) );
          size_t n_middle = size_t( CppAD::Integer( x[1] ) );
          size_t nc_right = size_t( CppAD::Integer( x[2] ) );
# ifndef NDEBUG
          size_t  nx      = 3 + (nr_left + nc_right) * n_middle;
          size_t  ny      = nr_left * nc_right;
# endif
          assert( nx     == x.size() );
          assert( nx * q == r.size() );
          assert( ny * q == s.size() );
          size_t p;

          // sparsity for S(x) = f'(x) * R
          size_t nk = 1;
          size_t k  = 0;
          for(size_t i = 0; i < nr_left; i++)
          {     for(size_t j = 0; j < nc_right; j++)
               {     size_t i_result = result(
                         i, j, k, nk, nr_left, n_middle, nc_right
                    );
                    for(p = 0; p < q; p++)
                         s[i_result * q + p] = false;
                    for(size_t ell = 0; ell < n_middle; ell++)
                    {     size_t i_left  = left(
                              i, ell, k, nk, nr_left, n_middle, nc_right
                         );
                         size_t i_right = right(
                              ell, j, k, nk, nr_left, n_middle, nc_right
                         );
                         for(p = 0; p < q; p++)
                         {     // cast avoids Microsoft warning (should not be needed)
                              s[i_result * q + p] |= bool( r[i_left * q + p ] );
                              s[i_result * q + p] |= bool( r[i_right * q + p ] );
                         }
                    }
               }
          }
          return true;
     }
     // set sparsity patterns
     virtual bool for_sparse_jac(
          size_t                                q ,
          const vector< std::set<size_t> >&     r ,
                vector< std::set<size_t> >&     s ,
          const vector<double>&                 x )
     {
          size_t nr_left  = size_t( CppAD::Integer( x[0] ) );
          size_t n_middle = size_t( CppAD::Integer( x[1] ) );
          size_t nc_right = size_t( CppAD::Integer( x[2] ) );
# ifndef NDEBUG
          size_t  nx      = 3 + (nr_left + nc_right) * n_middle;
          size_t  ny      = nr_left * nc_right;
# endif
          assert( nx == x.size() );
          assert( nx == r.size() );
          assert( ny == s.size() );

          // sparsity for S(x) = f'(x) * R
          size_t nk = 1;
          size_t k  = 0;
          for(size_t i = 0; i < nr_left; i++)
          {     for(size_t j = 0; j < nc_right; j++)
               {     size_t i_result = result(
                         i, j, k, nk, nr_left, n_middle, nc_right
                    );
                    s[i_result].clear();
                    for(size_t ell = 0; ell < n_middle; ell++)
                    {     size_t i_left  = left(
                              i, ell, k, nk, nr_left, n_middle, nc_right
                         );
                         size_t i_right = right(
                              ell, j, k, nk, nr_left, n_middle, nc_right
                         );
                         //
                         s[i_result] = set_union(s[i_result], r[i_left] );
                         s[i_result] = set_union(s[i_result], r[i_right] );
                    }
               }
          }
          return true;
     }

4.4.7.2.19.1.m: rev_sparse_jac
Routines called by CppAD during 5.5.4: RevSparseJac .
     // boolean sparsity patterns
     virtual bool rev_sparse_jac(
          size_t                                q ,
          const vector<bool>&                  rt ,
                vector<bool>&                  st ,
          const vector<double>&                 x )
     {
          size_t nr_left  = size_t( CppAD::Integer( x[0] ) );
          size_t n_middle = size_t( CppAD::Integer( x[1] ) );
          size_t nc_right = size_t( CppAD::Integer( x[2] ) );
          size_t  nx      = 3 + (nr_left + nc_right) * n_middle;
# ifndef NDEBUG
          size_t  ny      = nr_left * nc_right;
# endif
          assert( nx     == x.size() );
          assert( nx * q == st.size() );
          assert( ny * q == rt.size() );
          size_t i, j, p;

          // initialize
          for(i = 0; i < nx; i++)
          {     for(p = 0; p < q; p++)
                    st[ i * q + p ] = false;
          }

          // sparsity for S(x)^T = f'(x)^T * R^T
          size_t nk = 1;
          size_t k  = 0;
          for(i = 0; i < nr_left; i++)
          {     for(j = 0; j < nc_right; j++)
               {     size_t i_result = result(
                         i, j, k, nk, nr_left, n_middle, nc_right
                    );
                    for(size_t ell = 0; ell < n_middle; ell++)
                    {     size_t i_left  = left(
                              i, ell, k, nk, nr_left, n_middle, nc_right
                         );
                         size_t i_right = right(
                              ell, j, k, nk, nr_left, n_middle, nc_right
                         );
                         for(p = 0; p < q; p++)
                         {     st[i_left * q + p] |= bool( rt[i_result * q + p] );
                              st[i_right* q + p] |= bool( rt[i_result * q + p] );
                         }
                    }
               }
          }
          return true;
     }
     // set sparsity patterns
     virtual bool rev_sparse_jac(
          size_t                                q ,
          const vector< std::set<size_t> >&    rt ,
                vector< std::set<size_t> >&    st ,
          const vector<double>&                 x )
     {
          size_t nr_left  = size_t( CppAD::Integer( x[0] ) );
          size_t n_middle = size_t( CppAD::Integer( x[1] ) );
          size_t nc_right = size_t( CppAD::Integer( x[2] ) );
          size_t  nx      = 3 + (nr_left + nc_right) * n_middle;
# ifndef NDEBUG
          size_t  ny        = nr_left * nc_right;
# endif
          assert( nx == x.size() );
          assert( nx == st.size() );
          assert( ny == rt.size() );
          size_t i, j;

          // initialize
          for(i = 0; i < nx; i++)
               st[i].clear();

          // sparsity for S(x)^T = f'(x)^T * R^T
          size_t nk = 1;
          size_t k  = 0;
          for(i = 0; i < nr_left; i++)
          {     for(j = 0; j < nc_right; j++)
               {     size_t i_result = result(
                         i, j, k, nk, nr_left, n_middle, nc_right
                    );
                    for(size_t ell = 0; ell < n_middle; ell++)
                    {     size_t i_left  = left(
                              i, ell, k, nk, nr_left, n_middle, nc_right
                         );
                         size_t i_right = right(
                              ell, j, k, nk, nr_left, n_middle, nc_right
                         );
                         //
                         st[i_left]  = set_union(st[i_left],  rt[i_result]);
                         st[i_right] = set_union(st[i_right], rt[i_result]);
                    }
               }
          }
          return true;
     }

4.4.7.2.19.1.n: rev_sparse_hes
Routines called by 5.5.6: RevSparseHes .
     // set sparsity patterns
     virtual bool rev_sparse_hes(
          const vector<bool>&                   vx,
          const vector<bool>&                   s ,
                vector<bool>&                   t ,
          size_t                                q ,
          const vector< std::set<size_t> >&     r ,
          const vector< std::set<size_t> >&     u ,
                vector< std::set<size_t> >&     v ,
          const vector<double>&                 x )
     {
          size_t nr_left  = size_t( CppAD::Integer( x[0] ) );
          size_t n_middle = size_t( CppAD::Integer( x[1] ) );
          size_t nc_right = size_t( CppAD::Integer( x[2] ) );
          size_t  nx        = 3 + (nr_left + nc_right) * n_middle;
# ifndef NDEBUG
          size_t  ny        = nr_left * nc_right;
# endif
          assert( x.size()  == nx );
          assert( vx.size() == nx );
          assert( t.size()  == nx );
          assert( r.size()  == nx );
          assert( v.size()  == nx );
          assert( s.size()  == ny );
          assert( u.size()  == ny );
          //
          size_t i, j;
          //
          // initialize sparsity patterns as false
          for(j = 0; j < nx; j++)
          {     t[j] = false;
               v[j].clear();
          }
          size_t nk = 1;
          size_t k  = 0;
          for(i = 0; i < nr_left; i++)
          {     for(j = 0; j < nc_right; j++)
               {     size_t i_result = result(
                         i, j, k, nk, nr_left, n_middle, nc_right
                    );
                    for(size_t ell = 0; ell < n_middle; ell++)
                    {     size_t i_left  = left(
                              i, ell, k, nk, nr_left, n_middle, nc_right
                         );
                         size_t i_right = right(
                              ell, j, k, nk, nr_left, n_middle, nc_right
                         );
                         //
                         // Compute sparsity for T(x) = S(x) * f'(x).
                         // We need not use vx with f'(x) back propagation.
                         t[i_left]  |= bool( s[i_result] );
                         t[i_right] |= bool( s[i_result] );

                         // V(x) = f'(x)^T * U(x) +  S(x) * f''(x) * R
                         // U(x) = g''(y) * f'(x) * R
                         // S(x) = g'(y)

                         // back propagate f'(x)^T * U(x)
                         // (no need to use vx with f'(x) propagation)
                         v[i_left]  = set_union(v[i_left],  u[i_result] );
                         v[i_right] = set_union(v[i_right], u[i_result] );

                         // back propagate S(x) * f''(x) * R
                         // (here is where we must check for cross terms)
                         if( s[i_result] & vx[i_left] & vx[i_right] )
                         {     v[i_left]  = set_union(v[i_left],  r[i_right] );
                              v[i_right] = set_union(v[i_right], r[i_left]  );
                         }
                    }
               }
          }
          return true;
     }
     // bool sparsity
     virtual bool rev_sparse_hes(
          const vector<bool>&                   vx,
          const vector<bool>&                   s ,
                vector<bool>&                   t ,
          size_t                                q ,
          const vector<bool>&                   r ,
          const vector<bool>&                   u ,
                vector<bool>&                   v ,
          const vector<double>&                 x )
     {
          size_t nr_left  = size_t( CppAD::Integer( x[0] ) );
          size_t n_middle = size_t( CppAD::Integer( x[1] ) );
          size_t nc_right = size_t( CppAD::Integer( x[2] ) );
          size_t  nx        = 3 + (nr_left + nc_right) * n_middle;
# ifndef NDEBUG
          size_t  ny        = nr_left * nc_right;
# endif
          assert( x.size()  == nx );
          assert( vx.size() == nx );
          assert( t.size()  == nx );
          assert( r.size()  == nx * q );
          assert( v.size()  == nx * q );
          assert( s.size()  == ny );
          assert( u.size()  == ny * q );
          size_t i, j, p;
          //
          // initialize sparsity patterns as false
          for(j = 0; j < nx; j++)
          {     t[j] = false;
               for(p = 0; p < q; p++)
                    v[j * q + p] = false;
          }
          size_t nk = 1;
          size_t k  = 0;
          for(i = 0; i < nr_left; i++)
          {     for(j = 0; j < nc_right; j++)
               {     size_t i_result = result(
                         i, j, k, nk, nr_left, n_middle, nc_right
                    );
                    for(size_t ell = 0; ell < n_middle; ell++)
                    {     size_t i_left  = left(
                              i, ell, k, nk, nr_left, n_middle, nc_right
                         );
                         size_t i_right = right(
                              ell, j, k, nk, nr_left, n_middle, nc_right
                         );
                         //
                         // Compute sparsity for T(x) = S(x) * f'(x).
                         // We do not need to use vx with f'(x) propagation.
                         t[i_left]  |= bool( s[i_result] );
                         t[i_right] |= bool( s[i_result] );

                         // V(x) = f'(x)^T * U(x) +  S(x) * f''(x) * R
                         // U(x) = g''(y) * f'(x) * R
                         // S(x) = g'(y)

                         // back propagate f'(x)^T * U(x)
                         // (no need to use vx with f'(x) propagation)
                         for(p = 0; p < q; p++)
                         {     v[ i_left  * q + p] |= bool( u[ i_result * q + p] );
                              v[ i_right * q + p] |= bool( u[ i_result * q + p] );
                         }

                         // back propagate S(x) * f''(x) * R
                         // (here is where we must check for cross terms)
                         if( s[i_result] & vx[i_left] & vx[i_right] )
                         {     for(p = 0; p < q; p++)
                              {     v[i_left * q + p]  |= bool( r[i_right * q + p] );
                                   v[i_right * q + p] |= bool( r[i_left * q + p] );
                              }
                         }
                    }
               }
          }
          return true;
     }

4.4.7.2.19.1.o: End Class Definition

}; // End of mat_mul class
}  // End empty namespace

Input File: cppad/example/mat_mul.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.5: Bool Valued Operations and Functions with AD Arguments
4.5.1: Compare AD Binary Comparison Operators
4.5.2: NearEqualExt Compare AD and Base Objects for Nearly Equal
4.5.3: BoolFun AD Boolean Functions
4.5.4: ParVar Is an AD Object a Parameter or Variable
4.5.5: EqualOpSeq Check if Two Values are Identically Equal

Input File: cppad/core/bool_valued.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.5.1: AD Binary Comparison Operators

4.5.1.a: Syntax
b = x Op y

4.5.1.b: Purpose
Compares two operands where one of the operands is an AD<Base> object. The comparison has the same interpretation as for the Base type.

4.5.1.c: Op
The operator Op is one of the following:
Op   Meaning
< is x less than y
<= is x less than or equal to y
> is x greater than y
>= is x greater than or equal to y
== is x equal to y
!= is x not equal to y

4.5.1.d: x
The operand x has prototype
     const Type &x
where Type is AD<Base> , Base , or int.

4.5.1.e: y
The operand y has prototype
     const Type &y
where Type is AD<Base> , Base , or int.

4.5.1.f: b
The result b has type
     bool b

4.5.1.g: Operation Sequence
The result of this operation is a bool value (not an 12.4.b: AD of Base object). Thus it will not be recorded as part of an AD of Base 12.4.g.b: operation sequence .

For example, suppose x and y are AD<Base> objects, the tape corresponding to AD<Base> is recording, b is true, and the subsequent code is
     if( b )
          y = cos(x);
     else y = sin(x);
only the assignment y = cos(x) is recorded on the tape (if x is a 12.4.h: parameter , nothing is recorded). The 12.8.3: CompareChange function can yield some information about changes in comparison operation results. You can use 4.4.4: CondExp to obtain comparison operations that depend on the 12.4.k.c: independent variable values without re-taping the AD sequence of operations.
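For example, to make the choice between cos and sin depend on the independent variable values without re-taping, one could record a conditional expression instead (a sketch; c is a placeholder for some AD<Base> value to compare against):

     y = CppAD::CondExpLt(x, c, cos(x), sin(x));

Both branches are recorded, and the comparison is re-evaluated each time forward mode is run.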

4.5.1.h: Assumptions
If one of the Op operators listed above is used with an AD<Base> object, it is assumed that the same operator is supported by the base type Base .

4.5.1.i: Example
The file 4.5.1.1: compare.cpp contains an example and test of these operations. It returns true if it succeeds and false otherwise.
Input File: cppad/core/compare.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.5.1.1: AD Binary Comparison Operators: Example and Test
# include <cppad/cppad.hpp>

bool Compare(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // declare independent variables and start tape recording
     size_t n  = 2;
     double x0 = 0.5;
     double x1 = 1.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;
     x[1]      = x1;
     CppAD::Independent(x);

     // some binary comparison operations
     AD<double> p;
     if( x[0] < x[1] )
          p = x[0];   // values in x choose this case
     else     p = x[1];
     if( x[0] <= x[1] )
          p *= x[0];  // values in x choose this case
     else     p *= x[1];
     if( x[0] >  x[1] )
          p *= x[0];
     else     p *= x[1];  // values in x choose this case
     if( x[0] >= x[1] )
          p *= x[0];
     else     p *= x[1];  // values in x choose this case
     if( x[0] == x[1] )
          p *= x[0];
     else     p *= x[1];  // values in x choose this case
     if( x[0] != x[1] )
          p *= x[0];  // values in x choose this case
     else     p *= x[1];

     // dependent variable vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = p;

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     ok &= NearEqual(y[0] , x0*x0*x1*x1*x1*x0, eps99, eps99);

     // forward computation of partials w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dx[1] = 0.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 3.*x0*x0*x1*x1*x1, eps99, eps99);

     // forward computation of partials w.r.t. x[1]
     dx[0] = 0.;
     dx[1] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0], 3.*x0*x0*x1*x1*x0, eps99, eps99);

     // reverse computation of derivative of y[0]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0]  = 1.;
     dw    = f.Reverse(1, w);
     ok   &= NearEqual(dw[0], 3.*x0*x0*x1*x1*x1, eps99, eps99);
     ok   &= NearEqual(dw[1], 3.*x0*x0*x1*x1*x0, eps99, eps99);

     return ok;
}

Input File: example/general/compare.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.5.2: Compare AD and Base Objects for Nearly Equal

4.5.2.a: Syntax
b = NearEqual(x, y, r, a)

4.5.2.b: Purpose
The routine 8.2: NearEqual determines if two objects of the same type are nearly equal. This routine is extended to the case where one object can have type Type while the other can have type AD<Type> or AD< std::complex<Type> > .

4.5.2.c: x
The argument x has one of the following possible prototypes:
     const Type                     &x
     const AD<Type>                 &x
     const AD< std::complex<Type> > &x

4.5.2.d: y
The argument y has one of the following possible prototypes:
     const Type                     &y
     const AD<Type>                 &y
     const AD< std::complex<Type> > &y

4.5.2.e: r
The relative error criterion r has prototype
     const Type &r
It must be greater than or equal to zero. The relative error condition is defined as: @[@ \frac{ | x - y | } { |x| + |y| } \leq r @]@

4.5.2.f: a
The absolute error criterion a has prototype
     const Type &a
It must be greater than or equal to zero. The absolute error condition is defined as: @[@ | x - y | \leq a @]@

4.5.2.g: b
The return value b has prototype
     bool b
If either x or y is infinite or not a number, the return value is false. Otherwise, if either the relative or absolute error condition (defined above) is satisfied, the return value is true. Otherwise, the return value is false.
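For example, with @(@ x = 1.00000 @)@, @(@ y = 1.00001 @)@, @(@ r = 10^{-4} @)@, and @(@ a = 0 @)@, the relative error condition holds because @[@ \frac{ | x - y | }{ |x| + |y| } = \frac{ 10^{-5} }{ 2.00001 } \approx 5 \times 10^{-6} \leq 10^{-4} @]@ so the return value is true.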

4.5.2.h: Type
The type Type must be a 8.7: NumericType . The routine 8.8: CheckNumericType will generate an error message if this is not the case. If a and b have type Type , the following operation must be defined
Operation Description
a <= b less than or equal operator (returns a bool object)

4.5.2.i: Operation Sequence
The result of this operation is not an 12.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 12.4.g.b: operation sequence .

4.5.2.j: Example
The file 4.5.2.1: near_equal_ext.cpp contains an example and test of this extension of 8.2: NearEqual . It returns true if it succeeds and false otherwise.
Input File: cppad/core/near_equal_ext.hpp
4.5.2.1: Compare AD with Base Objects: Example and Test

# include <cppad/cppad.hpp>
# include <complex>

bool NearEqualExt(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;

     // double
     double x    = 1.00000;
     double y    = 1.00001;
     double a    =  .00005;
     double r    =  .00005;
     double zero = 0.;

     // AD<double>
     AD<double> ax(x);
     AD<double> ay(y);

     ok &= NearEqual(ax, ay, zero, a);
     ok &= NearEqual(ax, y,  r, zero);
     ok &= NearEqual(x, ay,  r,    a);

     // std::complex<double>
     std::complex<double> cx(x);
     std::complex<double> cy(y);

     // AD< std::complex<double> >
     AD< std::complex<double> > acx(x);
     AD< std::complex<double> > acy(y);

     ok &= NearEqual(acx, acy, zero, a);
     ok &= NearEqual(acx,  cy, r, zero);
     ok &= NearEqual(acx,   y, r,    a);
     ok &= NearEqual( cx, acy, r,    a);
     ok &= NearEqual(  x, acy, r,    a);

     return ok;
}

Input File: example/general/near_equal_ext.cpp
4.5.3: AD Boolean Functions

4.5.3.a: Syntax
CPPAD_BOOL_UNARY(Base, unary_name)
b = unary_name(u)
b = unary_name(x)
CPPAD_BOOL_BINARY(Base, binary_name)
b = binary_name(u, v)
b = binary_name(x, y)

4.5.3.b: Purpose
Create a bool valued function that has AD<Base> arguments.

4.5.3.c: unary_name
This is the name of the bool valued function with one argument (as it is used in the source code). The user must provide a version of unary_name where the argument has type Base . CppAD uses this to create a version of unary_name where the argument has type AD<Base> .

4.5.3.d: u
The argument u has prototype
     const Base &u
It is the value at which the user provided version of unary_name is to be evaluated. It is also used for the first argument to the user provided version of binary_name .

4.5.3.e: x
The argument x has prototype
     const AD<Base> &x
It is the value at which the CppAD provided version of unary_name is to be evaluated. It is also used for the first argument to the CppAD provided version of binary_name .

4.5.3.f: b
The result b has prototype
     bool b

4.5.3.g: Create Unary
The preprocessor macro invocation
     CPPAD_BOOL_UNARY(Base, unary_name)
defines the version of unary_name with an AD<Base> argument. This can be done within a namespace (not the CppAD namespace) but must be outside of any routine.

4.5.3.h: binary_name
This is the name of the bool valued function with two arguments (as it is used in the source code). The user must provide a version of binary_name where the arguments have type Base . CppAD uses this to create a version of binary_name where the arguments have type AD<Base> .

4.5.3.i: v
The argument v has prototype
     const Base &v
It is the second argument to the user provided version of binary_name .

4.5.3.j: y
The argument y has prototype
     const AD<Base> &y
It is the second argument to the CppAD provided version of binary_name .

4.5.3.k: Create Binary
The preprocessor macro invocation
     CPPAD_BOOL_BINARY(Base, binary_name)
defines the version of binary_name with AD<Base> arguments. This can be done within a namespace (not the CppAD namespace) but must be outside of any routine.

4.5.3.l: Operation Sequence
The result of this operation is not an 12.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 12.4.g.b: operation sequence .

4.5.3.m: Example
The file 4.5.3.1: bool_fun.cpp contains an example and test of these operations. It returns true if it succeeds and false otherwise.

4.5.3.n: Deprecated 2007-07-31
The preprocessor symbols CppADCreateUnaryBool and CppADCreateBinaryBool are defined to be the same as CPPAD_BOOL_UNARY and CPPAD_BOOL_BINARY respectively (but their use is deprecated).
Input File: cppad/core/bool_fun.hpp
4.5.3.1: AD Boolean Functions: Example and Test

# include <cppad/cppad.hpp>
# include <complex>


// define abbreviation for double precision complex
typedef std::complex<double> Complex;

namespace {
     // a unary bool function with Complex argument
     static bool IsReal(const Complex &x)
     {     return x.imag() == 0.; }

     // a binary bool function with Complex arguments
     static bool AbsGeq(const Complex &x, const Complex &y)
     {     double axsq = x.real() * x.real() + x.imag() * x.imag();
          double aysq = y.real() * y.real() + y.imag() * y.imag();

          return axsq >= aysq;
     }

     // Create version of IsReal with AD<Complex> argument
     // inside of namespace and outside of any other function.
     CPPAD_BOOL_UNARY(Complex, IsReal)

     // Create version of AbsGeq with AD<Complex> arguments
     // inside of namespace and outside of any other function.
     CPPAD_BOOL_BINARY(Complex, AbsGeq)

}
bool BoolFun(void)
{     bool ok = true;

     CppAD::AD<Complex> x = Complex(1.,  0.);
     CppAD::AD<Complex> y = Complex(1.,  1.);

     ok &= IsReal(x);
     ok &= ! AbsGeq(x, y);

     return ok;
}

Input File: example/general/bool_fun.cpp
4.5.4: Is an AD Object a Parameter or Variable

4.5.4.a: Syntax
b = Parameter(x)
b = Variable(x)

4.5.4.b: Purpose
Determine if x is a 12.4.h: parameter or 12.4.m: variable .

4.5.4.c: x
The argument x has prototype
     const AD<Base>    &x
     const VecAD<Base> &x

4.5.4.d: b
The return value b has prototype
     bool b
The return value for Parameter (Variable) is true if and only if x is a parameter (variable). Note that a 4.6: VecAD<Base> object is a variable if any element of the vector depends on the independent variables.

4.5.4.e: Operation Sequence
The result of this operation is not an 12.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 12.4.g.b: operation sequence .

4.5.4.f: Example
The file 4.5.4.1: par_var.cpp contains an example and test of these functions. It returns true if it succeeds and false otherwise.
Input File: cppad/core/par_var.hpp
4.5.4.1: AD Parameter and Variable Functions: Example and Test

# include <cppad/cppad.hpp>

bool ParVar(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::VecAD;
     using CppAD::Parameter;
     using CppAD::Variable;

     // declare independent variables and start tape recording
     size_t n = 1;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]     = 0.;
     ok &= Parameter(x[0]);     // x[0] is a parameter here
     CppAD::Independent(x);
     ok &= Variable(x[0]);      // now x[0] is a variable

     // dependent variable vector
     size_t m = 2;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = 2.;
     ok  &= Parameter(y[0]);    // y[0] does not depend on x[0]
     y[1] = fabs(x[0]);
     ok  &= Variable(y[1]);     // y[1] does depend on x[0]

     // VecAD objects
     VecAD<double> z(2);
     z[0] = 0.;
     z[1] = 1.;
     ok  &= Parameter(z);      // z does not depend on x[0]
     z[x[0]] = 2.;
     ok  &= Variable(z);       // z depends on x[0]


     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check that now all AD<double> objects are parameters
     ok &= Parameter(x[0]); ok &= ! Variable(x[0]);
     ok &= Parameter(y[0]); ok &= ! Variable(y[0]);
     ok &= Parameter(y[1]); ok &= ! Variable(y[1]);

     // check that the VecAD<double> object is a parameter
     ok &= Parameter(z);

     return ok;
}

Input File: example/general/par_var.cpp
4.5.5: Check if Two Values are Identically Equal

4.5.5.a: Syntax
b = EqualOpSeq(x, y)

4.5.5.b: Purpose
Determine if x and y are identically equal; i.e., not only is x == y true, but, if they are 12.4.m: variables , they correspond to the same 12.4.g.b: operation sequence .

4.5.5.c: Motivation
Sometimes it is useful to cache information and only recalculate when a function's arguments change. In the case of AD variables, it may be important to know not only whether the argument values are equal, but also whether they are related to the 12.4.k.c: independent variables by the same operation sequence. After the assignment
     y = x
these two AD objects would not only have equal values, but would also correspond to the same operation sequence.
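As an illustration of this caching motivation, the following is a minimal sketch (the names expensive_fun and cached_fun are hypothetical, not part of CppAD, and all calls are assumed to occur while recording the same operation sequence):

namespace {
     // a hypothetical expensive calculation
     CppAD::AD<double> expensive_fun(const CppAD::AD<double>& x)
     {     return exp(x) * sin(x); }

     // reuse the previous result when the new argument corresponds
     // to the same operation sequence as the previous argument
     CppAD::AD<double> cached_fun(const CppAD::AD<double>& x)
     {     static bool              first = true;
          static CppAD::AD<double> x_prev, y_prev;
          if( first || ! CppAD::EqualOpSeq(x, x_prev) )
          {     x_prev = x;
               y_prev = expensive_fun(x);
               first  = false;
          }
          return y_prev;
     }
}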

4.5.5.d: x
The argument x has prototype
     const AD<Base> &x

4.5.5.e: y
The argument y has prototype
     const AD<Base> &y

4.5.5.f: b
The result b has prototype
     bool b
The result is true if and only if one of the following cases holds:
  1. Both x and y are variables and correspond to the same operation sequence.
  2. Both x and y are parameters, Base is an AD type, and EqualOpSeq( Value(x) , Value(y) ) is true.
  3. Both x and y are parameters, Base is not an AD type, and x == y is true.


4.5.5.g: Example
The file 4.5.5.1: equal_op_seq.cpp contains an example and test of EqualOpSeq. It returns true if it succeeds and false otherwise.
Input File: cppad/core/equal_op_seq.hpp
4.5.5.1: EqualOpSeq: Example and Test
# include <cppad/cppad.hpp>

bool EqualOpSeq(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::EqualOpSeq;

     // domain space vector
     size_t n  = 1;
     double x0 = 1.;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     AD<double> a = 1. + x[0];  // this variable is 1 + x0
     AD<double> b = 2. * x[0];  // this variable is 2 * x0

     // both a and b are variables
     ok &= (a == b);            // 1 + 1     == 2 * 1
     ok &= ! EqualOpSeq(a, b);  // 1 + x[0]  != 2 * x[0]

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = a;

     // both y[0] and a are variables
     ok &= EqualOpSeq(y[0], a); // 1 + x[0] == 1 + x[0]

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // both a and b are parameters (after the creation of f above)
     ok &= EqualOpSeq(a, b);    // 1 + 1 == 2 * 1

     return ok;
}

Input File: example/general/equal_op_seq.cpp
4.6: AD Vectors that Record Index Operations

4.6.a: Syntax
VecAD<Base> v(n)
v.size()
b = v[i]
r = v[x]

4.6.b: Purpose
If either v or x is a 12.4.m: variable , the indexing operation
     r = v[x]
is recorded in the corresponding AD of Base 12.4.g.b: operation sequence and transferred to the corresponding 5: ADFun object f . Such an index can change each time zero order 5.3: f.Forward is used; i.e., each time f is evaluated with a new value for the 12.4.k.c: independent variables . Note that the value of r depends on the value of x in a discrete fashion and CppAD computes its partial derivative with respect to x as zero.
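As a minimal sketch of this behavior (the routine vec_ad_sketch below is illustrative and not part of the library), the following tape selects an element of v using a variable index; zero order forward mode then selects different elements for different independent variable values without re-taping:

# include <cppad/cppad.hpp>

bool vec_ad_sketch(void)
{     using CppAD::AD;

     // record y[0] = v[ x[0] ] where v = (2, 3)
     CPPAD_TESTVECTOR(AD<double>) ax(1), ay(1);
     ax[0] = 0.;
     CppAD::Independent(ax);
     CppAD::VecAD<double> v(2);
     size_t i0 = 0, i1 = 1;
     v[i0] = 2.;            // size_t indexing; v is still a parameter
     v[i1] = 3.;
     ay[0] = v[ ax[0] ];    // AD indexing; recorded on the tape
     CppAD::ADFun<double> f(ax, ay);

     // the same tape selects different elements for different x
     CPPAD_TESTVECTOR(double) x(1), y(1);
     x[0] = 0.;             // selects v[0]
     y    = f.Forward(0, x);
     bool ok = (y[0] == 2.);
     x[0] = 1.;             // selects v[1]
     y    = f.Forward(0, x);
     ok  &= (y[0] == 3.);
     return ok;
}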

4.6.c: Alternatives
If only the values in the vector, and not the indices, depend on the independent variables, the class Vector< AD<Base> > is much more efficient for storing AD values, where Vector is any 8.9: SimpleVector template class. If only the indices, and not the values in the vector, depend on the independent variables, the 4.4.5: Discrete functions are a much more efficient way to represent these vectors.

4.6.d: VecAD<Base>::reference
The result r has type
     VecAD<Base>::reference
which is very much like the AD<Base> type with some notable exceptions:

4.6.d.a: Exceptions
  1. The object r cannot be used with the 4.3.1: Value function to compute the corresponding Base value. If v is not a 12.4.m: variable ,
    b = v[i]
    can be used to compute the corresponding Base value.
  2. The object r cannot be used with the 4.4.1: compound assignment operators +=, -=, *=, or /=. For example, the following syntax is not valid:
    v[x] += z;
    no matter what the type of z .
  3. Assignment to r returns void. For example, the following syntax is not valid:
    z = v[x] = u;
    no matter what the types of z and u .
  4. The 4.4.4: CondExp functions do not accept VecAD<Base>::reference arguments. For example, the following syntax is not valid:
    CondExpGt(v[x], z, u, v)
    no matter what the types of z , u , and v .
  5. The 4.5.4: Parameter and Variable functions cannot be used with VecAD<Base>::reference arguments like r , use the entire VecAD<Base> vector instead; i.e. v .
  6. The vectors passed to 5.1.1: Independent must have elements of type AD<Base> ; i.e., 4.6: VecAD vectors cannot be passed to Independent.
  7. If one uses this type in an AD of Base 12.4.g.b: operation sequence , 12.4.j: sparsity pattern calculations (5.5: sparsity_pattern ) are less efficient because the dependence of different elements of the vector cannot be separated.


4.6.e: Constructor

4.6.e.a: v
The syntax
     VecAD<Base> v(n)
creates a VecAD object v with n elements. The initial value of the elements of v is unspecified.

4.6.f: n
The argument n has prototype
     size_t n

4.6.g: size
The syntax
     v.size()
returns the number of elements in the vector v ; i.e., the value of n when it was constructed.

4.6.h: size_t Indexing
We refer to the syntax
     b = v[i]
as size_t indexing of a VecAD object. This indexing is only valid if the vector v is a 4.5.4: parameter ; i.e., it does not depend on the independent variables.

4.6.h.a: i
The operand i has prototype
     size_t i
It must be greater than or equal to zero and less than n ; i.e., less than the number of elements in v .

4.6.h.b: b
The result b has prototype
     Base b
and is a reference to the i-th element in the vector v . It can be used to change the element value; for example,
     v[i] = c
is valid where c is a Base object. The reference b is no longer valid once the destructor for v is called; for example, when v falls out of scope.

4.6.i: AD Indexing
We refer to the syntax
     r = v[x]
as AD indexing of a VecAD object.

4.6.i.a: x
The argument x has prototype
     const AD<Base> &x
The value of x must be greater than or equal to zero and less than n ; i.e., less than the number of elements in v .

4.6.i.b: r
The result r has prototype
     VecAD<Base>::reference r
The object r has an AD type and its operations are recorded as part of the same AD of Base 12.4.g.b: operation sequence as for AD<Base> objects. It acts as a reference to the element with index @(@ {\rm floor} (x) @)@ in the vector v (@(@ {\rm floor} (x) @)@ is the greatest integer less than or equal to x ). Because it is a reference, it can be used to change the element value; for example,
     v[x] = z
is valid where z is a VecAD<Base>::reference object. As a reference, r is no longer valid once the destructor for v is called; for example, when v falls out of scope.

4.6.j: Example
The file 4.6.1: vec_ad.cpp contains an example and test using VecAD vectors. It returns true if it succeeds and false otherwise.

4.6.k: Speed and Memory
The 4.6: VecAD vector type is inefficient because every time an element of a vector is accessed, a new CppAD 12.4.m: variable is created on the tape using either the Ldp or Ldv operation (unless all of the elements of the vector are 12.4.h: parameters ). The effect of this can be seen by executing the following steps:
  1. In the file cppad/local/forward1sweep.h, change the definition of CPPAD_FORWARD1SWEEP_TRACE to
     
         # define CPPAD_FORWARD1SWEEP_TRACE 1
    
  2. In the Example directory, execute the command
     
         ./test_one.sh lu_vec_ad_ok.cpp lu_vec_ad.cpp -DNDEBUG > lu_vec_ad_ok.log
    
    This will write a trace of all the forward tape operations, for the test case 10.3.3.1: lu_vec_ad_ok.cpp , to the file lu_vec_ad_ok.log.
  3. In the Example directory execute the commands
     
         grep "op="           lu_vec_ad_ok.log | wc -l
         grep "op=Ld[vp]"     lu_vec_ad_ok.log | wc -l
         grep "op=St[vp][vp]" lu_vec_ad_ok.log | wc -l
    
    The first command counts the number of operators in the tracing, the second counts the number of VecAD load operations, and the third counts the number of VecAD store operations. (For CppAD version 05-11-20 these counts were 956, 348, and 118 respectively.)

Input File: cppad/core/vec_ad.hpp
4.6.1: AD Vectors that Record Index Operations: Example and Test

# include <cppad/cppad.hpp>
# include <cassert>

namespace {
     // return the vector x that solves the following linear system
     //     a[0] * x[0] + a[1] * x[1] = b[0]
     //     a[2] * x[0] + a[3] * x[1] = b[1]
     // in a way that will record pivot operations on the AD<double> tape
     typedef CPPAD_TESTVECTOR(CppAD::AD<double>) Vector;
     Vector Solve(const Vector &a , const Vector &b)
     {     using namespace CppAD;
          assert(a.size() == 4 && b.size() == 2);

          // copy the vector b into the VecAD object B
          VecAD<double> B(2);
          AD<double>    u;
          for(u = 0; u < 2; u += 1.)
               B[u] = b[ Integer(u) ];

          // copy the matrix a into the VecAD object A
          VecAD<double> A(4);
          for(u = 0; u < 4; u += 1.)
               A[u] = a [ Integer(u) ];

          // tape AD operation sequence that determines the row of A
          // with maximum absolute element in column zero
          AD<double> zero(0), one(1);
          AD<double> rmax = CondExpGt(fabs(a[0]), fabs(a[2]), zero, one);

          // divide row rmax by A(rmax, 0)
          A[rmax * 2 + 1]  = A[rmax * 2 + 1] / A[rmax * 2 + 0];
          B[rmax]          = B[rmax]         / A[rmax * 2 + 0];
          A[rmax * 2 + 0]  = one;

          // subtract A(other,0) times row A(rmax, *) from row A(other,*)
          AD<double> other   = one - rmax;
          A[other * 2 + 1]   = A[other * 2 + 1]
                             - A[other * 2 + 0] * A[rmax * 2 + 1];
          B[other]           = B[other]
                             - A[other * 2 + 0] * B[rmax];
          A[other * 2 + 0] = zero;

          // back substitute to compute the solution vector x.
          // Note that the columns of A correspond to rows of x.
          // Also note that A[rmax * 2 + 0] is equal to one.
          CPPAD_TESTVECTOR(AD<double>) x(2);
          x[1] = B[other] / A[other * 2 + 1];
          x[0] = B[rmax] - A[rmax * 2 + 1] * x[1];

          return x;
     }
}

bool vec_ad(void)
{     bool ok = true;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n = 4;
     CPPAD_TESTVECTOR(double)       x(n);
     CPPAD_TESTVECTOR(AD<double>) X(n);
     // 2 * identity matrix (rmax in Solve will be 0)
     X[0] = x[0] = 2.; X[1] = x[1] = 0.;
     X[2] = x[2] = 0.; X[3] = x[3] = 2.;

     // declare independent variables and start tape recording
     CppAD::Independent(X);

     // define the vector b
     CPPAD_TESTVECTOR(double)       b(2);
     CPPAD_TESTVECTOR(AD<double>) B(2);
     B[0] = b[0] = 0.;
     B[1] = b[1] = 1.;

     // range space vector solves X * Y = b
     size_t m = 2;
     CPPAD_TESTVECTOR(AD<double>) Y(m);
     Y = Solve(X, B);

     // create f: X -> Y and stop tape recording
     CppAD::ADFun<double> f(X, Y);

     // By Cramer's rule:
     // y[0] = [ b[0] * x[3] - x[1] * b[1] ] / [ x[0] * x[3] - x[1] * x[2] ]
     // y[1] = [ x[0] * b[1] - b[0] * x[2] ] / [ x[0] * x[3] - x[1] * x[2] ]

     double den   = x[0] * x[3] - x[1] * x[2];
     double dsq   = den * den;
     double num0  = b[0] * x[3] - x[1] * b[1];
     double num1  = x[0] * b[1] - b[0] * x[2];

     // check value
     ok &= NearEqual(Y[0] , num0 / den, eps99, eps99);
     ok &= NearEqual(Y[1] , num1 / den, eps99, eps99);

     // forward computation of partials w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.; dx[1] = 0.;
     dx[2] = 0.; dx[3] = 0.;
     dy    = f.Forward(1, dx);
     ok &= NearEqual(dy[0], 0.         - num0 * x[3] / dsq, eps99, eps99);
     ok &= NearEqual(dy[1], b[1] / den - num1 * x[3] / dsq, eps99, eps99);

     // compute the solution for a new x matrix such that pivoting
     // on the original rmax row would divide by zero
     CPPAD_TESTVECTOR(double) y(m);
     x[0] = 0.; x[1] = 2.;
     x[2] = 2.; x[3] = 0.;

     // new values for Cramer's rule
     den   = x[0] * x[3] - x[1] * x[2];
     dsq   = den * den;
     num0  = b[0] * x[3] - x[1] * b[1];
     num1  = x[0] * b[1] - b[0] * x[2];

     // check values
     y    = f.Forward(0, x);
     ok &= NearEqual(y[0] , num0 / den, eps99, eps99);
     ok &= NearEqual(y[1] , num1 / den, eps99, eps99);

     // forward computation of partials w.r.t. x[1]
     dx[0] = 0.; dx[1] = 1.;
     dx[2] = 0.; dx[3] = 0.;
     dy    = f.Forward(1, dx);
     ok   &= NearEqual(dy[0],-b[1] / den + num0 * x[2] / dsq, eps99, eps99);
     ok   &= NearEqual(dy[1], 0.         + num1 * x[2] / dsq, eps99, eps99);

     // reverse computation of derivative of y[0] w.r.t x
     CPPAD_TESTVECTOR(double) w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0] = 1.; w[1] = 0.;
     dw   = f.Reverse(1, w);
     ok  &= NearEqual(dw[0], 0.         - num0 * x[3] / dsq, eps99, eps99);
     ok  &= NearEqual(dw[1],-b[1] / den + num0 * x[2] / dsq, eps99, eps99);
     ok  &= NearEqual(dw[2], 0.         + num0 * x[1] / dsq, eps99, eps99);
     ok  &= NearEqual(dw[3], b[0] / den - num0 * x[0] / dsq, eps99, eps99);

     return ok;
}

Input File: example/general/vec_ad.cpp
4.7: AD<Base> Requirements for a CppAD Base Type

4.7.a: Syntax
# include <cppad/base_require.hpp>


4.7.b: Purpose
This section lists the requirements for the type Base so that the type AD<Base> can be used.

4.7.c: API Warning
Defining a CppAD Base type is an advanced use of CppAD. This part of the CppAD API changes with time. The most common change is adding more requirements. Search for base_require in the current 12.7: whats_new section for these changes.

4.7.d: Standard Base Types
In the case where Base is float, double, std::complex<float>, std::complex<double>, or AD<Other> , these requirements are provided by including the file cppad/cppad.hpp.

4.7.e: Include Order
If you are linking a non-standard base type to CppAD, you must first include the file cppad/base_require.hpp, then provide the specifications below, and then include the file cppad/cppad.hpp.

4.7.f: Numeric Type
The type Base must support all the operations for a 8.7: NumericType .

4.7.g: Output Operator
The type Base must support the syntax
     os << x
where os is an std::ostream& and x is a const Base&. For example, see 4.7.9.1.k: base_alloc .

4.7.h: Integer
The type Base must support the syntax
     i = CppAD::Integer(x)
which converts x to an int. The argument x has prototype
     const Base& x
and the return value i has prototype
     int i

4.7.h.a: Suggestion
In many cases, the Base version of the Integer function can be defined by
namespace CppAD {
     inline int Integer(const Base& x)
     {    return static_cast<int>(x); }
}
For example, see 4.7.9.4.e: base_float and 4.7.9.1.l: base_alloc .

4.7.i: Absolute Zero, azmul
The type Base must support the syntax
     z = azmul(x, y)
see 4.4.3.3: azmul . The following preprocessor macro invocation suffices (for most Base types):
namespace CppAD {
     CPPAD_AZMUL(Base)
}
where the macro is defined by
# define CPPAD_AZMUL(Base) \
    inline Base azmul(const Base& x, const Base& y) \
    {   Base zero(0.0);   \
        if( x == zero ) \
            return zero;  \
        return x * y;     \
    }

4.7.j: Contents
base_member: 4.7.1Required Base Class Member Functions
base_cond_exp: 4.7.2Base Type Requirements for Conditional Expressions
base_identical: 4.7.3Base Type Requirements for Identically Equal Comparisons
base_ordered: 4.7.4Base Type Requirements for Ordered Comparisons
base_std_math: 4.7.5Base Type Requirements for Standard Math Functions
base_limits: 4.7.6Base Type Requirements for Numeric Limits
base_to_string: 4.7.7Extending to_string To Another Floating Point Type
base_hash: 4.7.8Base Type Requirements for Hash Coding Values
base_example: 4.7.9Example AD Base Types That are not AD<OtherBase>

Input File: cppad/base_require.hpp
4.7.1: Required Base Class Member Functions

4.7.1.a: Notation
Symbol Meaning
Base The base type corresponding to AD<Base>
b An object of type bool
d An object of type double
x An object of type const Base&
y An object of type const Base&
z An object of type Base

4.7.1.b: Default Constructor
Base z

4.7.1.c: Double Constructor
Base z(d)

4.7.1.d: Copy Constructor
Base z(x)

4.7.1.e: Unary Operators
For op equal to + or -, the following operation must be supported:
     z = op x


4.7.1.f: Assignment Operators
For op equal to =, +=, -=, *=, and /=, the following operation must be supported:
     z op x


4.7.1.g: Binary Operators
For op equal to +, -, *, and /, the following operation must be supported:
     z = x op y


4.7.1.h: Bool Operators
For op equal to ==, !=, <=, the following operation must be supported:
     b = x op y

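Combining the requirements above, the following is a minimal sketch of a class providing these member functions; the name my_base is hypothetical and this is not a complete CppAD Base type (the other base_require sections list the required non-member functions):

class my_base {
public:
     double value_;

     // default, double, and copy constructors
     my_base(void) : value_(0.0) { }
     my_base(double d) : value_(d) { }
     my_base(const my_base& x) : value_(x.value_) { }

     // unary operators
     my_base operator+(void) const { return *this; }
     my_base operator-(void) const { return my_base(- value_); }

     // assignment operators
     void operator=(const my_base& x)  { value_  = x.value_; }
     void operator+=(const my_base& x) { value_ += x.value_; }
     void operator-=(const my_base& x) { value_ -= x.value_; }
     void operator*=(const my_base& x) { value_ *= x.value_; }
     void operator/=(const my_base& x) { value_ /= x.value_; }

     // binary operators
     my_base operator+(const my_base& x) const { return my_base(value_ + x.value_); }
     my_base operator-(const my_base& x) const { return my_base(value_ - x.value_); }
     my_base operator*(const my_base& x) const { return my_base(value_ * x.value_); }
     my_base operator/(const my_base& x) const { return my_base(value_ / x.value_); }

     // bool operators
     bool operator==(const my_base& x) const { return value_ == x.value_; }
     bool operator!=(const my_base& x) const { return value_ != x.value_; }
     bool operator<=(const my_base& x) const { return value_ <= x.value_; }
};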

4.7.1.i: Example
See the heading Class Definition in 4.7.9.1.f: base_alloc .
Input File: omh/base_require/base_member.omh
4.7.2: Base Type Requirements for Conditional Expressions

4.7.2.a: Purpose
These definitions are required by the user's code to support the AD<Base> type for 4.4.4: CondExp operations:

4.7.2.b: CompareOp
The following enum type is used in the specifications below:
 
namespace CppAD {
     // The conditional expression operator enum type
     enum CompareOp
     {    CompareLt, // less than
          CompareLe, // less than or equal
          CompareEq, // equal
          CompareGe, // greater than or equal
          CompareGt, // greater than
          CompareNe  // not equal
     };
}

4.7.2.c: CondExpTemplate
The type Base must support the syntax
     result = CppAD::CondExpOp(
          cop, left, right, exp_if_true, exp_if_false
     )
which implements the corresponding 4.4.4: CondExp function when the result has prototype
     Base result
The argument cop has prototype
     enum CppAD::CompareOp cop
The other arguments have the prototypes
     const Base&  left
     const Base&  right
     const Base&  exp_if_true
     const Base&  exp_if_false

4.7.2.c.a: Ordered Type
If Base is a relatively simple type that supports the <, <=, ==, >=, and > operators, its CondExpOp function can be defined by
namespace CppAD {
     inline Base CondExpOp(
     enum CppAD::CompareOp  cop            ,
     const Base             &left          ,
     const Base             &right         ,
     const Base             &exp_if_true   ,
     const Base             &exp_if_false  )
     {    return CondExpTemplate(
               cop, left, right, exp_if_true, exp_if_false);
     }
}
For example, see 4.7.9.1.g: double CondExpOp . For an example of an implementation of CondExpOp with a more involved Base type, see 4.7.9.3.d: adolc CondExpOp .

4.7.2.c.b: Not Ordered
If the type Base does not support ordering, the CondExpOp function does not make sense. In this case one might (but need not) define CondExpOp as follows:
namespace CppAD {
     inline Base CondExpOp(
     enum CompareOp cop           ,
     const Base     &left         ,
     const Base     &right        ,
     const Base     &exp_if_true  ,
     const Base     &exp_if_false )
     {    // attempt to use CondExp with a Base argument
          assert(0);
          return Base(0);
     }
}
For example, see 4.7.9.6.c: complex CondExpOp .

4.7.2.d: CondExpRel
The macro invocation
     CPPAD_COND_EXP_REL(Base)
uses CondExpOp above to define the following functions
     CondExpLt(left, right, exp_if_true, exp_if_false)
     CondExpLe(left, right, exp_if_true, exp_if_false)
     CondExpEq(left, right, exp_if_true, exp_if_false)
     CondExpGe(left, right, exp_if_true, exp_if_false)
     CondExpGt(left, right, exp_if_true, exp_if_false)
where the arguments have type Base . This should be done inside of the CppAD namespace. For example, see 4.7.9.1.h: base_alloc .
Input File: cppad/core/base_cond_exp.hpp
4.7.3: Base Type Requirements for Identically Equal Comparisons

4.7.3.a: EqualOpSeq
If function 4.5.5: EqualOpSeq is used with arguments of type AD<Base> , the type Base must support the syntax
     b = CppAD::EqualOpSeq(u, v)
This should return true if and only if u is identically equal to v and it makes no difference which one is used. The arguments u and v have prototype
     const Base& u
     const Base& v
The return value b has prototype
     bool b

4.7.3.a.a: The Simple Case
If Base is a relatively simple type, the EqualOpSeq function can be defined by
namespace CppAD {
     inline Base EqualOpSeq(const Base& u, const Base& v)
     {    return u == v; }
}
For example, see 4.7.9.1.i: base_alloc .

4.7.3.a.b: More Complicated Cases
The variables u and v are not identically equal in the following case (for which CppAD automatically defines EqualOpSeq): the type Base is AD<double> , x[0] = x[1] = 1. , then 5.1.1: independent is used to make x the independent variable vector, and then u = x[0] , v = x[1] . Note that during a future 5.3: Forward calculation, u and v could correspond to different values. For example, see 4.7.9.3.f: adolc EqualOpSeq .

4.7.3.b: Identical

4.7.3.b.a: IdenticalPar
A Base object is a 12.4.h: parameter when used in an AD<Base> operation sequence. It is, however, still possible for a parameter to change its value. For example, the Base value u is not identically a parameter in the following case (for which CppAD automatically defines IdenticalPar): the type Base is AD<double> , x[0] = 1. , then 5.1.1: independent is used to make x the independent variable vector, and then u = x[0] . Note that during a future 5.3: Forward calculation, u could correspond to different values.

4.7.3.b.b: Prototypes
The argument u has prototype
     const Base u
If it is present, the argument v has prototype
     const Base v
The result b has prototype
     bool b

4.7.3.b.c: Identical Functions
The type Base must support the following functions (in the CppAD namespace):
Syntax Result
b = IdenticalPar(u)    the Base value will always be the same
b = IdenticalZero(u)    u equals zero and IdenticalPar(u)
b = IdenticalOne(u)    u equals one and IdenticalPar(u)
b = IdenticalEqualPar(u, v)    u equals v , IdenticalPar(u) and IdenticalPar(v)

4.7.3.b.d: Examples
See 4.7.9.1.j: base_alloc .
Input File: omh/base_require/base_identical.omh
4.7.4: Base Type Requirements for Ordered Comparisons

4.7.4.a: Purpose
The following operations (in the CppAD namespace) are required to use the type AD<Base> :
Syntax Result
b = GreaterThanZero(x)    @(@ x > 0 @)@
b = GreaterThanOrZero(x)    @(@ x \geq 0 @)@
b = LessThanZero(x)    @(@ x < 0 @)@
b = LessThanOrZero(x)    @(@ x \leq 0 @)@
b = abs_geq(x, y)    @(@ |x| \geq |y| @)@.
where the arguments and return value have the prototypes
     const Base& x
     const Base& y
     bool        b

4.7.4.b: Ordered Type
If the type Base supports ordered operations, these functions should have their corresponding definitions. For example,
namespace CppAD {
     inline bool GreaterThanZero(const Base &x)
     {    return (x > 0);
     }
}
The other functions would replace > by the corresponding operator. For example, see 4.7.9.1.n: base_alloc .

4.7.4.c: Not Ordered
If the type Base does not support ordering, one might (but need not) define GreaterThanZero as follows:
namespace CppAD {
     inline bool GreaterThanZero(const Base &x)
     {    // attempt to use GreaterThanZero with a Base argument
          assert(0);
          return false;
     }
}
The other functions would have the corresponding definition. For example, see 4.7.9.6.g: complex Ordered .
Input File: omh/base_require/base_ordered.omh
4.7.5: Base Type Requirements for Standard Math Functions

4.7.5.a: Purpose
These definitions are required for the user's code to use the type AD<Base> :

4.7.5.b: Unary Standard Math
The type Base must support the following unary standard math functions (in the CppAD namespace):
Syntax Result
y = abs(x) absolute value
y = acos(x) inverse cosine
y = asin(x) inverse sine
y = atan(x) inverse tangent
y = cos(x) cosine
y = cosh(x) hyperbolic cosine
y = exp(x) exponential
y = fabs(x) absolute value
y = log(x) natural logarithm
y = sin(x) sine
y = sinh(x) hyperbolic sine
y = sqrt(x) square root
y = tan(x) tangent
where the arguments and return value have the prototypes
     const Base& x
     Base        y
For example, see 4.7.9.1.o: base_alloc .

4.7.5.c: CPPAD_STANDARD_MATH_UNARY
The macro invocation, within the CppAD namespace,
     CPPAD_STANDARD_MATH_UNARY(Base, Fun)
defines the syntax
     y = CppAD::Fun(x)
This macro uses the function std::Fun , which must be defined and have the same prototype as CppAD::Fun . For example, see 4.7.9.4.h: float .
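For instance, the invocation CPPAD_STANDARD_MATH_UNARY(Base, cos), placed within the CppAD namespace, generates a definition roughly equivalent to the following sketch:

namespace CppAD {
     inline Base cos(const Base& x)
     {     return std::cos(x); }
}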

4.7.5.d: erf, asinh, acosh, atanh, expm1, log1p
If the preprocessor symbol CPPAD_USE_CPLUSPLUS_2011 is true (1); i.e., when compiling for c++11, the type double supports the functions listed below. In this case, the type Base must also support these functions:
Syntax Result
y = erf(x) error function
y = asinh(x) inverse hyperbolic sin
y = acosh(x) inverse hyperbolic cosine
y = atanh(x) inverse hyperbolic tangent
y = expm1(x) exponential of x minus one
y = log1p(x) logarithm of one plus x
where the arguments and return value have the prototypes
     const Base& x
     Base        y

4.7.5.e: sign
The type Base must support the syntax
     y = CppAD::sign(x)
which computes @[@ y = \left\{ \begin{array}{ll} +1 & {\rm if} \; x > 0 \\ 0 & {\rm if} \; x = 0 \\ -1 & {\rm if} \; x < 0 \end{array} \right. @]@ where x and y have the same prototype as above. For example, see 4.7.9.1.q: base_alloc . Note that, if ordered comparisons are not defined for the type Base , the sign function should generate an assert if it is used; see 4.7.9.6.l: complex invalid unary math .

4.7.5.f: pow
The type Base must support the syntax
     z = CppAD::pow(x, y)
which computes @(@ z = x^y @)@. The arguments x and y have prototypes
     const Base& x
     const Base& y
and the return value z has prototype
     Base z
For example, see 4.7.9.1.r: base_alloc .

4.7.5.g: isnan
If Base defines the isnan function, you may also have to provide a definition in the CppAD namespace (to avoid a function ambiguity). For example, see 4.7.9.6.j: base_complex .
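A minimal sketch of such a definition, assuming that for the type Base a NaN value compares unequal to itself, is:

namespace CppAD {
     inline bool isnan(const Base& x)
     {     return x != x; }
}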
Input File: cppad/core/base_std_math.hpp
4.7.6: Base Type Requirements for Numeric Limits

4.7.6.a: CppAD::numeric_limits
A specialization for 4.4.6: CppAD::numeric_limits must be defined in order to use the type AD<Base> . CppAD does not use a specialization of std::numeric_limits<Base> . Since C++11, using a specialization of std::numeric_limits<Base> would require that Base be a literal type.

4.7.6.b: CPPAD_NUMERIC_LIMITS
In most cases, this macro can be used to define the specialization where the numeric limits for the type Base are the same as the standard numeric limits for the type Other . For most Base types, there is a choice of Other , for which the following preprocessor macro invocation suffices:
     namespace CppAD {
          CPPAD_NUMERIC_LIMITS(Other, Base)
     }
where the macro is defined by
# define CPPAD_NUMERIC_LIMITS(Other, Base) \
template <> class numeric_limits<Base>\
{\
     public:\
     static Base min(void) \
     {     return static_cast<Base>( std::numeric_limits<Other>::min() ); }\
     static Base max(void) \
     {     return static_cast<Base>( std::numeric_limits<Other>::max() ); }\
     static Base epsilon(void) \
     {     return static_cast<Base>( std::numeric_limits<Other>::epsilon() ); }\
     static Base quiet_NaN(void) \
     {     return static_cast<Base>( std::numeric_limits<Other>::quiet_NaN() ); }\
     static const int digits10 = std::numeric_limits<Other>::digits10;\
};

Input File: cppad/core/base_limits.hpp
4.7.7: Extending to_string To Another Floating Point Type

4.7.7.a: Base Requirement
If the function 8.25: to_string is used by an 12.4.c: AD type above Base , a specialization for the template structure CppAD::to_string_struct must be defined.

4.7.7.b: CPPAD_TO_STRING
For most Base types, the following can be used to define the specialization:
     namespace CppAD {
          CPPAD_TO_STRING(Base)
     }
Note that the CPPAD_TO_STRING macro assumes that the 4.7.6: base_limits and 4.7.5: base_std_math have already been defined for this type. This macro is defined as follows:
# define CPPAD_TO_STRING(Base) \
template <> struct to_string_struct<Base>\
{     std::string operator()(const Base& value) \
     {     std::stringstream os;\
          int n_digits = 1 + CppAD::numeric_limits<Base>::digits10; \
          os << std::setprecision(n_digits);\
          os << value;\
          return os.str();\
     }\
};

Input File: cppad/core/base_to_string.hpp
4.7.8: Base Type Requirements for Hash Coding Values

4.7.8.a: Syntax
code = hash_code(x)

4.7.8.b: Purpose
CppAD uses a table of Base type values when recording AD<Base> operations. A hashing function is used to reduce the number of values stored in this table; for example, it is not necessary to store the value 3.0 every time it is used as a 4.5.4: parameter .

4.7.8.c: Default
The default hashing function works with the set of bits that correspond to a Base value. In most cases this works well, but in some cases it does not. For example, in the 4.7.9.3: base_adolc.hpp case, an adouble value can have fields that are not initialized and valgrind reported an error when these are used to form the hash code.

4.7.8.d: x
This argument has prototype
     const Base& x
It is the value we are forming a hash code for.

4.7.8.e: code
The return value code has prototype
     unsigned short code
It is the hash code corresponding to x . The intention is that commonly used values will have different hash codes. The hash code must satisfy
     code < CPPAD_HASH_TABLE_SIZE
so that it is a valid index into the hash code table.

4.7.8.f: inline
If you define this function, you should declare it to be inline, so that you do not get multiple definitions from different compilation units.
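As an illustration only (not the CppAD default implementation), and assuming Base can be converted to double, the following sketch satisfies the requirements above:

namespace CppAD {
     inline unsigned short hash_code(const Base& x)
     {     // map |x| into the half open interval [0, CPPAD_HASH_TABLE_SIZE)
          double d = std::fmod(
               std::fabs( static_cast<double>(x) ) * 65536.0,
               double(CPPAD_HASH_TABLE_SIZE)
          );
          return static_cast<unsigned short>(d);
     }
}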

4.7.8.g: Example
See the base_alloc 4.7.9.1.u: hash_code and the adouble 4.7.9.3.r: hash_code .
Input File: cppad/core/base_hash.hpp
4.7.9: Example AD Base Types That are not AD<OtherBase>

4.7.9.a: Contents
4.7.9.1: Example AD<Base> Where Base Constructor Allocates Memory
4.7.9.2: Using a User Defined AD Base Type: Example and Test
4.7.9.3: Enable use of AD<Base> where Base is Adolc's adouble Type
4.7.9.4: Enable use of AD<Base> where Base is float
4.7.9.5: Enable use of AD<Base> where Base is double
4.7.9.6: Enable use of AD<Base> where Base is std::complex<double>

Input File: omh/base_require/base_example.omh
4.7.9.1: Example AD<Base> Where Base Constructor Allocates Memory

4.7.9.1.a: Purpose
Demonstrate use of AD<Base> where memory is allocated for each element of the type Base . In addition, this is a complete example where all the 4.7: required Base type operations are defined (as opposed to other examples where some of the operations for the Base type are already defined).

4.7.9.1.b: Include File
This file uses some of the definitions in 4.7: base_require and 8.23: thread_alloc .

# include <cppad/base_require.hpp>
# include <cppad/utility/thread_alloc.hpp>

4.7.9.1.c: Compound Assignment Macro
This macro is used for the base_alloc compound assignment operators; to be specific, used with op  equal to +=, -=, *=, /=.

# define BASE_ALLOC_ASSIGN_OPERATOR(op) \
     void operator op (const base_alloc& x) \
     {     *ptrdbl_ op *x.ptrdbl_; }

4.7.9.1.d: Binary Operator Macro
This macro is used for the base_alloc binary operators (as member functions); to be specific, used with op  equal to +, -, *, /.
# define BASE_ALLOC_BINARY_OPERATOR(op) const \
     base_alloc operator op (const base_alloc& x) const \
     {     base_alloc result; \
          double   dbl = *ptrdbl_; \
          double x_dbl = *x.ptrdbl_; \
          *result.ptrdbl_ = dbl op x_dbl; \
          return result; \
     }

4.7.9.1.e: Boolean Operator Macro
This macro can be used for the base_alloc binary operators that have a bool result; to be specific, used with op  equal to ==, !=, <, <=, >=, and >,
# define BASE_ALLOC_BOOL_OPERATOR(op) const \
     bool operator op (const base_alloc& x) const \
     {     double   dbl = *ptrdbl_; \
          double x_dbl = *x.ptrdbl_; \
          return dbl op x_dbl; \
     }

4.7.9.1.f: Class Definition
The following example class defines the necessary 4.7.1: base_member functions. It is made more complicated by storing a pointer to a double instead of the double value itself.

class base_alloc {
public:
     double* ptrdbl_;

     base_alloc(void)
     {     size_t cap;
          void* v  = CppAD::thread_alloc::get_memory(sizeof(double), cap);
          ptrdbl_  = static_cast<double*>(v);
     }
     base_alloc(double dbl)
     {     size_t cap;
          void *v  = CppAD::thread_alloc::get_memory(sizeof(double), cap);
          ptrdbl_  = static_cast<double*>(v);
          *ptrdbl_ = dbl;
     }
     base_alloc(const base_alloc& x)
     {     size_t cap;
          void *v  = CppAD::thread_alloc::get_memory(sizeof(double), cap);
          ptrdbl_  = static_cast<double*>(v);
          *ptrdbl_ = *x.ptrdbl_;
     }
     ~base_alloc(void)
     {     void* v  = static_cast<void*>(ptrdbl_);
          CppAD::thread_alloc::return_memory(v);
     }
     base_alloc operator-(void) const
     {     base_alloc result;
          *result.ptrdbl_ = - *ptrdbl_;
          return result;
     }
     base_alloc operator+(void) const
     {     return *this; }
     void operator=(const base_alloc& x)
     {     *ptrdbl_ = *x.ptrdbl_; }
     BASE_ALLOC_ASSIGN_OPERATOR(+=)
     BASE_ALLOC_ASSIGN_OPERATOR(-=)
     BASE_ALLOC_ASSIGN_OPERATOR(*=)
     BASE_ALLOC_ASSIGN_OPERATOR(/=)
     BASE_ALLOC_BINARY_OPERATOR(+)
     BASE_ALLOC_BINARY_OPERATOR(-)
     BASE_ALLOC_BINARY_OPERATOR(*)
     BASE_ALLOC_BINARY_OPERATOR(/)
     BASE_ALLOC_BOOL_OPERATOR(==)
     BASE_ALLOC_BOOL_OPERATOR(!=)
     // The <= operator is not necessary for the base type requirements
     // (needed so we can use NearEqual with base_alloc arguments).
     BASE_ALLOC_BOOL_OPERATOR(<=)
};

4.7.9.1.g: CondExpOp
The type base_alloc does not use 4.4.4: CondExp operations. Hence its CondExpOp function is defined by
namespace CppAD {
     inline base_alloc CondExpOp(
          enum CompareOp     cop          ,
          const base_alloc&       left         ,
          const base_alloc&       right        ,
          const base_alloc&       exp_if_true  ,
          const base_alloc&       exp_if_false )
     {     // not used
          assert(false);

          // to avoid a compiler error
          return base_alloc();
     }
}

4.7.9.1.h: CondExpRel
The 4.7.2.d: CPPAD_COND_EXP_REL macro invocation

namespace CppAD {
     CPPAD_COND_EXP_REL(base_alloc)
}
uses CondExpOp above to define CondExpRel for base_alloc arguments and Rel equal to Lt, Le, Eq, Ge, and Gt.

4.7.9.1.i: EqualOpSeq
The type base_alloc is simple (in this respect) and so we define
namespace CppAD {
     inline bool EqualOpSeq(const base_alloc& x, const base_alloc& y)
     {     return *x.ptrdbl_ == *y.ptrdbl_; }
}

4.7.9.1.j: Identical
The type base_alloc is simple (in this respect) and so we define
namespace CppAD {
     inline bool IdenticalPar(const base_alloc& x)
     {     return true; }
     inline bool IdenticalZero(const base_alloc& x)
     {     return (*x.ptrdbl_ == 0.0); }
     inline bool IdenticalOne(const base_alloc& x)
     {     return (*x.ptrdbl_ == 1.0); }
     inline bool IdenticalEqualPar(const base_alloc& x, const base_alloc& y)
     {     return (*x.ptrdbl_ == *y.ptrdbl_); }
}

4.7.9.1.k: Output Operator
namespace CppAD {
     std::ostream& operator << (std::ostream &os, const base_alloc& x)
     {     os << *x.ptrdbl_;
          return os;
     }
}

4.7.9.1.l: Integer
namespace CppAD {
     inline int Integer(const base_alloc& x)
     {     return static_cast<int>(*x.ptrdbl_); }
}

4.7.9.1.m: azmul

namespace CppAD {
     CPPAD_AZMUL( base_alloc )
}

4.7.9.1.n: Ordered
The base_alloc type supports ordered comparisons
namespace CppAD {
     inline bool GreaterThanZero(const base_alloc& x)
     {     return *x.ptrdbl_ > 0.0; }
     inline bool GreaterThanOrZero(const base_alloc& x)
     {     return *x.ptrdbl_ >= 0.0; }
     inline bool LessThanZero(const base_alloc& x)
     {     return *x.ptrdbl_ < 0.0; }
     inline bool LessThanOrZero(const base_alloc& x)
     {     return *x.ptrdbl_ <= 0.0; }
     inline bool abs_geq(const base_alloc& x, const base_alloc& y)
     {     return std::fabs(*x.ptrdbl_) >= std::fabs(*y.ptrdbl_); }
}

4.7.9.1.o: Unary Standard Math
The macro 4.7.5.c: CPPAD_STANDARD_MATH_UNARY would not work with the type base_alloc so we define a special macro for this type:

# define BASE_ALLOC_STD_MATH(fun) \
     inline base_alloc fun (const base_alloc& x) \
     { return   std::fun(*x.ptrdbl_); }
The following invocations of the macro above define the 4.7.5.b: unary standard math functions (except for abs):
namespace CppAD {
     BASE_ALLOC_STD_MATH(acos)
     BASE_ALLOC_STD_MATH(asin)
     BASE_ALLOC_STD_MATH(atan)
     BASE_ALLOC_STD_MATH(cos)
     BASE_ALLOC_STD_MATH(cosh)
     BASE_ALLOC_STD_MATH(exp)
     BASE_ALLOC_STD_MATH(fabs)
     BASE_ALLOC_STD_MATH(log)
     BASE_ALLOC_STD_MATH(log10)
     BASE_ALLOC_STD_MATH(sin)
     BASE_ALLOC_STD_MATH(sinh)
     BASE_ALLOC_STD_MATH(sqrt)
     BASE_ALLOC_STD_MATH(tan)
     BASE_ALLOC_STD_MATH(tanh)
}
The absolute value function is special because its std name is fabs
namespace CppAD {
     inline base_alloc abs(const base_alloc& x)
     {     return fabs(x); }
}

4.7.9.1.p: erf, asinh, acosh, atanh, expm1, log1p
The following defines the 4.7.5.d: erf, asinh, acosh, atanh, expm1, log1p functions required by AD<base_alloc>:
# if CPPAD_USE_CPLUSPLUS_2011
     BASE_ALLOC_STD_MATH(erf)
     BASE_ALLOC_STD_MATH(asinh)
     BASE_ALLOC_STD_MATH(acosh)
     BASE_ALLOC_STD_MATH(atanh)
     BASE_ALLOC_STD_MATH(expm1)
     BASE_ALLOC_STD_MATH(log1p)
# endif

4.7.9.1.q: sign
The following defines the CppAD::sign function that is required to use AD<base_alloc>:
namespace CppAD {
     inline base_alloc sign(const base_alloc& x)
     {     if( *x.ptrdbl_ > 0.0 )
               return 1.0;
          if( *x.ptrdbl_ == 0.0 )
               return 0.0;
          return -1.0;
     }
}

4.7.9.1.r: pow
The following defines a CppAD::pow function that is required to use AD<base_alloc>:
namespace CppAD {
     inline base_alloc pow(const base_alloc& x, const base_alloc& y)
     { return std::pow(*x.ptrdbl_, *y.ptrdbl_); }
}

4.7.9.1.s: numeric_limits
The following defines the CppAD 4.4.6: numeric_limits for the type base_alloc:

namespace CppAD {
     CPPAD_NUMERIC_LIMITS(double, base_alloc)
}

4.7.9.1.t: to_string
The following defines the CppAD 8.25: to_string function for the type base_alloc:

namespace CppAD {
     CPPAD_TO_STRING(base_alloc)
}

4.7.9.1.u: hash_code
The 4.7.8.c: default hashing function does not work well for this case because two different pointers can point to the same value.
namespace CppAD {
     inline unsigned short hash_code(const base_alloc& x)
     {     unsigned short code = 0;
          if( *x.ptrdbl_ == 0.0 )
               return code;
          double log_x = log( std::fabs( *x.ptrdbl_ ) );
          // assume log( std::numeric_limits<double>::max() ) is near 700
          code = static_cast<unsigned short>(
               (CPPAD_HASH_TABLE_SIZE / 700 + 1) * log_x
          );
          code = code % CPPAD_HASH_TABLE_SIZE;
          return code;
     }
}

Input File: example/general/base_alloc.hpp
4.7.9.2: Using a User Defined AD Base Type: Example and Test
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//
# include "base_alloc.hpp"
# include <cppad/cppad.hpp>

bool base_require(void)
{     bool ok = true;
     using CppAD::thread_alloc;
     typedef CppAD::AD<base_alloc> ad_base_alloc;

     // check the amount of memory inuse by this thread (thread zero)
     size_t thread = thread_alloc::thread_num();
     ok &= thread == 0;

     // y = x^2
     size_t n = 1, m = 1;
     CPPAD_TESTVECTOR(ad_base_alloc) a_x(n), a_y(m);
     a_x[0] = ad_base_alloc(1.);
     CppAD::Independent(a_x);
     a_y[0] = a_x[0] * a_x[0];
     CppAD::ADFun<base_alloc> f(a_x, a_y);

     // check function value f(x) = x^2
     CPPAD_TESTVECTOR(base_alloc) x(n), y(m);
     base_alloc eps =
          base_alloc(100.) * CppAD::numeric_limits<base_alloc>::epsilon();
     x[0] = base_alloc(3.);
     y    = f.Forward(0, x);
     ok  &= CppAD::NearEqual(y[0], x[0] * x[0], eps, eps);

     // check derivative value f'(x) = 2 * x
     CPPAD_TESTVECTOR(base_alloc) dy(m * n);
     dy   = f.Jacobian(x);
     ok  &= CppAD::NearEqual(dy[0], base_alloc(2.) * x[0], eps, eps);

     return ok;
}
4.7.9.2.a: Purpose
The type base_alloc, defined in 4.7.9.1: base_alloc.hpp , meets the requirements specified by 4.7: base_require for Base in AD<Base> . The program above is an example use of AD<base_alloc> .
Input File: example/general/base_require.cpp
4.7.9.3: Enable use of AD<Base> where Base is Adolc's adouble Type

4.7.9.3.a: Syntax
# include <cppad/example/base_adolc.hpp>

4.7.9.3.b: Example
The file 4.7.9.3.1: mul_level_adolc.cpp contains an example use of Adolc's adouble type for a CppAD Base type. It returns true if it succeeds and false otherwise. The file 10.2.13: mul_level_adolc_ode.cpp contains a more realistic (and complex) example.

4.7.9.3.c: Include Files
This file base_adolc.hpp requires adouble to be defined. In addition, it is included before <cppad/cppad.hpp>, so it must itself include the parts of CppAD that it uses. This is done with the following include commands:

# include <adolc/adolc.h>
# include <cppad/base_require.hpp>

4.7.9.3.d: CondExpOp
The type adouble supports a conditional assignment function with the syntax
     condassign(a, b, c, d)
which evaluates to
     a = (b > 0) ? c : d;
This enables one to include conditionals in the recording of adouble operations and later evaluation for different values of the independent variables (in the same spirit as the CppAD 4.4.4: CondExp function).
namespace CppAD {
     inline adouble CondExpOp(
          enum  CppAD::CompareOp     cop ,
          const adouble            &left ,
          const adouble           &right ,
          const adouble        &trueCase ,
          const adouble       &falseCase )
     {     adouble result;
          switch( cop )
          {
               case CompareLt: // left < right
               condassign(result, right - left, trueCase, falseCase);
               break;

               case CompareLe: // left <= right
               condassign(result, left - right, falseCase, trueCase);
               break;

               case CompareEq: // left == right
               condassign(result, left - right, falseCase, trueCase);
               condassign(result, right - left, falseCase, result);
               break;

               case CompareGe: // left >= right
               condassign(result, right - left, falseCase, trueCase);
               break;

               case CompareGt: // left > right
               condassign(result, left - right, trueCase, falseCase);
               break;
               default:
               CppAD::ErrorHandler::Call(
                    true     , __LINE__ , __FILE__ ,
                    "CppAD::CondExp",
                    "Error: for unknown reason."
               );
               result = trueCase;
          }
          return result;
     }
}
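For example, the following sketch (illustrative, not part of base_adolc.hpp) evaluates r = (x < y) ? t : f through the condassign based definition above:

# include <adolc/adolc.h>
# include <cppad/example/base_adolc.hpp>
# include <cppad/cppad.hpp>

// r = (x < y) ? t : f, i.e., condassign(r, y - x, t, f)
adouble cond_lt(
     const adouble& x, const adouble& y ,
     const adouble& t, const adouble& f )
{     return CppAD::CondExpOp(CppAD::CompareLt, x, y, t, f); }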

4.7.9.3.e: CondExpRel
The 4.7.2.d: CPPAD_COND_EXP_REL macro invocation

namespace CppAD {
     CPPAD_COND_EXP_REL(adouble)
}

4.7.9.3.f: EqualOpSeq
The Adolc user interface does not specify a way to determine if two adouble variables correspond to the same operations sequence. Make EqualOpSeq an error if it gets used:
namespace CppAD {
     inline bool EqualOpSeq(const adouble &x, const adouble &y)
     {     CppAD::ErrorHandler::Call(
               true     , __LINE__ , __FILE__ ,
               "CppAD::EqualOpSeq(x, y)",
               "Error: adouble does not support EqualOpSeq."
          );
          return false;
     }
}

4.7.9.3.g: Identical
The Adolc user interface does not specify a way to determine if an adouble depends on the independent variables. To be safe (but slow) return false in all the cases below.
namespace CppAD {
     inline bool IdenticalPar(const adouble &x)
     {     return false; }
     inline bool IdenticalZero(const adouble &x)
     {     return false; }
     inline bool IdenticalOne(const adouble &x)
     {     return false; }
     inline bool IdenticalEqualPar(const adouble &x, const adouble &y)
     {     return false; }
}

4.7.9.3.h: Integer

     inline int Integer(const adouble &x)
     {    return static_cast<int>( x.getValue() ); }

4.7.9.3.i: azmul

namespace CppAD {
     CPPAD_AZMUL( adouble )
}

4.7.9.3.j: Ordered
namespace CppAD {
     inline bool GreaterThanZero(const adouble &x)
     {    return (x > 0); }
     inline bool GreaterThanOrZero(const adouble &x)
     {    return (x >= 0); }
     inline bool LessThanZero(const adouble &x)
     {    return (x < 0); }
     inline bool LessThanOrZero(const adouble &x)
     {    return (x <= 0); }
     inline bool abs_geq(const adouble& x, const adouble& y)
     {     return fabs(x) >= fabs(y); }
}

4.7.9.3.k: Unary Standard Math
The following 4.7: required functions are defined by the Adolc package for the adouble base case:
acos, asin, atan, cos, cosh, exp, fabs, log, sin, sinh, sqrt, tan.

4.7.9.3.l: erf, asinh, acosh, atanh, expm1, log1p
If the 4.7.5.d: erf, asinh, acosh, atanh, expm1, log1p functions are supported by the compiler, they must also be supported by a Base type; the Adolc package does not support these functions, so their use is made an error:
namespace CppAD {
# define CPPAD_BASE_ADOLC_NO_SUPPORT(fun)                         \
    inline adouble fun(const adouble& x)                          \
    {   CPPAD_ASSERT_KNOWN(                                       \
            false,                                                \
            #fun ": adolc does not support this function"         \
        );                                                        \
        return 0.0;                                               \
    }
# if CPPAD_USE_CPLUSPLUS_2011
     CPPAD_BASE_ADOLC_NO_SUPPORT(erf)
     CPPAD_BASE_ADOLC_NO_SUPPORT(asinh)
     CPPAD_BASE_ADOLC_NO_SUPPORT(acosh)
     CPPAD_BASE_ADOLC_NO_SUPPORT(atanh)
     CPPAD_BASE_ADOLC_NO_SUPPORT(expm1)
     CPPAD_BASE_ADOLC_NO_SUPPORT(log1p)
# endif
# undef CPPAD_BASE_ADOLC_NO_SUPPORT
}

4.7.9.3.m: sign
This 4.7: required function is defined using the condassign function so that its adouble operation sequence does not depend on the value of x .
namespace CppAD {
     inline adouble sign(const adouble& x)
     {     adouble s_plus, s_minus, half(.5);
          // set s_plus to sign(x)/2,  except for case x == 0, s_plus = -.5
          condassign(s_plus,  +x, +half, -half);
          // set s_minus to -sign(x)/2, except for case x == 0, s_minus = -.5
          condassign(s_minus, -x, +half, -half);
          // set s to sign(x)
          return s_plus - s_minus;
     }
}

4.7.9.3.n: abs
This 4.7: required function uses the Adolc fabs function:
namespace CppAD {
     inline adouble abs(const adouble& x)
     {     return fabs(x); }
}

4.7.9.3.o: pow
This 4.7: required function is defined by the Adolc package for the adouble base case.

4.7.9.3.p: numeric_limits
The following defines the CppAD 4.4.6: numeric_limits for the type adouble:

namespace CppAD {
     CPPAD_NUMERIC_LIMITS(double, adouble)
}

4.7.9.3.q: to_string
The following defines the CppAD 8.25: to_string function for the type adouble:
namespace CppAD {
     template <> struct to_string_struct<adouble>
     {     std::string operator()(const adouble& x)
          {     std::stringstream os;
               int n_digits = 1 + std::numeric_limits<double>::digits10;
               os << std::setprecision(n_digits);
               os << x.value();
               return os.str();
          }
     };
}

4.7.9.3.r: hash_code
It appears that an adouble object can have fields that are not initialized. This results in a valgrind error when these fields are used by the 4.7.8.c: default hashing function. For this reason, the default definition is overridden for the adouble case.
namespace CppAD {
     inline unsigned short hash_code(const adouble& x)
     {     unsigned short code = 0;
          double value = x.value();
          if( value == 0.0 )
               return code;
          double log_x = std::log( fabs( value ) );
          // assume log( std::numeric_limits<double>::max() ) is near 700
          code = static_cast<unsigned short>(
               (CPPAD_HASH_TABLE_SIZE / 700 + 1) * log_x
          );
          code = code % CPPAD_HASH_TABLE_SIZE;
          return code;
     }
}
Note that after the hash codes match, the 4.7.9.3.g: Identical function will be used to make sure two values are the same and one can replace the other. A more sophisticated implementation of the Identical function would detect which adouble values depend on the adouble independent variables (and hence can change).
Input File: cppad/example/base_adolc.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.7.9.3.1: Using Adolc with Multiple Levels of Taping: Example and Test

4.7.9.3.1.a: Purpose
In this example, we use AD<adouble> (level two taping) to compute values of the function @(@ f : \B{R}^n \rightarrow \B{R} @)@ where @[@ f(x) = \frac{1}{2} \left( x_0^2 + \cdots + x_{n-1}^2 \right) @]@ We then use Adolc's adouble (level one taping) to compute the directional derivative @[@ f^{(1)} (x) * v = x_0 v_0 + \cdots + x_{n-1} v_{n-1} @]@ where @(@ v \in \B{R}^n @)@. We then use double (no taping) to compute @[@ \frac{d}{dx} \left[ f^{(1)} (x) * v \right] = v @]@ This is only meant as an example of multiple levels of taping. The example 5.4.2.2: hes_times_dir.cpp computes the same value more efficiently by using the identity: @[@ \frac{d}{dx} \left[ f^{(1)} (x) * v \right] = f^{(2)} (x) * v @]@ The example 10.2.10.1: mul_level.cpp computes the same values using AD< AD<double> > and AD<double>.

4.7.9.3.1.b: Memory Management
Adolc uses raw memory arrays that depend on the number of dependent and independent variables. The memory management utility 8.23: thread_alloc is used to manage this memory allocation.

4.7.9.3.1.c: Configuration Requirement
This example will be compiled and tested provided that the value 2.2.1: adolc_prefix is specified on the 2.2: cmake command line.
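For example, if Adolc were installed below $HOME/prefix/adolc, the 2.2: cmake command line would include an option of the form

     -D adolc_prefix=$HOME/prefix/adolc

(the prefix value here is illustrative).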

4.7.9.3.1.d: Source

// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//

# include <adolc/adouble.h>
# include <adolc/taping.h>
# include <adolc/interfaces.h>

// adouble definitions not in Adolc distribution and
// required in order to use CppAD::AD<adouble>
# include <cppad/example/base_adolc.hpp>

# include <cppad/cppad.hpp>

namespace {
     // f(x) = |x|^2 / 2 = .5 * ( x[0]^2 + ... + x[n-1]^2 )
     template <class Type>
     Type f(const CPPAD_TESTVECTOR(Type)& x)
     {     Type sum = 0.;
          for(size_t i = 0; i < size_t(x.size()); i++)
               sum += x[i] * x[i];

          return .5 * sum;
     }
}

bool mul_level_adolc(void)
{     bool ok = true;                // initialize test result
     using CppAD::thread_alloc;        // The CppAD memory allocator

     typedef adouble           a1type;  // for first level of taping
     typedef CppAD::AD<a1type> a2type; // for second level of taping
     size_t n = 5;                          // number of independent variables
     size_t j;

     // 10 times machine epsilon
     double eps = 10. * std::numeric_limits<double>::epsilon();

     CPPAD_TESTVECTOR(double) x(n);
     CPPAD_TESTVECTOR(a1type) a1x(n);
     CPPAD_TESTVECTOR(a2type) a2x(n);

     // Values for the independent variables while taping the function f(x)
     for(j = 0; j < n; j++)
          a2x[j] = double(j);
     // Declare the independent variable for taping f(x)
     CppAD::Independent(a2x);

     // Use AD<adouble> to tape the evaluation of f(x)
     CPPAD_TESTVECTOR(a2type) a2y(1);
     a2y[0] = f(a2x);

     // Declare a1f as the corresponding ADFun<adouble> function f(x)
     // (make sure we do not run zero order forward during constructor)
     CppAD::ADFun<a1type> a1f;
     a1f.Dependent(a2x, a2y);

     // Value of the independent variables while taping f'(x) * v
     int tag = 0;
     int keep = 1;
     trace_on(tag, keep);
     for(j = 0; j < n; j++)
          a1x[j] <<= double(j);

     // set the argument value x for computing f'(x) * v
     a1f.Forward(0, a1x);

     // compute f'(x) * v
     CPPAD_TESTVECTOR(a1type) a1v(n);
     CPPAD_TESTVECTOR(a1type) a1df(1);
     for(j = 0; j < n; j++)
          a1v[j] = double(n - j);
     a1df = a1f.Forward(1, a1v);

     // declare Adolc function corresponding to f'(x) * v
     double df;
     a1df[0] >>= df;
     trace_off();

     // compute the d/dx of f'(x) * v = f''(x) * v
     size_t m      = 1;                     // # dependent in f'(x) * v

     // w = new double[capacity] where capacity >= m
     size_t capacity;
     double* w  = thread_alloc::create_array<double>(m, capacity);

     // dw = new double[capacity] where capacity >= n
     double* dw = thread_alloc::create_array<double>(n, capacity);

     w[0]  = 1.;
     fos_reverse(tag, int(m), int(n), w, dw);

     for(j = 0; j < n; j++)
     {     double vj = a1v[j].value();
          ok &= CppAD::NearEqual(dw[j], vj, eps, eps);
     }

     // make memory available for other use by this thread
     thread_alloc::delete_array(w);
     thread_alloc::delete_array(dw);
     return ok;
}

Input File: example/general/mul_level_adolc.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.7.9.4: Enable use of AD<Base> where Base is float

4.7.9.4.a: CondExpOp
The type float is a relatively simple type that supports <, <=, ==, >=, and > operators; see 4.7.2.c.a: ordered type . Hence its CondExpOp function is defined by
namespace CppAD {
     inline float CondExpOp(
          enum CompareOp     cop          ,
          const float&       left         ,
          const float&       right        ,
          const float&       exp_if_true  ,
          const float&       exp_if_false )
     {     return CondExpTemplate(cop, left, right, exp_if_true, exp_if_false);
     }
}

4.7.9.4.b: CondExpRel
The 4.7.2.d: CPPAD_COND_EXP_REL macro invocation

namespace CppAD {
     CPPAD_COND_EXP_REL(float)
}
uses CondExpOp above to define CondExpRel for float arguments and Rel equal to Lt, Le, Eq, Ge, and Gt.
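For example, after this macro invocation one can write the following sketch (the function name is illustrative):

# include <cppad/cppad.hpp>

// r = (x <= y) ? t : f, using the CondExpLe generated by the macro
inline float cond_le(float x, float y, float t, float f)
{     return CppAD::CondExpLe(x, y, t, f); }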

4.7.9.4.c: EqualOpSeq
The type float is simple (in this respect) and so we define
namespace CppAD {
     inline bool EqualOpSeq(const float& x, const float& y)
     {     return x == y; }
}

4.7.9.4.d: Identical
The type float is simple (in this respect) and so we define
namespace CppAD {
     inline bool IdenticalPar(const float& x)
     {     return true; }
     inline bool IdenticalZero(const float& x)
     {     return (x == 0.f); }
     inline bool IdenticalOne(const float& x)
     {     return (x == 1.f); }
     inline bool IdenticalEqualPar(const float& x, const float& y)
     {     return (x == y); }
}

4.7.9.4.e: Integer
namespace CppAD {
     inline int Integer(const float& x)
     {     return static_cast<int>(x); }
}

4.7.9.4.f: azmul

namespace CppAD {
     CPPAD_AZMUL( float )
}

4.7.9.4.g: Ordered
The float type supports ordered comparisons
namespace CppAD {
     inline bool GreaterThanZero(const float& x)
     {     return x > 0.f; }
     inline bool GreaterThanOrZero(const float& x)
     {     return x >= 0.f; }
     inline bool LessThanZero(const float& x)
     {     return x < 0.f; }
     inline bool LessThanOrZero(const float& x)
     {     return x <= 0.f; }
     inline bool abs_geq(const float& x, const float& y)
     {     return std::fabs(x) >= std::fabs(y); }
}

4.7.9.4.h: Unary Standard Math
The following macro invocations import the float versions of the unary standard math functions into the CppAD namespace. Importing avoids ambiguity errors when using both the CppAD and std namespaces. Note this also defines the 4.7.9.5.h: double versions of these functions.
namespace CppAD {
     using std::acos;
     using std::asin;
     using std::atan;
     using std::cos;
     using std::cosh;
     using std::exp;
     using std::fabs;
     using std::log;
     using std::log10;
     using std::sin;
     using std::sinh;
     using std::sqrt;
     using std::tan;
     using std::tanh;
# if CPPAD_USE_CPLUSPLUS_2011
     using std::erf;
     using std::asinh;
     using std::acosh;
     using std::atanh;
     using std::expm1;
     using std::log1p;
# endif
}
The absolute value function is special because its std name is fabs
namespace CppAD {
     inline float abs(const float& x)
     {     return std::fabs(x); }
}

4.7.9.4.i: sign
The following defines the CppAD::sign function that is required to use AD<float>:
namespace CppAD {
     inline float sign(const float& x)
     {     if( x > 0.f )
               return 1.f;
          if( x == 0.f )
               return 0.f;
          return -1.f;
     }
}

4.7.9.4.j: pow
The following defines a CppAD::pow function that is required to use AD<float>. As with the unary standard math functions, this has the exact same signature as std::pow, so use it instead of defining another function.

namespace CppAD {
     using std::pow;
}

4.7.9.4.k: numeric_limits
The following defines the CppAD 4.4.6: numeric_limits for the type float:

namespace CppAD {
     CPPAD_NUMERIC_LIMITS(float, float)
}

4.7.9.4.l: to_string
There is no need to define to_string for float because it is defined by including cppad/utility/to_string.hpp; see 8.25: to_string . See 4.7.9.6.o: base_complex.hpp for an example where it is necessary to define to_string for a Base type.
Input File: cppad/core/base_float.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.7.9.5: Enable use of AD<Base> where Base is double

4.7.9.5.a: CondExpOp
The type double is a relatively simple type that supports <, <=, ==, >=, and > operators; see 4.7.2.c.a: ordered type . Hence its CondExpOp function is defined by
namespace CppAD {
     inline double CondExpOp(
          enum CompareOp     cop          ,
          const double&       left         ,
          const double&       right        ,
          const double&       exp_if_true  ,
          const double&       exp_if_false )
     {     return CondExpTemplate(cop, left, right, exp_if_true, exp_if_false);
     }
}

4.7.9.5.b: CondExpRel
The 4.7.2.d: CPPAD_COND_EXP_REL macro invocation

namespace CppAD {
     CPPAD_COND_EXP_REL(double)
}
uses CondExpOp above to define CondExpRel for double arguments and Rel equal to Lt, Le, Eq, Ge, and Gt.

4.7.9.5.c: EqualOpSeq
The type double is simple (in this respect) and so we define
namespace CppAD {
     inline bool EqualOpSeq(const double& x, const double& y)
     {     return x == y; }
}

4.7.9.5.d: Identical
The type double is simple (in this respect) and so we define
namespace CppAD {
     inline bool IdenticalPar(const double& x)
     {     return true; }
     inline bool IdenticalZero(const double& x)
     {     return (x == 0.); }
     inline bool IdenticalOne(const double& x)
     {     return (x == 1.); }
     inline bool IdenticalEqualPar(const double& x, const double& y)
     {     return (x == y); }
}

4.7.9.5.e: Integer
namespace CppAD {
     inline int Integer(const double& x)
     {     return static_cast<int>(x); }
}

4.7.9.5.f: azmul

namespace CppAD {
     CPPAD_AZMUL( double )
}

4.7.9.5.g: Ordered
The double type supports ordered comparisons
namespace CppAD {
     inline bool GreaterThanZero(const double& x)
     {     return x > 0.; }
     inline bool GreaterThanOrZero(const double& x)
     {     return x >= 0.; }
     inline bool LessThanZero(const double& x)
     {     return x < 0.; }
     inline bool LessThanOrZero(const double& x)
     {     return x <= 0.; }
     inline bool abs_geq(const double& x, const double& y)
     {     return std::fabs(x) >= std::fabs(y); }
}

4.7.9.5.h: Unary Standard Math
The following macro invocations import the double versions of the unary standard math functions into the CppAD namespace. Importing avoids ambiguity errors when using both the CppAD and std namespaces. Note this also defines the 4.7.9.4.h: float versions of these functions.
namespace CppAD {
     using std::acos;
     using std::asin;
     using std::atan;
     using std::cos;
     using std::cosh;
     using std::exp;
     using std::fabs;
     using std::log;
     using std::log10;
     using std::sin;
     using std::sinh;
     using std::sqrt;
     using std::tan;
     using std::tanh;
# if CPPAD_USE_CPLUSPLUS_2011
     using std::erf;
     using std::asinh;
     using std::acosh;
     using std::atanh;
     using std::expm1;
     using std::log1p;
# endif
}
The absolute value function is special because its std name is fabs
namespace CppAD {
     inline double abs(const double& x)
     {     return std::fabs(x); }
}

4.7.9.5.i: sign
The following defines the CppAD::sign function that is required to use AD<double>:
namespace CppAD {
     inline double sign(const double& x)
     {     if( x > 0. )
               return 1.;
          if( x == 0. )
               return 0.;
          return -1.;
     }
}

4.7.9.5.j: pow
The following defines a CppAD::pow function that is required to use AD<double>. As with the unary standard math functions, this has the exact same signature as std::pow, so use it instead of defining another function.

namespace CppAD {
     using std::pow;
}

4.7.9.5.k: numeric_limits
The following defines the CppAD 4.4.6: numeric_limits for the type double:

namespace CppAD {
     CPPAD_NUMERIC_LIMITS(double, double)
}

4.7.9.5.l: to_string
There is no need to define to_string for double because it is defined by including cppad/utility/to_string.hpp; see 8.25: to_string . See 4.7.9.6.o: base_complex.hpp for an example where it is necessary to define to_string for a Base type.
Input File: cppad/core/base_double.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.7.9.6: Enable use of AD<Base> where Base is std::complex<double>

4.7.9.6.a: Example
The file 4.7.9.6.1: complex_poly.cpp contains an example use of std::complex<double> type for a CppAD Base type. It returns true if it succeeds and false otherwise.

4.7.9.6.b: Include Order
This file is included before <cppad/cppad.hpp>, so it is necessary to include the error handler definition in addition to 4.7.e: base_require.hpp :
# include <limits>
# include <complex>
# include <cppad/base_require.hpp>
# include <cppad/core/cppad_assert.hpp>
4.7.9.6.c: CondExpOp
The type std::complex<double> does not support the <, <=, ==, >=, and > operators; see 4.7.2.c.b: not ordered . Hence its CondExpOp function is defined by
namespace CppAD {
     inline std::complex<double> CondExpOp(
          enum CppAD::CompareOp      cop        ,
          const std::complex<double> &left      ,
          const std::complex<double> &right     ,
          const std::complex<double> &trueCase  ,
          const std::complex<double> &falseCase )
     {     CppAD::ErrorHandler::Call(
               true     , __LINE__ , __FILE__ ,
               "std::complex<float> CondExpOp(...)",
               "Error: cannot use CondExp with a complex type"
          );
          return std::complex<double>(0);
     }
}

4.7.9.6.d: CondExpRel
The 4.7.2.d: CPPAD_COND_EXP_REL macro invocation

namespace CppAD {
     CPPAD_COND_EXP_REL( std::complex<double> )
}
uses CondExpOp above to define CondExpRel for std::complex<double> arguments and Rel equal to Lt, Le, Eq, Ge, and Gt.

4.7.9.6.e: EqualOpSeq
Complex numbers do not carry operation sequence information. Thus they are equal in this sense if and only if their values are equal.
namespace CppAD {
     inline bool EqualOpSeq(
          const std::complex<double> &x ,
          const std::complex<double> &y )
     {     return x == y;
     }
}

4.7.9.6.f: Identical
Complex numbers do not carry operation sequence information. Thus they are all parameters, so the Identical functions just check values.
namespace CppAD {
     inline bool IdenticalPar(const std::complex<double> &x)
     {     return true; }
     inline bool IdenticalZero(const std::complex<double> &x)
     {     return (x == std::complex<double>(0., 0.) ); }
     inline bool IdenticalOne(const std::complex<double> &x)
     {     return (x == std::complex<double>(1., 0.) ); }
     inline bool IdenticalEqualPar(
          const std::complex<double> &x, const std::complex<double> &y)
     {     return (x == y); }
}

4.7.9.6.g: Ordered
Complex types do not support ordered comparison operators, so the corresponding functions are defined to generate an error:
# undef  CPPAD_USER_MACRO
# define CPPAD_USER_MACRO(Fun)                                     \
inline bool Fun(const std::complex<double>& x)                     \
{      CppAD::ErrorHandler::Call(                                  \
               true     , __LINE__ , __FILE__ ,                    \
               #Fun"(x)",                                          \
               "Error: cannot use " #Fun " with x complex<double> " \
       );                                                          \
       return false;                                               \
}
namespace CppAD {
     CPPAD_USER_MACRO(LessThanZero)
     CPPAD_USER_MACRO(LessThanOrZero)
     CPPAD_USER_MACRO(GreaterThanOrZero)
     CPPAD_USER_MACRO(GreaterThanZero)
     inline bool abs_geq(
          const std::complex<double>& x ,
          const std::complex<double>& y )
     {     return std::abs(x) >= std::abs(y); }
}

4.7.9.6.h: Integer
The implementation of this function must agree with the CppAD user specifications for complex arguments to the 4.3.2.d.b: Integer function:
namespace CppAD {
     inline int Integer(const std::complex<double> &x)
     {     return static_cast<int>( x.real() ); }
}

4.7.9.6.i: azmul

namespace CppAD {
     CPPAD_AZMUL( std::complex<double> )
}

4.7.9.6.j: isnan
The gcc 4.1.1 compiler defines the function
     int std::complex<double>::isnan( std::complex<double> z )
(which is not specified in the C++ 1998 standard ISO/IEC 14882). This causes an ambiguity between the function above and the CppAD 8.11: isnan template function. We avoid this ambiguity by defining a non-template version of this function in the CppAD namespace.
namespace CppAD {
     inline bool isnan(const std::complex<double>& z)
     {     return (z != z);
     }
}

4.7.9.6.k: Valid Unary Math
The following macro invocations define the standard unary math functions that are valid with complex arguments and are required to use AD< std::complex<double> >.
namespace CppAD {
     CPPAD_STANDARD_MATH_UNARY(std::complex<double>, cos)
     CPPAD_STANDARD_MATH_UNARY(std::complex<double>, cosh)
     CPPAD_STANDARD_MATH_UNARY(std::complex<double>, exp)
     CPPAD_STANDARD_MATH_UNARY(std::complex<double>, log)
     CPPAD_STANDARD_MATH_UNARY(std::complex<double>, sin)
     CPPAD_STANDARD_MATH_UNARY(std::complex<double>, sinh)
     CPPAD_STANDARD_MATH_UNARY(std::complex<double>, sqrt)
}

4.7.9.6.l: Invalid Unary Math
The following macro definition and invocations define the standard unary math functions that are invalid with complex arguments and are required to use AD< std::complex<double> >.
# undef  CPPAD_USER_MACRO
# define CPPAD_USER_MACRO(Fun)                                     \
inline std::complex<double> Fun(const std::complex<double>& x)     \
{      CppAD::ErrorHandler::Call(                                  \
               true     , __LINE__ , __FILE__ ,                    \
               #Fun"(x)",                                          \
               "Error: cannot use " #Fun " with x complex<double> " \
       );                                                          \
       return std::complex<double>(0);                             \
}
namespace CppAD {
     CPPAD_USER_MACRO(abs)
     CPPAD_USER_MACRO(fabs)
     CPPAD_USER_MACRO(acos)
     CPPAD_USER_MACRO(asin)
     CPPAD_USER_MACRO(atan)
     CPPAD_USER_MACRO(sign)
# if CPPAD_USE_CPLUSPLUS_2011
     CPPAD_USER_MACRO(erf)
     CPPAD_USER_MACRO(asinh)
     CPPAD_USER_MACRO(acosh)
     CPPAD_USER_MACRO(atanh)
     CPPAD_USER_MACRO(expm1)
     CPPAD_USER_MACRO(log1p)
# endif
}

4.7.9.6.m: pow
The following defines a CppAD::pow function that is required to use AD< std::complex<double> >:
namespace CppAD {
     inline std::complex<double> pow(
          const std::complex<double> &x ,
          const std::complex<double> &y )
     {     return std::pow(x, y); }
}

4.7.9.6.n: numeric_limits
The following defines the CppAD 4.4.6: numeric_limits for the type std::complex<double>:

namespace CppAD {
     CPPAD_NUMERIC_LIMITS(double, std::complex<double>)
}

4.7.9.6.o: to_string
The following defines the function CppAD 8.25: to_string for the type std::complex<double>:

namespace CppAD {
     CPPAD_TO_STRING(std::complex<double>)
}

Input File: cppad/core/base_complex.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
4.7.9.6.1: Complex Polynomial: Example and Test

4.7.9.6.1.a: Poly
Select this link to view specifications for 8.13: Poly :
// Complex examples should suppress conversion warnings
# include <cppad/wno_conversion.hpp>

# include <cppad/cppad.hpp>
# include <complex>

bool complex_poly(void)
{     bool ok    = true;
     size_t deg = 4;

     using CppAD::AD;
     using CppAD::Poly;
     typedef std::complex<double> Complex;

     // polynomial coefficients
     CPPAD_TESTVECTOR( Complex )   a   (deg + 1); // coefficients for p(z)
     CPPAD_TESTVECTOR(AD<Complex>) A   (deg + 1);
     size_t i;
     for(i = 0; i <= deg; i++)
          A[i] = a[i] = Complex(double(i), double(i));

     // independent variable vector
     CPPAD_TESTVECTOR(AD<Complex>) Z(1);
     Complex z = Complex(1., 2.);
     Z[0]      = z;
     Independent(Z);

     // dependent variable vector and indices
     CPPAD_TESTVECTOR(AD<Complex>) P(1);

     // dependent variable values
     P[0] = Poly(0, A, Z[0]);

     // create f: Z -> P and vectors used for derivative calculations
     CppAD::ADFun<Complex> f(Z, P);
     CPPAD_TESTVECTOR(Complex) v( f.Domain() );
     CPPAD_TESTVECTOR(Complex) w( f.Range() );

     // check first derivative w.r.t z
     v[0]      = 1.;
     w         = f.Forward(1, v);
     Complex p = Poly(1, a, z);
     ok &= ( w[0]  == p );

     // second derivative w.r.t z is 2 times its second order Taylor coeff
     v[0] = 0.;
     w    = f.Forward(2, v);
     p    = Poly(2, a, z);
     ok &= ( 2. * w[0]  == p );

     return ok;
}

Input File: example/general/complex_poly.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5: ADFun Objects

5.a: Purpose
An AD of Base 12.4.g.b: operation sequence is stored in an ADFun object by its 5.1.2: FunConstruct . The ADFun object can then be used to calculate function values, derivative values, and other values related to the corresponding function.
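As a minimal sketch of this workflow (record an operation sequence, then evaluate a derivative; the sections below give the details):

# include <cppad/cppad.hpp>

bool adfun_sketch(void)
{     using CppAD::AD;
     // record the operation sequence for y = x * x
     CPPAD_TESTVECTOR( AD<double> ) ax(1), ay(1);
     ax[0] = 2.;
     CppAD::Independent(ax);
     ay[0] = ax[0] * ax[0];
     CppAD::ADFun<double> f(ax, ay);
     // evaluate the derivative f'(x) = 2 * x at x = 3
     CPPAD_TESTVECTOR(double) x(1), dy(1);
     x[0] = 3.;
     dy   = f.Jacobian(x);
     return dy[0] == 6.;
}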

5.b: Contents
record_adfun: 5.1 Create an ADFun Object (Record an Operation Sequence)
drivers: 5.2 First and Second Order Derivatives: Easy Drivers
Forward: 5.3 Forward Mode
Reverse: 5.4 Reverse Mode
sparsity_pattern: 5.5 Calculating Sparsity Patterns
sparse_derivative: 5.6 Calculating Sparse Derivatives
optimize: 5.7 Optimize an ADFun Object Tape
abs_normal: 5.8 Abs-normal Representation of Non-Smooth Functions
FunCheck: 5.9 Check an ADFun Sequence of Operations
check_for_nan: 5.10 Check an ADFun Object For Nan Results

Input File: cppad/core/ad_fun.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.1: Create an ADFun Object (Record an Operation Sequence)

5.1.a: Contents
Independent: 5.1.1 Declare Independent Variables and Start Recording
FunConstruct: 5.1.2 Construct an ADFun Object and Stop Recording
Dependent: 5.1.3 Stop Recording and Store Operation Sequence
abort_recording: 5.1.4 Abort Recording of an Operation Sequence
seq_property: 5.1.5 ADFun Sequence Properties

Input File: omh/adfun.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.1.1: Declare Independent Variables and Start Recording

5.1.1.a: Syntax
Independent(x)
Independent(x, abort_op_index)

5.1.1.b: Purpose
Start recording 12.4.b: AD of Base operations with x as the independent variable vector. Once the 12.4.g.b: operation sequence is completed, it must be transferred to a function object; see below.

5.1.1.c: Start Recording
An operation sequence recording is started by the commands
     Independent(x)
     Independent(x, abort_op_index)

5.1.1.d: Stop Recording
The recording is stopped, and the operation sequence is transferred to the AD function object f , using either the 5.1.2: function constructor
     ADFun<Base> f(x, y)
or the 5.1.3: dependent variable specifier
     f.Dependent(x, y)
The only other way to stop a recording is using 5.1.4: abort_recording . Between when the recording is started and when it is stopped, we refer to the elements of x , and the values that depend on the elements of x , as AD<Base> variables.

5.1.1.e: x
The vector x has prototype
     VectorAD &x
(see VectorAD below). The size of the vector x must be greater than zero, and is the number of independent variables for this AD operation sequence.

5.1.1.f: abort_op_index
It specifies the operator index at which execution is aborted by calling the CppAD 8.1: error handler . When this error handler leads to an assert, the user can inspect the call stack to see the source code corresponding to this operator index; see 5.3.7.f.a: purpose . No abort will occur if abort_op_index is zero, or if 12.1.j.a: NDEBUG is defined.
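For example, to break on the operator with index 42 (an illustrative value) while debugging a recording, one might write:

# include <cppad/cppad.hpp>

void record_with_abort(void)
{     CPPAD_TESTVECTOR( CppAD::AD<double> ) x(2);
     x[0] = 0.;
     x[1] = 1.;
     // abort, via the error handler, when operator 42 is recorded
     // (no effect if abort_op_index is zero or NDEBUG is defined)
     size_t abort_op_index = 42;
     CppAD::Independent(x, abort_op_index);
     // ... the operations being debugged ...
     CppAD::AD<double>::abort_recording();
}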

5.1.1.g: VectorAD
The type VectorAD must be a 8.9: SimpleVector class with 8.9.b: elements of type AD<Base> . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.1.1.h: Parallel Mode
Each thread can have one, and only one, active recording. A call to Independent starts the recording for the current thread. The recording must be stopped by a corresponding call to
     ADFun<Base> f(x, y)
or
     f.Dependent(x, y)
or 5.1.4: abort_recording performed by the same thread; i.e., 8.23.5: thread_alloc::thread_num must be the same.

5.1.1.i: Example
The file 5.1.1.1: independent.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/core/independent.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.1.1.1: Independent and ADFun Constructor: Example and Test
# include <cppad/cppad.hpp>

namespace { // --------------------------------------------------------
// define the template function Test<VectorAD>(void) in empty namespace
template <class VectorAD>
bool Test(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t  n  = 2;
     VectorAD X(n);  // VectorAD is the template parameter in call to Test
     X[0] = 0.;
     X[1] = 1.;

     // declare independent variables and start recording
     // use the template parameter VectorAD for the vector type
     CppAD::Independent(X);

     AD<double> a = X[0] + X[1];      // first AD operation
     AD<double> b = X[0] * X[1];      // second AD operation

     // range space vector
     size_t m = 2;
     VectorAD Y(m);  // VectorAD is the template parameter in call to Test
     Y[0] = a;
     Y[1] = b;

     // create f: X -> Y and stop tape recording
     // use the template parameter VectorAD for the vector type
     CppAD::ADFun<double> f(X, Y);

     // check value
     ok &= NearEqual(Y[0] , 1.,  eps99 , eps99);
     ok &= NearEqual(Y[1] , 0.,  eps99 , eps99);

     // compute f(1, 2)
     CPPAD_TESTVECTOR(double) x(n);
     CPPAD_TESTVECTOR(double) y(m);
     x[0] = 1.;
     x[1] = 2.;
     y    = f.Forward(0, x);
     ok &= NearEqual(y[0] , 3.,  eps99 , eps99);
     ok &= NearEqual(y[1] , 2.,  eps99 , eps99);

     // compute partial of f w.r.t x[0] at (1, 2)
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dx[1] = 0.;
     dy    = f.Forward(1, dx);
     ok &= NearEqual(dy[0] ,   1.,  eps99 , eps99);
     ok &= NearEqual(dy[1] , x[1],  eps99 , eps99);

     // compute partial of f w.r.t x[1] at (1, 2)
     dx[0] = 0.;
     dx[1] = 1.;
     dy    = f.Forward(1, dx);
     ok &= NearEqual(dy[0] ,   1.,  eps99 , eps99);
     ok &= NearEqual(dy[1] , x[0],  eps99 , eps99);

     return ok;
}
} // End of empty namespace -------------------------------------------

# include <vector>
# include <valarray>
bool Independent(void)
{     bool ok = true;
     typedef CppAD::AD<double> ADdouble;
     // Run with VectorAD equal to three different cases
     // all of which are Simple Vectors with elements of type AD<double>.
     ok &= Test< CppAD::vector  <ADdouble> >();
     ok &= Test< std::vector    <ADdouble> >();
     ok &= Test< std::valarray  <ADdouble> >();
     return ok;
}

Input File: example/general/independent.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.1.2: Construct an ADFun Object and Stop Recording

5.1.2.a: Syntax
ADFun<Base> f, g
ADFun<Base> f(x, y)
g = f

5.1.2.b: Purpose
The ADFun<Base> object f can store an AD of Base 12.4.g.b: operation sequence . It can then be used to calculate derivatives of the corresponding 12.4.a: AD function @[@ F : B^n \rightarrow B^m @]@ where @(@ B @)@ is the space corresponding to objects of type Base .

5.1.2.c: x
If the argument x is present, it has prototype
     const VectorAD &x
It must be the vector argument in the previous call to 5.1.1: Independent . Neither its size, nor any of its values, are allowed to change between calling
     Independent(x)
and
     ADFun<Base> f(x, y)

5.1.2.d: y
If the argument y is present, it has prototype
     const VectorAD &y
The sequence of operations that map x to y is stored in the ADFun object f .

5.1.2.e: VectorAD
The type VectorAD must be a 8.9: SimpleVector class with 8.9.b: elements of type AD<Base> . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.1.2.f: Default Constructor
The default constructor
     ADFun<Base> f
creates an ADFun<Base> object with no corresponding operation sequence; i.e.,
     f.size_var()
returns the value zero (see 5.1.5.g: size_var ).

5.1.2.g: Sequence Constructor
The sequence constructor
     ADFun<Base> f(x, y)
creates the ADFun<Base> object f , stops the recording of AD of Base operations corresponding to the call
     Independent(x)
and stores the corresponding operation sequence in the object f . It then stores the zero order Taylor coefficients (corresponding to the value of x ) in f . This is equivalent to the following steps using the default constructor (see the sketch after these steps):
  1. Create f with the default constructor
          ADFun<Base> f;
  2. Stop the tape and store the operation sequence using
          f.Dependent(x, y);
     (see 5.1.3: Dependent ).
  3. Calculate the zero order Taylor coefficients for all the variables in the operation sequence using
          f.Forward(p, x_p)
     with p equal to zero and the elements of x_p equal to the corresponding elements of x (see 5.3: Forward ).
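In code, the equivalence reads roughly as follows (a sketch; x_p is a double vector holding the values the elements of x had when the recording started):

# include <cppad/cppad.hpp>

// leave f in the state that the sequence constructor
// CppAD::ADFun<double> f(ax, ay) would leave it in
void equivalent_construct(
     CppAD::ADFun<double>&                    f   ,
     CPPAD_TESTVECTOR( CppAD::AD<double> )&   ax  ,
     CPPAD_TESTVECTOR( CppAD::AD<double> )&   ay  ,
     const CPPAD_TESTVECTOR(double)&          x_p )
{     f.Dependent(ax, ay);  // stop the tape, store the operation sequence
     f.Forward(0, x_p);    // store the zero order Taylor coefficients
}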


5.1.2.h: Copy Constructor
It is an error to attempt to use the ADFun<Base> copy constructor; i.e., the following syntax is not allowed:
     ADFun<Base> g(f)
where f is an ADFun<Base> object. Use its 5.1.2.f: default constructor instead and its assignment operator.

5.1.2.i: Assignment Operator
The ADFun<Base> assignment operation
     g = f
makes a copy of the operation sequence currently stored in f in the object g . The object f is not affected by this operation and can be const. All of the information (state) stored in f is copied to g , and any information originally in g is lost.

5.1.2.i.a: Taylor Coefficients
The Taylor coefficient information currently stored in f (computed by 5.3: f.Forward ) is copied to g . Hence, directly after this operation
     g.size_order() == f.size_order()

5.1.2.i.b: Sparsity Patterns
The forward Jacobian sparsity pattern currently stored in f (computed by 5.5.2: f.ForSparseJac ) is copied to g . Hence, directly after this operation
     g.size_forward_bool() == f.size_forward_bool()
     g.size_forward_set()  == f.size_forward_set()

5.1.2.j: Parallel Mode
The call to Independent, and the corresponding call to
     ADFun<Base> f(x, y)
or
     f.Dependent(x, y)
or 5.1.4: abort_recording , must be performed by the same thread; i.e., 8.23.5: thread_alloc::thread_num must be the same.

5.1.2.k: Example

5.1.2.k.a: Sequence Constructor
The file 5.1.1.1: independent.cpp contains an example and test of the sequence constructor. It returns true if it succeeds and false otherwise.

5.1.2.k.b: Default Constructor
The files 5.9.1: fun_check.cpp and 5.2.2.2: hes_lagrangian.cpp contain examples and tests using the default constructor. They return true if they succeed and false otherwise.

5.1.2.k.c: Assignment Operator
The file 5.1.2.1: fun_assign.cpp contains an example and test of the ADFun<Base> assignment operator. It returns true if it succeeds and false otherwise.
Input File: cppad/core/fun_construct.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.1.2.1: ADFun Assignment: Example and Test
# include <cppad/cppad.hpp>
# include <limits>

bool fun_assign(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     size_t i, j;

     // ten times machine precision
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

     // an ADFun<double> object with no operation sequence yet
     CppAD::ADFun<double> g;

     // domain space vector
     size_t n  = 3;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     for(j = 0; j < n; j++)
          x[j] = AD<double>(j + 2);

     // declare independent variables and start tape recording
     CppAD::Independent(x);

     // range space vector
     size_t m = 2;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = x[0] + x[0] * x[1];
     y[1] = x[1] * x[2] + x[2];

     // Store operation sequence, and order zero forward results, in f.
     CppAD::ADFun<double> f(x, y);

     // sparsity pattern for the identity matrix
     CPPAD_TESTVECTOR(std::set<size_t>) r(n);
     for(j = 0; j < n; j++)
          r[j].insert(j);

     // Store forward mode sparsity pattern in f
     f.ForSparseJac(n, r);

     // make a copy in g
     g = f;

     // check values that should be equal
     ok &= ( g.size_order() == f.size_order() );
     ok &= ( (g.size_forward_bool() > 0) == (f.size_forward_bool() > 0) );
     ok &= ( (g.size_forward_set() > 0)  == (f.size_forward_set() > 0) );

     // Use zero order Taylor coefficient from f for first order
     // calculation using g.
     CPPAD_TESTVECTOR(double) dx(n), dy(m);
     for(i = 0; i < n; i++)
          dx[i] = 0.;
     dx[1] = 1;
     dy    = g.Forward(1, dx);
     ok &= NearEqual(dy[0], x[0], eps, eps); // partial y[0] w.r.t x[1]
     ok &= NearEqual(dy[1], x[2], eps, eps); // partial y[1] w.r.t x[1]

     // Use forward Jacobian sparsity pattern from f to calculate
     // Hessian sparsity pattern using g.
     CPPAD_TESTVECTOR(std::set<size_t>) s(1), h(n);
     s[0].insert(0); // Compute sparsity pattern for Hessian of y[0]
     h =  f.RevSparseHes(n, s);

     // check sparsity pattern for Hessian of y[0] = x[0] + x[0] * x[1]
     ok  &= ( h[0].find(0) == h[0].end() ); // zero     w.r.t x[0], x[0]
     ok  &= ( h[0].find(1) != h[0].end() ); // non-zero w.r.t x[0], x[1]
     ok  &= ( h[0].find(2) == h[0].end() ); // zero     w.r.t x[0], x[2]

     ok  &= ( h[1].find(0) != h[1].end() ); // non-zero w.r.t x[1], x[0]
     ok  &= ( h[1].find(1) == h[1].end() ); // zero     w.r.t x[1], x[1]
     ok  &= ( h[1].find(2) == h[1].end() ); // zero     w.r.t x[1], x[2]

     ok  &= ( h[2].find(0) == h[2].end() ); // zero     w.r.t x[2], x[0]
     ok  &= ( h[2].find(1) == h[2].end() ); // zero     w.r.t x[2], x[1]
     ok  &= ( h[2].find(2) == h[2].end() ); // zero     w.r.t x[2], x[2]

     return ok;
}

Input File: example/general/fun_assign.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.1.3: Stop Recording and Store Operation Sequence

5.1.3.a: Syntax
f.Dependent(x, y)

5.1.3.b: Purpose
Stop recording the AD of Base 12.4.g.b: operation sequence that started with the call
     Independent(x)
and store the operation sequence in f . The operation sequence defines an 12.4.a: AD function @[@ F : B^n \rightarrow B^m @]@ where @(@ B @)@ is the space corresponding to objects of type Base . The value @(@ n @)@ is the dimension of the 5.1.5.d: domain space for the operation sequence. The value @(@ m @)@ is the dimension of the 5.1.5.e: range space for the operation sequence (which is determined by the size of y ).

5.1.3.c: f
The object f has prototype
     ADFun<Base> f
The AD of Base operation sequence is stored in f ; i.e., it becomes the operation sequence corresponding to f . If a previous operation sequence was stored in f , it is deleted.

5.1.3.d: x
The argument x must be the vector argument in a previous call to 5.1.1: Independent . Neither its size, nor any of its values, are allowed to change between calling
     Independent(x)
and
     f.Dependent(x, y)

5.1.3.e: y
The vector y has prototype
     const ADvector &y
(see 5.1.3.f: ADvector below). The length of y must be greater than zero and is the dimension of the range space for f .

5.1.3.f: ADvector
The type ADvector must be a 8.9: SimpleVector class with 8.9.b: elements of type AD<Base> . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.1.3.g: Taping
The tape, which was created when Independent(x) was called, will stop recording. The AD operation sequence will be transferred from the tape to the object f , and the tape will then be deleted.

5.1.3.h: Forward
No 5.3: Forward calculation is performed during this operation. Thus, directly after this operation,
     f.size_order()
is zero (see 5.3.6: size_order ).

5.1.3.i: Parallel Mode
The call to Independent, and the corresponding call to
     ADFun<Base> f(x, y)
or
     f.Dependent(x, y)
or 5.1.4: abort_recording , must be performed by the same thread; i.e., 8.23.5: thread_alloc::thread_num must be the same.

5.1.3.j: Example
The file 5.9.1: fun_check.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/core/dependent.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.1.4: Abort Recording of an Operation Sequence

5.1.4.a: Syntax
AD<Base>::abort_recording()

5.1.4.b: Purpose
Sometimes it is necessary to abort the recording of an operation sequence that started with a call of the form
     Independent(x)
If such a recording is currently in progress, abort_recording will stop the recording and delete the corresponding information. Otherwise, abort_recording has no effect.

5.1.4.c: Example
The file 5.1.4.1: abort_recording.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/core/abort_recording.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.1.4.1: Abort Current Recording: Example and Test

# include <cppad/cppad.hpp>
# include <limits>

bool abort_recording(void)
{     bool ok = true;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

     using CppAD::AD;

     try
     {     // domain space vector
          size_t n = 1;
          CPPAD_TESTVECTOR(AD<double>) x(n);
          x[0]     = 0.;

          // declare independent variables and start tape recording
          CppAD::Independent(x);

          // simulate an error during calculation of y and the execution
          // stream was aborted
          throw 1;
     }
     catch (int e)
     {     ok &= (e == 1);

          // do this in case the throw occurred after the call to Independent
          // (for case above this is known, but in general it is unknown)
          AD<double>::abort_recording();
     }
     /*
     Now make sure that we can start another recording
     */

     // declare independent variables and start tape recording
     size_t n  = 1;
     double x0 = 0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;
     CppAD::Independent(x);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     y[0] = 2 * x[0];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // forward computation of partials w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     ok   &= CppAD::NearEqual(dy[0], 2., eps, eps);

     return ok;
}

Input File: example/general/abort_recording.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.1.5: ADFun Sequence Properties

5.1.5.a: Syntax
n = f.Domain()
m = f.Range()
p = f.Parameter(i)
s = f.size_var()
s = f.size_par()
s = f.size_op()
s = f.size_op_arg()
s = f.size_text()
s = f.size_VecAD()
s = f.size_op_seq()

5.1.5.a.a: See Also
5.3.6: size_order , 5.3.8: capacity_order , 5.3.9: number_skip .

5.1.5.b: Purpose
The operations above return properties of the AD of Base 12.4.g.b: operation sequence stored in the ADFun object f . (If there is no operation sequence stored in f , size_var returns zero.)

5.1.5.c: f
The object f has prototype
     const ADFun<Base> f
(see ADFun<Base> 5.1.2: constructor ).

5.1.5.d: Domain
The result n has prototype
     size_t n
and is the dimension of the domain space corresponding to f . This is equal to the size of the vector x in the call
     Independent(x)
that started recording the operation sequence currently stored in f (see 5.1.2: FunConstruct and 5.1.3: Dependent ).

5.1.5.e: Range
The result m has prototype
     size_t m
and is the dimension of the range space corresponding to f . This is equal to the size of the vector y in the syntax
     ADFun<Base> f(x, y)
or
     f.Dependent(x, y)
depending on which stored the operation sequence currently in f (see 5.1.2: FunConstruct and 5.1.3: Dependent ).

5.1.5.f: Parameter
The argument i has prototype
     size_t i
and @(@ 0 \leq i < m @)@. The result p has prototype
     bool p
It is true if the i-th component of the range space for @(@ F @)@ corresponds to a 12.4.h: parameter in the operation sequence. In this case, the i-th component of @(@ F @)@ is constant and @[@ \D{F_i}{x_j} (x) = 0 @]@ for @(@ j = 0 , \ldots , n-1 @)@ and all @(@ x \in B^n @)@.
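For example, the following sketch counts the components of the range that are parameters (constant components of @(@ F @)@):

# include <cppad/cppad.hpp>

size_t count_parameters(const CppAD::ADFun<double>& f)
{     size_t count = 0;
     for(size_t i = 0; i < f.Range(); i++)
          if( f.Parameter(i) )
               count++;
     return count;
}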

5.1.5.g: size_var
The result s has prototype
     size_t s
and is the number of variables in the operation sequence plus the following: one for a phantom variable with tape address zero, and one for each component of the range that is a parameter. The amount of work and memory necessary for computing function values and derivatives using f is roughly proportional to s . (The function call 5.3.6: f.size_order() returns the number of Taylor coefficient orders, per variable, per direction, currently stored in f .)

If there is no operation sequence stored in f , size_var returns zero (see 5.1.2.f: default constructor ).

5.1.5.h: size_par
The result s has prototype
     size_t s
and is the number of parameters in the operation sequence. Parameters differ from variables in that only values (and not derivatives) need to be stored for each parameter. These parameters are considered part of the operation sequence, as opposed to the Taylor coefficients which are considered extra data in the function object f . Note that one Base value is required for each parameter.

5.1.5.i: size_op
The result s has prototype
     size_t s
and is the number of operations in the operation sequence. Some operators, like comparison operators, do not correspond to a variable. Other operators, like the sine operator, correspond to two variables. Thus, this value will be different from 5.1.5.g: size_var . Note that one enum value is required for each operator.

5.1.5.j: size_op_arg
The result s has prototype
     size_t s
and is the total number of operator arguments in the operation sequence. For example, binary operators (e.g., addition) have two arguments. Note that one integer index is stored in the operation sequence for each argument. Also note that, as of 2013-10-20, there is an extra phantom argument with index 0 that is not used.

5.1.5.k: size_text
The result s has prototype
     size_t s
and is the total number of characters used in the 4.3.6: PrintFor commands in this operation sequence.

5.1.5.l: size_VecAD
The result s has prototype
     size_t s
and is the number of 4.6: VecAD vectors, plus the number of elements in the vectors. Only VecAD vectors that depend on the independent variables are stored in the operation sequence.

5.1.5.m: size_op_seq
The result s has prototype
     size_t s
and is the amount of memory required to store the operation sequence (not counting a small amount of memory required for every operation sequence). For the current version of CppAD, this is given by
     s = f.size_op()     * sizeof(CppAD::local::OpCode)
       + f.size_op_arg() * sizeof(tape_addr_type)
       + f.size_par()    * sizeof(Base)
       + f.size_text()   * sizeof(char)
       + f.size_VecAD()  * sizeof(tape_addr_type)
       + f.size_op()     * sizeof(tape_addr_type) * 3
see 2.2.r: tape_addr_type . Note that this is the minimal amount of memory that can hold the information corresponding to an operation sequence. The actual amount of memory allocated (8.23.10: inuse ) for the operations sequence may be larger.

5.1.5.n: Example
The file 5.1.5.1: seq_property.cpp contains an example and test of these operations. It returns true if it succeeds and false otherwise.
Input File: omh/seq_property.omh
5.1.5.1: ADFun Sequence Properties: Example and Test

# include <cppad/cppad.hpp>

bool seq_property(void)
{     bool ok = true;
     using CppAD::AD;

     // Use nvar to track the number of variables in the operation sequence.
     // Start with one for the phantom variable at tape address zero.
     size_t nvar = 1;

     // Use npar to track the number of parameters in the operation sequence.
     size_t npar = 0;

     // Start with one for operator corresponding to phantom variable
     size_t nop  = 1;

     // Start with one for operator corresponding to phantom argument
     size_t narg = 1;

     // Use ntext to track the number of characters used to label
     // output generated using PrintFor commands.
     size_t ntext = 0;

     // Use nvecad to track the number of VecAD vectors, plus the number
     // of VecAD vector elements, in the operation sequence.
     size_t nvecad = 0;

     // a VecAD vector
     CppAD::VecAD<double> v(2);
     v[0]     = 0; // requires the parameter 0, when v becomes a variable
     v[1]     = 1; // requires the parameter 1, when v becomes a variable

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]     = 0.;
     x[1]     = 1.;

     // declare independent variables and start tape recording
     CppAD::Independent(x);
     nvar    += n;
     nop     += n;

     // a computation that adds to the operation sequence
     AD<double> I = 0;
     v[I]         = x[0];
     nvecad      +=   3;  // one for vector, two for its elements
     npar        +=   2;  // need parameters 0 and 1 for initial v
     nop         +=   1;  // operator for storing in a VecAD object
     narg        +=   3;  // the three arguments are v, I, and x[0]

     // some operations that do not add to the operation sequence
     AD<double> u = x[0];  // use same variable as x[0]
     AD<double> w = x[1];  // use same variable as x[1]

     // a computation that adds to the operation sequence
     w      = w * (u + w);
     nop   += 2;   // requires two new operators, an add and a multiply
     nvar  += 2;   // each operator results in its own variable
     narg  += 4;   // each operator has two arguments

     // range space vector
     size_t m = 3;
     CPPAD_TESTVECTOR(AD<double>) y(m);

     // operations that do not add to the operation sequence
     y[0]   = 1.;  // re-use the parameter 1
     y[1]   = u;   // use same variable as u

     // a computation that adds to the operation sequence
     y[2]   = w + 2.;
     nop   += 1;   // requires a new add operator
     npar  += 1;   // the parameter 2 is new, so it must be included
     nvar  += 1;   // variable corresponding to the result
     narg  += 2;   // operator has two arguments

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);
     nop   += 1;   // special operator for y[0] because it is a parameter
     nvar  += 1;   // special variable for y[0] because it is a parameter
     narg  += 1;   // identifies which parameter corresponds to y[0]
     nop   += 1;   // special operator at the end of operation sequence

     // check the sequence property functions
     ok &= f.Domain()      == n;
     ok &= f.Range()       == m;
     ok &= f.Parameter(0)  == true;
     ok &= f.Parameter(1)  == false;
     ok &= f.Parameter(2)  == false;
     ok &= f.size_var()    == nvar;
     ok &= f.size_op()     == nop;
     ok &= f.size_op_arg() == narg;
     ok &= f.size_par()    == npar;
     ok &= f.size_text()   == ntext;
     ok &= f.size_VecAD()  == nvecad;
     size_t sum = 0;
     sum += nop    * sizeof(CppAD::local::OpCode);
     sum += narg   * sizeof(CPPAD_TAPE_ADDR_TYPE);
     sum += npar   * sizeof(double);
     sum += ntext  * sizeof(char);
     sum += nvecad * sizeof(CPPAD_TAPE_ADDR_TYPE);
     sum += nop    * sizeof(CPPAD_TAPE_ADDR_TYPE) * 3;
     ok &= f.size_op_seq() == sum;

     return ok;
}

Input File: example/general/seq_property.cpp
5.2: First and Second Order Derivatives: Easy Drivers

5.2.a: Contents
5.2.1: Jacobian: Driver Routine
5.2.2: Hessian: Easy Driver
5.2.3: First Order Partial Derivative: Driver Routine
5.2.4: First Order Derivative: Driver Routine
5.2.5: Forward Mode Second Partial Derivative Driver
5.2.6: Reverse Mode Second Partial Derivative Driver

Input File: omh/adfun.omh
5.2.1: Jacobian: Driver Routine

5.2.1.a: Syntax
jac = f.Jacobian(x)

5.2.1.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . The syntax above sets jac to the Jacobian of F evaluated at x ; i.e., @[@ jac = F^{(1)} (x) @]@

5.2.1.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.2.1.g: Forward or Reverse below).

5.2.1.d: x
The argument x has prototype
     const Vector &x
(see 5.2.1.f: Vector below) and its size must be equal to n , the dimension of the 5.1.5.d: domain space for f . It specifies the point at which to evaluate the Jacobian.

5.2.1.e: jac
The result jac has prototype
     Vector jac
(see 5.2.1.f: Vector below) and its size is @(@ m * n @)@; i.e., the product of the 5.1.5.d: domain and 5.1.5.e: range dimensions for f . For @(@ i = 0 , \ldots , m - 1 @)@ and @(@ j = 0 , \ldots , n - 1 @)@ @[@ jac [ i * n + j ] = \D{ F_i }{ x_j } ( x ) @]@

5.2.1.f: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.2.1.g: Forward or Reverse
This will use order zero Forward mode and either order one Forward or order one Reverse to compute the Jacobian (depending on which it estimates will require less work). After each call to 5.3: Forward , the object f contains the corresponding 12.4.l: Taylor coefficients . After a call to Jacobian, the zero order Taylor coefficients correspond to f.Forward(0, x) and the other coefficients are unspecified.

5.2.1.h: Example
The routine 5.2.1.1: Jacobian is both an example and test. It returns true if it succeeds and false otherwise.
Input File: cppad/core/jacobian.hpp
5.2.1.1: Jacobian: Example and Test

# include <cppad/cppad.hpp>
namespace { // ---------------------------------------------------------
// define the template function JacobianCases<Vector> in empty namespace
template <typename Vector>
bool JacobianCases()
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
     using CppAD::exp;
     using CppAD::sin;
     using CppAD::cos;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>)  X(n);
     X[0] = 1.;
     X[1] = 2.;

     // declare independent variables and start tape recording
     CppAD::Independent(X);

     // a calculation between the domain and range values
     AD<double> Square = X[0] * X[0];

     // range space vector
     size_t m = 3;
     CPPAD_TESTVECTOR(AD<double>)  Y(m);
     Y[0] = Square * exp( X[1] );
     Y[1] = Square * sin( X[1] );
     Y[2] = Square * cos( X[1] );

     // create f: X -> Y and stop tape recording
     CppAD::ADFun<double> f(X, Y);

     // new value for the independent variable vector
     Vector x(n);
     x[0] = 2.;
     x[1] = 1.;

     // compute the derivative at this x
     Vector jac( m * n );
     jac = f.Jacobian(x);

     /*
     F'(x) = [ 2 * x[0] * exp(x[1]) ,  x[0] * x[0] * exp(x[1]) ]
             [ 2 * x[0] * sin(x[1]) ,  x[0] * x[0] * cos(x[1]) ]
             [ 2 * x[0] * cos(x[1]) , -x[0] * x[0] * sin(x[1]) ]
     */
     ok &=  NearEqual( 2.*x[0]*exp(x[1]), jac[0*n+0], eps99, eps99);
     ok &=  NearEqual( 2.*x[0]*sin(x[1]), jac[1*n+0], eps99, eps99);
     ok &=  NearEqual( 2.*x[0]*cos(x[1]), jac[2*n+0], eps99, eps99);

     ok &=  NearEqual( x[0] * x[0] *exp(x[1]), jac[0*n+1], eps99, eps99);
     ok &=  NearEqual( x[0] * x[0] *cos(x[1]), jac[1*n+1], eps99, eps99);
     ok &=  NearEqual(-x[0] * x[0] *sin(x[1]), jac[2*n+1], eps99, eps99);

     return ok;
}
} // End empty namespace
# include <vector>
# include <valarray>
bool Jacobian(void)
{     bool ok = true;
     // Run with Vector equal to three different cases
     // all of which are Simple Vectors with elements of type double.
     ok &= JacobianCases< CppAD::vector  <double> >();
     ok &= JacobianCases< std::vector    <double> >();
     ok &= JacobianCases< std::valarray  <double> >();
     return ok;
}

Input File: example/general/jacobian.cpp
5.2.2: Hessian: Easy Driver

5.2.2.a: Syntax
hes = f.Hessian(x, w)
hes = f.Hessian(x, l)

5.2.2.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . The syntax above sets hes to the Hessian @[@ hes = \dpow{2}{x} \sum_{i=1}^m w_i F_i (x) @]@ The routine 5.6.4: sparse_hessian may be faster in the case where the Hessian is sparse.

5.2.2.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.2.2.i: Hessian Uses Forward below).

5.2.2.d: x
The argument x has prototype
     const Vector &x
(see 5.2.2.h: Vector below) and its size must be equal to n , the dimension of the 5.1.5.d: domain space for f . It specifies the point at which to evaluate the Hessian.

5.2.2.e: l
If the argument l is present, it has prototype
     size_t l
and is less than m , the dimension of the 5.1.5.e: range space for f . It specifies the component of F for which we are evaluating the Hessian. To be specific, in the case where the argument l is present, @[@ w_i = \left\{ \begin{array}{ll} 1 & i = l \\ 0 & {\rm otherwise} \end{array} \right. @]@

5.2.2.f: w
If the argument w is present, it has prototype
     const Vector &w
and size @(@ m @)@. It specifies the value of @(@ w_i @)@ in the expression for hes .
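For instance, in the following minimal sketch (not one of the distributed examples; it assumes double for the Base type), the weighted call with w equal to the 0-th elementary vector agrees with the index form f.Hessian(x, 0):

# include <cppad/cppad.hpp>

bool hessian_weighted_sketch(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // record f : x -> y with two range components
     size_t n = 2, m = 2;
     CPPAD_TESTVECTOR(AD<double>) ax(n), ay(m);
     ax[0] = 1.;
     ax[1] = 2.;
     CppAD::Independent(ax);
     ay[0] = ax[0] * ax[0] * ax[1];
     ay[1] = ax[0] * ax[1];
     CppAD::ADFun<double> f(ax, ay);

     // point at which to evaluate the Hessian
     CPPAD_TESTVECTOR(double) x(n);
     x[0] = 3.;
     x[1] = 4.;

     // weight vector equal to the 0-th elementary vector
     CPPAD_TESTVECTOR(double) w(m);
     w[0] = 1.;
     w[1] = 0.;

     // the weighted form and the index form compute the same matrix
     CPPAD_TESTVECTOR(double) hes_w(n * n), hes_l(n * n);
     hes_w = f.Hessian(x, w);
     hes_l = f.Hessian(x, 0);
     for(size_t k = 0; k < n * n; k++)
          ok &= NearEqual(hes_w[k], hes_l[k], eps99, eps99);
     return ok;
}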

5.2.2.g: hes
The result hes has prototype
     Vector hes
(see 5.2.2.h: Vector below) and its size is @(@ n * n @)@. For @(@ j = 0 , \ldots , n - 1 @)@ and @(@ \ell = 0 , \ldots , n - 1 @)@ @[@ hes [ j * n + \ell ] = \DD{ w^{\rm T} F }{ x_j }{ x_\ell } ( x ) @]@

5.2.2.h: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.2.2.i: Hessian Uses Forward
After each call to 5.3: Forward , the object f contains the corresponding 12.4.l: Taylor coefficients . After a call to Hessian, the zero order Taylor coefficients correspond to f.Forward(0, x) and the other coefficients are unspecified.

5.2.2.j: Example
The routines 5.2.2.1: hessian.cpp and 5.2.2.2: hes_lagrangian.cpp are examples and tests of Hessian. They return true if they succeed and false otherwise.
Input File: cppad/core/hessian.hpp
5.2.2.1: Hessian: Example and Test

# include <cppad/cppad.hpp>
namespace { // ---------------------------------------------------------
// define the template function HessianCases<Vector> in empty namespace
template <typename Vector>
bool HessianCases()
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
     using CppAD::exp;
     using CppAD::sin;
     using CppAD::cos;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>)  X(n);
     X[0] = 1.;
     X[1] = 2.;

     // declare independent variables and start tape recording
     CppAD::Independent(X);

     // a calculation between the domain and range values
     AD<double> Square = X[0] * X[0];

     // range space vector
     size_t m = 3;
     CPPAD_TESTVECTOR(AD<double>)  Y(m);
     Y[0] = Square * exp( X[1] );
     Y[1] = Square * sin( X[1] );
     Y[2] = Square * cos( X[1] );

     // create f: X -> Y and stop tape recording
     CppAD::ADFun<double> f(X, Y);

     // new value for the independent variable vector
     Vector x(n);
     x[0] = 2.;
     x[1] = 1.;

     // second derivative of y[1]
     Vector hes( n * n );
     hes = f.Hessian(x, 1);
     /*
     F_1       = x[0] * x[0] * sin(x[1])

     F_1^{(1)} = [ 2 * x[0] * sin(x[1]) , x[0] * x[0] * cos(x[1]) ]

     F_1^{(2)} = [        2 * sin(x[1]) ,      2 * x[0] * cos(x[1]) ]
                 [ 2 * x[0] * cos(x[1]) , - x[0] * x[0] * sin(x[1]) ]
     */
     ok &=  NearEqual(          2.*sin(x[1]), hes[0*n+0], eps99, eps99);
     ok &=  NearEqual(     2.*x[0]*cos(x[1]), hes[0*n+1], eps99, eps99);
     ok &=  NearEqual(     2.*x[0]*cos(x[1]), hes[1*n+0], eps99, eps99);
     ok &=  NearEqual( - x[0]*x[0]*sin(x[1]), hes[1*n+1], eps99, eps99);

     return ok;
}
} // End empty namespace
# include <vector>
# include <valarray>
bool Hessian(void)
{     bool ok = true;
     // Run with Vector equal to three different cases
     // all of which are Simple Vectors with elements of type double.
     ok &= HessianCases< CppAD::vector  <double> >();
     ok &= HessianCases< std::vector    <double> >();
     ok &= HessianCases< std::valarray  <double> >();
     return ok;
}

Input File: example/general/hessian.cpp
5.2.2.2: Hessian of Lagrangian and ADFun Default Constructor: Example and Test

# include <cppad/cppad.hpp>
# include <cassert>

namespace {
     CppAD::AD<double> Lagrangian(
          const CppAD::vector< CppAD::AD<double> > &xyz )
     {     using CppAD::AD;
          assert( xyz.size() == 6 );

          AD<double> x0 = xyz[0];
          AD<double> x1 = xyz[1];
          AD<double> x2 = xyz[2];
          AD<double> y0 = xyz[3];
          AD<double> y1 = xyz[4];
          AD<double> z  = xyz[5];

          // compute objective function
          AD<double> f = x0 * x0;
          // compute constraint functions
          AD<double> g0 = 1. + 2.*x1 + 3.*x2;
          AD<double> g1 = log( x0 * x2 );
          // compute the Lagrangian
          AD<double> L = y0 * g0 + y1 * g1 + z * f;

          return L;

     }
     CppAD::vector< CppAD::AD<double> > fg(
          const CppAD::vector< CppAD::AD<double> > &x )
     {     using CppAD::AD;
          using CppAD::vector;
          assert( x.size() == 3 );

          vector< AD<double> > fg(3);
          fg[0] = x[0] * x[0];
          fg[1] = 1. + 2. * x[1] + 3. * x[2];
          fg[2] = log( x[0] * x[2] );

          return fg;
     }
     bool CheckHessian(
     CppAD::vector<double> H ,
     double x0, double x1, double x2, double y0, double y1, double z )
     {     using CppAD::NearEqual;
          double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
          bool ok  = true;
          size_t n = 3;
          assert( H.size() == n * n );
          /*
          L   =    z*x0*x0 + y0*(1 + 2*x1 + 3*x2) + y1*log(x0*x2)

          L_0 = 2 * z * x0 + y1 / x0
          L_1 = y0 * 2
          L_2 = y0 * 3 + y1 / x2
          */
          // L_00 = 2 * z - y1 / ( x0 * x0 )
          double check = 2. * z - y1 / (x0 * x0);
          ok &= NearEqual(H[0 * n + 0], check, eps99, eps99);
          // L_01 = L_10 = 0
          ok &= NearEqual(H[0 * n + 1], 0., eps99, eps99);
          ok &= NearEqual(H[1 * n + 0], 0., eps99, eps99);
          // L_02 = L_20 = 0
          ok &= NearEqual(H[0 * n + 2], 0., eps99, eps99);
          ok &= NearEqual(H[2 * n + 0], 0., eps99, eps99);
          // L_11 = 0
          ok &= NearEqual(H[1 * n + 1], 0., eps99, eps99);
          // L_12 = L_21 = 0
          ok &= NearEqual(H[1 * n + 2], 0., eps99, eps99);
          ok &= NearEqual(H[2 * n + 1], 0., eps99, eps99);
          // L_22 = - y1 / (x2 * x2)
          check = - y1 / (x2 * x2);
          ok &= NearEqual(H[2 * n + 2], check, eps99, eps99);

          return ok;
     }
     bool UseL()
     {     using CppAD::AD;
          using CppAD::vector;

          // double values corresponding to x, y, and z vectors
          double x0(.5), x1(1e3), x2(1), y0(2.), y1(3.), z(4.);

          // domain space vector
          size_t n = 3;
          vector< AD<double> >  a_x(n);
          a_x[0] = x0;
          a_x[1] = x1;
          a_x[2] = x2;

          // declare a_x as independent variable vector and start recording
          CppAD::Independent(a_x);

          // vector including x, y, and z
          vector< AD<double> > a_xyz(n + 2 + 1);
          a_xyz[0] = a_x[0];
          a_xyz[1] = a_x[1];
          a_xyz[2] = a_x[2];
          a_xyz[3] = y0;
          a_xyz[4] = y1;
          a_xyz[5] = z;

          // range space vector
          size_t m = 1;
          vector< AD<double> >  a_L(m);
          a_L[0] = Lagrangian(a_xyz);

          // create K: x -> L and stop tape recording.
          // Use default ADFun construction for example purposes.
          CppAD::ADFun<double> K;
          K.Dependent(a_x, a_L);

          // Operation sequence corresponding to K depends on
          // value of y0, y1, and z. Must redo calculations above when
          // y0, y1, or z changes.

          // declare independent variable vector and Hessian
          vector<double> x(n);
          vector<double> H( n * n );

          // point at which we are computing the Hessian
          // (must redo calculations below each time x changes)
          x[0] = x0;
          x[1] = x1;
          x[2] = x2;
          H = K.Hessian(x, 0);

          // check this Hessian calculation
          return CheckHessian(H, x0, x1, x2, y0, y1, z);
     }
     bool Usefg()
     {     using CppAD::AD;
          using CppAD::vector;

          // parameters defining problem
          double x0(.5), x1(1e3), x2(1), y0(2.), y1(3.), z(4.);

          // domain space vector
          size_t n = 3;
          vector< AD<double> >  a_x(n);
          a_x[0] = x0;
          a_x[1] = x1;
          a_x[2] = x2;

          // declare a_x as independent variable vector and start recording
          CppAD::Independent(a_x);

          // range space vector
          size_t m = 3;
          vector< AD<double> >  a_fg(m);
          a_fg = fg(a_x);

          // create K: x -> fg and stop tape recording
          CppAD::ADFun<double> K;
          K.Dependent(a_x, a_fg);

          // Operation sequence corresponding to K does not depend on
          // value of x0, x1, x2, y0, y1, or z.

          // forward and reverse mode arguments and results
          vector<double> x(n);
          vector<double> H( n * n );
          vector<double>  dx(n);
          vector<double>   w(m);
          vector<double>  dw(2*n);

          // compute Hessian at this value of x
          // (must redo calculations below each time x changes)
          x[0] = x0;
          x[1] = x1;
          x[2] = x2;
          K.Forward(0, x);

          // set weights to Lagrange multiplier values
          // (must redo calculations below each time y0, y1, or z changes)
          w[0] = z;
          w[1] = y0;
          w[2] = y1;

          // initialize dx as zero
          size_t i, j;
          for(i = 0; i < n; i++)
               dx[i] = 0.;
          // loop over components of x
          for(i = 0; i < n; i++)
          {     dx[i] = 1.;             // dx is i-th elementary vector
               K.Forward(1, dx);       // partial w.r.t dx
               dw = K.Reverse(2, w);   // derivative of partial
               for(j = 0; j < n; j++)
                    H[ i * n + j ] = dw[ j * 2 + 1 ];
               dx[i] = 0.;             // dx is zero vector
          }

          // check this Hessian calculation
          return CheckHessian(H, x0, x1, x2, y0, y1, z);
     }
}

bool HesLagrangian(void)
{     bool ok = true;

     // UseL is simpler, but must retape every time that y or z changes
     ok     &= UseL();

     // Usefg does not need to retape unless operation sequence changes
     ok     &= Usefg();
     return ok;
}

Input File: example/general/hes_lagrangian.cpp
5.2.3: First Order Partial Derivative: Driver Routine

5.2.3.a: Syntax
dy = f.ForOne(x, j)

5.2.3.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . The syntax above sets dy to the partial of @(@ F @)@ with respect to @(@ x_j @)@; i.e., @[@ dy = \D{F}{ x_j } (x) = \left[ \D{ F_0 }{ x_j } (x) , \cdots , \D{ F_{m-1} }{ x_j } (x) \right] @]@

5.2.3.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.2.3.h: ForOne Uses Forward below).

5.2.3.d: x
The argument x has prototype
     const Vector &x
(see 5.2.3.g: Vector below) and its size must be equal to n , the dimension of the 5.1.5.d: domain space for f . It specifies the point at which to evaluate the partial derivative.

5.2.3.e: j
The argument j has prototype
     size_t j
and is less than n , the dimension of the 5.1.5.d: domain space for f . It specifies the component of x with respect to which we are computing the partial derivative.

5.2.3.f: dy
The result dy has prototype
     Vector dy
(see 5.2.3.g: Vector below) and its size is @(@ m @)@, the dimension of the 5.1.5.e: range space for f . The value of dy is the partial of @(@ F @)@ with respect to @(@ x_j @)@ evaluated at x ; i.e., for @(@ i = 0 , \ldots , m - 1 @)@ @[@ dy [ i ] = \D{ F_i }{ x_j } ( x ) @]@

5.2.3.g: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.2.3.h: ForOne Uses Forward
After each call to 5.3: Forward , the object f contains the corresponding 12.4.l: Taylor coefficients . After a call to ForOne, the zero order Taylor coefficients correspond to f.Forward(0,x) and the other coefficients are unspecified.

5.2.3.i: Example
The routine 5.2.3.1: ForOne is both an example and test. It returns true if it succeeds and false otherwise.
Input File: cppad/core/for_one.hpp
5.2.3.1: First Order Partial Driver: Example and Test
# include <cppad/cppad.hpp>
namespace { // -------------------------------------------------------
// define the template function ForOneCases<Vector> in empty namespace
template <typename Vector>
bool ForOneCases()
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
     using CppAD::exp;
     using CppAD::sin;
     using CppAD::cos;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>)  X(n);
     X[0] = 1.;
     X[1] = 2.;

     // declare independent variables and start tape recording
     CppAD::Independent(X);

     // range space vector
     size_t m = 3;
     CPPAD_TESTVECTOR(AD<double>)  Y(m);
     Y[0] = X[0] * exp( X[1] );
     Y[1] = X[0] * sin( X[1] );
     Y[2] = X[0] * cos( X[1] );

     // create f: X -> Y and stop tape recording
     CppAD::ADFun<double> f(X, Y);

     // new value for the independent variable vector
     Vector x(n);
     x[0] = 2.;
     x[1] = 1.;

     // compute partial of y w.r.t x[0]
     Vector dy(m);
     dy  = f.ForOne(x, 0);
     ok &= NearEqual( dy[0], exp(x[1]), eps99, eps99); // for y[0]
     ok &= NearEqual( dy[1], sin(x[1]), eps99, eps99); // for y[1]
     ok &= NearEqual( dy[2], cos(x[1]), eps99, eps99); // for y[2]

     // compute partial of F w.r.t x[1]
     dy  = f.ForOne(x, 1);
     ok &= NearEqual( dy[0],  x[0]*exp(x[1]), eps99, eps99);
     ok &= NearEqual( dy[1],  x[0]*cos(x[1]), eps99, eps99);
     ok &= NearEqual( dy[2], -x[0]*sin(x[1]), eps99, eps99);

     return ok;
}
} // End empty namespace
# include <vector>
# include <valarray>
bool ForOne(void)
{     bool ok = true;
     // Run with Vector equal to three different cases
     // all of which are Simple Vectors with elements of type double.
     ok &= ForOneCases< CppAD::vector  <double> >();
     ok &= ForOneCases< std::vector    <double> >();
     ok &= ForOneCases< std::valarray  <double> >();
     return ok;
}

Input File: example/general/for_one.cpp
5.2.4: First Order Derivative: Driver Routine

5.2.4.a: Syntax
dw = f.RevOne(x, i)

5.2.4.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . The syntax above sets dw to the derivative of @(@ F_i @)@ with respect to @(@ x @)@; i.e., @[@ dw = F_i^{(1)} (x) = \left[ \D{ F_i }{ x_0 } (x) , \cdots , \D{ F_i }{ x_{n-1} } (x) \right] @]@

5.2.4.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.2.4.h: RevOne Uses Forward below).

5.2.4.d: x
The argument x has prototype
     const Vector &x
(see 5.2.4.g: Vector below) and its size must be equal to n , the dimension of the 5.1.5.d: domain space for f . It specifies the point at which to evaluate the derivative.

5.2.4.e: i
The index i has prototype
     size_t i
and is less than @(@ m @)@, the dimension of the 5.1.5.e: range space for f . It specifies the component of @(@ F @)@ that we are computing the derivative of.

5.2.4.f: dw
The result dw has prototype
     Vector dw
(see 5.2.4.g: Vector below) and its size is n , the dimension of the 5.1.5.d: domain space for f . The value of dw is the derivative of @(@ F_i @)@ evaluated at x ; i.e., for @(@ j = 0 , \ldots , n - 1 @)@ @[@ dw [ j ] = \D{ F_i }{ x_j } ( x ) @]@

5.2.4.g: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.2.4.h: RevOne Uses Forward
After each call to 5.3: Forward , the object f contains the corresponding 12.4.l: Taylor coefficients . After a call to RevOne, the zero order Taylor coefficients correspond to f.Forward(0, x) and the other coefficients are unspecified.

5.2.4.i: Example
The routine 5.2.4.1: RevOne is both an example and test. It returns true if it succeeds and false otherwise.
Input File: cppad/core/rev_one.hpp
5.2.4.1: First Order Derivative Driver: Example and Test
# include <cppad/cppad.hpp>
namespace { // -------------------------------------------------------
// define the template function RevOneCases<Vector> in empty namespace
template <typename Vector>
bool RevOneCases()
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
     using CppAD::exp;
     using CppAD::sin;
     using CppAD::cos;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>)  X(n);
     X[0] = 1.;
     X[1] = 2.;

     // declare independent variables and start tape recording
     CppAD::Independent(X);

     // range space vector
     size_t m = 3;
     CPPAD_TESTVECTOR(AD<double>)  Y(m);
     Y[0] = X[0] * exp( X[1] );
     Y[1] = X[0] * sin( X[1] );
     Y[2] = X[0] * cos( X[1] );

     // create f: X -> Y and stop tape recording
     CppAD::ADFun<double> f(X, Y);

     // new value for the independent variable vector
     Vector x(n);
     x[0] = 2.;
     x[1] = 1.;

     // compute and check derivative of y[0]
     Vector dw(n);
     dw  = f.RevOne(x, 0);
     ok &= NearEqual(dw[0],      exp(x[1]), eps99, eps99); // w.r.t x[0]
     ok &= NearEqual(dw[1], x[0]*exp(x[1]), eps99, eps99); // w.r.t x[1]

     // compute and check derivative of y[1]
     dw  = f.RevOne(x, 1);
     ok &= NearEqual(dw[0],      sin(x[1]), eps99, eps99);
     ok &= NearEqual(dw[1], x[0]*cos(x[1]), eps99, eps99);

     // compute and check derivative of y[2]
     dw  = f.RevOne(x, 2);
     ok &= NearEqual(dw[0],        cos(x[1]), eps99, eps99);
     ok &= NearEqual(dw[1], - x[0]*sin(x[1]), eps99, eps99);

     return ok;
}
} // End empty namespace
# include <vector>
# include <valarray>
bool RevOne(void)
{     bool ok = true;
     // Run with Vector equal to three different cases
     // all of which are Simple Vectors with elements of type double.
     ok &= RevOneCases< CppAD::vector  <double> >();
     ok &= RevOneCases< std::vector    <double> >();
     ok &= RevOneCases< std::valarray  <double> >();
     return ok;
}

Input File: example/general/rev_one.cpp
5.2.5: Forward Mode Second Partial Derivative Driver

5.2.5.a: Syntax
ddy = f.ForTwo(x, j, k)

5.2.5.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . The syntax above sets @[@ ddy [ i * p + \ell ] = \DD{ F_i }{ x_{j[ \ell ]} }{ x_{k[ \ell ]} } (x) @]@ for @(@ i = 0 , \ldots , m-1 @)@ and @(@ \ell = 0 , \ldots , p-1 @)@, where @(@ p @)@ is the size of the vectors j and k .

5.2.5.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.2.5.j: ForTwo Uses Forward below).

5.2.5.d: x
The argument x has prototype
     const VectorBase &x
(see 5.2.5.h: VectorBase below) and its size must be equal to n , the dimension of the 5.1.5.d: domain space for f . It specifies the point at which to evaluate the partial derivatives listed above.

5.2.5.e: j
The argument j has prototype
     const VectorSize_t &j
(see 5.2.5.i: VectorSize_t below). We use p to denote the size of the vector j . All of the indices in j must be less than n ; i.e., for @(@ \ell = 0 , \ldots , p-1 @)@, @(@ j[ \ell ] < n @)@.

5.2.5.f: k
The argument k has prototype
     const VectorSize_t &k
(see 5.2.5.i: VectorSize_t below) and its size must be equal to p , the size of the vector j . All of the indices in k must be less than n ; i.e., for @(@ \ell = 0 , \ldots , p-1 @)@, @(@ k[ \ell ] < n @)@.

5.2.5.g: ddy
The result ddy has prototype
     VectorBase ddy
(see 5.2.5.h: VectorBase below) and its size is @(@ m * p @)@. It contains the requested partial derivatives; to be specific, for @(@ i = 0 , \ldots , m - 1 @)@ and @(@ \ell = 0 , \ldots , p - 1 @)@ @[@ ddy [ i * p + \ell ] = \DD{ F_i }{ x_{j[ \ell ]} }{ x_{k[ \ell ]} } (x) @]@

5.2.5.h: VectorBase
The type VectorBase must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.2.5.i: VectorSize_t
The type VectorSize_t must be a 8.9: SimpleVector class with 8.9.b: elements of type size_t . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.2.5.j: ForTwo Uses Forward
After each call to 5.3: Forward , the object f contains the corresponding 12.4.l: Taylor coefficients . After a call to ForTwo, the zero order Taylor coefficients correspond to f.Forward(0, x) and the other coefficients are unspecified.

5.2.5.k: Examples
The routine 5.2.5.1: ForTwo is both an example and test. It returns true if it succeeds and false otherwise.
Input File: cppad/core/for_two.hpp
5.2.5.1: Subset of Second Order Partials: Example and Test
# include <cppad/cppad.hpp>
namespace { // -----------------------------------------------------
// define the template function in empty namespace
// bool ForTwoCases<VectorBase, VectorSize_t>(void)
template <class VectorBase, class VectorSize_t>
bool ForTwoCases()
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
     using CppAD::exp;
     using CppAD::sin;
     using CppAD::cos;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>)  X(n);
     X[0] = 1.;
     X[1] = 2.;

     // declare independent variables and start tape recording
     CppAD::Independent(X);

     // a calculation between the domain and range values
     AD<double> Square = X[0] * X[0];

     // range space vector
     size_t m = 3;
     CPPAD_TESTVECTOR(AD<double>)  Y(m);
     Y[0] = Square * exp( X[1] );
     Y[1] = Square * sin( X[1] );
     Y[2] = Square * cos( X[1] );

     // create f: X -> Y and stop tape recording
     CppAD::ADFun<double> f(X, Y);

     // new value for the independent variable vector
     VectorBase x(n);
     x[0] = 2.;
     x[1] = 1.;

     // set j and k to compute specific second partials of y
     size_t p = 2;
     VectorSize_t j(p);
     VectorSize_t k(p);
     j[0] = 0; k[0] = 0; // for second partial w.r.t. x[0] and x[0]
     j[1] = 0; k[1] = 1; // for second partial w.r.t x[0] and x[1]

     // compute the second partials
     VectorBase ddy(m * p);
     ddy = f.ForTwo(x, j, k);
     /*
     partial of y w.r.t x[0] is
     [ 2 * x[0] * exp(x[1]) ]
     [ 2 * x[0] * sin(x[1]) ]
     [ 2 * x[0] * cos(x[1]) ]
     */
     // second partial of y w.r.t x[0] and x[0]
     ok &=  NearEqual( 2.*exp(x[1]), ddy[0*p+0], eps99, eps99);
     ok &=  NearEqual( 2.*sin(x[1]), ddy[1*p+0], eps99, eps99);
     ok &=  NearEqual( 2.*cos(x[1]), ddy[2*p+0], eps99, eps99);

     // second partial of F w.r.t x[0] and x[1]
     ok &=  NearEqual( 2.*x[0]*exp(x[1]), ddy[0*p+1], eps99, eps99);
     ok &=  NearEqual( 2.*x[0]*cos(x[1]), ddy[1*p+1], eps99, eps99);
     ok &=  NearEqual(-2.*x[0]*sin(x[1]), ddy[2*p+1], eps99, eps99);

     return ok;
}
} // End empty namespace
# include <vector>
# include <valarray>
bool ForTwo(void)
{     bool ok = true;
        // Run with VectorBase equal to three different cases
        // all of which are Simple Vectors with elements of type double.
     ok &= ForTwoCases< CppAD::vector <double>, std::vector<size_t> >();
     ok &= ForTwoCases< std::vector   <double>, std::vector<size_t> >();
     ok &= ForTwoCases< std::valarray <double>, std::vector<size_t> >();

        // Run with VectorSize_t equal to two other cases
        // which are Simple Vectors with elements of type size_t.
     ok &= ForTwoCases< std::vector <double>, CppAD::vector<size_t> >();
     ok &= ForTwoCases< std::vector <double>, std::valarray<size_t> >();

     return ok;
}

Input File: example/general/for_two.cpp
5.2.6: Reverse Mode Second Partial Derivative Driver

5.2.6.a: Syntax
ddw = f.RevTwo(x, i, j)

5.2.6.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . The syntax above sets @[@ ddw [ k * p + \ell ] = \DD{ F_{i[ \ell ]} }{ x_{j[ \ell ]} }{ x_k } (x) @]@ for @(@ k = 0 , \ldots , n-1 @)@ and @(@ \ell = 0 , \ldots , p-1 @)@, where @(@ p @)@ is the size of the vectors i and j .

5.2.6.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.2.6.j: RevTwo Uses Forward below).

5.2.6.d: x
The argument x has prototype
     const VectorBase &x
(see 5.2.6.h: VectorBase below) and its size must be equal to n , the dimension of the 5.1.5.d: domain space for f . It specifies the point at which to evaluate the partial derivatives listed above.

5.2.6.e: i
The argument i has prototype
     const VectorSize_t &i
(see 5.2.6.i: VectorSize_t below). We use p to denote the size of the vector i . All of the indices in i must be less than m , the dimension of the 5.1.5.e: range space for f ; i.e., for @(@ \ell = 0 , \ldots , p-1 @)@, @(@ i[ \ell ] < m @)@.

5.2.6.f: j
The argument j has prototype
     const VectorSize_t &j
(see 5.2.6.i: VectorSize_t below) and its size must be equal to p , the size of the vector i . All of the indices in j must be less than n ; i.e., for @(@ \ell = 0 , \ldots , p-1 @)@, @(@ j[ \ell ] < n @)@.

5.2.6.g: ddw
The result ddw has prototype
     VectorBase ddw
(see 5.2.6.h: VectorBase below) and its size is @(@ n * p @)@. It contains the requested partial derivatives; to be specific, for @(@ k = 0 , \ldots , n - 1 @)@ and @(@ \ell = 0 , \ldots , p - 1 @)@ @[@ ddw [ k * p + \ell ] = \DD{ F_{i[ \ell ]} }{ x_{j[ \ell ]} }{ x_k } (x) @]@

5.2.6.h: VectorBase
The type VectorBase must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.2.6.i: VectorSize_t
The type VectorSize_t must be a 8.9: SimpleVector class with 8.9.b: elements of type size_t . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.2.6.j: RevTwo Uses Forward
After each call to 5.3: Forward , the object f contains the corresponding 12.4.l: Taylor coefficients . After a call to RevTwo, the zero order Taylor coefficients correspond to f.Forward(0, x) and the other coefficients are unspecified.

5.2.6.k: Examples
The routine 5.2.6.1: RevTwo is both an example and test. It returns true if it succeeds and false otherwise.
Input File: cppad/core/rev_two.hpp
5.2.6.1: Second Partials Reverse Driver: Example and Test
# include <cppad/cppad.hpp>
namespace { // -----------------------------------------------------
// define the template function in empty namespace
// bool RevTwoCases<VectorBase, VectorSize_t>(void)
template <class VectorBase, class VectorSize_t>
bool RevTwoCases()
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
     using CppAD::exp;
     using CppAD::sin;
     using CppAD::cos;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>)  X(n);
     X[0] = 1.;
     X[1] = 2.;

     // declare independent variables and start tape recording
     CppAD::Independent(X);

     // a calculation between the domain and range values
     AD<double> Square = X[0] * X[0];

     // range space vector
     size_t m = 3;
     CPPAD_TESTVECTOR(AD<double>)  Y(m);
     Y[0] = Square * exp( X[1] );
     Y[1] = Square * sin( X[1] );
     Y[2] = Square * cos( X[1] );

     // create f: X -> Y and stop tape recording
     CppAD::ADFun<double> f(X, Y);

     // new value for the independent variable vector
     VectorBase x(n);
     x[0] = 2.;
     x[1] = 1.;

     // set i and j to compute specific second partials of y
     size_t p = 2;
     VectorSize_t i(p);
     VectorSize_t j(p);
     i[0] = 0; j[0] = 0; // for partials y[0] w.r.t x[0] and x[k]
     i[1] = 1; j[1] = 1; // for partials y[1] w.r.t x[1] and x[k]

     // compute the second partials
     VectorBase ddw(n * p);
     ddw = f.RevTwo(x, i, j);

     // partial of y[0] w.r.t x[0] is 2 * x[0] * exp(x[1])
     // check partials of y[0] w.r.t x[0] and x[k] for k = 0, 1
     ok &=  NearEqual(      2.*exp(x[1]), ddw[0*p+0], eps99, eps99);
     ok &=  NearEqual( 2.*x[0]*exp(x[1]), ddw[1*p+0], eps99, eps99);

     // partial of y[1] w.r.t x[1] is x[0] * x[0] * cos(x[1])
     // check partials of F_1 w.r.t x[1] and x[k] for k = 0, 1
     ok &=  NearEqual(    2.*x[0]*cos(x[1]), ddw[0*p+1], eps99, eps99);
     ok &=  NearEqual( -x[0]*x[0]*sin(x[1]), ddw[1*p+1], eps99, eps99);

     return ok;
}
} // End empty namespace
# include <vector>
# include <valarray>
bool RevTwo(void)
{     bool ok = true;
        // Run with VectorBase equal to three different cases
        // all of which are Simple Vectors with elements of type double.
     ok &= RevTwoCases< CppAD::vector <double>, std::vector<size_t> >();
     ok &= RevTwoCases< std::vector   <double>, std::vector<size_t> >();
     ok &= RevTwoCases< std::valarray <double>, std::vector<size_t> >();

        // Run with VectorSize_t equal to two other cases
        // which are Simple Vectors with elements of type size_t.
     ok &= RevTwoCases< std::vector <double>, CppAD::vector<size_t> >();
     ok &= RevTwoCases< std::vector <double>, std::valarray<size_t> >();

     return ok;
}

Input File: example/general/rev_two.cpp
5.3: Forward Mode

5.3.a: Contents
5.3.1: Zero Order Forward Mode: Function Values
5.3.2: First Order Forward Mode: Derivative Values
5.3.3: Second Order Forward Mode: Derivative Values
5.3.4: Multiple Order Forward Mode
5.3.5: Multiple Directions Forward Mode
5.3.6: Number Taylor Coefficient Orders Currently Stored
5.3.7: Comparison Changes Between Taping and Zero Order Forward
5.3.8: Controlling Taylor Coefficients Memory Allocation
5.3.9: Number of Variables that Can be Skipped

Input File: omh/adfun.omh
5.3.1: Zero Order Forward Mode: Function Values

5.3.1.a: Syntax
y0 = f.Forward(0, x0)
y0 = f.Forward(0, x0, s)

5.3.1.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . The result of the syntax above is @[@ y0 = F(x0) @]@ See the 5.9.l: FunCheck discussion for possible differences between @(@ F(x) @)@ and the algorithm that defined the operation sequence.

5.3.1.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const. After this call to Forward, the value returned by
     
f.size_order()
will be equal to one (see 5.3.6: size_order ).

5.3.1.d: x0
The argument x0 has prototype
     const Vector& x0
(see 5.3.1.g: Vector below) and its size must be equal to n , the dimension of the 5.1.5.d: domain space for f .

5.3.1.e: s
If the argument s is not present, std::cout is used in its place. Otherwise, this argument has prototype
     std::ostream& s
It specifies where the output corresponding to 4.3.6: PrintFor , and this zero order forward mode call, will be written.

5.3.1.f: y0
The result y0 has prototype
     Vector y0
(see 5.3.1.g: Vector below) and its value is @(@ F(x) @)@ at x = x0 . The size of y0 is equal to m , the dimension of the 5.1.5.e: range space for f .

5.3.1.g: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.3.1.h: Example
The file 5.3.4.1: forward.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.

5.3.1.i: Special Case
This is a special case of 5.3.4: forward_order where @[@ \begin{array}{rcl} Y(t) & = & F[ X(t) ] \\ X(t) & = & x^{(0)} t^0 + x^{(1)} * t^1 + \cdots + x^{(q)} * t^q + o( t^q ) \\ Y(t) & = & y^{(0)} t^0 + y^{(1)} * t^1 + \cdots + y^{(q)} * t^q + o( t^q ) \end{array} @]@ and @(@ o( t^q ) * t^{-q} \rightarrow 0 @)@ as @(@ t \rightarrow 0 @)@. For this special case, @(@ q = 0 @)@, @(@ x^{(0)} = @)@ x0 , @(@ X(t) = x^{(0)} @)@, and @[@ y^{(0)} = Y(t) = F[ X(t) ] = F( x^{(0)} ) @]@ which agrees with the specifications for y0 in the 5.3.1.b: purpose above.
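
The following minimal sketch (not one of the distributed examples; it assumes double for the Base type) shows the calling sequence:

# include <cppad/cppad.hpp>

bool forward_zero_sketch(void)
{     bool ok = true;
     using CppAD::AD;

     // record f(x) = x[0] * x[1]
     size_t n = 2, m = 1;
     CPPAD_TESTVECTOR(AD<double>) ax(n), ay(m);
     ax[0] = 1.;
     ax[1] = 2.;
     CppAD::Independent(ax);
     ay[0] = ax[0] * ax[1];
     CppAD::ADFun<double> f(ax, ay);

     // zero order forward mode computes y0 = F(x0)
     CPPAD_TESTVECTOR(double) x0(n), y0(m);
     x0[0] = 3.;
     x0[1] = 4.;
     y0 = f.Forward(0, x0);

     // F(3, 4) = 12 is exact in double precision arithmetic
     ok &= y0[0] == 12.;

     // one Taylor coefficient order is now stored in f
     ok &= f.size_order() == 1;
     return ok;
}
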
Input File: omh/forward/forward_zero.omh
5.3.2: First Order Forward Mode: Derivative Values

5.3.2.a: Syntax
y1 = f.Forward(1, x1)

5.3.2.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . The result of the syntax above is @[@ y1 = F^{(1)} (x0) * x1 @]@ where @(@ F^{(1)} (x0) @)@ is the Jacobian of @(@ F @)@ evaluated at x0 .

5.3.2.c: f
The object f has prototype
     ADFun<
Basef
Note that the 5: ADFun object f is not const. Before this call to Forward, the value returned by
     
f.size_order()
must be greater than or equal to one. After this call it will be two (see 5.3.6: size_order ).

5.3.2.d: x0
The vector x0 in the formula @[@ y1 = F^{(1)} (x0) * x1 @]@ corresponds to the previous call to 5.3.1: forward_zero using this ADFun object f ; i.e.,
     
f.Forward(0, x0)
If there is no previous call with the first argument zero, the value of the 5.1.1: independent variables during the recording of the AD sequence of operations is used for x0 .

5.3.2.e: x1
The argument x1 has prototype
     const Vector& x1
(see 5.3.2.f: Vector below) and its size must be equal to n , the dimension of the 5.1.5.d: domain space for f .

5.3.2.f: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.3.2.g: Example
The file 5.3.4.1: forward.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.

5.3.2.h: Special Case
This is a special case of 5.3.4: forward_order where @[@ \begin{array}{rcl} Y(t) & = & F[ X(t) ] \\ X(t) & = & x^{(0)} t^0 + x^{(1)} * t^1 + \cdots + x^{(q)} * t^q + o( t^q ) \\ Y(t) & = & y^{(0)} t^0 + y^{(1)} * t^1 + \cdots + y^{(q)} * t^q + o( t^q ) \end{array} @]@ and @(@ o( t^q ) * t^{-q} \rightarrow 0 @)@ as @(@ t \rightarrow 0 @)@. For this special case, @(@ q = 1 @)@, @(@ x^{(0)} = @)@ x0 , @(@ x^{(1)} = @)@ x1 , @(@ X(t) = x^{(0)} + x^{(1)} t @)@, and @[@ y^{(0)} + y^{(1)} t = F [ x^{(0)} + x^{(1)} t ] + o(t) @]@ Taking the derivative with respect to @(@ t @)@, at @(@ t = 0 @)@, we obtain @[@ y^{(1)} = F^{(1)} [ x^{(0)} ] x^{(1)} @]@ which agrees with the specifications for y1 in the 5.3.2.b: purpose above.
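
As a minimal sketch of the calling sequence (not one of the distributed examples; it assumes double for the Base type), the following computes one column of the Jacobian:

# include <cppad/cppad.hpp>

bool forward_one_sketch(void)
{     bool ok = true;
     using CppAD::AD;

     // record f(x) = x[0] * x[1]
     size_t n = 2, m = 1;
     CPPAD_TESTVECTOR(AD<double>) ax(n), ay(m);
     ax[0] = 1.;
     ax[1] = 2.;
     CppAD::Independent(ax);
     ay[0] = ax[0] * ax[1];
     CppAD::ADFun<double> f(ax, ay);

     // zero order forward mode fixes the argument value x0
     CPPAD_TESTVECTOR(double) x0(n), x1(n), y1(m);
     x0[0] = 3.;
     x0[1] = 4.;
     f.Forward(0, x0);

     // first order forward mode with x1 = first elementary vector
     // computes y1 = F^{(1)}(x0) * x1, the partial w.r.t. x[0]
     x1[0] = 1.;
     x1[1] = 0.;
     y1 = f.Forward(1, x1);

     // partial of x[0] * x[1] w.r.t. x[0] is x0[1] = 4
     ok &= y1[0] == 4.;
     return ok;
}
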
Input File: omh/forward/forward_one.omh
5.3.3: Second Order Forward Mode: Derivative Values

5.3.3.a: Syntax
y2 = f.Forward(2, x2)

5.3.3.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . The result of the syntax above is that for i = 0 , ... , m-1 ,
     
y2[i] @(@ = F_i^{(1)} (x0) * x2 + \frac{1}{2} x1^T * F_i^{(2)} (x0) * x1 @)@
where @(@ F^{(1)} (x0) @)@ is the Jacobian of @(@ F @)@, and @(@ F_i^{(2)} (x0) @)@ is the Hessian of the i-th component of @(@ F @)@, evaluated at x0 .

5.3.3.c: f
The object f has prototype
     ADFun<
Basef
Note that the 5: ADFun object f is not const. Before this call to Forward, the value returned by
     
f.size_order()
must be greater than or equal to two. After this call it will be three (see 5.3.6: size_order ).

5.3.3.d: x0
The vector x0 in the formula for y2[i] corresponds to the previous call to 5.3.1: forward_zero using this ADFun object f ; i.e.,
     
f.Forward(0, x0)
If there is no previous call with the first argument zero, the value of the 5.1.1: independent variables during the recording of the AD sequence of operations is used for x0 .

5.3.3.e: x1
The vector x1 in the formula for y2[i] corresponds to the previous call to 5.3.2: forward_one using this ADFun object f ; i.e.,
     
f.Forward(1, x1)

5.3.3.f: x2
The argument x2 has prototype
     const Vector& x2
(see 5.3.3.h: Vector below) and its size must be equal to n , the dimension of the 5.1.5.d: domain space for f .

5.3.3.g: y2
The result y2 has prototype
     
Vector y2
(see 5.3.3.h: Vector below). The size of y2 is equal to m , the dimension of the 5.1.5.e: range space for f . Its value is given element-wise by the formula in the 5.3.3.b: purpose above.

5.3.3.h: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.3.3.i: Example
The file 5.3.4.1: forward.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.

5.3.3.j: Special Case
This is a special case of 5.3.4: forward_order where @[@ \begin{array}{rcl} Y(t) & = & F[ X(t) ] \\ X(t) & = & x^{(0)} t^0 + x^{(1)} * t^1 + \cdots + x^{(q)} * t^q + o( t^q ) \\ Y(t) & = & y^{(0)} t^0 + y^{(1)} * t^1 + \cdots + y^{(q)} * t^q + o( t^q ) \end{array} @]@ and @(@ o( t^q ) * t^{-q} \rightarrow 0 @)@ as @(@ t \rightarrow 0 @)@. For this special case, @(@ q = 2 @)@, @(@ x^{(0)} = @)@ x0 , @(@ x^{(1)} = @)@ x1 , @(@ x^{(2)} = @)@ x2 , @(@ X(t) = x^{(0)} + x^{(1)} t + x^{(2)} t^2 @)@, and @[@ y^{(0)} + y^{(1)} t + y^{(2)} t^2 = F [ x^{(0)} + x^{(1)} t + x^{(2)} t^2 ] + o(t^2) @]@ Restricting our attention to the i-th component, and taking the derivative with respect to @(@ t @)@, we obtain @[@ y_i^{(1)} + 2 y_i^{(2)} t = F_i^{(1)} [ x^{(0)} + x^{(1)} t + x^{(2)} t^2 ] [ x^{(1)} + 2 x^{(2)} t ] + o(t) @]@ Taking a second derivative with respect to @(@ t @)@, and evaluating at @(@ t = 0 @)@, we obtain @[@ 2 y_i^{(2)} = [ x^{(1)} ]^T F_i^{(2)} [ x^{(0)} ] x^{(1)} + F_i^{(1)} [ x^{(0)} ] 2 x^{(2)} @]@ which agrees with the specification for y2[i] in the 5.3.3.b: purpose above.
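
The following minimal sketch (not one of the distributed examples; it assumes double for the Base type) uses this special case, with x2 equal to zero, to recover one half of a second derivative:

# include <cppad/cppad.hpp>

bool forward_two_sketch(void)
{     bool ok = true;
     using CppAD::AD;

     // record f(x) = x[0] * x[0] * x[0]
     size_t n = 1, m = 1;
     CPPAD_TESTVECTOR(AD<double>) ax(n), ay(m);
     ax[0] = 1.;
     CppAD::Independent(ax);
     ay[0] = ax[0] * ax[0] * ax[0];
     CppAD::ADFun<double> f(ax, ay);

     // orders zero and one must be computed before order two
     CPPAD_TESTVECTOR(double) x0(n), x1(n), x2(n), y2(m);
     x0[0] = 2.;
     f.Forward(0, x0);
     x1[0] = 1.;
     f.Forward(1, x1);

     // with x2 = 0, y2[0] = (1/2) * f''(x0) = (1/2) * 6 * x0[0] = 6
     x2[0] = 0.;
     y2 = f.Forward(2, x2);
     ok &= y2[0] == 6.;
     return ok;
}
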
Input File: omh/forward/forward_two.omh
5.3.4: Multiple Order Forward Mode

5.3.4.a: Syntax
yq = f.Forward(q, xq)
yq = f.Forward(q, xq, s)

5.3.4.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . Given a function @(@ X : B \rightarrow B^n @)@, defined by its 12.4.l: Taylor coefficients , forward mode computes the Taylor coefficients for the function @[@ Y (t) = F [ X(t) ] @]@

5.3.4.b.a: Function Values
If you are using forward mode to compute values for @(@ F(x) @)@, 5.3.1: forward_zero is simpler to understand than this explanation of the general case.

5.3.4.b.b: Derivative Values
If you are using forward mode to compute values for @(@ F^{(1)} (x) * dx @)@, 5.3.2: forward_one is simpler to understand than this explanation of the general case.

5.3.4.c: Notation

5.3.4.c.a: n
We use n to denote the dimension of the 5.1.5.d: domain space for f .

5.3.4.c.b: m
We use m to denote the dimension of the 5.1.5.e: range space for f .

5.3.4.d: f
The 5: ADFun object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const. After this call we will have
     
f.size_order()     == q + 1
     
f.size_direction() == 1

5.3.4.e: One Order
If xq.size() == n , then we are only computing one order. In this case, before this call we must have
     
f.size_order()     >= q
     
f.size_direction() == 1

5.3.4.f: q
The argument q has prototype
     size_t q
and specifies the highest order of the Taylor coefficients to be calculated.

5.3.4.g: xq
The argument xq has prototype
     const Vector& xq
(see 5.3.4.l: Vector below). As above, we use n to denote the dimension of the 5.1.5.d: domain space for f . The size of xq must be either n or n*(q+1) . After this call we will have
     
f.size_order()     == q + 1

5.3.4.g.a: One Order
If xq.size() == n , the q-th order Taylor coefficient for @(@ X(t) @)@ is defined by
     
@(@ x^{(q)} = @)@ xq . For @(@ k = 0 , \ldots , q-1 @)@, the Taylor coefficient @(@ x^{(k)} @)@ is defined by xk in the previous call to
     
f.Forward(k, xk)

5.3.4.g.b: Multiple Orders
If xq.size() == n*(q+1) , For @(@ k = 0 , \ldots , q @)@, @(@ j = 0 , \ldots , n-1 @)@, the j-th component of the k-th order Taylor coefficient for @(@ X(t) @)@ is defined by
     
@(@ x_j^{(k)} = @)@ xq[ (q+1) * j + k ]

5.3.4.g.c: Restrictions
Note if f uses 12.8.11: old_atomic functions, the size of xq must be n .

5.3.4.h: s
If the argument s is not present, std::cout is used in its place. Otherwise, this argument has prototype
     std::ostream& 
s
If order zero is being calculated, s specifies where the output corresponding to 4.3.6: PrintFor will be written. If order zero is not being calculated, s is not used.

5.3.4.i: X(t)
The function @(@ X : B \rightarrow B^n @)@ is defined using the Taylor coefficients @(@ x^{(k)} \in B^n @)@: @[@ X(t) = x^{(0)} * t^0 + x^{(1)} * t^1 + \cdots + x^{(q)} * t^q @]@ Note that for @(@ k = 0 , \ldots , q @)@, the k-th derivative of @(@ X(t) @)@ is related to the Taylor coefficients by the equation @[@ x^{(k)} = \frac{1}{k !} X^{(k)} (0) @]@

5.3.4.j: Y(t)
The function @(@ Y : B \rightarrow B^m @)@ is defined by @(@ Y(t) = F[ X(t) ] @)@. We use @(@ y^{(k)} \in B^m @)@ to denote the k-th order Taylor coefficient of @(@ Y(t) @)@; i.e., @[@ Y(t) = y^{(0)} * t^0 + y^{(1)} * t^1 + \cdots + y^{(q)} * t^q + o( t^q ) @]@ where @(@ o( t^q ) * t^{-q} \rightarrow 0 @)@ as @(@ t \rightarrow 0 @)@. Note that @(@ y^{(k)} @)@ is related to the k-th derivative of @(@ Y(t) @)@ by the equation @[@ y^{(k)} = \frac{1}{k !} Y^{(k)} (0) @]@

5.3.4.k: yq
The return value yq has prototype
     
Vector yq
(see 5.3.4.l: Vector below).

5.3.4.k.a: One Order
If xq.size() == n , the vector yq has size m . The q-th order Taylor coefficient for @(@ Y(t) @)@ is returned as
     
yq
@(@ = y^{(q)} @)@.

5.3.4.k.b: Multiple Orders
If xq.size() == n*(q+1) , the vector yq has size m*(q+1) . For @(@ k = 0 , \ldots , q @)@, for @(@ i = 0 , \ldots , m-1 @)@, the i-th component of the k-th order Taylor coefficient for @(@ Y(t) @)@ is returned as
     
yq[ (q+1) * i + k ]
@(@ = y_i^{(k)} @)@

5.3.4.l: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.3.4.m: Zero Order
The case where @(@ q = 0 @)@ and xq.size() == n , corresponds to the zero order 5.3.1.i: special case .

5.3.4.n: First Order
The case where @(@ q = 1 @)@ and xq.size() == n , corresponds to the first order 5.3.2.h: special case .

5.3.4.o: Second Order
The case where @(@ q = 2 @)@ and xq.size() == n , corresponds to the second order 5.3.3.j: special case .

5.3.4.p: Example
The file 5.3.4.1: forward.cpp ( 5.3.4.2: forward_order.cpp ) contains an example and test using one order (multiple orders). They return true if they succeed and false otherwise.
Input File: omh/forward/forward_order.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.3.4.1: Forward Mode: Example and Test
# include <limits>
# include <cppad/cppad.hpp>
namespace { // --------------------------------------------------------
// define the template function ForwardCases<Vector> in empty namespace
template <class Vector>
bool ForwardCases(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.;
     ax[1] = 1.;

     // declare independent variables and starting recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = ax[0] * ax[0] * ax[1];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // initially, the variable values during taping are stored in f
     ok &= f.size_order() == 1;

     // zero order forward mode using notation in forward_zero
     // use the template parameter Vector for the vector type
     Vector x0(n), y0(m);
     x0[0] = 3.;
     x0[1] = 4.;
     y0    = f.Forward(0, x0);
     ok  &= NearEqual(y0[0] , x0[0]*x0[0]*x0[1], eps, eps);
     ok  &= f.size_order() == 1;

     // first order forward mode using notation in forward_one
     // X(t)           = x0 + x1 * t
     // Y(t) = F[X(t)] = y0 + y1 * t + o(t)
     Vector x1(n), y1(m);
     x1[0] = 1.;
     x1[1] = 0.;
     y1    = f.Forward(1, x1); // partial F w.r.t. x_0
     ok   &= NearEqual(y1[0] , 2.*x0[0]*x0[1], eps, eps);
     ok   &= f.size_order() == 2;

     // second order forward mode using notation in forward_order
     // X(t) =           x0 + x1 * t + x2 * t^2
     // Y(t) = F[X(t)] = y0 + y1 * t + y2 * t^2 + o(t^2)
     Vector x2(n), y2(m);
     x2[0]      = 0.;
     x2[1]      = 0.;
     y2         = f.Forward(2, x2);
     double F_00 = 2. * y2[0]; // second partial F w.r.t. x_0, x_0
     ok         &= NearEqual(F_00, 2.*x0[1], eps, eps);
     ok         &= f.size_order() == 3;

     return ok;
}
} // End empty namespace
# include <vector>
# include <valarray>
bool Forward(void)
{     bool ok = true;
     // Run with Vector equal to three different cases
     // all of which are Simple Vectors with elements of type double.
     ok &= ForwardCases< CppAD::vector  <double> >();
     ok &= ForwardCases< std::vector    <double> >();
     ok &= ForwardCases< std::valarray  <double> >();
     return ok;
}

Input File: example/general/forward.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.3.4.2: Forward Mode: Example and Test of Multiple Orders
# include <limits>
# include <cppad/cppad.hpp>
bool forward_order(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.;
     ax[1] = 1.;

     // declare independent variables and starting recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = ax[0] * ax[0] * ax[1];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // initially, the variable values during taping are stored in f
     ok &= f.size_order() == 1;

     // Compute three forward orders at once
     size_t q = 2, q1 = q+1;
     CPPAD_TESTVECTOR(double) xq(n*q1), yq;
     xq[q1*0 + 0] = 3.;    xq[q1*1 + 0] = 4.; // x^0 (order zero)
     xq[q1*0 + 1] = 1.;    xq[q1*1 + 1] = 0.; // x^1 (order one)
     xq[q1*0 + 2] = 0.;    xq[q1*1 + 2] = 0.; // x^2 (order two)
     // X(t) =   x^0 + x^1 * t + x^2 * t^2
     //      = [ 3 + t, 4 ]
     yq  = f.Forward(q, xq);
     ok &= size_t( yq.size() ) == m*q1;
     // Y(t) = F[X(t)]
     //      = (3 + t) * (3 + t) * 4
     //      = y^0 + y^1 * t + y^2 * t^2 + o(t^2)
     //
     // check y^0 (order zero)
     CPPAD_TESTVECTOR(double) x0(n);
     x0[0] = xq[q1*0 + 0];
     x0[1] = xq[q1*1 + 0];
     ok  &= NearEqual(yq[q1*0 + 0] , x0[0]*x0[0]*x0[1], eps, eps);
     //
     // check y^1 (order one)
     ok  &= NearEqual(yq[q1*0 + 1] , 2.*x0[0]*x0[1], eps, eps);
     //
     // check y^2 (order two)
     double F_00 = 2. * yq[q1*0 + 2]; // second partial F w.r.t. x_0, x_0
     ok   &= NearEqual(F_00, 2.*x0[1], eps, eps);

     // check number of orders per variable
     ok   &= f.size_order() == 3;

     return ok;
}

Input File: example/general/forward_order.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.3.5: Multiple Directions Forward Mode

5.3.5.a: Syntax
yq = f.Forward(q, r, xq)

5.3.5.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . Given a function @(@ X : B \rightarrow B^n @)@, defined by its 12.4.l: Taylor coefficients , forward mode computes the Taylor coefficients for the function @[@ Y (t) = F [ X(t) ] @]@ This version of forward mode computes multiple directions at the same time (reducing the number of passes through the tape). This requires more memory, but might be faster in some cases.

5.3.5.c: Reverse Mode
Reverse mode for multiple directions has not yet been implemented. If you have speed tests that indicate that multiple direction forward mode is faster, and you want to try multiple direction reverse mode, contact the CppAD project manager.

5.3.5.d: Notation

5.3.5.d.a: n
We use n to denote the dimension of the 5.1.5.d: domain space for f .

5.3.5.d.b: m
We use m to denote the dimension of the 5.1.5.e: range space for f .

5.3.5.e: f
The 5: ADFun object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const. After this call we will have
     
f.size_order()     == q + 1
     
f.size_direction() == r

5.3.5.f: q
This argument has prototype
     size_t 
q
It specifies the order of the Taylor coefficients that we are calculating and must be greater than zero. The zero order coefficients can only have one direction computed and stored in f , so use 5.3.1: forward_zero to compute the zero order coefficients.

5.3.5.g: r
This argument has prototype
     size_t 
r
It specifies the number of directions that are computed together. If ( r == 1 ), you are only using one direction and 5.3.4: forward_order is simpler, and should be faster, than this more general case.

5.3.5.h: xq
The argument xq has prototype
     const Vector& xq
and its size must be n*r (see 5.3.5.n: Vector below). For @(@ \ell = 0 , \ldots , r-1 @)@, @(@ j = 0 , \ldots , n-1 @)@, the j-th component of the q-th order Taylor coefficient for @(@ X_\ell (t) @)@ is defined by
     
@(@ x_j^{(q),\ell} = @)@ xq[ r * j + ell ]

5.3.5.i: Zero Order
For @(@ j = 0 , \ldots , n-1 @)@, the j-th component of the zero order Taylor coefficient for @(@ X_\ell (t) @)@ is defined by
     
@(@ x_j^{(0)} = @)@ xk[ j ] where xk corresponds to the previous call
     
f.Forward(k, xk)
with k = 0 .

5.3.5.j: Non-Zero Lower Orders
For @(@ \ell = 0 , \ldots , r-1 @)@, @(@ j = 0 , \ldots , n-1 @)@, @(@ k = 1, \ldots , q-1 @)@, the j-th component of the k-th order Taylor coefficient for @(@ X_\ell (t) @)@ is defined by
     
@(@ x_j^{(k),\ell} = @)@ xk[ r * j + ell ] where xk corresponds to the previous call
     
f.Forward(k, r, xk)
Note that r must have the same value in this previous call.

5.3.5.k: X(t)
For @(@ \ell = 0 , \ldots , r-1 @)@, the function @(@ X_\ell : B \rightarrow B^n @)@ is defined using the Taylor coefficients @(@ x^{(k),\ell} \in B^n @)@: @[@ X_\ell (t) = x^{(0)} + x^{(1),\ell} * t^1 + \cdots + x^{(q),\ell} * t^q @]@ Note that the k-th derivative of @(@ X_\ell (t) @)@ is related to its Taylor coefficients by @[@ \begin{array}{rcl} x^{(0)} & = & X_\ell (0) \\ x^{(k), \ell} & = & \frac{1}{k !} X_\ell^{(k)} (0) \end{array} @]@ for @(@ k = 1 , \ldots , q @)@.

5.3.5.l: Y(t)
For @(@ \ell = 0 , \ldots , r-1 @)@, the function @(@ Y_\ell : B \rightarrow B^m @)@ is defined by @(@ Y_\ell (t) = F[ X_\ell (t) ] @)@. We use @(@ y^{(0)} @)@ for the zero order coefficient and @(@ y^{(k),\ell} \in B^m @)@ to denote the higher order coefficients; i.e., @[@ Y_\ell (t) = y^{(0)} + y^{(1),\ell} * t^1 + \cdots + y^{(q),\ell} * t^q + o( t^q ) @]@ where @(@ o( t^q ) * t^{-q} \rightarrow 0 @)@ as @(@ t \rightarrow 0 @)@. Note that the k-th derivative of @(@ Y_\ell (t) @)@ is related to its Taylor coefficients by @[@ \begin{array}{rcl} y^{(0)} & = & Y_\ell (0) \\ y^{(k), \ell} & = & \frac{1}{k !} Y_\ell^{(k)} (0) \end{array} @]@ for @(@ k = 1 , \ldots , q @)@.

5.3.5.m: yq
The return value yq has prototype
     
Vector yq
and its size is m*r (see 5.3.5.n: Vector below). For @(@ \ell = 0 , \ldots , r-1 @)@, @(@ i = 0 , \ldots , m-1 @)@, the i-th component of the q-th order Taylor coefficient for @(@ Y_\ell (t) @)@ is given by
     
@(@ y_i^{(q),\ell} = @)@ yq[ r * i + ell ]
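As an indexing sketch (hypothetical names; it assumes yq was returned by f.Forward(q, r, xq) and that m and r are defined):
// Sketch: extract the q-th order coefficient for each direction ell
for(size_t ell = 0; ell < r; ell++)
{    for(size_t i = 0; i < m; i++)
     {    // i-th component of the q-th order coefficient for Y_ell (t)
          double y_i_q_ell = yq[ r * i + ell ];
     }
}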

5.3.5.n: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.3.5.o: Example
The file 5.3.5.1: forward_dir.cpp contains an example and test using multiple directions. It returns true if it succeeds and false otherwise.
Input File: omh/forward/forward_dir.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.3.5.1: Forward Mode: Example and Test of Multiple Directions
# include <limits>
# include <cppad/cppad.hpp>
bool forward_dir(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * std::numeric_limits<double>::epsilon();
     size_t j;

     // domain space vector
     size_t n = 3;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.;
     ax[1] = 1.;
     ax[2] = 2.;

     // declare independent variables and starting recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = ax[0] * ax[1] * ax[2];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // initially, the variable values during taping are stored in f
     ok &= f.size_order() == 1;

     // zero order Taylor coefficients
     CPPAD_TESTVECTOR(double) x0(n), y0;
     for(j = 0; j < n; j++)
          x0[j] = double(j+1);
     y0          = f.Forward(0, x0);
     ok         &= size_t( y0.size() ) == m;
     double y_0  = 1.*2.*3.;
     ok         &= NearEqual(y0[0], y_0, eps, eps);

     // first order Taylor coefficients
     size_t r = 2, ell;
     CPPAD_TESTVECTOR(double) x1(r*n), y1;
     for(ell = 0; ell < r; ell++)
     {     for(j = 0; j < n; j++)
               x1[ r * j + ell ] = double(j + 1 + ell);
     }
     y1  = f.Forward(1, r, x1);
     ok &= size_t( y1.size() ) == r*m;

     // second order Taylor coefficients
     CPPAD_TESTVECTOR(double) x2(r*n), y2;
     for(ell = 0; ell < r; ell++)
     {     for(j = 0; j < n; j++)
               x2[ r * j + ell ] = 0.0;
     }
     y2  = f.Forward(2, r, x2);
     ok &= size_t( y2.size() ) == r*m;
     //
     // Y_0 (t)     = F[X_0(t)]
     //             =  (1 + 1t)(2 + 2t)(3 + 3t)
     double y_1_0   = 1.*2.*3. + 2.*1.*3. + 3.*1.*2.;
     double y_2_0   = 1.*2.*3. + 2.*1.*3. + 3.*1.*2.;
     //
     // Y_1 (t)     = F[X_1(t)]
     //             =  (1 + 2t)(2 + 3t)(3 + 4t)
     double y_1_1   = 2.*2.*3. + 3.*1.*3. + 4.*1.*2.;
     double y_2_1   = 1.*3.*4. + 2.*2.*4. + 3.*2.*3.;
     //
     ok  &= NearEqual(y1[0] , y_1_0, eps, eps);
     ok  &= NearEqual(y1[1] , y_1_1, eps, eps);
     ok  &= NearEqual(y2[0] , y_2_0, eps, eps);
     ok  &= NearEqual(y2[1] , y_2_1, eps, eps);
     //
     // check number of orders
     ok   &= f.size_order() == 3;
     //
     // check number of directions
     ok   &= f.size_direction() == 2;
     //
     return ok;
}

Input File: example/general/forward_dir.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.3.6: Number Taylor Coefficient Orders Currently Stored

5.3.6.a: Syntax
s = f.size_order()

5.3.6.a.a: See Also
5.1.5: seq_property

5.3.6.b: Purpose
Determine the number of Taylor coefficient orders, per variable,direction, currently calculated and stored in the ADFun object f . See the discussion under 5.3.6.e: Constructor , 5.3.6.f: Forward , and 5.3.6.g: capacity_order for a description of when this value can change.

5.3.6.c: f
The object f has prototype
     const ADFun<Base> f

5.3.6.d: s
The result s has prototype
     size_t 
s
and is the number of Taylor coefficient orders, per variable,direction in the AD operation sequence, currently calculated and stored in the ADFun object f .

5.3.6.e: Constructor
Directly after the 5.1.2: FunConstruct syntax
     ADFun<Base> f(x, y)
the value of s returned by size_order is one. This is because there is an implicit call to Forward that computes the zero order Taylor coefficients during this constructor.

5.3.6.f: Forward
After a call to 5.3.4: Forward with the syntax
        
f.Forward(q, x_q)
the value of s returned by size_order would be @(@ q + 1 @)@. The call to Forward above uses the lower order Taylor coefficients to compute and store the q-th order Taylor coefficients for all the variables in the operation sequence corresponding to f . Thus there are @(@ q + 1 @)@ (order zero through q ) Taylor coefficients per variable,direction. (You can determine the number of variables in the operation sequence using the 5.1.5.g: size_var function.)
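For example (a sketch, assuming ax , ay , x0 , and x1 have been defined as in the examples above):
// Sketch: how the value returned by size_order evolves
CppAD::ADFun<double> f(ax, ay); // constructor runs zero order forward
// f.size_order() == 1
f.Forward(0, x0);               // recompute the zero order coefficients
// f.size_order() == 1
f.Forward(1, x1);               // compute the first order coefficients
// f.size_order() == 2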

5.3.6.g: capacity_order
If the number of Taylor coefficient orders currently stored in f is less than or equal to c , a call to 5.3.8: capacity_order with the syntax
     
f.capacity_order(c)
does not affect the value s returned by size_order. Otherwise, the value s returned by size_order is equal to c (only Taylor coefficients of order zero through @(@ c-1 @)@ have been retained).

5.3.6.h: Example
The file 5.3.4.1: forward.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: omh/forward/size_order.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.3.7: Comparison Changes Between Taping and Zero Order Forward

5.3.7.a: Syntax
f.compare_change_count(count)
number = f.compare_change_number()
op_index = f.compare_change_op_index()

See Also 5.9: FunCheck

5.3.7.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f ; i.e., given @(@ x \in B^n @)@, @(@ F(x) @)@ is defined by
     
F(x) = f.Forward(0, x)
see 5.3.1: forward_zero . If @(@ x @)@ is such that all the algorithm 4.5.1: comparison operations have the same result as when the algorithm was taped, the function @(@ F(x) @)@ and the algorithm will have the same values. (This is a sufficient, but not necessary, condition.)

5.3.7.c: f
In the compare_change_number and compare_change_op_index syntax, the object f has prototype
     const ADFun<Base> f
In the compare_change_count syntax, the object f has prototype
     ADFun<Base> f

5.3.7.d: count
The argument count has prototype
     size_t 
count
It specifies which comparison change should correspond to the information stored in f during subsequent calls to 5.3.1: forward_zero ; i.e.,
     
f.Forward(0, x)
For example, if count == 1 , the operator index corresponding to the first comparison change will be stored. This is the default value used if count is not specified.

5.3.7.d.a: Speed
The special case where count == 0 should be faster because the comparisons are not checked during
     
f.Forward(0, x)

5.3.7.e: number
The return value number has prototype
     size_t 
number
If count is non-zero, number is the number of AD<Base> 4.5.1: comparison operations, corresponding to the previous call to
     
f.Forward(0, x)
that have a different result for this value of x than the value used when f was created by taping an algorithm. If count is zero, or if no calls to f.Forward(0, x) follow the previous setting of count , number is zero.

5.3.7.e.a: Discussion
If count and number are non-zero, you may want to re-tape the algorithm with the 12.4.k.c: independent variables equal to the values in x , so that the AD operation sequence properly represents the algorithm for this value of the independent variables. On the other hand, re-taping the AD operation sequence usually takes significantly more time than evaluation using 5.3.1: forward_zero . If the function values have not changed (see 5.9: FunCheck ) it may not be worth re-taping a new AD operation sequence.

5.3.7.f: op_index
The return value op_index has prototype
     size_t 
op_index
If count is non-zero, op_index is the operator index corresponding to the count -th comparison change during the previous call to
     
f.Forward(0, x)
If count is greater than the corresponding number , there is no such comparison change and op_index will also be zero. If count is zero, if the function f has been 5.7: optimized , or if no calls to f.Forward(0, x) follow the previous setting of count , op_index is zero.

5.3.7.f.a: Purpose
The operator index can be used to generate an error during the taping process so that the corresponding algorithm can be inspected. In some cases, it is possible to re-design this part of the algorithm to avoid the particular comparison operation. For example, using a 4.4.4: conditional expression may be appropriate in some cases. See 5.1.1.f: abort_op_index in the syntax
     Independent(x, abort_op_index)
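A minimal sketch of this workflow (hypothetical names; it assumes f , ax , and x are defined, a replacement error handler is installed, and NDEBUG is not defined):
// Sketch: locate the first comparison change, then abort there while re-taping
f.compare_change_count(1);                // record the first change (the default)
CppAD::vector<double> y = f.Forward(0, x);
if( f.compare_change_number() > 0 )
{    size_t op_index = f.compare_change_op_index();
     // re-taping calls the error handler when this operator index is reached
     CppAD::Independent(ax, op_index);
     // ... repeat the algorithm that computes ay from ax ...
}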

5.3.7.g: Example
5.3.7.1: compare_change.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: omh/forward/compare_change.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.3.7.1: CompareChange and Re-Tape: Example and Test

# include <cppad/cppad.hpp>

namespace { // put this function in the empty namespace
     template <typename Type>
     Type Minimum(const Type &x, const Type &y)
     {     // Use a comparison to compute the min(x, y)
          // (note that CondExp would never require retaping).
          if( x < y )
               return x;
          return y;
     }
     struct error_info {
          bool known;
          int  line;
          std::string file;
          std::string exp;
          std::string msg;
     };
     void error_handler(
          bool        known       ,
          int         line        ,
          const char *file        ,
          const char *exp         ,
          const char *msg         )
     {     // error handler must not return, so throw an exception
          error_info info;
          info.known = known;
          info.line  = line;
          info.file  = file;
          info.exp   = exp;
          info.msg   = msg;
          throw info;
     }

}

bool compare_change(void)
{     bool ok = true;
     using CppAD::AD;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 3.;
     ax[1] = 4.;

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = Minimum(ax[0], ax[1]);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // set count to one (not necessary because it is the default value)
     f.compare_change_count(1);

     // evaluate zero order Forward where the comparison has the same result
     // as during taping; i.e., x[0] < x[1].
     CPPAD_TESTVECTOR(double) x(n), y(m);
     x[0] = 2.;
     x[1] = 3.;
     y    = f.Forward(0, x);
     ok  &= (y[0] == x[0]);
     ok  &= (y[0] == Minimum(x[0], x[1]));
     ok  &= (f.compare_change_number() == 0);
     ok  &= (f.compare_change_op_index() == 0);

     // evaluate zero order Forward where the comparison has a different
     // result than during taping; i.e., x[0] >= x[1].
     x[0] = 3.;
     x[1] = 2.;
     y    = f.Forward(0, x);
     ok  &= (y[0] == x[0]);
     ok  &= (y[0] != Minimum(x[0], x[1]));
     ok  &= (f.compare_change_number() == 1);
     ok  &= (f.compare_change_op_index() > 0 );
     size_t op_index = f.compare_change_op_index();

     // Local block during which default CppAD error handler is replaced.
     // If you do not replace the default CppAD error handler,
     // and you run in the debugger, you will be able to inspect the
     // call stack and see that 'if( x < y )' is where the comparison is.
     bool missed_error = true;
     {     CppAD::ErrorHandler local_error_handler(error_handler);

          std::string check_msg =
               "Operator index equals abort_op_index in Independent";
          try {
               // determine the operation index where the change occurred
               CppAD::Independent(ax, op_index);
               ay[0] = Minimum(ax[0], ax[1]);
# ifdef NDEBUG
               // CppAD does not spend time checking operator index when
               // NDEBUG is defined
               missed_error = false;
               AD<double>::abort_recording();
# endif
          }
          catch( error_info info )
          {     missed_error = false;
               ok          &= info.known;
               ok          &= info.msg == check_msg;
               // Must abort the recording so we can start a new one
               // (and to avoid a memory leak).
               AD<double>::abort_recording();
          }
     }
# ifdef CPPAD_DEBUG_AND_RELEASE
     if( missed_error )
     {     // This routine is compiled for debugging, but the routine that checks
          // operator indices was compiled for release.
          missed_error = false;
          AD<double>::abort_recording();
     }
# endif
     ok &= ! missed_error;

     // set count to zero to demonstrate case where comparisons are not checked
     f.compare_change_count(0);
     y    = f.Forward(0, x);
     ok  &= (y[0] == x[0]);
     ok  &= (y[0] != Minimum(x[0], x[1]));
     ok  &= (f.compare_change_number()   == 0);
     ok  &= (f.compare_change_op_index() == 0);

     // now demonstrate that compare_change_number works for an optimized
     // tape (note that compare_change_op_index is always zero after optimize)
     f.optimize();
     f.compare_change_count(1);
     y    = f.Forward(0, x);
     ok  &= (y[0] == x[0]);
     ok  &= (y[0] != Minimum(x[0], x[1]));
     ok  &= (f.compare_change_number()   == 1);
     ok  &= (f.compare_change_op_index() == 0);

     // now retape to get a tape that agrees with the algorithm
     ax[0] = x[0];
     ax[1] = x[1];
     Independent(ax);
     ay[0] = Minimum(ax[0], ax[1]);
     f.Dependent(ax, ay);
     y    = f.Forward(0, x);
     ok  &= (y[0] == x[1]);
     ok  &= (y[0] == Minimum(x[0], x[1]));
     ok  &= (f.compare_change_number()   == 0);
     ok  &= (f.compare_change_op_index() == 0);

     return ok;
}


Input File: example/general/compare_change.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.3.8: Controlling Taylor Coefficients Memory Allocation

5.3.8.a: Syntax
f.capacity_order(c)

5.3.8.a.a: See Also
5.1.5: seq_property

5.3.8.b: Purpose
The Taylor coefficients calculated by 5.3: Forward mode calculations are retained in an 5: ADFun object for subsequent use during 5.4: Reverse mode and higher order Forward mode calculations. For example, a call to 5.3.4: Forward with the syntax
        
yq = f.Forward(qxq)
where q > 0 and xq.size() == f.Domain() , uses the lower order Taylor coefficients and computes the q-th order Taylor coefficients for all the variables in the operation sequence corresponding to f . The capacity_order operation allows you to control the amount of memory that is retained by an AD function object (to hold Forward results for subsequent calculations).

5.3.8.c: f
The object f has prototype
     ADFun<Base> f

5.3.8.d: c
The argument c has prototype
     size_t 
c
It specifies the number of Taylor coefficient orders that are allocated in the AD operation sequence corresponding to f .

5.3.8.d.a: Pre-Allocating Memory
If you plan to make calls to Forward with the maximum value of q equal to Q , it should be faster to pre-allocate memory for these calls using
     
f.capacity_order(c)
with c equal to @(@ Q + 1 @)@. If you do not do this, Forward will automatically allocate memory and will copy the results to a larger buffer when necessary.

Note that each call to 5.1.3: Dependent frees the old memory connected to the function object and sets the corresponding Taylor capacity to zero.

5.3.8.d.b: Freeing Memory
If you no longer need the Taylor coefficients of order q and higher (that are stored in f ), you can reduce the memory allocated to f using
     
f.capacity_order(c)
with c equal to q . Note that, if 8.23.9: ta_hold_memory is true, this memory is not actually returned to the system, but rather held for future use by the same thread.
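A short sketch combining both uses (assuming Q is the highest order you plan to compute and x0 , x1 are defined):
// Sketch: pre-allocate, use, then free Taylor coefficient memory
f.capacity_order(Q + 1);   // room for orders 0 , ... , Q
f.Forward(0, x0);          // these calls no longer need to re-allocate
f.Forward(1, x1);
// ... forward calls up to order Q ...
f.capacity_order(1);       // keep only the zero order coefficients
f.capacity_order(0);       // or free all the Taylor coefficient memory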

5.3.8.e: Original State
If f is 5.1.2: constructed with the syntax
     ADFun<Base> f(x, y)
there is an implicit call to 5.3.1: forward_zero with xq equal to the value of the 12.4.k.c: independent variables when the AD operation sequence was recorded. This corresponds to c == 1 .

5.3.8.f: Example
The file 5.3.8.1: capacity_order.cpp contains an example and test of these operations. It returns true if it succeeds and false otherwise.
Input File: cppad/core/capacity_order.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.3.8.1: Controlling Taylor Coefficient Memory Allocation: Example and Test
# include <cppad/cppad.hpp>

namespace {
     bool test(void)
     {     bool ok = true;
          using CppAD::AD;
          using CppAD::NearEqual;
          using CppAD::thread_alloc;

          // domain space vector
          size_t n(1), m(1);
          CPPAD_TESTVECTOR(AD<double>) ax(n), ay(m);

          // declare independent variables and start tape recording
          ax[0]  = 1.0;
          CppAD::Independent(ax);

          // Set y = x^3; use enough variables so that more than the minimal
          // amount of memory is allocated for Taylor coefficients
          ay[0] = 0.;
          for( size_t i = 0; i < 10; i++)
               ay[0] += ax[0] * ax[0] * ax[0];
          ay[0] = ay[0] / 10.;

          // create f: x -> y and stop tape recording
          // (without running zero order forward mode).
          CppAD::ADFun<double> f;
          f.Dependent(ax, ay);

          // check that this is master thread
          size_t thread = thread_alloc::thread_num();
          ok           &= thread == 0; // this should be master thread

          // The highest order forward mode calculation below is first order.
          // This corresponds to two Taylor coefficients per variable,direction
          // (orders zero and one). Preallocate memory for speed.
          size_t inuse  = thread_alloc::inuse(thread);
          f.capacity_order(2);
          ok &= thread_alloc::inuse(thread) > inuse;

          // zero order forward mode
          CPPAD_TESTVECTOR(double) x(n), y(m);
          x[0] = 0.5;
          y    = f.Forward(0, x);
          double eps = 10. * CppAD::numeric_limits<double>::epsilon();
          ok  &= NearEqual(y[0], x[0] * x[0] * x[0], eps, eps);

          // forward computation of partials w.r.t. x
          CPPAD_TESTVECTOR(double) dx(n), dy(m);
          dx[0] = 1.;
          dy    = f.Forward(1, dx);
          ok   &= NearEqual(dy[0], 3. * x[0] * x[0], eps, eps);

          // Suppose we no longer need the first order Taylor coefficients.
          inuse = thread_alloc::inuse(thread);
          f.capacity_order(1); // just keep zero order coefficients
          ok   &= thread_alloc::inuse(thread) < inuse;

          // Suppose we no longer need the zero order Taylor coefficients
          // (could have done this first and not used f.capacity_order(1)).
          inuse = thread_alloc::inuse(thread);
          f.capacity_order(0);
          ok   &= thread_alloc::inuse(thread) < inuse;

          // turn off memory holding
          thread_alloc::hold_memory(false);

          return ok;
     }
}
bool capacity_order(void)
{     bool ok = true;
     using CppAD::thread_alloc;

     // original amount of memory inuse
     size_t thread = thread_alloc::thread_num();
     ok           &= thread == 0; // this should be master thread
     size_t inuse  = thread_alloc::inuse(thread);

     // do test in separate routine so all objects are destroyed
     ok &= test();

     // check that the amount of memory inuse has not changed
     ok &= thread_alloc::inuse(thread) == inuse;

     // Test above uses hold_memory, so return available memory
     thread_alloc::free_available(thread);

     return ok;
}

Input File: example/general/capacity_order.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.3.9: Number of Variables that Can be Skipped

5.3.9.a: Syntax
n = f.number_skip()

5.3.9.a.a: See Also
5.1.5: seq_property

5.3.9.b: Purpose
The 4.4.4: conditional expressions use either the 4.4.4: if_true or 4.4.4: if_false . Hence, some terms only need to be evaluated depending on the value of the comparison in the conditional expression. The 5.7: optimize option is capable of detecting some of these cases and determining variables that can be skipped. This routine returns the number of such variables.

5.3.9.c: n
The return value n has type size_t and is the number of variables that the optimizer has determined can be skipped (given the independent variable values specified by the previous call to 5.3: f.Forward for order zero).

5.3.9.d: f
The object f has prototype
     ADFun<Base> f

5.3.9.e: Example
The file 5.3.9.1: number_skip.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/num_skip.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.3.9.1: Number of Variables That Can be Skipped: Example and Test
# include <cppad/cppad.hpp>
bool number_skip(void)
{     bool ok = true;
     using CppAD::AD;

     // independent variable vector
     CppAD::vector< AD<double> > ax(2);
     ax[0] = 0.;
     ax[1] = 1.;
     Independent(ax);

     // Use a conditional expression
     CppAD::vector< AD<double> > ay(1);

     // variable that gets optimized out
     AD<double> az = ax[0] * ax[0];


     // conditional expression
     ay[0] = CondExpLt(ax[0], ax[1], ax[0] + ax[1], ax[0] - ax[1]);

     // create function object F : x -> ay
     CppAD::ADFun<double> f;
     f.Dependent(ax, ay);

     // use zero order to evaluate F[ (3, 4) ]
     CppAD::vector<double>  x( f.Domain() );
     CppAD::vector<double>  y( f.Range() );
     x[0]    = 3.;
     x[1]    = 4.;
     y   = f.Forward(0, x);
     ok &= (y[0] == x[0] + x[1]);

     // before call to optimize
     ok &= f.number_skip() == 0;
     size_t n_var = f.size_var();

     // now optimize the operation sequence
     f.optimize();

     // after optimize, check forward mode result
     x[0]    = 4.;
     x[1]    = 3.;
     y   = f.Forward(0, x);
     ok &= (y[0] == x[0] - x[1]);

     // after optimize, check amount of optimization
     ok &= f.size_var() == n_var - 1;
     ok &= f.number_skip() == 1;

     return ok;
}

Input File: example/general/number_skip.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.4: Reverse Mode

5.4.a: Multiple Directions
Reverse mode after 5.3.5: Forward(q, r, xq) with number of directions r != 1 is not yet supported. There is one exception: 5.4.1: reverse_one is allowed because there is only one zero order forward direction. After such an operation, only the zero order forward results are retained (the higher order forward results are lost).

5.4.b: Contents
reverse_one: 5.4.1 First Order Reverse Mode
reverse_two: 5.4.2 Second Order Reverse Mode
reverse_any: 5.4.3 Any Order Reverse Mode
subgraph_reverse: 5.4.4 Reverse Mode Using Subgraphs

Input File: omh/adfun.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.4.1: First Order Reverse Mode

5.4.1.a: Syntax
dw = f.Reverse(1, w)

5.4.1.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . The function @(@ W : B^n \rightarrow B @)@ is defined by @[@ W(x) = w_0 * F_0 ( x ) + \cdots + w_{m-1} * F_{m-1} (x) @]@ The result of this operation is the derivative @(@ dw = W^{(1)} (x) @)@; i.e., @[@ dw = w_0 * F_0^{(1)} ( x ) + \cdots + w_{m-1} * F_{m-1}^{(1)} (x) @]@ Note that if @(@ w @)@ is the i-th 12.4.f: elementary vector , @(@ dw = F_i^{(1)} (x) @)@.
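For example, when m == 1 , setting w[0] = 1 makes dw the gradient of F (a sketch, assuming f and n are defined):
// Sketch: gradient of a scalar valued function using first order reverse
CppAD::vector<double> w(1), dw(n);
w[0] = 1.0;              // weight for the single range component
dw   = f.Reverse(1, w);  // dw[j] = partial F / partial x_j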

5.4.1.c: f
The object f has prototype
     const ADFun<Base> f
Before this call to Reverse, the value returned by
     
f.size_order()
must be greater than or equal to one (see 5.3.6: size_order ).

5.4.1.d: x
The vector x in the expression for dw above corresponds to the previous call to 5.3.1: forward_zero using this ADFun object f ; i.e.,
     
f.Forward(0, x)
If there is no previous call with the first argument zero, the value of the 5.1.1: independent variables during the recording of the AD sequence of operations is used for x .

5.4.1.e: w
The argument w has prototype
     const 
Vector &w
(see 5.4.1.g: Vector below) and its size must be equal to m , the dimension of the 5.1.5.e: range space for f .

5.4.1.f: dw
The result dw has prototype
     
Vector dw
(see 5.4.1.g: Vector below) and its value is the derivative @(@ W^{(1)} (x) @)@. The size of dw is equal to n , the dimension of the 5.1.5.d: domain space for f .

5.4.1.g: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.4.1.h: Example
The file 5.4.1.1: reverse_one.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: omh/reverse/reverse_one.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.4.1.1: First Order Reverse Mode: Example and Test
# include <cppad/cppad.hpp>
namespace { // ----------------------------------------------------------
// define the template function reverse_one_cases<Vector> in empty namespace
template <typename Vector>
bool reverse_one_cases(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.;
     ax[1] = 1.;

     // declare independent variables and start recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = ax[0] * ax[0] * ax[1];

     // create f : x -> y and stop recording
     CppAD::ADFun<double> f(ax, ay);

     // use first order reverse mode to evaluate derivative of y[0]
     // and use the values in ax for the independent variables.
     CPPAD_TESTVECTOR(double) w(m), dw(n);
     w[0] = 1.;
     dw   = f.Reverse(1, w);
     ok  &= NearEqual(dw[0] , 2.*ax[0]*ax[1], eps99, eps99);
     ok  &= NearEqual(dw[1] ,    ax[0]*ax[0], eps99, eps99);

     // use zero order forward mode to evaluate y at x = (3, 4)
     // and use the template parameter Vector for the vector type
     Vector x(n), y(m);
     x[0]    = 3.;
     x[1]    = 4.;
     y       = f.Forward(0, x);
     ok     &= NearEqual(y[0] , x[0]*x[0]*x[1], eps99, eps99);

     // use first order reverse mode to evaluate derivative of y[0]
     // and use the values in x for the independent variables.
     w[0] = 1.;
     dw   = f.Reverse(1, w);
     ok  &= NearEqual(dw[0] , 2.*x[0]*x[1], eps99, eps99);
     ok  &= NearEqual(dw[1] ,    x[0]*x[0], eps99, eps99);

     return ok;
}
} // End empty namespace
# include <vector>
# include <valarray>
bool reverse_one(void)
{     bool ok = true;
     // Run with Vector equal to three different cases
     // all of which are Simple Vectors with elements of type double.
     ok &= reverse_one_cases< CppAD::vector  <double> >();
     ok &= reverse_one_cases< std::vector    <double> >();
     ok &= reverse_one_cases< std::valarray  <double> >();
     return ok;
}

Input File: example/general/reverse_one.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.4.2: Second Order Reverse Mode

5.4.2.a: Syntax
dw = f.Reverse(2, w)

5.4.2.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . Reverse mode computes the derivative of the 5.3: Forward mode 12.4.l: Taylor coefficients with respect to the domain variable @(@ x @)@.

5.4.2.c: x^(k)
For @(@ k = 0, 1 @)@, the vector @(@ x^{(k)} \in B^n @)@ is defined as the value of x_k in the previous call (counting this call) of the form
     
f.Forward(k, x_k)
If there is no previous call with @(@ k = 0 @)@, @(@ x^{(0)} @)@ is the value of the independent variables when the corresponding AD of Base 12.4.g.b: operation sequence was recorded.

5.4.2.d: W
The functions @(@ W_0 : B^n \rightarrow B @)@ and @(@ W_1 : B^n \rightarrow B @)@ are defined by @[@ \begin{array}{rcl} W_0 ( u ) & = & w_0 * F_0 ( u ) + \cdots + w_{m-1} * F_{m-1} (u) \\ W_1 ( u ) & = & w_0 * F_0^{(1)} ( u ) * x^{(1)} + \cdots + w_{m-1} * F_{m-1}^{(1)} (u) * x^{(1)} \end{array} @]@ This operation computes the derivatives @[@ \begin{array}{rcl} W_0^{(1)} (u) & = & w_0 * F_0^{(1)} ( u ) + \cdots + w_{m-1} * F_{m-1}^{(1)} (u) \\ W_1^{(1)} (u) & = & w_0 * \left( x^{(1)} \right)^\R{T} * F_0^{(2)} ( u ) + \cdots + w_{m-1} * \left( x^{(1)} \right)^\R{T} F_{m-1}^{(2)} (u) \end{array} @]@ at @(@ u = x^{(0)} @)@.

5.4.2.e: f
The object f has prototype
     const ADFun<Base> f
Before this call to Reverse, the value returned by
     
f.size_order()
must be greater than or equal to two (see 5.3.6: size_order ).

5.4.2.f: w
The argument w has prototype
     const 
Vector &w
(see 5.4.2.h: Vector below) and its size must be equal to m , the dimension of the 5.1.5.e: range space for f .

5.4.2.g: dw
The result dw has prototype
     
Vector dw
(see 5.4.2.h: Vector below). It contains both the derivative @(@ W_0^{(1)} (x) @)@ and the derivative @(@ W_1^{(1)} (x) @)@. The size of dw is equal to @(@ n \times 2 @)@, where @(@ n @)@ is the dimension of the 5.1.5.d: domain space for f .

5.4.2.g.a: First Order Partials
For @(@ j = 0 , \ldots , n - 1 @)@, @[@ dw [ j * 2 + 0 ] = \D{ W_0 }{ u_j } \left( x^{(0)} \right) = w_0 * \D{ F_0 }{ u_j } \left( x^{(0)} \right) + \cdots + w_{m-1} * \D{ F_{m-1} }{ u_j } \left( x^{(0)} \right) @]@ This part of dw contains the same values as are returned by 5.4.1: reverse_one .

5.4.2.g.b: Second Order Partials
For @(@ j = 0 , \ldots , n - 1 @)@, @[@ dw [ j * 2 + 1 ] = \D{ W_1 }{ u_j } \left( x^{(0)} \right) = \sum_{\ell=0}^{n-1} x_\ell^{(1)} \left[ w_0 * \DD{ F_0 }{ u_\ell }{ u_j } \left( x^{(0)} \right) + \cdots + w_{m-1} * \DD{ F_{m-1} }{ u_\ell }{ u_j } \left( x^{(0)} \right) \right] @]@

5.4.2.h: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.4.2.i: Hessian Times Direction
Suppose that @(@ w @)@ is the i-th elementary vector. It follows that for @(@ j = 0, \ldots, n-1 @)@ @[@ \begin{array}{rcl} dw[ j * 2 + 1 ] & = & w_i \sum_{\ell=0}^{n-1} \DD{F_i}{ u_j }{ u_\ell } \left( x^{(0)} \right) x_\ell^{(1)} \\ & = & \left[ F_i^{(2)} \left( x^{(0)} \right) * x^{(1)} \right]_j \end{array} @]@ Thus the vector @(@ ( dw[1], dw[3], \ldots , dw[ 2 * n - 1 ] ) @)@ is equal to the Hessian of @(@ F_i (x) @)@ times the direction @(@ x^{(1)} @)@. In the special case where @(@ x^{(1)} @)@ is the l-th 12.4.f: elementary vector , @[@ dw[ j * 2 + 1 ] = \DD{ F_i }{ x_j }{ x_\ell } \left( x^{(0)} \right) @]@

5.4.2.j: Example
The files 5.4.2.1: reverse_two.cpp and 5.4.2.2: hes_times_dir.cpp contain examples and tests of reverse mode calculations. They return true if they succeed and false otherwise.
Input File: omh/reverse/reverse_two.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.4.2.1: Second Order Reverse Mode: Example and Test
# include <cppad/cppad.hpp>
namespace { // ----------------------------------------------------------
// define the template function reverse_two_cases<Vector> in empty namespace
template <typename Vector>
bool reverse_two_cases(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) X(n);
     X[0] = 0.;
     X[1] = 1.;

     // declare independent variables and start recording
     CppAD::Independent(X);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) Y(m);
     Y[0] = X[0] * X[0] * X[1];

     // create f : X -> Y and stop recording
     CppAD::ADFun<double> f(X, Y);

     // use zero order forward mode to evaluate y at x = (3, 4)
     // use the template parameter Vector for the vector type
     Vector x(n), y(m);
     x[0]  = 3.;
     x[1]  = 4.;
     y     = f.Forward(0, x);
     ok    &= NearEqual(y[0] , x[0]*x[0]*x[1], eps99, eps99);

     // use first order forward mode with direction dx = (1, 1)
     // (the second order results below contract the Hessian against dx)
     Vector dx(n), dy(m);
     dx[0] = 1.;
     dx[1] = 1.;
     dy    = f.Forward(1, dx);
     double check = 2.*x[0]*x[1]*dx[0] + x[0]*x[0]*dx[1];
     ok   &= NearEqual(dy[0], check, eps99, eps99);

     // use second order reverse mode to evaluate second partials of y[0]
     // with respect to (x[0], x[0]) and with respect to (x[0], x[1])
     Vector w(m), dw( n * 2 );
     w[0]  = 1.;
     dw    = f.Reverse(2, w);

     // check derivative of f
     ok   &= NearEqual(dw[0*2+0] , 2.*x[0]*x[1], eps99, eps99);
     ok   &= NearEqual(dw[1*2+0] ,    x[0]*x[0], eps99, eps99);

     // check derivative of f^{(1)} (x) * dx
     check = 2.*x[1]*dx[0] + 2.*x[0]*dx[1];
     ok   &= NearEqual(dw[0*2+1] , check, eps99, eps99);
     check = 2.*x[0]*dx[0];
     ok   &= NearEqual(dw[1*2+1] , check, eps99, eps99);

     return ok;
}
} // End empty namespace
# include <vector>
# include <valarray>
bool reverse_two(void)
{     bool ok = true;
     ok &= reverse_two_cases< CppAD::vector  <double> >();
     ok &= reverse_two_cases< std::vector    <double> >();
     ok &= reverse_two_cases< std::valarray  <double> >();
     return ok;
}

Input File: example/general/reverse_two.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.4.2.2: Hessian Times Direction: Example and Test
// Example and test of computing the Hessian times a direction; i.e.,
// given F : R^n -> R and a direction dx in R^n, we compute F''(x) * dx

# include <cppad/cppad.hpp>

namespace { // put this function in the empty namespace
     // F(x) = |x|^2 = x[0]^2 + ... + x[n-1]^2
     template <class Type>
     Type F(CPPAD_TESTVECTOR(Type) &x)
     {     Type sum = 0;
          size_t i = x.size();
          while(i--)
               sum += x[i] * x[i];
          return sum;
     }
}

bool HesTimesDir(void)
{     bool ok = true;                   // initialize test result
     size_t j;                         // a domain variable index

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n = 5;
     CPPAD_TESTVECTOR(AD<double>)  X(n);
     for(j = 0; j < n; j++)
          X[j] = AD<double>(j);

     // declare independent variables and start recording
     CppAD::Independent(X);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) Y(m);
     Y[0] = F(X);

     // create f : X -> Y and stop recording
     CppAD::ADFun<double> f(X, Y);

     // choose a direction dx and compute dy(x) = F'(x) * dx
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     for(j = 0; j < n; j++)
          dx[j] = double(n - j);
     dy = f.Forward(1, dx);

     // compute ddw = F''(x) * dx
     CPPAD_TESTVECTOR(double) w(m);
     CPPAD_TESTVECTOR(double) ddw(2 * n);
     w[0] = 1.;
     ddw  = f.Reverse(2, w);

     // F(x)        = x[0]^2 + x[1]^2 + ... + x[n-1]^2
     // F''(x)      = 2 * Identity_Matrix
     // F''(x) * dx = 2 * dx
     for(j = 0; j < n; j++)
          ok &= NearEqual(ddw[j * 2 + 1], 2.*dx[j], eps99, eps99);

     return ok;
}

Input File: example/general/hes_times_dir.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.4.3: Any Order Reverse Mode

5.4.3.a: Syntax
dw = f.Reverse(q, w)

5.4.3.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . Reverse mode computes the derivative of the 5.3: Forward mode 12.4.l: Taylor coefficients with respect to the domain variable @(@ x @)@. To be specific, it computes the derivative @(@ W^{(1)} (u) @)@ at @(@ u = x @)@ which is specified by the following notation:

5.4.3.c: Notation

5.4.3.c.a: u^(k)
For @(@ k = 0, \ldots , q-1 @)@, the vector @(@ u^{(k)} \in B^n @)@ is defined as the value of x_k in the previous calls of the form
     
f.Forward(k, x_k)
If there is no previous call with @(@ k = 0 @)@, @(@ u^{(0)} @)@ is the value of the independent variables when the corresponding AD of Base 12.4.g.b: operation sequence was recorded.

5.4.3.c.b: X(t, u)
The function @(@ X : B \times B^{n \times q} \rightarrow B^n @)@ is defined by @[@ X ( t , u ) = u^{(0)} + u^{(1)} * t + \cdots + u^{(q-1)} * t^{q-1} @]@ Note that for @(@ k = 0 , \ldots , q-1 @)@, @(@ u^{(k)} @)@ is related to the k-th partial of @(@ X(t, u) @)@ with respect to @(@ t @)@ by @[@ u^{(k)} = \frac{1}{k !} \Dpow{k}{t} X(0, u) @]@

5.4.3.c.c: Y(t, u)
The function @(@ Y : B \times B^{n \times q} \rightarrow B^m @)@ is defined by @[@ Y(t, u) = F [ X(t,u) ] @]@

5.4.3.c.d: w^(k)
If the argument w has size m * q , for @(@ k = 0 , \ldots , q-1 @)@ and @(@ i = 0, \ldots , m-1 @)@, @[@ w_i^{(k)} = w [ i * q + k ] @]@ If the argument w has size m , for @(@ k = 0 , \ldots , q-1 @)@ and @(@ i = 0, \ldots , m-1 @)@, @[@ w_i^{(k)} = \left\{ \begin{array}{ll} w [ i ] & {\rm if} \; k = q-1 \\ 0 & {\rm otherwise} \end{array} \right. @]@

5.4.3.c.e: W(u)
The function @(@ W : B^{n \times q} \rightarrow B @)@ is defined by @[@ W(u) = \sum_{k=0}^{q-1} ( w^{(k)} )^\R{T} \frac{1}{k !} \Dpow{k}{t} Y(0, u) @]@

5.4.3.d: f
The object f has prototype
     const ADFun<Base> f
Before this call to Reverse, the value returned by
     
f.size_order()
must be greater than or equal to q (see 5.3.6: size_order ).

5.4.3.e: q
The argument q has prototype
     size_t 
q
and specifies the number of Taylor coefficient orders to be differentiated (for each variable).

5.4.3.f: w
The argument w has prototype
     const 
Vector &w
(see 5.4.3.j: Vector below) and its size must be equal to m or m * q . It specifies the weighting vector w in the definition of 5.4.3.c.e: W(u) .

5.4.3.g: dw
The return value dw has prototype
     
Vector dw
(see 5.4.3.j: Vector below). It is a vector with size @(@ n \times q @)@. For @(@ j = 0, \ldots, n-1 @)@ and @(@ k = 0 , \ldots , q-1 @)@: if the argument w has size m * q , @[@ dw[ j * q + k ] = W^{(1)} ( x )_{j,k} @]@ where @(@ u = x @)@ is the value of the Taylor coefficients at which the derivative is evaluated.

If the argument w has size m , @[@ dw[ j * q + q - k - 1 ] = W^{(1)} ( x )_{j,k} @]@ where @(@ u = x @)@ is the value of the Taylor coefficients at which the derivative is evaluated. Note the reverse order in which the order indices are stored. This is an unfortunate consequence of keeping Reverse backward compatible.
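The following indexing sketch (hypothetical names; it assumes n , q , m , and the vectors w and dw are defined) reads dw in both cases:
// Sketch: extracting the partial of W with respect to x_j^{(k)} from dw
for(size_t j = 0; j < n; j++)
{    for(size_t k = 0; k < q; k++)
     {    double d;
          if( size_t( w.size() ) == m * q )
               d = dw[ j * q + k ];             // usual storage order
          else // w.size() == m
               d = dw[ j * q + (q - k - 1) ];   // reversed storage order
          // d = W^{(1)} ( x )_{j,k}
     }
}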

5.4.3.h: First Order
We consider the case where q = 1 and w.size() == m . In this case @[@ \begin{array}{rcl} W(u) & = & w_0 Y_0 (0, u) + \cdots + w_{m-1} Y_{m-1} (0, u) \\ W(u) & = & w_0 F_0 [ X(0, u) ] + \cdots + w_{m-1} F_{m-1} [ X(0, u) ] \\ W^{(1)} (x) & = & w_0 F_0^{(1)} ( x^{(0)} ) + \cdots + w_{m-1} F_{m-1}^{(1)} ( x^{(0)} ) \end{array} @]@ This is the same as the result documented in 5.4.1: reverse_one .

5.4.3.i: Second Order
We consider the case where q = 2 and w.size() == m . In this case @[@ \begin{array}{rcl} W(u) & = & w_0 \partial_t Y_0 (0, u) + \cdots + w_{m-1} \partial_t Y_{m-1} (0, u) \\ W(u) & = & w_0 \partial_t \{ F_0 [ X(t, u) ] \}_{t = 0} + \cdots + w_{m-1} \partial_t \{ F_{m-1} [ X(t, u) ] \}_{t = 0} \\ W(u) & = & w_0 F_0^{(1)} ( u^{(0)} ) u^{(1)} + \cdots + w_{m-1} F_{m-1}^{(1)} ( u^{(0)} ) u^{(1)} \\ \partial_{u(0)} W(x) & = & w_0 ( x^{(1)} )^\R{T} F_0^{(2)} ( x^{(0)} ) + \cdots + w_{m-1} ( x^{(1)} )^\R{T} F_{m-1}^{(2)} ( x^{(0)} ) \\ \partial_{u(1)} W(x) & = & w_0 F_0^{(1)} ( x^{(0)} ) + \cdots + w_{m-1} F_{m-1}^{(1)} ( x^{(0)} ) \end{array} @]@ where @(@ \partial_{u(0)} @)@ denotes the partial with respect to @(@ u^{(0)} @)@. These are the same as the results documented in 5.4.2: reverse_two .

5.4.3.j: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.4.3.k: Example
  1. The file 5.4.3.1: reverse_three.cpp contains an example and test of using reverse mode to compute third order derivatives.
  2. The file 5.4.3.2: reverse_checkpoint.cpp contains an example and test of the general reverse mode case.

Input File: omh/reverse/reverse_any.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.4.3.1: Third Order Reverse Mode: Example and Test

5.4.3.1.a: Taylor Coefficients
@[@ \begin{array}{rcl} X(t) & = & x^{(0)} + x^{(1)} t + x^{(2)} t^2 \\ X^{(1)} (t) & = & x^{(1)} + 2 x^{(2)} t \\ X^{(2)} (t) & = & 2 x^{(2)} \end{array} @]@Thus, we need to be careful to properly account for the fact that @(@ X^{(2)} (0) = 2 x^{(2)} @)@ (and similarly @(@ Y^{(2)} (0) = 2 y^{(2)} @)@).
# include <cppad/cppad.hpp>
namespace { // ----------------------------------------------------------
// define the template function cases<Vector> in empty namespace
template <typename Vector>
bool cases(void)
{     bool ok    = true;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();
     using CppAD::AD;
     using CppAD::NearEqual;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) X(n);
     X[0] = 0.;
     X[1] = 1.;

     // declare independent variables and start recording
     CppAD::Independent(X);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) Y(m);
     Y[0] = X[0] * X[1];

     // create f : X -> Y and stop recording
     CppAD::ADFun<double> f(X, Y);

     // define x^0 and compute y^0 using user zero order forward
     Vector x0(n), y0(m);
     x0[0]    = 2.;
     x0[1]    = 3.;
     y0       = f.Forward(0, x0);

     // y^0 = F(x^0)
     double check;
     check    =  x0[0] * x0[1];
     ok      &= NearEqual(y0[0] , check, eps, eps);

     // define x^1 and compute y^1 using first order forward mode
     Vector x1(n), y1(m);
     x1[0] = 4.;
     x1[1] = 5.;
     y1    = f.Forward(1, x1);

     // Y^1 (x) = partial_t F( x^0 + x^1 * t )
     // y^1     = Y^1 (0)
     check = x1[0] * x0[1] + x0[0] * x1[1];
     ok   &= NearEqual(y1[0], check, eps, eps);

     // define x^2 and compute y^2 using second order forward mode
     Vector x2(n), y2(m);
     x2[0] = 6.;
     x2[1] = 7.;
     y2    = f.Forward(2, x2);

     // Y^2 (x) = partial_tt F( x^0 + x^1 * t + x^2 * t^2 )
     // y^2     = (1/2) *  Y^2 (0)
     check  = x2[0] * x0[1] + x1[0] * x1[1] + x0[0] * x2[1];
     ok    &= NearEqual(y2[0], check, eps, eps);

     // W(x)  = Y^0 (x) + 2 * Y^1 (x) + 3 * (1/2) * Y^2 (x)
     size_t p = 3;
     Vector dw(n*p), w(m*p);
     w[0] = 1.;
     w[1] = 2.;
     w[2] = 3.;
     dw   = f.Reverse(p, w);

     // check partial w.r.t x^0_0 of W(x)
     check = x0[1] + 2. * x1[1] + 3. * x2[1];
     ok   &= NearEqual(dw[0*p+0], check, eps, eps);

     // check partial w.r.t x^0_1 of W(x)
     check = x0[0] + 2. * x1[0] + 3. * x2[0];
     ok   &= NearEqual(dw[1*p+0], check, eps, eps);

     // check partial w.r.t x^1_0 of W(x)
     check = 2. * x0[1] + 3. * x1[1];
     ok   &= NearEqual(dw[0*p+1], check, eps, eps);

     // check partial w.r.t x^1_1 of W(x)
     check = 2. * x0[0] + 3. * x1[0];
     ok   &= NearEqual(dw[1*p+1], check, eps, eps);

     // check partial w.r.t x^2_0 of W(x)
     check = 3. * x0[1];
     ok   &= NearEqual(dw[0*p+2], check, eps, eps);

     // check partial w.r.t x^2_1 of W(x)
     check = 3. * x0[0];
     ok   &= NearEqual(dw[1*p+2], check, eps, eps);

     return ok;
}
} // End empty namespace
# include <vector>
# include <valarray>
bool reverse_three(void)
{     bool ok = true;
     ok &= cases< CppAD::vector  <double> >();
     ok &= cases< std::vector    <double> >();
     ok &= cases< std::valarray  <double> >();
     return ok;
}

Input File: example/general/reverse_three.cpp
5.4.3.2: Reverse Mode General Case (Checkpointing): Example and Test

5.4.3.2.a: See Also
4.4.7.1: checkpoint

5.4.3.2.b: Purpose
Break a large computation into pieces and only store values at the interface of the pieces (this is much easier to do using 4.4.7.1: checkpoint ). In actual applications, there may be many functions, but for this example there are only two. The functions @(@ F : \B{R}^2 \rightarrow \B{R}^2 @)@ and @(@ G : \B{R}^2 \rightarrow \B{R}^2 @)@ are defined by @[@ F(x) = \left( \begin{array}{c} x_0 x_1 \\ x_1 - x_0 \end{array} \right) \; , \; G(y) = \left( \begin{array}{c} y_0 - y_1 \\ y_1 y_0 \end{array} \right) @]@

5.4.3.2.c: Processing Steps
We apply reverse mode to compute the derivative of the function @(@ H : \B{R}^2 \rightarrow \B{R} @)@ defined by @[@ \begin{array}{rcl} H(x) & = & G_0 [ F(x) ] + G_1 [ F(x) ] \\ & = & x_0 x_1 - ( x_1 - x_0 ) + x_0 x_1 ( x_1 - x_0 ) \\ & = & x_0 x_1 ( 1 - x_0 + x_1 ) - x_1 + x_0 \end{array} @]@ Given the zero and first order Taylor coefficients @(@ x^{(0)} @)@ and @(@ x^{(1)} @)@, we use @(@ X(t) @)@, @(@ Y(t) @)@ and @(@ Z(t) @)@ for the corresponding functions; i.e., @[@ \begin{array}{rcl} X(t) & = & x^{(0)} + x^{(1)} t \\ Y(t) & = & F[X(t)] = y^{(0)} + y^{(1)} t + O(t^2) \\ Z(t) & = & G \{ F [ X(t) ] \} = z^{(0)} + z^{(1)} t + O(t^2) \\ h^{(0)} & = & z^{(0)}_0 + z^{(0)}_1 \\ h^{(1)} & = & z^{(1)}_0 + z^{(1)}_1 \end{array} @]@ Here are the processing steps:
  1. Use forward mode on @(@ F(x) @)@ to compute @(@ y^{(0)} @)@ and @(@ y^{(1)} @)@.
  2. Free some, or all, of the memory corresponding to @(@ F(x) @)@.
  3. Use forward mode on @(@ G(y) @)@ to compute @(@ z^{(0)} @)@ and @(@ z^{(1)} @)@
  4. Use reverse mode on @(@ G(y) @)@ to compute the derivative of @(@ h^{(1)} @)@ with respect to @(@ y^{(0)} @)@ and @(@ y^{(1)} @)@.
  5. Free all the memory corresponding to @(@ G(y) @)@.
  6. Use reverse mode on @(@ F(x) @)@ to compute the derivative of @(@ h^{(1)} @)@ with respect to @(@ x^{(0)} @)@ and @(@ x^{(1)} @)@.
This uses the following relations: @[@ \begin{array}{rcl} \partial_{x(0)} h^{(1)} [ x^{(0)} , x^{(1)} ] & = & \partial_{y(0)} h^{(1)} [ y^{(0)} , y^{(1)} ] \partial_{x(0)} y^{(0)} [ x^{(0)} , x^{(1)} ] \\ & + & \partial_{y(1)} h^{(1)} [ y^{(0)} , y^{(1)} ] \partial_{x(0)} y^{(1)} [ x^{(0)} , x^{(1)} ] \\ \partial_{x(1)} h^{(1)} [ x^{(0)} , x^{(1)} ] & = & \partial_{y(0)} h^{(1)} [ y^{(0)} , y^{(1)} ] \partial_{x(1)} y^{(0)} [ x^{(0)} , x^{(1)} ] \\ & + & \partial_{y(1)} h^{(1)} [ y^{(0)} , y^{(1)} ] \partial_{x(1)} y^{(1)} [ x^{(0)} , x^{(1)} ] \end{array} @]@ where @(@ \partial_{x(0)} @)@ denotes the partial with respect to @(@ x^{(0)} @)@.

# include <cppad/cppad.hpp>

namespace {
     template <class Vector>
     Vector F(const Vector& x)
     {     Vector y(2);
          y[0] = x[0] * x[1];
          y[1] = x[1] - x[0];
          return y;
     }
     template <class Vector>
     Vector G(const Vector& y)
     {     Vector z(2);
          z[0] = y[0] - y[1];
          z[1] = y[1] * y[0];
          return z;
     }
}

namespace {
     bool reverse_any_case(bool free_all)
     {     bool ok = true;
          double eps = 10. * CppAD::numeric_limits<double>::epsilon();

          using CppAD::AD;
          using CppAD::NearEqual;
          CppAD::ADFun<double> f, g, empty;

          // specify the Taylor coefficients for X(t)
          size_t n    = 2;
          CPPAD_TESTVECTOR(double) x0(n), x1(n);
          x0[0] = 1.; x0[1] = 2.;
          x1[0] = 3.; x1[1] = 4.;

          // record the function F(x)
          CPPAD_TESTVECTOR(AD<double>) X(n), Y(n);
          size_t i;
          for(i = 0; i < n; i++)
               X[i] = x0[i];
          CppAD::Independent(X);
          Y = F(X);
          f.Dependent(X, Y);

          // a function object with an almost empty operation sequence
          CppAD::Independent(X);
          empty.Dependent(X, X);

          // compute the Taylor coefficients for Y(t)
          CPPAD_TESTVECTOR(double) y0(n), y1(n);
          y0 = f.Forward(0, x0);
          y1 = f.Forward(1, x1);
          if( free_all )
               f = empty;
          else
          {     // free all the Taylor coefficients stored in f
               f.capacity_order(0);
          }

          // record the function G(x)
          CPPAD_TESTVECTOR(AD<double>) Z(n);
          CppAD::Independent(Y);
          Z = G(Y);
          g.Dependent(Y, Z);

          // compute the Taylor coefficients for Z(t)
          CPPAD_TESTVECTOR(double) z0(n), z1(n);
          z0 = g.Forward(0, y0);
          z1 = g.Forward(1, y1);

          // check zero order Taylor coefficient for h^0 = z_0^0 + z_1^0
          double check = x0[0] * x0[1] * (1. - x0[0] + x0[1]) - x0[1] + x0[0];
          double h0    = z0[0] + z0[1];
          ok          &= NearEqual(h0, check, eps, eps);

          // check first order Taylor coefficient h^1
          check     = x0[0] * x0[1] * (- x1[0] + x1[1]) - x1[1] + x1[0];
          check    += x1[0] * x0[1] * (1. - x0[0] + x0[1]);
          check    += x0[0] * x1[1] * (1. - x0[0] + x0[1]);
          double h1 = z1[0] + z1[1];
          ok       &= NearEqual(h1, check, eps, eps);

          // compute the derivative with respect to y^0 and y^1 of h^1
          size_t p = 2;
          CPPAD_TESTVECTOR(double) w(n*p), dw(n*p);
          w[0*p+0] = 0.; // coefficient for z_0^0
          w[0*p+1] = 1.; // coefficient for z_0^1
          w[1*p+0] = 0.; // coefficient for z_1^0
          w[1*p+1] = 1.; // coefficient for z_1^1
          dw       = g.Reverse(p, w);

          // We are done using g, so we can free its memory.
          g = empty;
          // We need to use f next.
          if( free_all )
          {     // we must again record the operation sequence for F(x)
               CppAD::Independent(X);
               Y = F(X);
               f.Dependent(X, Y);
          }
          // now recompute the Taylor coefficients corresponding to F(x)
          // (we already know the result; i.e., y0 and y1).
          f.Forward(0, x0);
          f.Forward(1, x1);

          // compute the derivative with respect to x^0 and x^1 of
          //     h^1 = z_0^1 + z_1^1
          CPPAD_TESTVECTOR(double) dv(n*p);
          dv   = f.Reverse(p, dw);

          // check partial of h^1 w.r.t x^0_0
          check  = x0[1] * (- x1[0] + x1[1]);
          check -= x1[0] * x0[1];
          check += x1[1] * (1. - x0[0] + x0[1]) - x0[0] * x1[1];
          ok    &= NearEqual(dv[0*p+0], check, eps, eps);

          // check partial of h^1 w.r.t x^0_1
          check  = x0[0] * (- x1[0] + x1[1]);
          check += x1[0] * (1. - x0[0] + x0[1]) + x1[0] * x0[1];
          check += x0[0] * x1[1];
          ok    &= NearEqual(dv[1*p+0], check, eps, eps);

          // check partial of h^1 w.r.t x^1_0
          check  = 1. - x0[0] * x0[1];
          check += x0[1] * (1. - x0[0] + x0[1]);
          ok    &= NearEqual(dv[0*p+1], check, eps, eps);

          // check partial of h^1 w.r.t x^1_1
          check  = x0[0] * x0[1] - 1.;
          check += x0[0] * (1. - x0[0] + x0[1]);
          ok    &= NearEqual(dv[1*p+1], check, eps, eps);

          return ok;
     }
}
bool reverse_any(void)
{     bool ok = true;
     ok     &= reverse_any_case(true);
     ok     &= reverse_any_case(false);
     return ok;
}

Input File: example/general/reverse_checkpoint.cpp
5.4.4: Reverse Mode Using Subgraphs

5.4.4.a: Syntax
f.subgraph_reverse(select_domain)
f.subgraph_reverse(q, ell, col, dw)

5.4.4.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . Reverse mode computes the derivative of the 5.3: Forward mode 12.4.l: Taylor coefficients with respect to the domain variable @(@ x @)@.

5.4.4.c: Notation
We use the reverse mode 5.4.3.c: notation with the following change: the vector 5.4.3.c.d: w^(k) is defined @[@ w_i^{(k)} = \left\{ \begin{array}{ll} 1 & {\rm if} \; k = q-1 \; \R{and} \; i = \ell \\ 0 & {\rm otherwise} \end{array} \right. @]@

5.4.4.d: BaseVector
The type BaseVector must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.4.4.e: BoolVector
The type BoolVector is a 8.9: SimpleVector class with 8.9.b: elements of type bool.

5.4.4.f: SizeVector
The type SizeVector is a 8.9: SimpleVector class with 8.9.b: elements of type size_t.

5.4.4.g: select_domain
The argument select_domain has prototype
     const BoolVector& select_domain
It has size @(@ n @)@ and specifies which independent variables to include in the calculation. If select_domain[j] is false, it is assumed that @(@ u^{(k)}_j = 0 @)@ for @(@ k > 0 @)@; i.e., the j-th component of the Taylor coefficient for @(@ x @)@, with order greater than zero, is zero; see 5.4.3.c.a: u^(k) .

5.4.4.h: q
The argument q has prototype
     size_t q
and specifies the number of Taylor coefficient orders to be differentiated.

5.4.4.i: ell
The argument ell has prototype
     size_t ell
and specifies the dependent variable index that we are computing the derivatives for; i.e., @(@ \ell @)@. Each index value can be used at most once per call that selects the independent variables using select_domain , and only after such a call.

5.4.4.j: col
This argument col has prototype
     SizeVector col
The input size and value of its elements do not matter. The col.resize member function is used to change its size to the number of possible non-zero derivative components. For each c ,
     select_domain[ col[c] ] == true
     col[c+1] > col[c]
and the derivative with respect to the j-th independent variable is possibly non-zero where j = col[c] .

5.4.4.k: dw
The argument dw has prototype
     Vector dw
Its input size and value do not matter. Upon return, it is a vector with size @(@ n \times q @)@. For each c between zero and col.size()-1 , and each @(@ k = 0, \ldots , q-1 @)@, @[@ dw[ j * q + k ] = W^{(1)} ( x )_{j,k} @]@ is the derivative of the specified Taylor coefficients w.r.t. the j-th independent variable where j = col[c] . Note that this corresponds to the 5.4.3: reverse_any convention when 5.4.3.f: w has size m * q .
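The following minimal sketch (not part of the CppAD sources) reads the possibly non-zero partials for one dependent variable index ell ; it assumes f.subgraph_reverse(select_domain) has already been called and uses q equal to one:

# include <iostream>
# include <cppad/cppad.hpp>

// print the possibly non-zero partials of F_ell w.r.t. x
void print_subgraph_row(CppAD::ADFun<double>& f, size_t ell)
{     size_t q = 1;  // differentiate the zero order Taylor coefficient
     CppAD::vector<size_t> col;
     CppAD::vector<double> dw;
     f.subgraph_reverse(q, ell, col, dw);
     for(size_t c = 0; c < col.size(); ++c)
     {     size_t j = col[c];   // index of an independent variable
          // dw[j * q + 0] is the partial of F_ell w.r.t. x_j
          std::cout << "partial w.r.t. x_" << j
                    << " = " << dw[ j * q + 0 ] << "\n";
     }
}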

5.4.4.l: Example
The file 5.4.4.1: subgraph_reverse.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/core/subgraph_reverse.hpp
5.4.4.1: Computing Reverse Mode on Subgraphs: Example and Test
# include <cppad/cppad.hpp>
bool subgraph_reverse(void)
{     bool ok = true;
     //
     using CppAD::AD;
     using CppAD::NearEqual;
     using CppAD::sparse_rc;
     using CppAD::sparse_rcv;
     //
     typedef CPPAD_TESTVECTOR(AD<double>) a_vector;
     typedef CPPAD_TESTVECTOR(double)     d_vector;
     typedef CPPAD_TESTVECTOR(bool)       b_vector;
     typedef CPPAD_TESTVECTOR(size_t)     s_vector;
     //
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
     //
     // domain space vector
     size_t n = 4;
     a_vector  a_x(n);
     for(size_t j = 0; j < n; j++)
          a_x[j] = AD<double> (0);
     //
     // declare independent variables and starting recording
     CppAD::Independent(a_x);
     //
     size_t m = 3;
     a_vector  a_y(m);
     a_y[0] = a_x[0] + a_x[1];
     a_y[1] = a_x[2] + a_x[3];
     a_y[2] = a_x[0] + a_x[1] + a_x[2] + a_x[3] * a_x[3] / 2.;
     //
     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(a_x, a_y);
     //
     // new value for the independent variable vector
     d_vector x(n);
     for(size_t j = 0; j < n; j++)
          x[j] = double(j);
     f.Forward(0, x);
     /*
            [ 1 1 0 0  ]
     J(x) = [ 0 0 1 1  ]
            [ 1 1 1 x_3]
     */
     double J[] = {
          1.0, 1.0, 0.0, 0.0,
          0.0, 0.0, 1.0, 1.0,
          1.0, 1.0, 1.0, 0.0
     };
     J[11] = x[3];
     //
     // exclude x[0] from the calculations
     b_vector select_domain(n);
     select_domain[0] = false;
     for(size_t j = 1; j < n; j++)
          select_domain[j] = true;
     //
     // initialize for reverse mode derivative computations on subgraphs
     f.subgraph_reverse(select_domain);
     //
     // compute the derivative for each range component
     for(size_t i = 0; i < m; i++)
     {     d_vector dw;
          s_vector col;
          size_t   q = 1; // derivative of one Taylor coefficient (zero order)
          f.subgraph_reverse(q, i, col, dw);
          //
          // check order in col
          for(size_t c = 1; c < size_t( col.size() ); c++)
               ok &= col[c] > col[c-1];
          //
          // check that x[0] has been excluded by select_domain
          if( size_t( col.size() ) > 0 )
               ok &= col[0] != 0;
          //
          // check derivatives for i-th row of J(x)
          // note that dw is only specified for j in col
          size_t c = 0;
          for(size_t j = 1; j < n; j++)
          {     while( c < size_t( col.size() ) && col[c] < j )
                    ++c;
               if( c < size_t( col.size() ) && col[c] == j )
                    ok &= NearEqual(dw[j], J[i * n + j], eps99, eps99);
               else
                    ok &= NearEqual(0.0, J[i * n + j], eps99, eps99);
          }
     }
     return ok;
}

Input File: example/sparse/subgraph_reverse.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.5: Calculating Sparsity Patterns

5.5.a: Preferred Sparsity Patterns
5.5.1: for_jac_sparsity Forward Mode Jacobian Sparsity Patterns
5.5.3: rev_jac_sparsity Reverse Mode Jacobian Sparsity Patterns
5.5.7: for_hes_sparsity Forward Mode Hessian Sparsity Patterns
5.5.5: rev_hes_sparsity Reverse Mode Hessian Sparsity Patterns
5.5.11: subgraph_sparsity Subgraph Dependency Sparsity Patterns

5.5.b: Old Sparsity Patterns
5.5.2: ForSparseJac Jacobian Sparsity Pattern: Forward Mode
5.5.4: RevSparseJac Jacobian Sparsity Pattern: Reverse Mode
5.5.8: ForSparseHes Hessian Sparsity Pattern: Forward Mode
5.5.6: RevSparseHes Hessian Sparsity Pattern: Reverse Mode

Input File: omh/adfun.omh
5.5.1: Forward Mode Jacobian Sparsity Patterns

5.5.1.a: Syntax
f.for_jac_sparsity(
     
pattern_in, transpose, dependency, internal_bool, pattern_out
)


5.5.1.b: Purpose
We use @(@ F : \B{R}^n \rightarrow \B{R}^m @)@ to denote the 12.4.a: AD function corresponding to the operation sequence stored in f . Fix @(@ R \in \B{R}^{n \times \ell} @)@ and define the function @[@ J(x) = F^{(1)} ( x ) * R @]@ Given the 12.4.j: sparsity pattern for @(@ R @)@, for_jac_sparsity computes a sparsity pattern for @(@ J(x) @)@.

5.5.1.c: x
Note that the sparsity pattern @(@ J(x) @)@ corresponds to the operation sequence stored in f and does not depend on the argument x . (The operation sequence may contain 4.4.4: CondExp and 4.6: VecAD operations.)

5.5.1.d: SizeVector
The type SizeVector is a 8.9: SimpleVector class with 8.9.b: elements of type size_t.

5.5.1.e: f
The object f has prototype
     ADFun<Base> f
The 5: ADFun object f is not const. After a call to for_jac_sparsity, a sparsity pattern for each of the variables in the operation sequence is held in f for possible later use during reverse Hessian sparsity calculations.

5.5.1.e.a: size_forward_bool
After for_jac_sparsity, if k is a size_t object,
     k = f.size_forward_bool()
sets k to the amount of memory (in unsigned character units) used to store the 12.4.j.b: boolean vector sparsity patterns. If internal_bool is false, k will be zero. Otherwise it will be non-zero. If you do not need this information for 5.5.6: RevSparseHes calculations, it can be deleted (and the corresponding memory freed) using
     f.size_forward_bool(0)
after which f.size_forward_bool() will return zero.

5.5.1.e.b: size_forward_set
After for_jac_sparsity, if k is a size_t object,
     k = f.size_forward_set()
sets k to the amount of memory (in unsigned character units) used to store the 12.4.j.c: vector of sets sparsity patterns. If internal_bool is true, k will be zero. Otherwise it will be non-zero. If you do not need this information for future 5.5.5: rev_hes_sparsity calculations, it can be deleted (and the corresponding memory freed) using
     f.size_forward_set(0)
after which f.size_forward_set() will return zero.
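As a minimal sketch (not part of the CppAD sources), the stored patterns can be queried and freed as follows once they are no longer needed for Hessian sparsity calculations:

# include <cassert>
# include <cppad/cppad.hpp>

// free whichever internal Jacobian sparsity storage f currently holds
void free_for_jac_storage(CppAD::ADFun<double>& f)
{     if( f.size_forward_bool() > 0 )
          f.size_forward_bool(0);   // boolean vector representation
     if( f.size_forward_set() > 0 )
          f.size_forward_set(0);    // vector of sets representation
     assert( f.size_forward_bool() == 0 );
     assert( f.size_forward_set() == 0 );
}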

5.5.1.f: pattern_in
The argument pattern_in has prototype
     const sparse_rc<SizeVector>& pattern_in
see 8.27: sparse_rc . If transpose is false (true), pattern_in is a sparsity pattern for @(@ R @)@ (@(@ R^\R{T} @)@).

5.5.1.g: transpose
This argument has prototype
     bool transpose
See 5.5.1.f: pattern_in above and 5.5.1.j: pattern_out below.

5.5.1.h: dependency
This argument has prototype
     bool dependency
see 5.5.1.j: pattern_out below.

5.5.1.i: internal_bool
If this is true, calculations are done with sets represented by a vector of boolean values. Otherwise, a vector of sets of integers is used.

5.5.1.j: pattern_out
This argument has prototype
     sparse_rc<SizeVector>& pattern_out
The input value of pattern_out does not matter. If transpose is false (true), upon return pattern_out is a sparsity pattern for @(@ J(x) @)@ (@(@ J(x)^\R{T} @)@). If dependency is true, pattern_out is a 5.5.9.b: dependency pattern instead of a sparsity pattern.

5.5.1.k: Sparsity for Entire Jacobian
Suppose that @(@ R @)@ is the @(@ n \times n @)@ identity matrix. In this case, pattern_out is a sparsity pattern for @(@ F^{(1)} ( x ) @)@ ( @(@ F^{(1)} (x)^\R{T} @)@ ) if transpose is false (true).

5.5.1.l: Example
The file 5.5.1.1: for_jac_sparsity.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/core/for_jac_sparsity.hpp
5.5.1.1: Forward Mode Jacobian Sparsity: Example and Test
# include <cppad/cppad.hpp>

bool for_jac_sparsity(void)
{     bool ok = true;
     using CppAD::AD;
     typedef CPPAD_TESTVECTOR(size_t)     SizeVector;
     typedef CppAD::sparse_rc<SizeVector> sparsity;
     //
     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.;
     ax[1] = 1.;

     // declare independent variables and start recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 3;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = ax[0];
     ay[1] = ax[0] * ax[1];
     ay[2] = ax[1];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // sparsity pattern for the identity matrix
     size_t nr     = n;
     size_t nc     = n;
     size_t nnz_in = n;
     sparsity pattern_in(nr, nc, nnz_in);
     for(size_t k = 0; k < nnz_in; k++)
     {     size_t r = k;
          size_t c = k;
          pattern_in.set(k, r, c);
     }
     //
     // Compute sparsity pattern for J(x) = F'(x)
     bool transpose       = false;
     bool dependency      = false;
     bool internal_bool   = false;
     sparsity pattern_out;
     f.for_jac_sparsity(
          pattern_in, transpose, dependency, internal_bool, pattern_out
     );
     size_t nnz = pattern_out.nnz();
     ok        &= nnz == 4;
     ok        &= pattern_out.nr() == m;
     ok        &= pattern_out.nc() == n;
     {     // check results
          const SizeVector& row( pattern_out.row() );
          const SizeVector& col( pattern_out.col() );
          SizeVector col_major = pattern_out.col_major();
          //
          ok &= row[ col_major[0] ] ==  0  && col[ col_major[0] ] ==  0;
          ok &= row[ col_major[1] ] ==  1  && col[ col_major[1] ] ==  0;
          ok &= row[ col_major[2] ] ==  1  && col[ col_major[2] ] ==  1;
          ok &= row[ col_major[3] ] ==  2  && col[ col_major[3] ] ==  1;
          //
          // check that set and not boolean values are stored
          ok &= (f.size_forward_set() > 0);
          ok &= (f.size_forward_bool() == 0);
     }
     //
     // note that the transpose of the identity is the identity
     transpose     = true;
     internal_bool = true;
     f.for_jac_sparsity(
          pattern_in, transpose, dependency, internal_bool, pattern_out
     );
     nnz  = pattern_out.nnz();
     ok  &= nnz == 4;
     ok  &= pattern_out.nr() == n;
     ok  &= pattern_out.nc() == m;
     {     // check results
          const SizeVector& row( pattern_out.row() );
          const SizeVector& col( pattern_out.col() );
          SizeVector row_major = pattern_out.row_major();
          //
          ok &= col[ row_major[0] ] ==  0  && row[ row_major[0] ] ==  0;
          ok &= col[ row_major[1] ] ==  1  && row[ row_major[1] ] ==  0;
          ok &= col[ row_major[2] ] ==  1  && row[ row_major[2] ] ==  1;
          ok &= col[ row_major[3] ] ==  2  && row[ row_major[3] ] ==  1;
          //
          // check that set and not boolean values are stored
          ok &= (f.size_forward_set() == 0);
          ok &= (f.size_forward_bool() > 0);
     }
     return ok;
}

Input File: example/sparse/for_jac_sparsity.cpp
5.5.2: Jacobian Sparsity Pattern: Forward Mode

5.5.2.a: Syntax
s = f.ForSparseJac(q, r)
s = f.ForSparseJac(q, r, transpose, dependency)

5.5.2.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . For a fixed @(@ n \times q @)@ matrix @(@ R @)@, the Jacobian of @(@ F[ x + R * u ] @)@ with respect to @(@ u @)@ at @(@ u = 0 @)@ is @[@ S(x) = F^{(1)} ( x ) * R @]@ Given a 12.4.j: sparsity pattern for @(@ R @)@, ForSparseJac returns a sparsity pattern for @(@ S(x) @)@.

5.5.2.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const. After a call to ForSparseJac, the sparsity pattern for each of the variables in the operation sequence is held in f (for possible later use by 5.5.6: RevSparseHes ). These sparsity patterns are stored with elements of type bool or elements of type std::set<size_t> (see 5.5.2.j: VectorSet below).

5.5.2.c.a: size_forward_bool
After ForSparseJac, if k is a size_t object,
     k = f.size_forward_bool()
sets k to the amount of memory (in unsigned character units) used to store the sparsity pattern with elements of type bool in the function object f . If the sparsity patterns for the previous ForSparseJac used elements of type bool, the return value for size_forward_bool will be non-zero. Otherwise, its return value will be zero. This sparsity pattern is stored for use by 5.5.6: RevSparseHes and when it is no longer needed, it can be deleted (and the corresponding memory freed) using
     f.size_forward_bool(0)
After this call, f.size_forward_bool() will return zero.

5.5.2.c.b: size_forward_set
After ForSparseJac, if k is a size_t object,
     k = f.size_forward_set()
sets k to the amount of memory (in unsigned character units) used to store the 12.4.j.c: vector of sets sparsity patterns. If the sparsity patterns for this operation use elements of type bool, the return value for size_forward_set will be zero. Otherwise, its return value will be non-zero. This sparsity pattern is stored for use by 5.5.6: RevSparseHes and when it is no longer needed, it can be deleted (and the corresponding memory freed) using
     f.size_forward_set(0)
After this call, f.size_forward_set() will return zero.

5.5.2.d: x
If the operation sequence in f is 12.4.g.d: independent of the independent variables in @(@ x \in B^n @)@, the sparsity pattern is valid for all values of @(@ x @)@ (even if it has 4.4.4: CondExp or 4.6: VecAD operations).

5.5.2.e: q
The argument q has prototype
     size_t q
It specifies the number of columns in @(@ R \in B^{n \times q} @)@ and the Jacobian @(@ S(x) \in B^{m \times q} @)@.

5.5.2.f: transpose
The argument transpose has prototype
     bool transpose
The default value false is used when transpose is not present.

5.5.2.g: dependency
The argument dependency has prototype
     bool dependency
If dependency is true, the 5.5.9.b: dependency pattern (instead of sparsity pattern) is computed.

5.5.2.h: r
The argument r has prototype
     const VectorSet& r
see 5.5.2.j: VectorSet below.

5.5.2.h.a: transpose false
If r has elements of type bool, its size is @(@ n * q @)@. If it has elements of type std::set<size_t>, its size is @(@ n @)@ and all the set elements must be between zero and q-1 inclusive. It specifies a 12.4.j: sparsity pattern for the matrix @(@ R \in B^{n \times q} @)@.

5.5.2.h.b: transpose true
If r has elements of type bool, its size is @(@ q * n @)@. If it has elements of type std::set<size_t>, its size is @(@ q @)@ and all the set elements must be between zero and n-1 inclusive. It specifies a 12.4.j: sparsity pattern for the matrix @(@ R^\R{T} \in B^{q \times n} @)@.

5.5.2.i: s
The return value s has prototype
     VectorSet s
see 5.5.2.j: VectorSet below.

5.5.2.i.a: transpose false
If s has elements of type bool, its size is @(@ m * q @)@. If it has elements of type std::set<size_t>, its size is @(@ m @)@ and all its set elements are between zero and q-1 inclusive. It specifies a 12.4.j: sparsity pattern for the matrix @(@ S(x) \in B^{m \times q} @)@.

5.5.2.i.b: transpose true
If s has elements of type bool, its size is @(@ q * m @)@. If it has elements of type std::set<size_t>, its size is @(@ q @)@ and all its set elements are between zero and m-1 inclusive. It specifies a 12.4.j: sparsity pattern for the matrix @(@ S(x)^\R{T} \in B^{q \times m} @)@.

5.5.2.j: VectorSet
The type VectorSet must be a 8.9: SimpleVector class with 8.9.b: elements of type bool or std::set<size_t>; see 12.4.j: sparsity pattern for a discussion of the difference.

5.5.2.k: Entire Sparsity Pattern
Suppose that @(@ q = n @)@ and @(@ R @)@ is the @(@ n \times n @)@ identity matrix. In this case, the corresponding value for s is a sparsity pattern for the Jacobian @(@ S(x) = F^{(1)} ( x ) @)@.

5.5.2.l: Example
The file 5.5.2.1: for_sparse_jac.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise. The file 5.5.6.2.b: sparsity_sub.cpp contains an example and test of using ForSparseJac to compute the sparsity pattern for a subset of the Jacobian.
Input File: cppad/core/for_sparse_jac.hpp
5.5.2.1: Forward Mode Jacobian Sparsity: Example and Test

# include <set>
# include <cppad/cppad.hpp>

namespace { // -------------------------------------------------------------
// define the template function BoolCases<Vector>
template <typename Vector>  // vector class, elements of type bool
bool BoolCases(void)
{     bool ok = true;
     using CppAD::AD;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) X(n);
     X[0] = 0.;
     X[1] = 1.;

     // declare independent variables and start recording
     CppAD::Independent(X);

     // range space vector
     size_t m = 3;
     CPPAD_TESTVECTOR(AD<double>) Y(m);
     Y[0] = X[0];
     Y[1] = X[0] * X[1];
     Y[2] = X[1];

     // create f: X -> Y and stop tape recording
     CppAD::ADFun<double> f(X, Y);

     // sparsity pattern for the identity matrix
     Vector r(n * n);
     size_t i, j;
     for(i = 0; i < n; i++)
     {     for(j = 0; j < n; j++)
               r[ i * n + j ] = (i == j);
     }

     // sparsity pattern for F'(x)
     Vector s(m * n);
     s = f.ForSparseJac(n, r);

     // check values
     ok &= (s[ 0 * n + 0 ] == true);  // Y[0] does     depend on X[0]
     ok &= (s[ 0 * n + 1 ] == false); // Y[0] does not depend on X[1]
     ok &= (s[ 1 * n + 0 ] == true);  // Y[1] does     depend on X[0]
     ok &= (s[ 1 * n + 1 ] == true);  // Y[1] does     depend on X[1]
     ok &= (s[ 2 * n + 0 ] == false); // Y[2] does not depend on X[0]
     ok &= (s[ 2 * n + 1 ] == true);  // Y[2] does     depend on X[1]

     // check that values are stored
     ok &= (f.size_forward_bool() > 0);
     ok &= (f.size_forward_set() == 0);

     // sparsity pattern for F'(x)^T, note R is the identity, so R^T = R
     bool transpose = true;
     Vector st(n * m);
     st = f.ForSparseJac(n, r, transpose);

     // check values
     ok &= (st[ 0 * m + 0 ] == true);  // Y[0] does     depend on X[0]
     ok &= (st[ 1 * m + 0 ] == false); // Y[0] does not depend on X[1]
     ok &= (st[ 0 * m + 1 ] == true);  // Y[1] does     depend on X[0]
     ok &= (st[ 1 * m + 1 ] == true);  // Y[1] does     depend on X[1]
     ok &= (st[ 0 * m + 2 ] == false); // Y[2] does not depend on X[0]
     ok &= (st[ 1 * m + 2 ] == true);  // Y[2] does     depend on X[1]

     // check that values are stored
     ok &= (f.size_forward_bool() > 0);
     ok &= (f.size_forward_set() == 0);

     // free values from forward calculation
     f.size_forward_bool(0);
     ok &= (f.size_forward_bool() == 0);

     return ok;
}
// define the template function SetCases<Vector>
template <typename Vector>  // vector class, elements of type std::set<size_t>
bool SetCases(void)
{     bool ok = true;
     using CppAD::AD;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) X(n);
     X[0] = 0.;
     X[1] = 1.;

     // declare independent variables and start recording
     CppAD::Independent(X);

     // range space vector
     size_t m = 3;
     CPPAD_TESTVECTOR(AD<double>) Y(m);
     Y[0] = X[0];
     Y[1] = X[0] * X[1];
     Y[2] = X[1];

     // create f: X -> Y and stop tape recording
     CppAD::ADFun<double> f(X, Y);

     // sparsity pattern for the identity matrix
     Vector r(n);
     size_t i;
     for(i = 0; i < n; i++)
     {     assert( r[i].empty() );
          r[i].insert(i);
     }

     // sparsity pattern for F'(x)
     Vector s(m);
     s = f.ForSparseJac(n, r);

     // flag used to check whether an index appears in a set
     bool found;

     // Y[0] does     depend on X[0]
     found = s[0].find(0) != s[0].end();  ok &= ( found == true );
     // Y[0] does not depend on X[1]
     found = s[0].find(1) != s[0].end();  ok &= ( found == false );
     // Y[1] does     depend on X[0]
     found = s[1].find(0) != s[1].end();  ok &= ( found == true );
     // Y[1] does     depend on X[1]
     found = s[1].find(1) != s[1].end();  ok &= ( found == true );
     // Y[2] does not depend on X[0]
     found = s[2].find(0) != s[2].end();  ok &= ( found == false );
     // Y[2] does     depend on X[1]
     found = s[2].find(1) != s[2].end();  ok &= ( found == true );

     // check that values are stored
     ok &= (f.size_forward_set() > 0);
     ok &= (f.size_forward_bool() == 0);


     // sparsity pattern for F'(x)^T
     bool transpose = true;
     Vector st(n);
     st = f.ForSparseJac(n, r, transpose);

     // Y[0] does     depend on X[0]
     found = st[0].find(0) != st[0].end();  ok &= ( found == true );
     // Y[0] does not depend on X[1]
     found = st[1].find(0) != st[1].end();  ok &= ( found == false );
     // Y[1] does     depend on X[0]
     found = st[0].find(1) != st[0].end();  ok &= ( found == true );
     // Y[1] does     depend on X[1]
     found = st[1].find(1) != st[1].end();  ok &= ( found == true );
     // Y[2] does not depend on X[0]
     found = st[0].find(2) != st[0].end();  ok &= ( found == false );
     // Y[2] does     depend on X[1]
     found = st[1].find(2) != st[1].end();  ok &= ( found == true );

     // check that values are stored
     ok &= (f.size_forward_set() > 0);
     ok &= (f.size_forward_bool() == 0);

     return ok;
}
} // End empty namespace
# include <vector>
# include <valarray>
bool ForSparseJac(void)
{     bool ok = true;
     // Run with Vector equal to four different cases
     // all of which are Simple Vectors with elements of type bool.
     ok &= BoolCases< CppAD::vectorBool     >();
     ok &= BoolCases< CppAD::vector  <bool> >();
     ok &= BoolCases< std::vector    <bool> >();
     ok &= BoolCases< std::valarray  <bool> >();

     // Run with Vector equal to two different cases both of which are
     // Simple Vectors with elements of type std::set<size_t>
     typedef std::set<size_t> set;
     ok &= SetCases< CppAD::vector  <set> >();
     // ok &= SetCases< std::vector    <set> >();

     // Do not use valarray because its element access in the const case
     // returns a copy instead of a reference
     // ok &= SetCases< std::valarray  <set> >();

     return ok;
}

Input File: example/sparse/for_sparse_jac.cpp
5.5.3: Reverse Mode Jacobian Sparsity Patterns

5.5.3.a: Syntax
f.rev_jac_sparsity(
     
pattern_in, transpose, dependency, internal_bool, pattern_out
)


5.5.3.b: Purpose
We use @(@ F : \B{R}^n \rightarrow \B{R}^m @)@ to denote the 12.4.a: AD function corresponding to the operation sequence stored in f . Fix @(@ R \in \B{R}^{\ell \times m} @)@ and define the function @[@ J(x) = R * F^{(1)} ( x ) @]@ Given the 12.4.j: sparsity pattern for @(@ R @)@, rev_jac_sparsity computes a sparsity pattern for @(@ J(x) @)@.

5.5.3.c: x
Note that the sparsity pattern @(@ J(x) @)@ corresponds to the operation sequence stored in f and does not depend on the argument x . (The operation sequence may contain 4.4.4: CondExp and 4.6: VecAD operations.)

5.5.3.d: SizeVector
The type SizeVector is a 8.9: SimpleVector class with 8.9.b: elements of type size_t.

5.5.3.e: f
The object f has prototype
     ADFun<Base> f

5.5.3.f: pattern_in
The argument pattern_in has prototype
     const sparse_rc<SizeVector>& pattern_in
see 8.27: sparse_rc . If transpose is false (true), pattern_in is a sparsity pattern for @(@ R @)@ (@(@ R^\R{T} @)@).

5.5.3.g: transpose
This argument has prototype
     bool transpose
See 5.5.3.f: pattern_in above and 5.5.3.j: pattern_out below.

5.5.3.h: dependency
This argument has prototype
     bool dependency
see 5.5.3.j: pattern_out below.

5.5.3.i: internal_bool
If this is true, calculations are done with sets represented by a vector of boolean values. Otherwise, a vector of sets of integers is used.

5.5.3.j: pattern_out
This argument has prototype
     sparse_rc<SizeVector>& pattern_out
The input value of pattern_out does not matter. If transpose is false (true), upon return pattern_out is a sparsity pattern for @(@ J(x) @)@ (@(@ J(x)^\R{T} @)@). If dependency is true, pattern_out is a 5.5.9.b: dependency pattern instead of a sparsity pattern.

5.5.3.k: Sparsity for Entire Jacobian
Suppose that @(@ R @)@ is the @(@ m \times m @)@ identity matrix. In this case, pattern_out is a sparsity pattern for @(@ F^{(1)} ( x ) @)@ ( @(@ F^{(1)} (x)^\R{T} @)@ ) if transpose is false (true).

5.5.3.l: Example
The file 5.5.3.1: rev_jac_sparsity.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/core/rev_jac_sparsity.hpp
5.5.3.1: Reverse Mode Jacobian Sparsity: Example and Test
# include <cppad/cppad.hpp>

bool rev_jac_sparsity(void)
{     bool ok = true;
     using CppAD::AD;
     typedef CPPAD_TESTVECTOR(size_t)     SizeVector;
     typedef CppAD::sparse_rc<SizeVector> sparsity;
     //
     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.;
     ax[1] = 1.;

     // declare independent variables and start recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 3;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = ax[0];
     ay[1] = ax[0] * ax[1];
     ay[2] = ax[1];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // sparsity pattern for the identity matrix
     size_t nr     = m;
     size_t nc     = m;
     size_t nnz_in = m;
     sparsity pattern_in(nr, nc, nnz_in);
     for(size_t k = 0; k < nnz_in; k++)
     {     size_t r = k;
          size_t c = k;
          pattern_in.set(k, r, c);
     }
     // compute sparsity pattern for J(x) = F'(x)
     bool transpose       = false;
     bool dependency      = false;
     bool internal_bool   = false;
     sparsity pattern_out;
     f.rev_jac_sparsity(
          pattern_in, transpose, dependency, internal_bool, pattern_out
     );
     size_t nnz = pattern_out.nnz();
     ok        &= nnz == 4;
     ok        &= pattern_out.nr() == m;
     ok        &= pattern_out.nc() == n;
     {     // check results
          const SizeVector& row( pattern_out.row() );
          const SizeVector& col( pattern_out.col() );
          SizeVector col_major = pattern_out.col_major();
          //
          ok &= row[ col_major[0] ] ==  0  && col[ col_major[0] ] ==  0;
          ok &= row[ col_major[1] ] ==  1  && col[ col_major[1] ] ==  0;
          ok &= row[ col_major[2] ] ==  1  && col[ col_major[2] ] ==  1;
          ok &= row[ col_major[3] ] ==  2  && col[ col_major[3] ] ==  1;
     }
     // note that the transpose of the identity is the identity
     transpose     = true;
     internal_bool = true;
     f.rev_jac_sparsity(
          pattern_in, transpose, dependency, internal_bool, pattern_out
     );
     nnz  = pattern_out.nnz();
     ok  &= nnz == 4;
     ok  &= pattern_out.nr() == n;
     ok  &= pattern_out.nc() == m;
     {     // check results
          const SizeVector& row( pattern_out.row() );
          const SizeVector& col( pattern_out.col() );
          SizeVector row_major = pattern_out.row_major();
          //
          ok &= col[ row_major[0] ] ==  0  && row[ row_major[0] ] ==  0;
          ok &= col[ row_major[1] ] ==  1  && row[ row_major[1] ] ==  0;
          ok &= col[ row_major[2] ] ==  1  && row[ row_major[2] ] ==  1;
          ok &= col[ row_major[3] ] ==  2  && row[ row_major[3] ] ==  1;
     }
     return ok;
}

Input File: example/sparse/rev_jac_sparsity.cpp
5.5.4: Jacobian Sparsity Pattern: Reverse Mode

5.5.4.a: Syntax
s = f.RevSparseJac(q, r)
s = f.RevSparseJac(q, r, transpose, dependency)

5.5.4.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . For a fixed matrix @(@ R \in B^{q \times m} @)@, the Jacobian of @(@ R * F( x ) @)@ with respect to @(@ x @)@ is @[@ S(x) = R * F^{(1)} ( x ) @]@ Given a 12.4.j: sparsity pattern for @(@ R @)@, RevSparseJac returns a sparsity pattern for @(@ S(x) @)@.

5.5.4.c: f
The object f has prototype
     ADFun<Base> f

5.5.4.d: x
If the operation sequence in f is 12.4.g.d: independent of the independent variables in @(@ x \in B^n @)@, the sparsity pattern is valid for all values of @(@ x @)@ (even if it has 4.4.4: CondExp or 4.6: VecAD operations).

5.5.4.e: q
The argument q has prototype
     size_t q
It specifies the number of rows in @(@ R \in B^{q \times m} @)@ and the Jacobian @(@ S(x) \in B^{q \times n} @)@.

5.5.4.f: transpose
The argument transpose has prototype
     bool transpose
The default value false is used when transpose is not present.

5.5.4.g: dependency
The argument dependency has prototype
     bool dependency
If dependency is true, the 5.5.9.b: dependency pattern (instead of sparsity pattern) is computed.

5.5.4.h: r
The argument r has prototype
     const VectorSet& r
see 5.5.4.j: VectorSet below.

5.5.4.h.a: transpose false
If r has elements of type bool, its size is @(@ q * m @)@. If it has elements of type std::set<size_t>, its size is q and all its set elements are between zero and @(@ m - 1 @)@. It specifies a 12.4.j: sparsity pattern for the matrix @(@ R \in B^{q \times m} @)@.

5.5.4.h.b: transpose true
If r has elements of type bool, its size is @(@ m * q @)@. If it has elements of type std::set<size_t>, its size is m and all its set elements are between zero and @(@ q - 1 @)@. It specifies a 12.4.j: sparsity pattern for the matrix @(@ R^\R{T} \in B^{m \times q} @)@.

5.5.4.i: s
The return value s has prototype
     VectorSet s
see 5.5.4.j: VectorSet below.

5.5.4.i.a: transpose false
If it has elements of type bool, its size is @(@ q * n @)@. If it has elements of type std::set<size_t>, its size is q and all its set elements are between zero and @(@ n - 1 @)@. It specifies a 12.4.j: sparsity pattern for the matrix @(@ S(x) \in B^{q \times n} @)@.

5.5.4.i.b: transpose true
If it has elements of type bool, its size is @(@ n * q @)@. If it has elements of type std::set<size_t>, its size is n and all its set elements are between zero and @(@ q - 1 @)@. It specifies a 12.4.j: sparsity pattern for the matrix @(@ S(x)^\R{T} \in B^{n \times q} @)@.

5.5.4.j: VectorSet
The type VectorSet must be a 8.9: SimpleVector class with 8.9.b: elements of type bool or std::set<size_t>; see 12.4.j: sparsity pattern for a discussion of the difference.

5.5.4.k: Entire Sparsity Pattern
Suppose that @(@ q = m @)@ and @(@ R @)@ is the @(@ m \times m @)@ identity matrix. In this case, the corresponding value for s is a sparsity pattern for the Jacobian @(@ S(x) = F^{(1)} ( x ) @)@.

5.5.4.l: Example
The file 5.5.4.1: rev_sparse_jac.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/core/rev_sparse_jac.hpp
5.5.4.1: Reverse Mode Jacobian Sparsity: Example and Test

# include <cppad/cppad.hpp>
namespace { // -------------------------------------------------------------
// define the template function BoolCases<Vector>
template <typename Vector>  // vector class, elements of type bool
bool BoolCases(void)
{     bool ok = true;
     using CppAD::AD;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.;
     ax[1] = 1.;

     // declare independent variables and start recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 3;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = ax[0];
     ay[1] = ax[0] * ax[1];
     ay[2] = ax[1];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // sparsity pattern for the identity matrix
     Vector r(m * m);
     size_t i, j;
     for(i = 0; i < m; i++)
     {     for(j = 0; j < m; j++)
               r[ i * m + j ] = (i == j);
     }

     // sparsity pattern for F'(x)
     Vector s(m * n);
     s = f.RevSparseJac(m, r);

     // check values
     ok &= (s[ 0 * n + 0 ] == true);  // y[0] does     depend on x[0]
     ok &= (s[ 0 * n + 1 ] == false); // y[0] does not depend on x[1]
     ok &= (s[ 1 * n + 0 ] == true);  // y[1] does     depend on x[0]
     ok &= (s[ 1 * n + 1 ] == true);  // y[1] does     depend on x[1]
     ok &= (s[ 2 * n + 0 ] == false); // y[2] does not depend on x[0]
     ok &= (s[ 2 * n + 1 ] == true);  // y[2] does     depend on x[1]

     // sparsity pattern for F'(x)^T, note R is the identity, so R^T = R
     bool transpose = true;
     Vector st(n * m);
     st = f.RevSparseJac(m, r, transpose);

     // check values
     ok &= (st[ 0 * m + 0 ] == true);  // y[0] does     depend on x[0]
     ok &= (st[ 1 * m + 0 ] == false); // y[0] does not depend on x[1]
     ok &= (st[ 0 * m + 1 ] == true);  // y[1] does     depend on x[0]
     ok &= (st[ 1 * m + 1 ] == true);  // y[1] does     depend on x[1]
     ok &= (st[ 0 * m + 2 ] == false); // y[2] does not depend on x[0]
     ok &= (st[ 1 * m + 2 ] == true);  // y[2] does     depend on x[1]

     return ok;
}
// define the template function SetCases<Vector>
template <typename Vector>  // vector class, elements of type std::set<size_t>
bool SetCases(void)
{     bool ok = true;
     using CppAD::AD;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.;
     ax[1] = 1.;

     // declare independent variables and start recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 3;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = ax[0];
     ay[1] = ax[0] * ax[1];
     ay[2] = ax[1];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // sparsity pattern for the identity matrix
     Vector r(m);
     size_t i;
     for(i = 0; i < m; i++)
     {     assert( r[i].empty() );
          r[i].insert(i);
     }

     // sparsity pattern for F'(x)
     Vector s(m);
     s = f.RevSparseJac(m, r);

     // check values
     bool found;

     // y[0] does     depend on x[0]
     found = s[0].find(0) != s[0].end();  ok &= (found == true);
     // y[0] does not depend on x[1]
     found = s[0].find(1) != s[0].end();  ok &= (found == false);
     // y[1] does     depend on x[0]
     found = s[1].find(0) != s[1].end();  ok &= (found == true);
     // y[1] does     depend on x[1]
     found = s[1].find(1) != s[1].end();  ok &= (found == true);
     // y[2] does not depend on x[0]
     found = s[2].find(0) != s[2].end();  ok &= (found == false);
     // y[2] does     depend on x[1]
     found = s[2].find(1) != s[2].end();  ok &= (found == true);

     // sparsity pattern for F'(x)^T
     bool transpose = true;
     Vector st(n);
     st = f.RevSparseJac(m, r, transpose);

     // y[0] does     depend on x[0]
     found = st[0].find(0) != st[0].end();  ok &= (found == true);
     // y[0] does not depend on x[1]
     found = st[1].find(0) != st[1].end();  ok &= (found == false);
     // y[1] does     depend on x[0]
     found = st[0].find(1) != st[0].end();  ok &= (found == true);
     // y[1] does     depend on x[1]
     found = st[1].find(1) != st[1].end();  ok &= (found == true);
     // y[2] does not depend on x[0]
     found = st[0].find(2) != st[0].end();  ok &= (found == false);
     // y[2] does     depend on x[1]
     found = st[1].find(2) != st[1].end();  ok &= (found == true);

     return ok;
}
} // End empty namespace
# include <vector>
# include <valarray>
bool RevSparseJac(void)
{     bool ok = true;
     // Run with Vector equal to four different cases
     // all of which are Simple Vectors with elements of type bool.
     ok &= BoolCases< CppAD::vectorBool     >();
     ok &= BoolCases< CppAD::vector  <bool> >();
     ok &= BoolCases< std::vector    <bool> >();
     ok &= BoolCases< std::valarray  <bool> >();


     // Run with Vector equal to two different cases both of which are
     // Simple Vectors with elements of type std::set<size_t>
     typedef std::set<size_t> set;
     ok &= SetCases< CppAD::vector  <set> >();
     ok &= SetCases< std::vector    <set> >();

     // Do not use valarray because its element access in the const case
     // returns a copy instead of a reference
     // ok &= SetCases< std::valarray  <set> >();

     return ok;
}

Input File: example/sparse/rev_sparse_jac.cpp
5.5.5: Reverse Mode Hessian Sparsity Patterns

5.5.5.a: Syntax
f.rev_hes_sparsity(
     
select_range, transpose, internal_bool, pattern_out
)


5.5.5.b: Purpose
We use @(@ F : \B{R}^n \rightarrow \B{R}^m @)@ to denote the 12.4.a: AD function corresponding to the operation sequence stored in f . Fix @(@ R \in \B{R}^{n \times \ell} @)@, @(@ s \in \B{R}^m @)@ and define the function @[@ H(x) = ( s^\R{T} F )^{(2)} ( x ) R @]@ Given a 12.4.j: sparsity pattern for @(@ R @)@ and for the vector @(@ s @)@, rev_hes_sparsity computes a sparsity pattern for @(@ H(x) @)@.

5.5.5.c: x
Note that the sparsity pattern @(@ H(x) @)@ corresponds to the operation sequence stored in f and does not depend on the argument x .

5.5.5.d: BoolVector
The type BoolVector is a 8.9: SimpleVector class with 8.9.b: elements of type bool.

5.5.5.e: SizeVector
The type SizeVector is a 8.9: SimpleVector class with 8.9.b: elements of type size_t.

5.5.5.f: f
The object f has prototype
     ADFun<Base> f

5.5.5.g: R
The sparsity pattern for the matrix @(@ R @)@ is specified by 5.5.1.f: pattern_in in the previous call
     
f.for_jac_sparsity(
     pattern_in, transpose, dependency, internal_bool, pattern_out
)


5.5.5.h: select_range
The argument select_range has prototype
     const BoolVector& select_range
It has size @(@ m @)@ and specifies which components of the vector @(@ s @)@ are non-zero; i.e., select_range[i] is true if and only if @(@ s_i @)@ is possibly non-zero.

5.5.5.i: transpose
This argument has prototype
     bool transpose
See 5.5.5.k: pattern_out below.

5.5.5.j: internal_bool
If this is true, calculations are done with sets represented by a vector of boolean values. Otherwise, a vector of sets of integers is used. This must be the same as in the previous call to f.for_jac_sparsity .

5.5.5.k: pattern_out
This argument has prototype
     sparse_rc<SizeVector>& pattern_out
The input value of pattern_out does not matter. If transpose is false (true), upon return pattern_out is a sparsity pattern for @(@ H(x) @)@ (@(@ H(x)^\R{T} @)@).

5.5.5.l: Sparsity for Entire Hessian
Suppose that @(@ R @)@ is the @(@ n \times n @)@ identity matrix. In this case, pattern_out is a sparsity pattern for @(@ ( s^\R{T} F )^{(2)} ( x ) @)@.

5.5.5.m: Example
The file 5.5.5.1: rev_hes_sparsity.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/core/rev_hes_sparsity.hpp
5.5.5.1: Reverse Mode Hessian Sparsity: Example and Test
# include <cppad/cppad.hpp>

bool rev_hes_sparsity(void)
{     bool ok = true;
     using CppAD::AD;
     typedef CPPAD_TESTVECTOR(size_t)     SizeVector;
     typedef CppAD::sparse_rc<SizeVector> sparsity;
     //
     // domain space vector
     size_t n = 3;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.;
     ax[1] = 1.;
     ax[2] = 2.;

     // declare independent variables and start recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 2;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = sin( ax[2] );
     ay[1] = ax[0] * ax[1];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // sparsity pattern for the identity matrix
     size_t nr     = n;
     size_t nc     = n;
     size_t nnz_in = n;
     sparsity pattern_in(nr, nc, nnz_in);
     for(size_t k = 0; k < nnz_in; k++)
     {     size_t r = k;
          size_t c = k;
          pattern_in.set(k, r, c);
     }
     // compute sparsity pattern for J(x) = F'(x)
     bool transpose       = false;
     bool dependency      = false;
     bool internal_bool   = false;
     sparsity pattern_out;
     f.for_jac_sparsity(
          pattern_in, transpose, dependency, internal_bool, pattern_out
     );
     //
     // compute sparsity pattern for H(x) = F_1''(x)
     CPPAD_TESTVECTOR(bool) select_range(m);
     select_range[0] = false;
     select_range[1] = true;
     f.rev_hes_sparsity(
          select_range, transpose, internal_bool, pattern_out
     );
     size_t nnz = pattern_out.nnz();
     ok        &= nnz == 2;
     ok        &= pattern_out.nr() == n;
     ok        &= pattern_out.nc() == n;
     {     // check results
          const SizeVector& row( pattern_out.row() );
          const SizeVector& col( pattern_out.col() );
          SizeVector row_major = pattern_out.row_major();
          //
          ok &= row[ row_major[0] ] ==  0  && col[ row_major[0] ] ==  1;
          ok &= row[ row_major[1] ] ==  1  && col[ row_major[1] ] ==  0;
     }
     //
     // compute sparsity pattern for H(x) = F_0''(x)
     select_range[0] = true;
     select_range[1] = false;
     f.rev_hes_sparsity(
          select_range, transpose, internal_bool, pattern_out
     );
     nnz = pattern_out.nnz();
     ok &= nnz == 1;
     ok &= pattern_out.nr() == n;
     ok &= pattern_out.nc() == n;
     {     // check results
          const SizeVector& row( pattern_out.row() );
          const SizeVector& col( pattern_out.col() );
          //
          ok &= row[0] ==  2  && col[0] ==  2;
     }
     return ok;
}

Input File: example/sparse/rev_hes_sparsity.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.5.6: Hessian Sparsity Pattern: Reverse Mode

5.5.6.a: Syntax
h = f.RevSparseHes(q, s)
h = f.RevSparseHes(q, s, transpose)

5.5.6.b: Purpose
We use @(@ F : \B{R}^n \rightarrow \B{R}^m @)@ to denote the 12.4.a: AD function corresponding to f . For a fixed matrix @(@ R \in \B{R}^{n \times q} @)@ and a fixed vector @(@ S \in \B{R}^{1 \times m} @)@, we define @[@ \begin{array}{rcl} H(x) & = & \partial_x \left[ \partial_u S * F[ x + R * u ] \right]_{u=0} \\ & = & R^\R{T} * (S * F)^{(2)} ( x ) \\ H(x)^\R{T} & = & (S * F)^{(2)} ( x ) * R \end{array} @]@ Given a 12.4.j: sparsity pattern for the matrix @(@ R @)@ and the vector @(@ S @)@, RevSparseHes returns a sparsity pattern for @(@ H(x) @)@.

5.5.6.c: f
The object f has prototype
     const ADFun<Base> f

5.5.6.d: x
If the operation sequence in f is 12.4.g.d: independent of the independent variables in @(@ x \in \B{R}^n @)@, the sparsity pattern is valid for all values of x (even if it has 4.4.4: CondExp or 4.6: VecAD operations).

5.5.6.e: q
The argument q has prototype
     size_t q
It specifies the number of columns in @(@ R \in \B{R}^{n \times q} @)@ and the number of rows in @(@ H(x) \in \B{R}^{q \times n} @)@. It must be the same value as in the previous 5.5.2: ForSparseJac call
     f.ForSparseJac(q, r, r_transpose)
Note that if r_transpose is true, r in the call above corresponds to @(@ R^\R{T} \in \B{R}^{q \times n} @)@.

5.5.6.f: transpose
The argument transpose has prototype
     bool transpose
The default value false is used when transpose is not present.

5.5.6.g: r
The matrix @(@ R @)@ is specified by the previous call
     f.ForSparseJac(q, r, transpose)
see 5.5.2.h: r . The type of the elements of 5.5.6.j: VectorSet must be the same as the type of the elements of r .

5.5.6.h: s
The argument s has prototype
     const VectorSet& s
(see 5.5.6.j: VectorSet below) If it has elements of type bool, its size is @(@ m @)@. If it has elements of type std::set<size_t>, its size is one and all the elements of s[0] are between zero and @(@ m - 1 @)@. It specifies a 12.4.j: sparsity pattern for the vector S .
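For example, the following sketch (the function make_s and the arguments m and k are hypothetical, for illustration only) builds the two equivalent representations of s that select the k-th component of the range:

# include <cppad/cppad.hpp>
# include <set>

// sketch: two representations of the sparsity pattern for S = e_k
void make_s(size_t m, size_t k)
{     // bool case: size m, s[i] is true iff S_i is possibly non-zero
     CppAD::vector<bool> s_bool(m);
     for(size_t i = 0; i < m; i++)
          s_bool[i] = (i == k);

     // std::set<size_t> case: size one, s[0] holds the non-zero indices
     CppAD::vector< std::set<size_t> > s_set(1);
     s_set[0].insert(k);
}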

5.5.6.i: h
The result h has prototype
     VectorSet h
(see 5.5.6.j: VectorSet below).

5.5.6.i.a: transpose false
If h has elements of type bool, its size is @(@ q * n @)@. If it has elements of type std::set<size_t>, its size is @(@ q @)@ and all the set elements are between zero and n-1 inclusive. It specifies a 12.4.j: sparsity pattern for the matrix @(@ H(x) @)@.

5.5.6.i.b: transpose true
If h has elements of type bool, its size is @(@ n * q @)@. If it has elements of type std::set<size_t>, its size is @(@ n @)@ and all the set elements are between zero and q-1 inclusive. It specifies a 12.4.j: sparsity pattern for the matrix @(@ H(x)^\R{T} @)@.

5.5.6.j: VectorSet
The type VectorSet must be a 8.9: SimpleVector class with 8.9.b: elements of type bool or std::set<size_t>; see 12.4.j: sparsity pattern for a discussion of the difference. The type of the elements of 5.5.6.j: VectorSet must be the same as the type of the elements of r .

5.5.6.k: Entire Sparsity Pattern
Suppose that @(@ q = n @)@ and @(@ R \in \B{R}^{n \times n} @)@ is the @(@ n \times n @)@ identity matrix. Further suppose that @(@ S @)@ is the k-th 12.4.f: elementary vector ; i.e. @[@ S_j = \left\{ \begin{array}{ll} 1 & {\rm if} \; j = k \\ 0 & {\rm otherwise} \end{array} \right. @]@ In this case, the corresponding value h is a sparsity pattern for the Hessian matrix @(@ F_k^{(2)} (x) \in \B{R}^{n \times n} @)@.

5.5.6.l: Example
The file 5.5.6.1: rev_sparse_hes.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise. The file 5.5.6.2.c: sparsity_sub.cpp contains an example and test of using RevSparseHes to compute the sparsity pattern for a subset of the Hessian.
Input File: cppad/core/rev_sparse_hes.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.5.6.1: Reverse Mode Hessian Sparsity: Example and Test

# include <cppad/cppad.hpp>
namespace { // -------------------------------------------------------------

// expected sparsity pattern
bool check_f0[] = {
     false, false, false,  // partials w.r.t x0 and (x0, x1, x2)
     false, false, false,  // partials w.r.t x1 and (x0, x1, x2)
     false, false, true    // partials w.r.t x2 and (x0, x1, x2)
};
bool check_f1[] = {
     false,  true, false,  // partials w.r.t x0 and (x0, x1, x2)
     true,  false, false,  // partials w.r.t x1 and (x0, x1, x2)
     false, false, false   // partials w.r.t x2 and (x0, x1, x2)
};

// define the template function BoolCases<Vector> in empty namespace
template <typename Vector> // vector class, elements of type bool
bool BoolCases(void)
{     bool ok = true;
     using CppAD::AD;

     // domain space vector
     size_t n = 3;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.;
     ax[1] = 1.;
     ax[2] = 2.;

     // declare independent variables and start recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 2;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = sin( ax[2] );
     ay[1] = ax[0] * ax[1];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // sparsity pattern for the identity matrix
     Vector r(n * n);
     size_t i, j;
     for(i = 0; i < n; i++)
     {     for(j = 0; j < n; j++)
               r[ i * n + j ] = (i == j);
     }

     // compute sparsity pattern for J(x) = F^{(1)} (x)
     f.ForSparseJac(n, r);

     // compute sparsity pattern for H(x) = F_0^{(2)} (x)
     Vector s(m);
     for(i = 0; i < m; i++)
          s[i] = false;
     s[0] = true;
     Vector h(n * n);
     h    = f.RevSparseHes(n, s);

     // check values
     for(i = 0; i < n; i++)
          for(j = 0; j < n; j++)
               ok &= (h[ i * n + j ] == check_f0[ i * n + j ] );

     // compute sparsity pattern for H(x) = F_1^{(2)} (x)
     for(i = 0; i < m; i++)
          s[i] = false;
     s[1] = true;
     h    = f.RevSparseHes(n, s);

     // check values
     for(i = 0; i < n; i++)
          for(j = 0; j < n; j++)
               ok &= (h[ i * n + j ] == check_f1[ i * n + j ] );

     // call that transposes the result
     bool transpose = true;
     h    = f.RevSparseHes(n, s, transpose);

     // This h is symmetric because R is symmetric, so transpose is not really tested here
     for(i = 0; i < n; i++)
          for(j = 0; j < n; j++)
               ok &= (h[ j * n + i ] == check_f1[ i * n + j ] );

     return ok;
}
// define the template function SetCases<Vector> in empty namespace
template <typename Vector> // vector class, elements of type std::set<size_t>
bool SetCases(void)
{     bool ok = true;
     using CppAD::AD;

     // domain space vector
     size_t n = 3;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.;
     ax[1] = 1.;
     ax[2] = 2.;

     // declare independent variables and start recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 2;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = sin( ax[2] );
     ay[1] = ax[0] * ax[1];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // sparsity pattern for the identity matrix
     Vector r(n);
     size_t i;
     for(i = 0; i < n; i++)
     {     assert( r[i].empty() );
          r[i].insert(i);
     }

     // compute sparsity pattern for J(x) = F^{(1)} (x)
     f.ForSparseJac(n, r);

     // compute sparsity pattern for H(x) = F_0^{(2)} (x)
     Vector s(1);
     assert( s[0].empty() );
     s[0].insert(0);
     Vector h(n);
     h    = f.RevSparseHes(n, s);

     // check values
     std::set<size_t>::iterator itr;
     size_t j;
     for(i = 0; i < n; i++)
     {     for(j = 0; j < n; j++)
          {     bool found = h[i].find(j) != h[i].end();
               ok        &= (found == check_f0[i * n + j]);
          }
     }

     // compute sparsity pattern for H(x) = F_1^{(2)} (x)
     s[0].clear();
     assert( s[0].empty() );
     s[0].insert(1);
     h    = f.RevSparseHes(n, s);

     // check values
     for(i = 0; i < n; i++)
     {     for(j = 0; j < n; j++)
          {     bool found = h[i].find(j) != h[i].end();
               ok        &= (found == check_f1[i * n + j]);
          }
     }

     // call that transposes the result
     bool transpose = true;
     h    = f.RevSparseHes(n, s, transpose);

     // This h is symmetric because R is symmetric, so transpose is not really tested here
     for(i = 0; i < n; i++)
     {     for(j = 0; j < n; j++)
          {     bool found = h[j].find(i) != h[j].end();
               ok        &= (found == check_f1[i * n + j]);
          }
     }

     return ok;
}
} // End empty namespace

# include <vector>
# include <valarray>
bool rev_sparse_hes(void)
{     bool ok = true;
     // Run with Vector equal to four different cases
     // all of which are Simple Vectors with elements of type bool.
     ok &= BoolCases< CppAD::vector  <bool> >();
     ok &= BoolCases< CppAD::vectorBool     >();
     ok &= BoolCases< std::vector    <bool> >();
     ok &= BoolCases< std::valarray  <bool> >();

     // Run with Vector equal to two different cases both of which are
     // Simple Vectors with elements of type std::set<size_t>
     typedef std::set<size_t> set;
     ok &= SetCases< CppAD::vector  <set> >();
     ok &= SetCases< std::vector    <set> >();

     // Do not use valarray because its element access in the const case
     // returns a copy instead of a reference
     // ok &= SetCases< std::valarray  <set> >();

     return ok;
}


Input File: example/sparse/rev_sparse_hes.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.5.6.2: Sparsity Patterns For a Subset of Variables: Example and Test

5.5.6.2.a: See Also
5.6.4.3: sparse_sub_hes.cpp , 5.6.4.2: sub_sparse_hes.cpp .

5.5.6.2.b: ForSparseJac
The routine 5.5.2: ForSparseJac is used to compute the sparsity for both the full Jacobian (see s ) and a subset of the Jacobian (see s2 ).

5.5.6.2.c: RevSparseHes
The routine 5.5.6: RevSparseHes is used to compute the sparsity for both the full Hessian (see h ) and a subset of the Hessian (see h2 ).
# include <cppad/cppad.hpp>

bool sparsity_sub(void)
{     // C++ source code
     bool ok = true;
     //
     using std::cout;
     using CppAD::vector;
     using CppAD::AD;
     using CppAD::vectorBool;

     size_t n = 4;
     size_t m = n-1;
     vector< AD<double> > ax(n), ay(m);
     for(size_t j = 0; j < n; j++)
          ax[j] = double(j+1);
     CppAD::Independent(ax);
     for(size_t i = 0; i < m; i++)
          ay[i] = (ax[i+1] - ax[i]) * (ax[i+1] - ax[i]);
     CppAD::ADFun<double> f(ax, ay);

     // Evaluate the full Jacobian sparsity pattern for f
     vectorBool r(n * n), s(m * n);
     for(size_t j = 0 ; j < n; j++)
     {     for(size_t i = 0; i < n; i++)
               r[i * n + j] = (i == j);
     }
     s = f.ForSparseJac(n, r);

     // evaluate the sparsity for the Hessian of f_0 + ... + f_{m-1}
     vectorBool t(m), h(n * n);
     for(size_t i = 0; i < m; i++)
          t[i] = true;
     h = f.RevSparseHes(n, t);

     // evaluate the Jacobian sparsity pattern for first n/2 components of x
     size_t n2 = n / 2;
     vectorBool r2(n * n2), s2(m * n2);
     for(size_t j = 0 ; j < n2; j++)
     {     for(size_t i = 0; i < n; i++)
               r2[i * n2 + j] = (i == j);
     }
     s2 = f.ForSparseJac(n2, r2);

     // evaluate the sparsity for the subset of the Hessian of
     // f_0 + ... + f_{m-1} where the first partial is with respect to
     // the first n/2 components of x
     vectorBool h2(n2 * n);
     h2 = f.RevSparseHes(n2, t);

     // check sparsity pattern for Jacobian
     for(size_t i = 0; i < m; i++)
     {     for(size_t j = 0; j < n2; j++)
               ok &= s2[i * n2 + j] == s[i * n + j];
     }

     // check sparsity pattern for Hessian
     for(size_t i = 0; i < n2; i++)
     {     for(size_t j = 0; j < n; j++)
               ok &= h2[i * n + j] == h[i * n + j];
     }
     return ok;
}

Input File: example/sparse/sparsity_sub.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.5.7: Forward Mode Hessian Sparsity Patterns

5.5.7.a: Syntax
f.for_hes_sparsity(
     select_domain, select_range, internal_bool, pattern_out
)


5.5.7.b: Purpose
We use @(@ F : \B{R}^n \rightarrow \B{R}^m @)@ to denote the 12.4.a: AD function corresponding to the operation sequence stored in f . Fix a diagonal matrix @(@ D \in \B{R}^{n \times n} @)@, a vector @(@ s \in \B{R}^m @)@ and define the function @[@ H(x) = D ( s^\R{T} F )^{(2)} ( x ) D @]@ Given the sparsity for @(@ D @)@ and @(@ s @)@, for_hes_sparsity computes a sparsity pattern for @(@ H(x) @)@.

5.5.7.c: x
Note that the sparsity pattern for @(@ H(x) @)@ corresponds to the operation sequence stored in f and does not depend on the argument x .

5.5.7.d: BoolVector
The type BoolVector is a 8.9: SimpleVector class with 8.9.b: elements of type bool.

5.5.7.e: SizeVector
The type SizeVector is a 8.9: SimpleVector class with 8.9.b: elements of type size_t.

5.5.7.f: f
The object f has prototype
     ADFun<Base> f

5.5.7.g: select_domain
The argument select_domain has prototype
     const BoolVector& select_domain
It has size @(@ n @)@ and specifies which components of the diagonal of @(@ D @)@ are non-zero; i.e., select_domain[j] is true if and only if @(@ D_{j,j} @)@ is possibly non-zero.

5.5.7.h: select_range
The argument select_range has prototype
     const BoolVector& select_range
It has size @(@ m @)@ and specifies which components of the vector @(@ s @)@ are non-zero; i.e., select_range[i] is true if and only if @(@ s_i @)@ is possibly non-zero.

5.5.7.i: internal_bool
If this is true, calculations are done with sets represented by a vector of boolean values. Otherwise, a vector of sets of integers is used.

5.5.7.j: pattern_out
This argument has prototype
     sparse_rc<SizeVector>& pattern_out
The input value of pattern_out does not matter. Upon return pattern_out is a sparsity pattern for @(@ H(x) @)@.

5.5.7.k: Sparsity for Entire Hessian
Suppose that @(@ D @)@ is the @(@ n \times n @)@ identity matrix. In this case, pattern_out is a sparsity pattern for @(@ ( s^\R{T} F )^{(2)} ( x ) @)@.
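For example, a minimal sketch (not one of the installed examples; the helper name entire_for_hes_sparsity is hypothetical) that selects the whole domain and every component of @(@ s @)@:

# include <cppad/cppad.hpp>

// sketch: forward mode sparsity for the Hessian of the sum of all components
template <class SizeVector>
CppAD::sparse_rc<SizeVector> entire_for_hes_sparsity(CppAD::ADFun<double>& f)
{     size_t n = f.Domain();
     size_t m = f.Range();
     CPPAD_TESTVECTOR(bool) select_domain(n), select_range(m);
     for(size_t j = 0; j < n; j++)
          select_domain[j] = true;   // D is the identity matrix
     for(size_t i = 0; i < m; i++)
          select_range[i] = true;    // every s_i is possibly non-zero
     bool internal_bool = false;
     CppAD::sparse_rc<SizeVector> pattern_out;
     f.for_hes_sparsity(
          select_domain, select_range, internal_bool, pattern_out
     );
     return pattern_out;
}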

5.5.7.l: Algorithm
See Algorithm II in Computing sparse Hessians with automatic differentiation by Andrea Walther. Note that s provides the information needed to exclude 'dead ends' from the sparsity pattern.

5.5.7.m: Example
The file 5.5.7.1: for_hes_sparsity.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/core/for_hes_sparsity.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.5.7.1: Forward Mode Hessian Sparsity: Example and Test
# include <cppad/cppad.hpp>

bool for_hes_sparsity(void)
{     bool ok = true;
     using CppAD::AD;
     typedef CPPAD_TESTVECTOR(size_t)     SizeVector;
     typedef CppAD::sparse_rc<SizeVector> sparsity;
     //
     // domain space vector
     size_t n = 3;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.;
     ax[1] = 1.;
     ax[2] = 2.;

     // declare independent variables and start recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 2;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = sin( ax[2] );
     ay[1] = ax[0] * ax[1];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // include all x components in sparsity pattern
     CPPAD_TESTVECTOR(bool) select_domain(n);
     for(size_t j = 0; j < n; j++)
          select_domain[j] = true;

     // compute sparsity pattern for H(x) = F_1''(x)
     CPPAD_TESTVECTOR(bool) select_range(m);
     select_range[0]    = false;
     select_range[1]    = true;
     bool internal_bool = true;
     sparsity pattern_out;
     f.for_hes_sparsity(
          select_domain, select_range, internal_bool, pattern_out
     );
     size_t nnz = pattern_out.nnz();
     ok        &= nnz == 2;
     ok        &= pattern_out.nr() == n;
     ok        &= pattern_out.nc() == n;
     {     // check results
          const SizeVector& row( pattern_out.row() );
          const SizeVector& col( pattern_out.col() );
          SizeVector row_major = pattern_out.row_major();
          //
          ok &= row[ row_major[0] ] ==  0  && col[ row_major[0] ] ==  1;
          ok &= row[ row_major[1] ] ==  1  && col[ row_major[1] ] ==  0;
     }
     //
     // compute sparsity pattern for H(x) = F_0''(x)
     select_range[0] = true;
     select_range[1] = false;
     f.for_hes_sparsity(
          select_domain, select_range, internal_bool, pattern_out
     );
     nnz = pattern_out.nnz();
     ok &= nnz == 1;
     ok &= pattern_out.nr() == n;
     ok &= pattern_out.nc() == n;
     {     // check results
          const SizeVector& row( pattern_out.row() );
          const SizeVector& col( pattern_out.col() );
          //
          ok &= row[0] ==  2  && col[0] ==  2;
     }
     return ok;
}

Input File: example/sparse/for_hes_sparsity.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.5.8: Hessian Sparsity Pattern: Forward Mode

5.5.8.a: Syntax
h = f.ForSparseHes(r, s)

5.5.8.b: Purpose
We use @(@ F : \B{R}^n \rightarrow \B{R}^m @)@ to denote the 12.4.a: AD function corresponding to f . We define @[@ \begin{array}{rcl} H(x) & = & \partial_x \left[ \partial_u S \cdot F[ x + R \cdot u ] \right]_{u=0} \\ & = & R^\R{T} \cdot (S \cdot F)^{(2)} ( x ) \cdot R \end{array} @]@ where @(@ R \in \B{R}^{n \times n} @)@ is a diagonal matrix and @(@ S \in \B{R}^{1 \times m} @)@ is a row vector. Given a 12.4.j: sparsity pattern for the diagonal of @(@ R @)@ and the vector @(@ S @)@, ForSparseHes returns a sparsity pattern for @(@ H(x) @)@.

5.5.8.c: f
The object f has prototype
     const ADFun<Base> f

5.5.8.d: x
If the operation sequence in f is 12.4.g.d: independent of the independent variables in @(@ x \in \B{R}^n @)@, the sparsity pattern is valid for all values of x (even if it has 4.4.4: CondExp or 4.6: VecAD operations).

5.5.8.e: r
The argument r has prototype
     const VectorSet& r
(see 5.5.8.h: VectorSet below) If it has elements of type bool, its size is @(@ n @)@. If it has elements of type std::set<size_t>, its size is one and all the elements of r[0] are between zero and @(@ n - 1 @)@. It specifies a 12.4.j: sparsity pattern for the diagonal of @(@ R @)@. The fewer non-zero elements in this sparsity pattern, the faster the calculation should be and the more sparse @(@ H(x) @)@ should be.

5.5.8.f: s
The argument s has prototype
     const VectorSet& s
(see 5.5.8.h: VectorSet below) If it has elements of type bool, its size is @(@ m @)@. If it has elements of type std::set<size_t>, its size is one and all the elements of s[0] are between zero and @(@ m - 1 @)@. It specifies a 12.4.j: sparsity pattern for the vector S . The fewer non-zero elements in this sparsity pattern, the faster the calculation should be and the more sparse @(@ H(x) @)@ should be.

5.5.8.g: h
The result h has prototype
     VectorSet h
(see 5.5.8.h: VectorSet below). If h has elements of type bool, its size is @(@ n * n @)@. If it has elements of type std::set<size_t>, its size is @(@ n @)@ and all the set elements are between zero and n-1 inclusive. It specifies a 12.4.j: sparsity pattern for the matrix @(@ H(x) @)@.

5.5.8.h: VectorSet
The type VectorSet must be a 8.9: SimpleVector class with 8.9.b: elements of type bool or std::set<size_t>; see 12.4.j: sparsity pattern for a discussion of the difference. The type of the elements of 5.5.8.h: VectorSet must be the same as the type of the elements of r .

5.5.8.i: Algorithm
See Algorithm II in Computing sparse Hessians with automatic differentiation by Andrea Walther. Note that s provides the information needed to exclude 'dead ends' from the sparsity pattern.

5.5.8.j: Example
The file 5.5.8.1: for_sparse_hes.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/core/for_sparse_hes.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.5.8.1: Forward Mode Hessian Sparsity: Example and Test

# include <cppad/cppad.hpp>
namespace { // -------------------------------------------------------------

// expected sparsity pattern
bool check_f0[] = {
     false, false, false,  // partials w.r.t x0 and (x0, x1, x2)
     false, false, false,  // partials w.r.t x1 and (x0, x1, x2)
     false, false, true    // partials w.r.t x2 and (x0, x1, x2)
};
bool check_f1[] = {
     false,  true, false,  // partials w.r.t x0 and (x0, x1, x2)
     true,  false, false,  // partials w.r.t x1 and (x0, x1, x2)
     false, false, false   // partials w.r.t x2 and (x0, x1, x2)
};

// define the template function BoolCases<Vector> in empty namespace
template <typename Vector> // vector class, elements of type bool
bool BoolCases(bool optimize)
{     bool ok = true;
     using CppAD::AD;

     // domain space vector
     size_t n = 3;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.;
     ax[1] = 1.;
     ax[2] = 2.;

     // declare independent variables and start recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 2;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = sin( ax[2] ) + ax[0] + ax[1] + ax[2];
     ay[1] = ax[0] * ax[1];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);
     if( optimize )
          f.optimize();

     // sparsity pattern for diagonal of identity matrix
     Vector r(n);
     size_t i, j;
     for(i = 0; i < n; i++)
          r[ i ] = true;

     // compute sparsity pattern for H(x) = F_0^{(2)} (x)
     Vector s(m);
     for(i = 0; i < m; i++)
          s[i] = false;
     s[0] = true;
     Vector h(n * n);
     h    = f.ForSparseHes(r, s);

     // check values
     for(i = 0; i < n; i++)
          for(j = 0; j < n; j++)
               ok &= (h[ i * n + j ] == check_f0[ i * n + j ] );

     // compute sparsity pattern for H(x) = F_1^{(2)} (x)
     for(i = 0; i < m; i++)
          s[i] = false;
     s[1] = true;
     h    = f.ForSparseHes(r, s);

     // check values
     for(i = 0; i < n; i++)
          for(j = 0; j < n; j++)
               ok &= (h[ i * n + j ] == check_f1[ i * n + j ] );

     return ok;
}
// define the template function SetCases<Vector> in empty namespace
template <typename Vector> // vector class, elements of type std::set<size_t>
bool SetCases(bool optimize)
{     bool ok = true;
     using CppAD::AD;

     // domain space vector
     size_t n = 3;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.;
     ax[1] = 1.;
     ax[2] = 2.;

     // declare independent variables and start recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 2;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = sin( ax[2] );
     ay[1] = ax[0] * ax[1];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);
     if( optimize )
          f.optimize();

     // sparsity pattern for the diagonal of the identity matrix
     Vector r(1);
     size_t i;
     for(i = 0; i < n; i++)
          r[0].insert(i);

     // compute sparsity pattern for H(x) = F_0^{(2)} (x)
     Vector s(1);
     assert( s[0].empty() );
     s[0].insert(0);
     Vector h(n);
     h    = f.ForSparseHes(r, s);

     // check values
     std::set<size_t>::iterator itr;
     size_t j;
     for(i = 0; i < n; i++)
     {     for(j = 0; j < n; j++)
          {     bool found = h[i].find(j) != h[i].end();
               ok        &= (found == check_f0[i * n + j]);
          }
     }

     // compute sparsity pattern for H(x) = F_1^{(2)} (x)
     s[0].clear();
     assert( s[0].empty() );
     s[0].insert(1);
     h    = f.ForSparseHes(r, s);

     // check values
     for(i = 0; i < n; i++)
     {     for(j = 0; j < n; j++)
          {     bool found = h[i].find(j) != h[i].end();
               ok        &= (found == check_f1[i * n + j]);
          }
     }

     return ok;
}
} // End empty namespace

# include <vector>
# include <valarray>
bool for_sparse_hes(void)
{     bool ok = true;
     for(size_t k = 0; k < 2; k++)
     {     bool optimize = bool(k);

          // Run with Vector equal to four different cases
          // all of which are Simple Vectors with elements of type bool.
          ok &= BoolCases< CppAD::vector  <bool> >(optimize);
          ok &= BoolCases< CppAD::vectorBool     >(optimize);
          ok &= BoolCases< std::vector    <bool> >(optimize);
          ok &= BoolCases< std::valarray  <bool> >(optimize);

          // Run with Vector equal to two different cases both of which are
          // Simple Vectors with elements of type std::set<size_t>
          typedef std::set<size_t> set;
          ok &= SetCases< CppAD::vector  <set> >(optimize);
          ok &= SetCases< std::vector    <set> >(optimize);

          // Do not use valarray because its element access in the const case
          // returns a copy instead of a reference
          // ok &= SetCases< std::valarray  <set> >(optimize);
     }
     return ok;
}


Input File: example/sparse/for_sparse_hes.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.5.9: Computing Dependency: Example and Test

5.5.9.a: Discussion
The partial of a dependent variable with respect to an independent variable might always be zero even though the dependent variable depends on the value of the independent variable. Consider the following case @[@ f(x) = {\rm sign} (x) = \left\{ \begin{array}{rl} +1 & {\rm if} \; x > 0 \\ 0 & {\rm if} \; x = 0 \\ -1 & {\rm if} \; x < 0 \end{array} \right. @]@ In this case the value of @(@ f(x) @)@ depends on the value of @(@ x @)@ but CppAD always returns zero for the derivative of the 4.4.2.21: sign function.

5.5.9.b: Dependency Pattern
If the i-th dependent variable depends on the value of the j-th independent variable, the corresponding entry in the dependency pattern is non-zero (true). Otherwise it is zero (false). CppAD uses 12.4.j: sparsity patterns to represent dependency patterns.

5.5.9.c: Computation
The dependency argument to 5.5.1.h: for_jac_sparsity and 5.5.4.g: RevSparseJac is a flag that signals that the dependency pattern (instead of the sparsity pattern) is computed.
# include <cppad/cppad.hpp>
namespace {
     double heavyside(const double& x)
     {     if( x <= 0.0 )
               return 0.0;
          return 1.0;
     }
     CPPAD_DISCRETE_FUNCTION(double, heavyside)
}

bool dependency(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     typedef CPPAD_TESTVECTOR(size_t)     SizeVector;
     typedef CppAD::sparse_rc<SizeVector> sparsity;

     // VecAD object for use later
     CppAD::VecAD<double> vec_ad(2);
     vec_ad[0] = 0.0;
     vec_ad[1] = 1.0;

     // domain space vector
     size_t n  = 5;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     for(size_t j = 0; j < n; j++)
          ax[j] = AD<double>(j + 1);

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     // some AD constants
     AD<double> azero(0.0), aone(1.0);

     // range space vector
     size_t m  = n;
     size_t m1 = n - 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     // Note that ay[m1 - j] depends on ax[j]
     ay[m1 - 0] = sign( ax[0] );
     ay[m1 - 1] = CondExpLe( ax[1], azero, azero, aone);
     ay[m1 - 2] = CondExpLe( azero, ax[2], azero, aone);
     ay[m1 - 3] = heavyside( ax[3] );
     ay[m1 - 4] = vec_ad[ ax[4] - AD<double>(4.0) ];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // sparsity pattern for n by n identity matrix
     size_t nr  = n;
     size_t nc  = n;
     size_t nnz = n;
     sparsity pattern_in(nr, nc, nnz);
     for(size_t k = 0; k < nnz; k++)
     {     size_t r = k;
          size_t c = k;
          pattern_in.set(k, r, c);
     }

     // compute dependency pattern
     bool transpose     = false;
     bool dependency    = true;  // compute dependency pattern, not sparsity
     bool internal_bool = true;  // does not affect result
     sparsity pattern_out;
     f.for_jac_sparsity(
          pattern_in, transpose, dependency, internal_bool, pattern_out
     );
     const SizeVector& row( pattern_out.row() );
     const SizeVector& col( pattern_out.col() );
     SizeVector col_major = pattern_out.col_major();

     // check result
     ok &= pattern_out.nr()  == n;
     ok &= pattern_out.nc()  == n;
     ok &= pattern_out.nnz() == n;
     for(size_t k = 0; k < n; k++)
     {     ok &= row[ col_major[k] ] == m1 - k;
          ok &= col[ col_major[k] ] == k;
     }
     // -----------------------------------------------------------
     // RevSparseJac and set dependency
     CppAD::vector<    std::set<size_t> > eye_set(m), depend_set(m);
     for(size_t i = 0; i < m; i++)
     {     ok &= eye_set[i].empty();
          eye_set[i].insert(i);
     }
     depend_set = f.RevSparseJac(n, eye_set, transpose, dependency);
     for(size_t i = 0; i < m; i++)
     {     std::set<size_t> check;
          check.insert(m1 - i);
          ok &= depend_set[i] == check;
     }
     dependency = false;
     depend_set = f.RevSparseJac(n, eye_set, transpose, dependency);
     for(size_t i = 0; i < m; i++)
          ok &= depend_set[i].empty();
     return ok;
}

Input File: example/sparse/dependency.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.5.10: Preferred Sparsity Patterns: Row and Column Indices: Example and Test

5.5.10.a: Purpose
This example shows how to use row and column index sparsity patterns 8.27: sparse_rc to compute sparse Jacobians and Hessians. This became the preferred way to represent sparsity on 12.7.1.bd: 2017-02-09 .
# include <cppad/cppad.hpp>
namespace {
     using CppAD::sparse_rc;
     using CppAD::sparse_rcv;
     using CppAD::NearEqual;
     //
     typedef CPPAD_TESTVECTOR(bool)                b_vector;
     typedef CPPAD_TESTVECTOR(size_t)              s_vector;
     typedef CPPAD_TESTVECTOR(double)              d_vector;
     typedef CPPAD_TESTVECTOR( CppAD::AD<double> ) a_vector;
     //
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
     // -----------------------------------------------------------------------
     // function f(x) that we are computing sparse results for
     // -----------------------------------------------------------------------
     a_vector fun(const a_vector& x)
     {     size_t n  = x.size();
          a_vector ret(n + 1);
          for(size_t i = 0; i < n; i++)
          {     size_t j = (i + 1) % n;
               ret[i]     = x[i] * x[i] * x[j];
          }
          ret[n] = 0.0;
          return ret;
     }
     // -----------------------------------------------------------------------
     // Jacobian
     // -----------------------------------------------------------------------
     bool check_jac(
          const d_vector&                       x      ,
          const sparse_rcv<s_vector, d_vector>& subset )
     {     bool ok  = true;
          size_t n = x.size();
          //
          ok &= subset.nnz() == 2 * n;
          const s_vector& row( subset.row() );
          const s_vector& col( subset.col() );
          const d_vector& val( subset.val() );
          s_vector row_major = subset.row_major();
          for(size_t i = 0; i < n; i++)
          {     size_t j = (i + 1) % n;
               size_t k = 2 * i;
               //
               ok &= row[ row_major[k] ]   == i;
               ok &= row[ row_major[k+1] ] == i;
               //
               size_t ck  = col[ row_major[k] ];
               size_t ckp = col[ row_major[k+1] ];
               double vk  = val[ row_major[k] ];
               double vkp = val[ row_major[k+1] ];
               //
               // put diagonal element first
               if( j < i )
               {     std::swap(ck, ckp);
                    std::swap(vk, vkp);
               }
               // diagonal element
               ok &= ck == i;
               ok &= NearEqual( vk, 2.0 * x[i] * x[j], eps99, eps99 );
               // off diagonal element
               ok &= ckp == j;
               ok &= NearEqual( vkp, x[i] * x[i], eps99, eps99 );
          }
          return ok;
     }
     // Use forward mode for Jacobian and sparsity pattern
     bool forward_jac(CppAD::ADFun<double>& f)
     {     bool ok = true;
          size_t n = f.Domain();
          //
          // sparsity pattern for identity matrix
          sparse_rc<s_vector> pattern_in(n, n, n);
          for(size_t k = 0; k < n; k++)
               pattern_in.set(k, k, k);
          //
          // sparsity pattern for Jacobian
          bool transpose     = false;
          bool dependency    = false;
          bool internal_bool = false;
          sparse_rc<s_vector> pattern_out;
          f.for_jac_sparsity(
               pattern_in, transpose, dependency, internal_bool, pattern_out
          );
          //
          // compute entire Jacobian
          size_t                         group_max = 1;
          std::string                    coloring  = "cppad";
          sparse_rcv<s_vector, d_vector> subset( pattern_out );
          CppAD::sparse_jac_work         work;
          d_vector x(n);
          for(size_t j = 0; j < n; j++)
               x[j] = double(j + 2);
          size_t n_sweep = f.sparse_jac_for(
               group_max, x, subset, pattern_out, coloring, work
          );
          //
          // check Jacobian
          ok &= check_jac(x, subset);
          ok &= n_sweep == 2;
          //
          return ok;
     }
     // Use reverse mode for Jacobian and sparsity pattern
     bool reverse_jac(CppAD::ADFun<double>& f)
     {     bool ok = true;
          size_t n = f.Domain();
          size_t m = f.Range();
          //
          // sparsity pattern for identity matrix
          sparse_rc<s_vector> pattern_in(m, m, m);
          for(size_t k = 0; k < m; k++)
               pattern_in.set(k, k, k);
          //
          // sparsity pattern for Jacobian
          bool transpose     = false;
          bool dependency    = false;
          bool internal_bool = false;
          sparse_rc<s_vector> pattern_out;
          f.rev_jac_sparsity(
               pattern_in, transpose, dependency, internal_bool, pattern_out
          );
          //
          // compute entire Jacobian
          std::string                    coloring  = "cppad";
          sparse_rcv<s_vector, d_vector> subset( pattern_out );
          CppAD::sparse_jac_work         work;
          d_vector x(n);
          for(size_t j = 0; j < n; j++)
               x[j] = double(j + 2);
          size_t n_sweep = f.sparse_jac_rev(
               x, subset, pattern_out, coloring, work
          );
          //
          // check Jacobian
          ok &= check_jac(x, subset);
          ok &= n_sweep == 2;
          //
          return ok;
     }
     // ------------------------------------------------------------------------
     // Hessian
     // ------------------------------------------------------------------------
     bool check_hes(
          size_t                                i      ,
          const d_vector&                       x      ,
          const sparse_rcv<s_vector, d_vector>& subset )
     {     bool ok  = true;
          size_t n = x.size();
          size_t j = (i + 1) % n;
          //
          ok &= subset.nnz() == 3;
          const s_vector& row( subset.row() );
          const s_vector& col( subset.col() );
          const d_vector& val( subset.val() );
          s_vector row_major = subset.row_major();
          //
          double v0 = val[ row_major[0] ];
          double v1 = val[ row_major[1] ];
          double v2 = val[ row_major[2] ];
          if( j < i )
          {     ok &= row[ row_major[0] ] == j;
               ok &= col[ row_major[0] ] == i;
               ok &= NearEqual( v0, 2.0 * x[i], eps99, eps99 );
               //
               ok &= row[ row_major[1] ] == i;
               ok &= col[ row_major[1] ] == j;
               ok &= NearEqual( v1, 2.0 * x[i], eps99, eps99 );
               //
               ok &= row[ row_major[2] ] == i;
               ok &= col[ row_major[2] ] == i;
               ok &= NearEqual( v2, 2.0 * x[j], eps99, eps99 );
          }
          else
          {     ok &= row[ row_major[0] ] == i;
               ok &= col[ row_major[0] ] == i;
               ok &= NearEqual( v0, 2.0 * x[j], eps99, eps99 );
               //
               ok &= row[ row_major[1] ] == i;
               ok &= col[ row_major[1] ] == j;
               ok &= NearEqual( v1, 2.0 * x[i], eps99, eps99 );
               //
               ok &= row[ row_major[2] ] == j;
               ok &= col[ row_major[2] ] == i;
               ok &= NearEqual( v2, 2.0 * x[i], eps99, eps99 );
          }
          return ok;
     }
     // Use forward mode for Hessian and sparsity pattern
     bool forward_hes(CppAD::ADFun<double>& f)
     {     bool ok = true;
          size_t n = f.Domain();
          size_t m = f.Range();
          //
          b_vector select_domain(n);
          for(size_t j = 0; j < n; j++)
               select_domain[j] = true;
          sparse_rc<s_vector> pattern_out;
          //
          for(size_t i = 0; i < m; i++)
          {     // select i-th component of range
               b_vector select_range(m);
               d_vector w(m);
               for(size_t k = 0; k < m; k++)
               {     select_range[k] = k == i;
                    w[k] = 0.0;
                    if( k == i )
                         w[k] = 1.0;
               }
               //
               bool internal_bool = false;
               f.for_hes_sparsity(
                    select_domain, select_range, internal_bool, pattern_out
               );
               //
               // compute Hessian for i-th component function
               std::string                    coloring  = "cppad.symmetric";
               sparse_rcv<s_vector, d_vector> subset( pattern_out );
               CppAD::sparse_hes_work         work;
               d_vector x(n);
               for(size_t j = 0; j < n; j++)
                    x[j] = double(j + 2);
               size_t n_sweep = f.sparse_hes(
                    x, w, subset, pattern_out, coloring, work
               );
               //
               // check Hessian
               if( i == n )
                    ok &= subset.nnz() == 0;
               else
               {     ok &= check_hes(i, x, subset);
                    ok &= n_sweep == 1;
               }
          }
          return ok;
     }
     // Use reverse mode for Hessian and sparsity pattern
     bool reverse_hes(CppAD::ADFun<double>& f)
     {     bool ok = true;
          size_t n = f.Domain();
          size_t m = f.Range();
          //
          // n by n identity matrix
          sparse_rc<s_vector> pattern_in(n, n, n);
          for(size_t j = 0; j < n; j++)
               pattern_in.set(j, j, j);
          //
          bool transpose     = false;
          bool dependency    = false;
          bool internal_bool = true;
          sparse_rc<s_vector> pattern_out;
          //
          f.for_jac_sparsity(
               pattern_in, transpose, dependency, internal_bool, pattern_out
          );
          //
          for(size_t i = 0; i < m; i++)
          {     // select i-th component of range
               b_vector select_range(m);
               d_vector w(m);
               for(size_t k = 0; k < m; k++)
               {     select_range[k] = k == i;
                    w[k] = 0.0;
                    if( k == i )
                         w[k] = 1.0;
               }
               //
               f.rev_hes_sparsity(
                    select_range, transpose, internal_bool, pattern_out
               );
               //
               // compute Hessian for i-th component function
               std::string                    coloring  = "cppad.symmetric";
               sparse_rcv<s_vector, d_vector> subset( pattern_out );
               CppAD::sparse_hes_work         work;
               d_vector x(n);
               for(size_t j = 0; j < n; j++)
                    x[j] = double(j + 2);
               size_t n_sweep = f.sparse_hes(
                    x, w, subset, pattern_out, coloring, work
               );
               //
               // check Hessian
               if( i == n )
                    ok &= subset.nnz() == 0;
               else
               {     ok &= check_hes(i, x, subset);
                    ok &= n_sweep == 1;
               }
          }
          return ok;
     }
}
// driver for all of the cases above
bool rc_sparsity(void)
{     bool ok = true;
     //
     // record the function
     size_t n = 20;
     size_t m = n + 1;
     a_vector x(n), y(m);
     for(size_t j = 0; j < n; j++)
          x[j] = CppAD::AD<double>(j+1);
     CppAD::Independent(x);
     y = fun(x);
     CppAD::ADFun<double> f(x, y);
     //
     // run the example / tests
     ok &= forward_jac(f);
     ok &= reverse_jac(f);
     ok &= forward_hes(f);
     ok &= reverse_hes(f);
     //
     return ok;
}

Input File: example/sparse/rc_sparsity.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.5.11: Subgraph Dependency Sparsity Patterns

5.5.11.a: Syntax
f.subgraph_sparsity(
     select_domain, select_range, transpose, pattern_out
)


5.5.11.b: Notation
We use @(@ F : \B{R}^n \rightarrow \B{R}^m @)@ to denote the 12.4.a: AD function corresponding to the operation sequence stored in f .

5.5.11.c: Method
This routine uses a subgraph technique. To be specific, for each dependent variable, it constructs a subgraph of the operation sequence to determine which independent variables affect it. This avoids the overhead of performing set operations that is inherent in other methods for computing sparsity patterns.

5.5.11.d: Atomic Function
The sparsity calculation for 4.4.7.2.3: atomic functions in the f operation sequence is not efficient. To be specific, each atomic function is treated as if all of its outputs depend on all of its inputs. This may be improved upon in the future; see the 12.6.b.a: subgraph atomic functions wish list item.

5.5.11.e: BoolVector
The type BoolVector is a 8.9: SimpleVector class with 8.9.b: elements of type bool.

5.5.11.f: SizeVector
The type SizeVector is a 8.9: SimpleVector class with 8.9.b: elements of type size_t.

5.5.11.g: f
The object f has prototype
     ADFun<Base> f

5.5.11.h: select_domain
The argument select_domain has prototype
     const BoolVector& select_domain
It has size @(@ n @)@ and specifies which independent variables to include in the calculation. If not all the independent variables are included in the calculation, a forward pass on the operation sequence is used to determine which nodes may be in the subgraphs.

5.5.11.i: select_range
The argument select_range has prototype
     const BoolVector& select_range
It has size @(@ m @)@ and specifies which components of the range to include in the calculation. A subgraph is built for each dependent variable and the selected set of independent variables.

5.5.11.j: transpose
This argument has prototype
     bool transpose
If transpose is false (true), upon return pattern_out is a sparsity pattern for @(@ J(x) @)@ (@(@ J(x)^\R{T} @)@) defined below.

5.5.11.k: pattern_out
This argument has prototype
     sparse_rc<SizeVector>& pattern_out
The input value of pattern_out does not matter. Upon return pattern_out is a 5.5.9.b: dependency pattern for @(@ F(x) @)@. The pattern has @(@ m @)@ rows and @(@ n @)@ columns. If select_domain[j] is true, select_range[i] is true, and @(@ F_i (x) @)@ depends on @(@ x_j @)@, then the pair @(@ (i, j) @)@ is in pattern_out . Note that this is also a sparsity pattern for the Jacobian @[@ J(x) = R F^{(1)} (x) D @]@ where @(@ D @)@ (@(@ R @)@) is the diagonal matrix corresponding to select_domain ( select_range ).
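For example, a minimal sketch (not one of the installed examples; the helper name first_row_pattern is hypothetical) that restricts the pattern to the first dependent variable:

# include <cppad/cppad.hpp>

// sketch: dependency pattern for F_0 with respect to all of x
template <class SizeVector>
CppAD::sparse_rc<SizeVector> first_row_pattern(CppAD::ADFun<double>& f)
{     size_t n = f.Domain();
     size_t m = f.Range();
     CPPAD_TESTVECTOR(bool) select_domain(n), select_range(m);
     for(size_t j = 0; j < n; j++)
          select_domain[j] = true;
     for(size_t i = 0; i < m; i++)
          select_range[i] = (i == 0);   // only F_0 is selected
     bool transpose = false;
     CppAD::sparse_rc<SizeVector> pattern_out;
     f.subgraph_sparsity(select_domain, select_range, transpose, pattern_out);
     return pattern_out;
}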

5.5.11.l: Example
The file 5.5.11.1: subgraph_sparsity.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/core/subgraph_sparsity.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.5.11.1: Subgraph Dependency Sparsity Patterns: Example and Test
# include <cppad/cppad.hpp>

bool subgraph_sparsity(void)
{     bool ok = true;
     using CppAD::AD;
     typedef CPPAD_TESTVECTOR(size_t)     SizeVector;
     typedef CppAD::sparse_rc<SizeVector> sparsity;
     //
     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.;
     ax[1] = 1.;

     // declare independent variables and start recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 3;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = ax[0];
     ay[1] = ax[0] * ax[1];
     ay[2] = ax[1];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);



     // compute sparsity pattern for F'(x)
     CPPAD_TESTVECTOR(bool) select_domain(n), select_range(m);
     for(size_t j = 0; j < n; j++)
          select_domain[j] = true;
     for(size_t i = 0; i < m; i++)
          select_range[i] = true;
     bool transpose       = false;
     sparsity pattern_out;
     f.subgraph_sparsity(select_domain, select_range, transpose, pattern_out);

     // check sparsity pattern
     size_t nnz = pattern_out.nnz();
     ok        &= nnz == 4;
     ok        &= pattern_out.nr() == m;
     ok        &= pattern_out.nc() == n;
     {     // check results
          const SizeVector& row( pattern_out.row() );
          const SizeVector& col( pattern_out.col() );
          SizeVector col_major = pattern_out.col_major();
          //
          ok &= row[ col_major[0] ] ==  0  && col[ col_major[0] ] ==  0;
          ok &= row[ col_major[1] ] ==  1  && col[ col_major[1] ] ==  0;
          ok &= row[ col_major[2] ] ==  1  && col[ col_major[2] ] ==  1;
          ok &= row[ col_major[3] ] ==  2  && col[ col_major[3] ] ==  1;
     }
     // note that the transpose of the identity is the identity
     transpose     = true;
     f.subgraph_sparsity(select_domain, select_range, transpose, pattern_out);
     //
     nnz  = pattern_out.nnz();
     ok  &= nnz == 4;
     ok  &= pattern_out.nr() == n;
     ok  &= pattern_out.nc() == m;
     {     // check results
          const SizeVector& row( pattern_out.row() );
          const SizeVector& col( pattern_out.col() );
          SizeVector row_major = pattern_out.row_major();
          //
          ok &= col[ row_major[0] ] ==  0  && row[ row_major[0] ] ==  0;
          ok &= col[ row_major[1] ] ==  1  && row[ row_major[1] ] ==  0;
          ok &= col[ row_major[2] ] ==  1  && row[ row_major[2] ] ==  1;
          ok &= col[ row_major[3] ] ==  2  && row[ row_major[3] ] ==  1;
     }
     return ok;
}

Input File: example/sparse/subgraph_sparsity.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.6: Calculating Sparse Derivatives

5.6.a: Preferred Sparsity Patterns
5.6.1: sparse_jac Computing Sparse Jacobians
5.6.3: sparse_hes Computing Sparse Hessians
5.6.5: subgraph_jac_rev Compute Sparse Jacobians Using Subgraphs

5.6.b: Old Sparsity Patterns
5.6.2: sparse_jacobian Sparse Jacobian
5.6.4: sparse_hessian Sparse Hessian

Input File: omh/adfun.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.6.1: Computing Sparse Jacobians

5.6.1.a: Syntax
n_sweep = f.sparse_jac_for(
     group_max, x, subset, pattern, coloring, work
)
n_sweep = f.sparse_jac_rev(
     x, subset, pattern, coloring, work
)


5.6.1.b: Purpose
We use @(@ F : \B{R}^n \rightarrow \B{R}^m @)@ to denote the function corresponding to f . Here n is the 5.1.5.d: domain size, and m is the 5.1.5.e: range size, of f . The syntax above takes advantage of sparsity when computing the Jacobian @[@ J(x) = F^{(1)} (x) @]@ In the sparse case, this should be faster and take less memory than 5.2.1: Jacobian . We use the notation @(@ J_{i,j} (x) @)@ to denote the partial of @(@ F_i (x) @)@ with respect to @(@ x_j @)@.

5.6.1.c: SizeVector
The type SizeVector is a 8.9: SimpleVector class with 8.9.b: elements of type size_t.

5.6.1.d: BaseVector
The type BaseVector is a 8.9: SimpleVector class with 8.9.b: elements of type Base.

5.6.1.e: sparse_jac_for
This function uses first order forward mode sweeps 5.3.2: forward_one to compute multiple columns of the Jacobian at the same time.

5.6.1.f: sparse_jac_rev
This function uses first order reverse mode sweeps 5.4.1: reverse_one to compute multiple rows of the Jacobian at the same time.

5.6.1.g: f
This object has prototype
     ADFun<Base> f
Note that the Taylor coefficients stored in f are affected by this operation; see 5.6.1.o: uses forward below.

5.6.1.h: group_max
This argument has prototype
     size_t 
group_max
and must be greater than zero. It specifies the maximum number of colors to group during a single forward sweep. If a single color is in a group, a single direction for of first order forward mode 5.3.2: forward_one is used for each color. If multiple colors are in a group, the multiple direction for of first order forward mode 5.3.5: forward_dir is used with one direction for each color. This uses separate memory for each direction (more memory), but my be significantly faster.

5.6.1.i: x
This argument has prototype
     const BaseVector& x
and its size is n . It specifies the point at which to evaluate the Jacobian @(@ J(x) @)@.

5.6.1.j: subset
This argument has prototype
     sparse_rcv<SizeVector, BaseVector>& subset
Its row size is subset.nr() == m , and its column size is subset.nc() == n . It specifies which elements of the Jacobian are computed. The input value of its value vector subset.val() does not matter. Upon return it contains the value of the corresponding elements of the Jacobian. All of the row, column pairs in subset must also appear in pattern ; i.e., they must be possibly non-zero.

5.6.1.k: pattern
This argument has prototype
     const sparse_rc<SizeVector>& pattern
Its row size is pattern.nr() == m , and its column size is pattern.nc() == n . It is a sparsity pattern for the Jacobian @(@ J(x) @)@. This argument is not used (and need not satisfy any conditions) when 5.6.1.m: work is non-empty.

5.6.1.l: coloring
The coloring algorithm determines which rows (reverse) or columns (forward) can be computed during the same sweep. This field has prototype
     const std::string& coloring
This value only matters when work is empty; i.e., after the work constructor or work.clear() .

5.6.1.l.a: cppad
This uses a general purpose coloring algorithm written for CppAD.

5.6.1.l.b: colpack
If 2.2.2: colpack_prefix is specified on the 2.2.b: cmake command line, you can set coloring to colpack. This uses a general purpose coloring algorithm that is part of Colpack.

5.6.1.m: work
This argument has prototype
     sparse_jac_work& work
We refer to its initial value, and its value after work.clear() , as empty. If it is empty, information is stored in work . This can be used to reduce computation when a future call is for the same object f , the same member function sparse_jac_for or sparse_jac_rev, and the same subset of the Jacobian. If any of these values change, use work.clear() to empty this structure.
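The intended usage pattern is sketched below (assuming f , x , subset , pattern , coloring and group_max are set up as in the examples that follow):
     CppAD::sparse_jac_work work;  // empty
     //
     // first call: coloring information is computed and stored in work
     f.sparse_jac_for(group_max, x, subset, pattern, coloring, work);
     //
     // second call at a different x: same f , member function, and subset,
     // so the information stored in work is reused and pattern is ignored
     x[0] = 5.0;
     f.sparse_jac_for(group_max, x, subset, pattern, coloring, work);
     //
     // changing the subset, or switching to sparse_jac_rev, requires
     work.clear();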

5.6.1.n: n_sweep
The return value n_sweep has prototype
     size_t n_sweep
If sparse_jac_for (sparse_jac_rev) is used, n_sweep is the number of first order forward (reverse) sweeps used to compute the requested Jacobian values. It is also the number of colors determined by the coloring method mentioned above. This is proportional to the total computational work, not counting the zero order forward sweep, or combining multiple columns (rows) into a single sweep.

5.6.1.o: Uses Forward
After each call to 5.3: Forward , the object f contains the corresponding 12.4.l: Taylor coefficients . After a call to sparse_jac_for or sparse_jac_rev, the zero order coefficients correspond to
     f.Forward(0, x)
All the other forward mode coefficients are unspecified.

5.6.1.p: Example
The files 5.6.1.1: sparse_jac_for.cpp and 5.6.1.2: sparse_jac_rev.cpp are examples and tests of sparse_jac_for and sparse_jac_rev. They return true if they succeed and false otherwise.
Input File: cppad/core/sparse_jac.hpp
5.6.1.1: Computing Sparse Jacobian Using Forward Mode: Example and Test
# include <cppad/cppad.hpp>
bool sparse_jac_for(void)
{     bool ok = true;
     //
     using CppAD::AD;
     using CppAD::NearEqual;
     using CppAD::sparse_rc;
     using CppAD::sparse_rcv;
     //
     typedef CPPAD_TESTVECTOR(AD<double>) a_vector;
     typedef CPPAD_TESTVECTOR(double)     d_vector;
     typedef CPPAD_TESTVECTOR(size_t)     s_vector;
     //
     // domain space vector
     size_t n = 3;
     a_vector  a_x(n);
     for(size_t j = 0; j < n; j++)
          a_x[j] = AD<double> (0);
     //
     // declare independent variables and start recording
     CppAD::Independent(a_x);
     //
     size_t m = 4;
     a_vector  a_y(m);
     a_y[0] = a_x[0] + a_x[2];
     a_y[1] = a_x[0] + a_x[2];
     a_y[2] = a_x[1] + a_x[2];
     a_y[3] = a_x[1] + a_x[2] * a_x[2] / 2.;
     //
     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(a_x, a_y);
     //
     // new value for the independent variable vector
     d_vector x(n);
     for(size_t j = 0; j < n; j++)
          x[j] = double(j);
     /*
            [ 1 0 1   ]
     J(x) = [ 1 0 1   ]
            [ 0 1 1   ]
            [ 0 1 x_2 ]
     */
     //
     // column-major order values of J(x)
     size_t nnz = 8;
     s_vector check_row(nnz), check_col(nnz);
     d_vector check_val(nnz);
     for(size_t k = 0; k < nnz; k++)
     {     // check_val
          if( k < 7 )
               check_val[k] = 1.0;
          else
               check_val[k] = x[2];
          //
          // check_row and check_col
          check_row[k] = k;
          if( k < 2 )
               check_col[k] = 0;
          else if( k < 4 )
               check_col[k] = 1;
          else
          {     check_col[k] = 2;
               check_row[k] = k - 4;
          }
     }
     //
     // n by n identity matrix sparsity
     sparse_rc<s_vector> pattern_in;
     pattern_in.resize(n, n, n);
     for(size_t k = 0; k < n; k++)
          pattern_in.set(k, k, k);
     //
     // sparsity for J(x)
     bool transpose     = false;
     bool dependency    = false;
     bool internal_bool = true;
     sparse_rc<s_vector> pattern_jac;
     f.for_jac_sparsity(
          pattern_in, transpose, dependency, internal_bool, pattern_jac
     );
     //
     // compute entire forward mode Jacobian
     sparse_rcv<s_vector, d_vector> subset( pattern_jac );
     CppAD::sparse_jac_work work;
     std::string coloring = "cppad";
     size_t group_max = 10;
     size_t n_sweep = f.sparse_jac_for(
          group_max, x, subset, pattern_jac, coloring, work
     );
     ok &= n_sweep == 2;
     //
     const s_vector row( subset.row() );
     const s_vector col( subset.col() );
     const d_vector val( subset.val() );
     s_vector col_major = subset.col_major();
     ok  &= subset.nnz() == nnz;
     for(size_t k = 0; k < nnz; k++)
     {     ok &= row[ col_major[k] ] == check_row[k];
          ok &= col[ col_major[k] ] == check_col[k];
          ok &= val[ col_major[k] ] == check_val[k];
     }
     // compute non-zero in row 3 only
     sparse_rc<s_vector> pattern_row3;
     pattern_row3.resize(m, n, 2); // nr = m, nc = n, nnz = 2
     pattern_row3.set(0, 3, 1);    // row[0] = 3, col[0] = 1
     pattern_row3.set(1, 3, 2);    // row[1] = 3, col[1] = 2
     sparse_rcv<s_vector, d_vector> subset_row3( pattern_row3 );
     work.clear();
     n_sweep = f.sparse_jac_for(
          group_max, x, subset_row3, pattern_jac, coloring, work
     );
     ok &= n_sweep == 2;
     //
     const s_vector row_row3( subset_row3.row() );
     const s_vector col_row3( subset_row3.col() );
     const d_vector val_row3( subset_row3.val() );
     ok &= subset_row3.nnz() == 2;
     //
     ok &= row_row3[0] == 3;
     ok &= col_row3[0] == 1;
     ok &= val_row3[0] == 1.0;
     //
     ok &= row_row3[1] == 3;
     ok &= col_row3[1] == 2;
     ok &= val_row3[1] == x[2];
     //
     return ok;
}

Input File: example/sparse/sparse_jac_for.cpp
5.6.1.2: Computing Sparse Jacobian Using Reverse Mode: Example and Test
# include <cppad/cppad.hpp>
bool sparse_jac_rev(void)
{     bool ok = true;
     //
     using CppAD::AD;
     using CppAD::NearEqual;
     using CppAD::sparse_rc;
     using CppAD::sparse_rcv;
     //
     typedef CPPAD_TESTVECTOR(AD<double>) a_vector;
     typedef CPPAD_TESTVECTOR(double)     d_vector;
     typedef CPPAD_TESTVECTOR(size_t)     s_vector;
     //
     // domain space vector
     size_t n = 4;
     a_vector  a_x(n);
     for(size_t j = 0; j < n; j++)
          a_x[j] = AD<double> (0);
     //
     // declare independent variables and start recording
     CppAD::Independent(a_x);
     //
     size_t m = 3;
     a_vector  a_y(m);
     a_y[0] = a_x[0] + a_x[1];
     a_y[1] = a_x[2] + a_x[3];
     a_y[2] = a_x[0] + a_x[1] + a_x[2] + a_x[3] * a_x[3] / 2.;
     //
     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(a_x, a_y);
     //
     // new value for the independent variable vector
     d_vector x(n);
     for(size_t j = 0; j < n; j++)
          x[j] = double(j);
     /*
            [ 1 1 0 0  ]
     J(x) = [ 0 0 1 1  ]
            [ 1 1 1 x_3]
     */
     //
     // row-major order values of J(x)
     size_t nnz = 8;
     s_vector check_row(nnz), check_col(nnz);
     d_vector check_val(nnz);
     for(size_t k = 0; k < nnz; k++)
     {     // check_val
          if( k < 7 )
               check_val[k] = 1.0;
          else
               check_val[k] = x[3];
          //
          // check_row and check_col
          check_col[k] = k;
          if( k < 2 )
               check_row[k] = 0;
          else if( k < 4 )
               check_row[k] = 1;
          else
          {     check_row[k] = 2;
               check_col[k] = k - 4;
          }
     }
     //
     // m by m identity matrix sparsity
     sparse_rc<s_vector> pattern_in(m, m, m);
     for(size_t k = 0; k < m; k++)
          pattern_in.set(k, k, k);
     //
     // sparsity for J(x)
     bool transpose     = false;
     bool dependency    = false;
     bool internal_bool = true;
     sparse_rc<s_vector> pattern_jac;
     f.rev_jac_sparsity(
          pattern_in, transpose, dependency, internal_bool, pattern_jac
     );
     //
     // compute entire reverse mode Jacobian
     sparse_rcv<s_vector, d_vector> subset( pattern_jac );
     CppAD::sparse_jac_work work;
     std::string coloring = "cppad";
     size_t n_sweep = f.sparse_jac_rev(x, subset, pattern_jac, coloring, work);
     ok &= n_sweep == 2;
     //
     const s_vector row( subset.row() );
     const s_vector col( subset.col() );
     const d_vector val( subset.val() );
     s_vector row_major = subset.row_major();
     ok  &= subset.nnz() == nnz;
     for(size_t k = 0; k < nnz; k++)
     {     ok &= row[ row_major[k] ] == check_row[k];
          ok &= col[ row_major[k] ] == check_col[k];
          ok &= val[ row_major[k] ] == check_val[k];
     }
     //
     // test using work stored by previous sparse_jac_rev
     sparse_rc<s_vector> pattern_not_used;
     std::string         coloring_not_used;
     n_sweep = f.sparse_jac_rev(x, subset, pattern_not_used, coloring_not_used, work);
     ok &= n_sweep == 2;
     for(size_t k = 0; k < nnz; k++)
     {     ok &= row[ row_major[k] ] == check_row[k];
          ok &= col[ row_major[k] ] == check_col[k];
          ok &= val[ row_major[k] ] == check_val[k];
     }
     //
     // compute non-zero in col 3 only, nr = m, nc = n, nnz = 2
     sparse_rc<s_vector> pattern_col3(m, n, 2);
     pattern_col3.set(0, 1, 3);    // row[0] = 1, col[0] = 3
     pattern_col3.set(1, 2, 3);    // row[1] = 2, col[1] = 3
     sparse_rcv<s_vector, d_vector> subset_col3( pattern_col3 );
     work.clear();
     n_sweep = f.sparse_jac_rev(x, subset_col3, pattern_jac, coloring, work);
     ok &= n_sweep == 2;
     //
     const s_vector row_col3( subset_col3.row() );
     const s_vector col_col3( subset_col3.col() );
     const d_vector val_col3( subset_col3.val() );
     ok &= subset_col3.nnz() == 2;
     //
     ok &= row_col3[0] == 1;
     ok &= col_col3[0] == 3;
     ok &= val_col3[0] == 1.0;
     //
     ok &= row_col3[1] == 2;
     ok &= col_col3[1] == 3;
     ok &= val_col3[1] == x[3];
     //
     return ok;
}

Input File: example/sparse/sparse_jac_rev.cpp
5.6.2: Sparse Jacobian

5.6.2.a: Syntax
jac = f.SparseJacobian(x)
jac = f.SparseJacobian(x, p)
n_sweep = f.SparseJacobianForward(x, p, row, col, jac, work)
n_sweep = f.SparseJacobianReverse(x, p, row, col, jac, work)

5.6.2.b: Purpose
We use @(@ n @)@ for the 5.1.5.d: domain size, and @(@ m @)@ for the 5.1.5.e: range size of f . We use @(@ F : \B{R}^n \rightarrow \B{R}^m @)@ to denote the 12.4.a: AD function corresponding to f . The syntax above sets jac to the Jacobian @[@ jac = F^{(1)} (x) @]@ This routine takes advantage of the sparsity of the Jacobian in order to reduce the amount of computation necessary. If row and col are present, it also takes advantage of the reduced set of elements of the Jacobian that need to be computed. One can use speed tests (e.g. 8.3: speed_test ) to verify that results are computed faster than when using the routine 5.2.1: Jacobian .

5.6.2.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.6.2.m: Uses Forward below).

5.6.2.d: x
The argument x has prototype
     const VectorBase& x
(see 5.6.2.j: VectorBase below) and its size must be equal to n , the dimension of the 5.1.5.d: domain space for f . It specifies the point at which to evaluate the Jacobian.

5.6.2.e: p
The argument p is optional and has prototype
     const VectorSet& p
(see 5.6.2.k: VectorSet below). If it has elements of type bool, its size is @(@ m * n @)@. If it has elements of type std::set<size_t>, its size is @(@ m @)@ and all its set elements are between zero and @(@ n - 1 @)@. It specifies a 12.4.j: sparsity pattern for the Jacobian @(@ F^{(1)} (x) @)@.

If this sparsity pattern does not change between calls to SparseJacobian , it should be faster to calculate p once (using 5.5.2: ForSparseJac or 5.5.4: RevSparseJac ) and then pass p to SparseJacobian . Furthermore, if you specify work in the calling sequence, it is not necessary to keep the sparsity pattern; see the heading 5.6.2.h.b: p under the work description.

In addition, if you specify p , CppAD will use the same type of sparsity representation (vectors of bool or vectors of std::set<size_t>) for its internal calculations. Otherwise, the representation for the internal calculations is unspecified.
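For instance (a sketch that assumes f , m , and n are as in the examples below), the two representations can be computed using 5.5.4: RevSparseJac as follows:
     // packed boolean representation; p_b has size m * n
     CppAD::vectorBool s_b(m * m);
     for(size_t i = 0; i < m; i++)
          for(size_t ell = 0; ell < m; ell++)
               s_b[i * m + ell] = (i == ell);
     CppAD::vectorBool p_b = f.RevSparseJac(m, s_b);
     //
     // vector of sets representation; p_s has size m
     std::vector< std::set<size_t> > s_s(m);
     for(size_t i = 0; i < m; i++)
          s_s[i].insert(i);
     std::vector< std::set<size_t> > p_s = f.RevSparseJac(m, s_s);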

5.6.2.f: row, col
The arguments row and col are optional and have prototype
     const VectorSize& row
     const VectorSize& col
(see 5.6.2.l: VectorSize below). They specify which rows and columns of @(@ F^{(1)} (x) @)@ are computed and in what order. Not all the non-zero entries in @(@ F^{(1)} (x) @)@ need be computed, but all the entries specified by row and col must be possibly non-zero in the sparsity pattern. We use @(@ K @)@ to denote the value jac.size() which must also equal the size of row and col . Furthermore, for @(@ k = 0 , \ldots , K-1 @)@, it must hold that @(@ row[k] < m @)@ and @(@ col[k] < n @)@.

5.6.2.g: jac
The result jac has prototype
     VectorBase jac
In the case where the arguments row and col are not present, the size of jac is @(@ m * n @)@ and for @(@ i = 0 , \ldots , m-1 @)@, @(@ j = 0 , \ldots , n-1 @)@, @[@ jac [ i * n + j ] = \D{ F_i }{ x_j } (x) @]@

In the case where the arguments row and col are present, we use @(@ K @)@ to denote the size of jac . The input value of its elements does not matter. Upon return, for @(@ k = 0 , \ldots , K - 1 @)@, @[@ jac [ k ] = \D{ F_i }{ x_j } (x) \; , \; \; {\rm where} \; i = row[k] \; {\rm and } \; j = col[k] @]@
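As a small sketch (using the variables described above), the K returned values can be scattered back into a dense matrix:
     // dense m * n matrix; entries not in (row, col) keep the value zero
     VectorBase dense(m * n);
     for(size_t ell = 0; ell < m * n; ell++)
          dense[ell] = 0.0;
     for(size_t k = 0; k < K; k++)
          dense[ row[k] * n + col[k] ] = jac[k];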

5.6.2.h: work
If this argument is present, it has prototype
     sparse_jacobian_work& work
This object can only be used with the routines SparseJacobianForward and SparseJacobianReverse. During the first use, information is stored in work . This is used to reduce the work done by future calls with the same mode (forward or reverse), the same f , p , row , and col . If a future call is for a different mode, or any of these values have changed, you must first call work.clear() to inform CppAD that this information needs to be recomputed.

5.6.2.h.a: color_method
The coloring algorithm determines which columns (forward mode) or rows (reverse mode) can be computed during the same sweep. This field has prototype
     std::string work.color_method
and its default value (after a constructor or clear()) is "cppad". If 2.2.2: colpack_prefix is specified on the 2.2.b: cmake command line, you can set this method to "colpack". This value only matters on the first call to sparse_jacobian that follows the work constructor or a call to work.clear() .

5.6.2.h.b: p
If work is present, and it is not the first call after its construction or a clear, the sparsity pattern p is not used. This enables one to free the sparsity pattern and still compute corresponding sparse Jacobians.
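For example (a sketch that mirrors the usage in 5.6.2.1: sparse_jacobian.cpp below, assuming f , x , p , row , col , and jac are already set up):
     // first call: coloring information is stored in work
     CppAD::sparse_jacobian_work work;
     size_t n_sweep = f.SparseJacobianReverse(x, p, row, col, jac, work);
     //
     // p can now be freed; later calls ignore the pattern argument
     std::vector< std::set<size_t> > not_used;
     n_sweep = f.SparseJacobianReverse(x, not_used, row, col, jac, work);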

5.6.2.i: n_sweep
The return value n_sweep has prototype
     size_t n_sweep
If SparseJacobianForward (SparseJacobianReverse) is used, n_sweep is the number of first order forward (reverse) sweeps used to compute the requested Jacobian values. (This is also the number of colors determined by the coloring method mentioned above). This is proportional to the total work that SparseJacobian does, not counting the zero order forward sweep, or the work to combine multiple columns (rows) into a single sweep.

5.6.2.j: VectorBase
The type VectorBase must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.6.2.k: VectorSet
The type VectorSet must be a 8.9: SimpleVector class with 8.9.b: elements of type bool or std::set<size_t>; see 12.4.j: sparsity pattern for a discussion of the difference. The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.6.2.k.a: Restrictions
If VectorSet has elements of std::set<size_t>, then p[i] must return a reference (not a copy) to the corresponding set. According to section 26.3.2.3 of the 1998 C++ standard, std::valarray< std::set<size_t> > does not satisfy this condition.

5.6.2.l: VectorSize
The type VectorSize must be a 8.9: SimpleVector class with 8.9.b: elements of type size_t. The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.6.2.m: Uses Forward
After each call to 5.3: Forward , the object f contains the corresponding 12.4.l: Taylor coefficients . After a call to any of the sparse Jacobian routines, the zero order Taylor coefficients correspond to f.Forward(0, x) and the other coefficients are unspecified; i.e., the results of previous calls to 5.3: Forward are no longer valid.

5.6.2.n: Example
The routine 5.6.2.1: sparse_jacobian.cpp is an example and test of sparse_jacobian. It returns true if it succeeds and false otherwise.
Input File: cppad/core/sparse_jacobian.hpp
5.6.2.1: Sparse Jacobian: Example and Test

# include <cppad/cppad.hpp>
namespace { // ---------------------------------------------------------
bool reverse()
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     typedef CPPAD_TESTVECTOR(AD<double>)   a_vector;
     typedef CPPAD_TESTVECTOR(double)       d_vector;
     typedef CPPAD_TESTVECTOR(size_t)       i_vector;
     size_t i, j, k, ell;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n = 4;
     a_vector  a_x(n);
     for(j = 0; j < n; j++)
          a_x[j] = AD<double> (0);

     // declare independent variables and start recording
     CppAD::Independent(a_x);

     size_t m = 3;
     a_vector  a_y(m);
     a_y[0] = a_x[0] + a_x[1];
     a_y[1] = a_x[2] + a_x[3];
     a_y[2] = a_x[0] + a_x[1] + a_x[2] + a_x[3] * a_x[3] / 2.;

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(a_x, a_y);

     // new value for the independent variable vector
     d_vector x(n);
     for(j = 0; j < n; j++)
          x[j] = double(j);

     // Jacobian of y without sparsity pattern
     d_vector jac(m * n);
     jac = f.SparseJacobian(x);
     /*
           [ 1 1 0 0  ]
     jac = [ 0 0 1 1  ]
           [ 1 1 1 x_3]
     */
     d_vector check(m * n);
     check[0] = 1.; check[1] = 1.; check[2]  = 0.; check[3]  = 0.;
     check[4] = 0.; check[5] = 0.; check[6]  = 1.; check[7]  = 1.;
     check[8] = 1.; check[9] = 1.; check[10] = 1.; check[11] = x[3];
     for(ell = 0; ell < size_t(check.size()); ell++)
          ok &=  NearEqual(check[ell], jac[ell], eps, eps );

     // using packed boolean sparsity patterns
     CppAD::vectorBool s_b(m * m), p_b(m * n);
     for(i = 0; i < m; i++)
     {     for(ell = 0; ell < m; ell++)
               s_b[i * m + ell] = false;
          s_b[i * m + i] = true;
     }
     p_b   = f.RevSparseJac(m, s_b);
     jac   = f.SparseJacobian(x, p_b);
     for(ell = 0; ell < size_t(check.size()); ell++)
          ok &=  NearEqual(check[ell], jac[ell], eps, eps );

     // using vector of sets sparsity patterns
     std::vector< std::set<size_t> > s_s(m),  p_s(m);
     for(i = 0; i < m; i++)
          s_s[i].insert(i);
     p_s   = f.RevSparseJac(m, s_s);
     jac   = f.SparseJacobian(x, p_s);
     for(ell = 0; ell < size_t(check.size()); ell++)
          ok &=  NearEqual(check[ell], jac[ell], eps, eps );

     // using row and column indices to compute non-zero in rows 1 and 2
     // (skip row 0).
     size_t K = 6;
     i_vector row(K), col(K);
     jac.resize(K);
     k = 0;
     for(j = 0; j < n; j++)
     {     for(i = 1; i < m; i++)
          {     ell = i * n + j;
               if( p_b[ell] )
               {     ok &= check[ell] != 0.;
                    row[k] = i;
                    col[k] = j;
                    k++;
               }
          }
     }
     ok &= k == K;

     // empty work structure
     CppAD::sparse_jacobian_work work;

     // could use p_b
     size_t n_sweep = f.SparseJacobianReverse(x, p_s, row, col, jac, work);
     for(k = 0; k < K; k++)
     {     ell = row[k] * n + col[k];
          ok &= NearEqual(check[ell], jac[k], eps, eps);
     }
     ok &= n_sweep == 2;

     // now recompute at a different x value (using work from previous call)
     check[11] = x[3] = 10.;
     std::vector< std::set<size_t> > not_used;
     n_sweep = f.SparseJacobianReverse(x, not_used, row, col, jac, work);
     for(k = 0; k < K; k++)
     {     ell = row[k] * n + col[k];
          ok &= NearEqual(check[ell], jac[k], eps, eps);
     }
     ok &= n_sweep == 2;

     return ok;
}

bool forward()
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     typedef CPPAD_TESTVECTOR(AD<double>) a_vector;
     typedef CPPAD_TESTVECTOR(double)       d_vector;
     typedef CPPAD_TESTVECTOR(size_t)       i_vector;
     size_t i, j, k, ell;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n = 3;
     a_vector  a_x(n);
     for(j = 0; j < n; j++)
          a_x[j] = AD<double> (0);

     // declare independent variables and start recording
     CppAD::Independent(a_x);

     size_t m = 4;
     a_vector  a_y(m);
     a_y[0] = a_x[0] + a_x[2];
     a_y[1] = a_x[0] + a_x[2];
     a_y[2] = a_x[1] + a_x[2];
     a_y[3] = a_x[1] + a_x[2] * a_x[2] / 2.;

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(a_x, a_y);

     // new value for the independent variable vector
     d_vector x(n);
     for(j = 0; j < n; j++)
          x[j] = double(j);

     // Jacobian of y without sparsity pattern
     d_vector jac(m * n);
     jac = f.SparseJacobian(x);
     /*
           [ 1 0 1   ]
     jac = [ 1 0 1   ]
           [ 0 1 1   ]
           [ 0 1 x_2 ]
     */
     d_vector check(m * n);
     check[0] = 1.; check[1]  = 0.; check[2]  = 1.;
     check[3] = 1.; check[4]  = 0.; check[5]  = 1.;
     check[6] = 0.; check[7]  = 1.; check[8]  = 1.;
     check[9] = 0.; check[10] = 1.; check[11] = x[2];
     for(ell = 0; ell < size_t(check.size()); ell++)
          ok &=  NearEqual(check[ell], jac[ell], eps, eps );

     // test using packed boolean vectors for sparsity pattern
     CppAD::vectorBool r_b(n * n), p_b(m * n);
     for(j = 0; j < n; j++)
     {     for(ell = 0; ell < n; ell++)
               r_b[j * n + ell] = false;
          r_b[j * n + j] = true;
     }
     p_b = f.ForSparseJac(n, r_b);
     jac = f.SparseJacobian(x, p_b);
     for(ell = 0; ell < size_t(check.size()); ell++)
          ok &=  NearEqual(check[ell], jac[ell], eps, eps );

     // test using vector of sets for sparsity pattern
     std::vector< std::set<size_t> > r_s(n), p_s(m);
     for(j = 0; j < n; j++)
          r_s[j].insert(j);
     p_s = f.ForSparseJac(n, r_s);
     jac = f.SparseJacobian(x, p_s);
     for(ell = 0; ell < size_t(check.size()); ell++)
          ok &=  NearEqual(check[ell], jac[ell], eps, eps );

     // using row and column indices to compute non-zero elements excluding
     // row 0 and column 0.
     size_t K = 5;
     i_vector row(K), col(K);
     jac.resize(K);
     k = 0;
     for(i = 1; i < m; i++)
     {     for(j = 1; j < n; j++)
          {     ell = i * n + j;
               if( p_b[ell] )
               {     ok &= check[ell] != 0.;
                    row[k] = i;
                    col[k] = j;
                    k++;
               }
          }
     }
     ok &= k == K;

     // empty work structure
     CppAD::sparse_jacobian_work work;

     // could use p_s
     size_t n_sweep = f.SparseJacobianForward(x, p_b, row, col, jac, work);
     for(k = 0; k < K; k++)
     {    ell = row[k] * n + col[k];
          ok &= NearEqual(check[ell], jac[k], eps, eps);
     }
     ok &= n_sweep == 2;

     // now recompute at a different x value (using work from previous call)
     check[11] = x[2] = 10.;
     n_sweep = f.SparseJacobianForward(x, p_s, row, col, jac, work);
     for(k = 0; k < K; k++)
     {    ell = row[k] * n + col[k];
          ok &= NearEqual(check[ell], jac[k], eps, eps);
     }
     ok &= n_sweep == 2;

     return ok;
}
} // End empty namespace

bool sparse_jacobian(void)
{     bool ok = true;
     ok &= forward();
     ok &= reverse();

     return ok;
}

Input File: example/sparse/sparse_jacobian.cpp
5.6.3: Computing Sparse Hessians

5.6.3.a: Syntax
n_sweep = f.sparse_hes(x, w, subset, pattern, coloring, work)


5.6.3.b: Purpose
We use @(@ F : \B{R}^n \rightarrow \B{R}^m @)@ to denote the function corresponding to f . Here n is the 5.1.5.d: domain size, and m is the 5.1.5.e: range size of f . The syntax above takes advantage of sparsity when computing the Hessian @[@ H(x) = \dpow{2}{x} \sum_{i=0}^{m-1} w_i F_i (x) @]@ For example, when @(@ m = 1 @)@ and @(@ w_0 = 1 @)@, @(@ H(x) @)@ is the Hessian of @(@ F_0 (x) @)@. In the sparse case, this should be faster and take less memory than 5.2.2: Hessian . The matrix element @(@ H_{i,j} (x) @)@ is the second partial of @(@ w^\R{T} F (x) @)@ with respect to @(@ x_i @)@ and @(@ x_j @)@.

5.6.3.c: SizeVector
The type SizeVector is a 8.9: SimpleVector class with 8.9.b: elements of type size_t.

5.6.3.d: BaseVector
The type BaseVector is a 8.9: SimpleVector class with 8.9.b: elements of type Base .

5.6.3.e: f
This object has prototype
     ADFun<Base> f
Note that the Taylor coefficients stored in f are affected by this operation; see 5.6.3.m: uses forward below.

5.6.3.f: x
This argument has prototype
     const BaseVector& x
and its size is n . It specifies the point at which to evaluate the Hessian @(@ H(x) @)@.

5.6.3.g: w
This argument has prototype
     const BaseVector& w
and its size is m . It specifies the weight for each of the components of @(@ F(x) @)@; i.e. @(@ w_i @)@ is the weight for @(@ F_i (x) @)@.
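The following is a minimal sketch (not part of the CppAD distribution; the function name sparse_hes_weights is hypothetical) showing how the weights in w combine the Hessians of the components of @(@ F @)@; it uses for_hes_sparsity as in 5.6.3.1: sparse_hes.cpp below:
# include <cppad/cppad.hpp>
bool sparse_hes_weights(void) // hypothetical test name
{     bool ok = true;
     using CppAD::AD;
     typedef CPPAD_TESTVECTOR(AD<double>) a_vector;
     typedef CPPAD_TESTVECTOR(double)     d_vector;
     typedef CPPAD_TESTVECTOR(size_t)     s_vector;
     typedef CPPAD_TESTVECTOR(bool)       b_vector;
     //
     // f : R^2 -> R^2 with F_0 = x_0 * x_1 and F_1 = x_0 * x_0
     size_t n = 2, m = 2;
     a_vector a_x(n), a_y(m);
     for(size_t j = 0; j < n; j++)
          a_x[j] = AD<double>(0);
     CppAD::Independent(a_x);
     a_y[0] = a_x[0] * a_x[1];
     a_y[1] = a_x[0] * a_x[0];
     CppAD::ADFun<double> f(a_x, a_y);
     //
     // H(x) = w_0 * Hessian(F_0) + w_1 * Hessian(F_1)
     d_vector x(n), w(m);
     x[0] = 1.0; x[1] = 2.0;
     w[0] = 1.0; w[1] = 2.0;
     //
     // sparsity pattern for H(x)
     b_vector select_domain(n), select_range(m);
     for(size_t j = 0; j < n; j++)
          select_domain[j] = true;
     for(size_t i = 0; i < m; i++)
          select_range[i] = true;
     CppAD::sparse_rc<s_vector> pattern;
     bool internal_bool = false;
     f.for_hes_sparsity(select_domain, select_range, internal_bool, pattern);
     //
     // compute all possibly non-zero entries
     CppAD::sparse_rcv<s_vector, d_vector> subset( pattern );
     CppAD::sparse_hes_work work;
     std::string coloring = "cppad.symmetric";
     f.sparse_hes(x, w, subset, pattern, coloring, work);
     //
     // expected: H = w_0 * [0 1; 1 0] + w_1 * [2 0; 0 0] = [4 1; 1 0]
     for(size_t k = 0; k < subset.nnz(); k++)
     {     size_t i = subset.row()[k];
          size_t j = subset.col()[k];
          double check = (i == 0 && j == 0) ? 4.0 : 1.0;
          ok &= subset.val()[k] == check;
     }
     return ok;
}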

5.6.3.h: subset
This argument has prototype
     sparse_rcv<SizeVector, BaseVector>& subset
Its row size and column size is n ; i.e., subset.nr() == n and subset.nc() == n . It specifies which elements of the Hessian are computed.
  1. The input value of its value vector subset.val() does not matter. Upon return it contains the value of the corresponding elements of the Hessian.
  2. All of the row, column pairs in subset must also appear in pattern ; i.e., they must be possibly non-zero.
  3. The Hessian is symmetric, so one has a choice as to which off diagonal elements to put in subset . It will probably be more efficient if one makes this choice so that there are more entries in each non-zero column of subset ; see 5.6.3.l: n_sweep below.


5.6.3.i: pattern
This argument has prototype
     const sparse_rc<SizeVector>& pattern
Its row size and column size is n ; i.e., pattern.nr() == n and pattern.nc() == n . It is a sparsity pattern for the Hessian @(@ H(x) @)@. If the i-th row (j-th column) does not appear in subset , the i-th row (j-th column) of pattern does not matter and need not be computed. This argument is not used (and need not satisfy any conditions), when 5.6.3.k: work is non-empty.

5.6.3.i.a: subset
If the i-th row and i-th column do not appear in subset , the i-th row and column of pattern do not matter. In this case the i-th row and column may have no entries in pattern even though they are possibly non-zero in @(@ H(x) @)@. (This can be used to reduce the amount of computation required to find pattern .)

5.6.3.j: coloring
The coloring algorithm determines which rows and columns can be computed during the same sweep. This field has prototype
     const std::string& coloring
This value only matters when work is empty; i.e., after the work constructor or work.clear() .

5.6.3.j.a: cppad.symmetric
This coloring takes advantage of the fact that the Hessian matrix is symmetric when finding a coloring that requires fewer 5.6.3.l: sweeps .

5.6.3.j.b: cppad.general
This is the same as the sparse Jacobian 5.6.1.l.a: cppad method which does not take advantage of symmetry.

5.6.3.j.c: colpack.symmetric
If 2.2.2: colpack_prefix was specified on the 2.2.b: cmake command line, you can set coloring to colpack.symmetric. This also takes advantage of the fact that the Hessian matrix is symmetric.

5.6.3.j.d: colpack.general
If 2.2.2: colpack_prefix was specified on the 2.2.b: cmake command line, you can set coloring to colpack.general. This is the same as the sparse Jacobian 5.6.1.l.b: colpack method which does not take advantage of symmetry.

5.6.3.j.e: colpack.star Deprecated 2017-06-01
The colpack.star method is deprecated. It is the same as the colpack.symmetric method which should be used instead.

5.6.3.k: work
This argument has prototype
     sparse_hes_work& work
We refer to its initial value, and its value after work.clear() , as empty. If it is empty, information is stored in work . This can be used to reduce computation when a future call is for the same object f , and the same subset of the Hessian. If either of these values change, use work.clear() to empty this structure.

5.6.3.l: n_sweep
The return value n_sweep has prototype
     size_t n_sweep
It is the number of first order forward sweeps used to compute the requested Hessian values. Each first order forward sweep is followed by a second order reverse sweep, so it is also the number of reverse sweeps. It is also the number of colors determined by the coloring method mentioned above. This is proportional to the total computational work, not counting the zero order forward sweep, or combining multiple columns and rows into a single sweep.

5.6.3.m: Uses Forward
After each call to 5.3: Forward , the object f contains the corresponding 12.4.l: Taylor coefficients . After a call to sparse_hes the zero order coefficients correspond to
     f.Forward(0, x)
All the other forward mode coefficients are unspecified.

5.6.3.n: Example
The file 5.6.3.1: sparse_hes.cpp is an example and test of sparse_hes. It returns true if it succeeds and false otherwise.

5.6.3.o: Subset Hessian
The routine 5.6.4.3: sparse_sub_hes.cpp is an example and test that computes a subset of a sparse Hessian. It returns true for success and false otherwise.
Input File: cppad/core/sparse_hes.hpp
5.6.3.1: Computing Sparse Hessian: Example and Test
# include <cppad/cppad.hpp>
bool sparse_hes(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     //
     typedef CPPAD_TESTVECTOR(AD<double>)               a_vector;
     typedef CPPAD_TESTVECTOR(double)                   d_vector;
     typedef CPPAD_TESTVECTOR(size_t)                   s_vector;
     typedef CPPAD_TESTVECTOR(bool)                     b_vector;
     //
     // domain space vector
     size_t n = 12;  // must be greater than or equal to 3; see n_sweep below
     a_vector a_x(n);
     for(size_t j = 0; j < n; j++)
          a_x[j] = AD<double> (0);
     //
     // declare independent variables and start recording
     CppAD::Independent(a_x);
     //
     // range space vector
     size_t m = 1;
     a_vector a_y(m);
     a_y[0] = a_x[0] * a_x[1];
     for(size_t j = 0; j < n; j++)
          a_y[0] += a_x[j] * a_x[j] * a_x[j];
     //
     // create f: x -> y and stop tape recording
     // (without executing zero order forward calculation)
     CppAD::ADFun<double> f;
     f.Dependent(a_x, a_y);
     //
     // new value for the independent variable vector, and weighting vector
     d_vector w(m), x(n);
     for(size_t j = 0; j < n; j++)
          x[j] = double(j);
     w[0] = 1.0;
     //
     // vector used to check the value of the hessian
     d_vector check(n * n);
     size_t ij;
     for(ij = 0; ij < n * n; ij++)
          check[ij] = 0.0;
     ij         = 0 * n + 1;
     check[ij]  = 1.0;
     ij         = 1 * n + 0;
     check[ij]  = 1.0;
     for(size_t j = 0; j < n; j++)
     {     ij = j * n + j;
          check[ij] = 6.0 * x[j];
     }
     //
     // compute Hessian sparsity pattern
     b_vector select_domain(n), select_range(m);
     for(size_t j = 0; j < n; j++)
          select_domain[j] = true;
     select_range[0] = true;
     //
     CppAD::sparse_rc<s_vector> hes_pattern;
     bool internal_bool = false;
     f.for_hes_sparsity(
          select_domain, select_range, internal_bool, hes_pattern
     );
     //
     // compute entire sparse Hessian (really only need lower triangle)
     CppAD::sparse_rcv<s_vector, d_vector> subset( hes_pattern );
     CppAD::sparse_hes_work work;
     std::string coloring = "cppad.symmetric";
     size_t n_sweep = f.sparse_hes(x, w, subset, hes_pattern, coloring, work);
     ok &= n_sweep == 2;
     //
     const s_vector row( subset.row() );
     const s_vector col( subset.col() );
     const d_vector val( subset.val() );
     size_t nnz = subset.nnz();
     ok &= nnz == n + 2;
     for(size_t k = 0; k < nnz; k++)
     {     ij = row[k] * n + col[k];
          ok &= val[k] == check[ij];
     }
     return ok;
}

Input File: example/sparse/sparse_hes.cpp
5.6.4: Sparse Hessian

5.6.4.a: Syntax
hes = f.SparseHessian(x, w)
hes = f.SparseHessian(x, w, p)
n_sweep = f.SparseHessian(x, w, p, row, col, hes, work)

5.6.4.b: Purpose
We use @(@ n @)@ for the 5.1.5.d: domain size, and @(@ m @)@ for the 5.1.5.e: range size of f . We use @(@ F : \B{R}^n \rightarrow \B{R}^m @)@ to denote the 12.4.a: AD function corresponding to f . The syntax above sets hes to the Hessian @[@ H(x) = \dpow{2}{x} \sum_{i=0}^{m-1} w_i F_i (x) @]@ This routine takes advantage of the sparsity of the Hessian in order to reduce the amount of computation necessary. If row and col are present, it also takes advantage of the reduced set of elements of the Hessian that need to be computed. One can use speed tests (e.g. 8.3: speed_test ) to verify that results are computed faster than when using the routine 5.2.2: Hessian .

5.6.4.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.6.4.n: Uses Forward below).

5.6.4.d: x
The argument x has prototype
     const VectorBase& x
(see 5.6.4.k: VectorBase below) and its size must be equal to n , the dimension of the 5.1.5.d: domain space for f . It specifies the point at which to evaluate the Hessian.

5.6.4.e: w
The argument w has prototype
     const VectorBase& w
and size @(@ m @)@. It specifies the value of @(@ w_i @)@ in the expression for hes . The more components of @(@ w @)@ that are identically zero, the more sparse the resulting Hessian may be (and hence the more efficient the calculation of hes may be).

5.6.4.f: p
The argument p is optional and has prototype
     const VectorSet& p
(see 5.6.4.l: VectorSet below). If it has elements of type bool, its size is @(@ n * n @)@. If it has elements of type std::set<size_t>, its size is @(@ n @)@ and all its set elements are between zero and @(@ n - 1 @)@. It specifies a 12.4.j: sparsity pattern for the Hessian @(@ H(x) @)@.

5.6.4.f.a: Purpose
If this sparsity pattern does not change between calls to SparseHessian , it should be faster to calculate p once and pass this argument to SparseHessian . If you specify p , CppAD will use the same type of sparsity representation (vectors of bool or vectors of std::set<size_t>) for its internal calculations. Otherwise, the representation for the internal calculations is unspecified.
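For example (a sketch following the vector of sets case in 5.6.4.1: sparse_hessian.cpp below, assuming f , n , m , x , and w are already set up):
     // compute the sparsity pattern p once
     std::vector< std::set<size_t> > r(n), s(1);
     for(size_t j = 0; j < n; j++)
          r[j].insert(j);
     f.ForSparseJac(n, r);
     for(size_t i = 0; i < m; i++)
          if( w[i] != 0. )
               s[0].insert(i);
     std::vector< std::set<size_t> > p = f.RevSparseHes(n, s);
     //
     // p can now be reused on every subsequent call
     d_vector hes = f.SparseHessian(x, w, p);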

5.6.4.f.b: work
If you specify work in the calling sequence, it is not necessary to keep the sparsity pattern; see the heading 5.6.4.i.c: p under the work description.

5.6.4.f.c: Column Subset
If the arguments row and col are present, and 5.6.4.i.a: color_method is cppad.general or cppad.symmetric, it is not necessary to compute the entire sparsity pattern. Only the following subset of column values will matter:
     { col[k] : k = 0 , ... , K-1 } .

5.6.4.g: row, col
The arguments row and col are optional and have prototype
     const VectorSize& row
     const VectorSize& col
(see 5.6.4.m: VectorSize below). They specify which rows and columns of @(@ H (x) @)@ are returned and in what order. We use @(@ K @)@ to denote the value hes.size() which must also equal the size of row and col . Furthermore, for @(@ k = 0 , \ldots , K-1 @)@, it must hold that @(@ row[k] < n @)@ and @(@ col[k] < n @)@. In addition, all of the @(@ (row[k], col[k]) @)@ pairs must correspond to a true value in the sparsity pattern p .

5.6.4.h: hes
The result hes has prototype
     VectorBase hes
In the case where row and col are not present, the size of hes is @(@ n * n @)@. In this case, for @(@ j = 0 , \ldots , n - 1 @)@ and @(@ \ell = 0 , \ldots , n - 1 @)@, @[@ hes [ j * n + \ell ] = \DD{ w^{\rm T} F }{ x_j }{ x_\ell } ( x ) @]@

In the case where the arguments row and col are present, we use @(@ K @)@ to denote the size of hes . The input value of its elements does not matter. Upon return, for @(@ k = 0 , \ldots , K - 1 @)@, @[@ hes [ k ] = \DD{ w^{\rm T} F }{ x_j }{ x_\ell } (x) \; , \; \; {\rm where} \; j = row[k] \; {\rm and } \; \ell = col[k] @]@

5.6.4.i: work
If this argument is present, it has prototype
     sparse_hessian_work& work
This object can only be used with the routine SparseHessian. During the first use, information is stored in work . This is used to reduce the work done by future calls to SparseHessian with the same f , p , row , and col . If a future call is made where any of these values have changed, you must first call work.clear() to inform CppAD that this information needs to be recomputed.

5.6.4.i.a: color_method
The coloring algorithm determines which rows and columns can be computed during the same sweep. This field has prototype
     std::string work.color_method
This value only matters on the first call to sparse_hessian that follows the work constructor or a call to work.clear() .

"cppad.symmetric"
This is the default coloring method (after a constructor or clear()). It takes advantage of the fact that the Hessian matrix is symmetric to find a coloring that requires fewer 5.6.4.j: sweeps .

"cppad.general"
This is the same as the "cppad" method for the 5.6.2.h.a: sparse_jacobian calculation.

"colpack.symmetric"
This method requires that 2.2.2: colpack_prefix was specified on the 2.2.b: cmake command line. It also takes advantage of the fact that the Hessian matrix is symmetric.

"colpack.general"
This is the same as the "colpack" method for the 5.6.2.h.a: sparse_jacobian calculation.

5.6.4.i.b: colpack.star Deprecated 2017-06-01
The colpack.star method is deprecated. It is the same as the colpack.symmetric method, which should be used instead.

5.6.4.i.c: p
If work is present, and it is not the first call after its construction or a clear, the sparsity pattern p is not used. This enables one to free the sparsity pattern and still compute corresponding sparse Hessians.

5.6.4.j: n_sweep
The return value n_sweep has prototype
     size_t n_sweep
It is the number of first order forward sweeps used to compute the requested Hessian values. Each first order forward sweep is followed by a second order reverse sweep, so it is also the number of reverse sweeps. This is proportional to the total work that SparseHessian does, not counting the zero order forward sweep, or the work to combine multiple columns into a single forward-reverse sweep pair.

5.6.4.k: VectorBase
The type VectorBase must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.6.4.l: VectorSet
The type VectorSet must be a 8.9: SimpleVector class with 8.9.b: elements of type bool or std::set<size_t>; see 12.4.j: sparsity pattern for a discussion of the difference. The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.6.4.l.a: Restrictions
If VectorSet has elements of std::set<size_t>, then p[i] must return a reference (not a copy) to the corresponding set. According to section 26.3.2.3 of the 1998 C++ standard, std::valarray< std::set<size_t> > does not satisfy this condition.

5.6.4.m: VectorSize
The type VectorSize must be a 8.9: SimpleVector class with 8.9.b: elements of type size_t. The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.6.4.n: Uses Forward
After each call to 5.3: Forward , the object f contains the corresponding 12.4.l: Taylor coefficients . After a call to any of the sparse Hessian routines, the zero order Taylor coefficients correspond to f.Forward(0, x) and the other coefficients are unspecified.

5.6.4.o: Example
The routine 5.6.4.1: sparse_hessian.cpp is an example and test of sparse_hessian. It returns true if it succeeds and false otherwise.

5.6.4.p: Subset Hessian
The routine 5.6.4.2: sub_sparse_hes.cpp is an example and test that computes a sparse Hessian for a subset of the variables. It returns true for success and false otherwise.
Input File: cppad/core/sparse_hessian.hpp
5.6.4.1: Sparse Hessian: Example and Test
# include <cppad/cppad.hpp>
bool sparse_hessian(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     size_t i, j, k, ell;
     typedef CPPAD_TESTVECTOR(AD<double>)               a_vector;
     typedef CPPAD_TESTVECTOR(double)                     d_vector;
     typedef CPPAD_TESTVECTOR(size_t)                     i_vector;
     typedef CPPAD_TESTVECTOR(bool)                       b_vector;
     typedef CPPAD_TESTVECTOR(std::set<size_t>)         s_vector;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n = 12;  // must be greater than or equal to 3; see n_sweep below
     a_vector a_x(n);
     for(j = 0; j < n; j++)
          a_x[j] = AD<double> (0);

     // declare independent variables and start recording
     CppAD::Independent(a_x);

     // range space vector
     size_t m = 1;
     a_vector a_y(m);
     a_y[0] = a_x[0]*a_x[1];
     for(j = 0; j < n; j++)
          a_y[0] += a_x[j] * a_x[j] * a_x[j];

     // create f: x -> y and stop tape recording
     // (without executing zero order forward calculation)
     CppAD::ADFun<double> f;
     f.Dependent(a_x, a_y);

     // new value for the independent variable vector, and weighting vector
     d_vector w(m), x(n);
     for(j = 0; j < n; j++)
          x[j] = double(j);
     w[0] = 1.0;

     // vector used to check the value of the hessian
     d_vector check(n * n);
     for(ell = 0; ell < n * n; ell++)
          check[ell] = 0.0;
     ell        = 0 * n + 1;
     check[ell] = 1.0;
     ell        = 1 * n + 0;
     check[ell] = 1.0;
     for(j = 0; j < n; j++)
     {     ell = j * n + j;
          check[ell] = 6.0 * x[j];
     }

     // -------------------------------------------------------------------
     // second derivative of y[0] w.r.t x
     d_vector hes(n * n);
     hes = f.SparseHessian(x, w);
     for(ell = 0; ell < n * n; ell++)
          ok &=  NearEqual(w[0] * check[ell], hes[ell], eps, eps );

     // --------------------------------------------------------------------
     // example using vectors of bools to compute sparsity pattern for Hessian
     b_vector r_bool(n * n);
     for(i = 0; i < n; i++)
     {     for(j = 0; j < n; j++)
               r_bool[i * n + j] = false;
          r_bool[i * n + i] = true;
     }
     f.ForSparseJac(n, r_bool);
     //
     b_vector s_bool(m);
     for(i = 0; i < m; i++)
          s_bool[i] = w[i] != 0;
     b_vector p_bool = f.RevSparseHes(n, s_bool);

     hes = f.SparseHessian(x, w, p_bool);
     for(ell = 0; ell < n * n; ell++)
          ok &=  NearEqual(w[0] * check[ell], hes[ell], eps, eps );

     // --------------------------------------------------------------------
     // example using vectors of sets to compute sparsity pattern for Hessian
     s_vector r_set(n);
     for(i = 0; i < n; i++)
          r_set[i].insert(i);
     f.ForSparseJac(n, r_set);
     //
     s_vector s_set(m);
     for(i = 0; i < m; i++)
          if( w[i] != 0. )
               s_set[0].insert(i);
     s_vector p_set = f.RevSparseHes(n, s_set);

     // example passing sparsity pattern to SparseHessian
     hes = f.SparseHessian(x, w, p_set);
     for(ell = 0; ell < n * n; ell++)
          ok &=  NearEqual(w[0] * check[ell], hes[ell], eps, eps );

     // --------------------------------------------------------------------
     // use row and column indices to specify upper triangle of
     // non-zero elements of Hessian
     size_t K = n + 1;
     i_vector row(K), col(K);
     hes.resize(K);
     k = 0;
     for(j = 0; j < n; j++)
     {     // diagonal of Hessian
          row[k] = j;
          col[k] = j;
          k++;
     }
     // only off diagonal non-zero element in upper triangle
     row[k] = 0;
     col[k] = 1;
     k++;
     ok &= k == K;
     CppAD::sparse_hessian_work work;

     // can use p_set or p_bool.
     size_t n_sweep = f.SparseHessian(x, w, p_set, row, col, hes, work);
     for(k = 0; k < K; k++)
     {     ell = row[k] * n + col[k];
          ok &=  NearEqual(w[0] * check[ell], hes[k], eps, eps );
     }
     ok &= n_sweep == 2;

     // now recompute at a different x and w (using work from previous call)
     w[0]       = 2.0;
     x[1]       = 0.5;
     ell        = 1 * n + 1;
     check[ell] = 6.0 * x[1];
     s_vector   not_used;
     n_sweep    = f.SparseHessian(x, w, not_used, row, col, hes, work);
     for(k = 0; k < K; k++)
     {     ell = row[k] * n + col[k];
          ok &=  NearEqual(w[0] * check[ell], hes[k], eps, eps );
     }
     ok &= n_sweep == 2;

     return ok;
}

Input File: example/sparse/sparse_hessian.cpp
5.6.4.2: Computing Sparse Hessian for a Subset of Variables

5.6.4.2.a: Purpose
This example uses 10.2.10: multiple levels of AD to compute the Hessian for a subset of the variables without having to compute the sparsity pattern for the entire function.

5.6.4.2.b: See Also
5.6.4.3: sparse_sub_hes.cpp , 5.5.6.2: sparsity_sub.cpp

5.6.4.2.c: Function
We consider the function @(@ f : \B{R}^{nu} \times \B{R}^{nv} \rightarrow \B{R} @)@ defined by @[@ f (u, v) = \frac{1}{6} \left( \sum_{j=0}^{nu-1} u_j^3 \right) \left( \sum_{j=0}^{nv-1} v_j \right) @]@

5.6.4.2.d: Subset
Suppose that we are only interested in computing the function @[@ H(u, v) = \partial_u \partial_u f (u, v) @]@ where this Hessian is sparse.
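In fact, a short calculation (added here for clarity) shows that this Hessian is diagonal: @[@ \DD{f}{u_i}{u_j} (u, v) = \delta_{i,j} \; u_i \sum_{k=0}^{nv-1} v_k @]@ This is why the call to SparseHessian in the code below requires only one sweep, and why the check value for each non-zero entry is @(@ u_i \sum_k v_k @)@.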

5.6.4.2.e: Example
The following code shows one way to compute this subset of the Hessian of @(@ f @)@.
# include <cppad/cppad.hpp>

namespace {
     using CppAD::vector;
     template <class Scalar>
     Scalar f(const vector<Scalar>& u, const vector<Scalar>& v)
     {     size_t i;
          Scalar sum_v = Scalar(0);
          for(i = 0; i < v.size(); i++)
               sum_v += v[i];
          Scalar sum_cube_u = Scalar(0);
          for(i = 0; i < u.size(); i++)
               sum_cube_u += u[i] * u[i] * u[i] / 6.0;
          return sum_v * sum_cube_u;
     }
}

bool sub_sparse_hes(void)
{     bool ok = true;
     using CppAD::AD;
     typedef AD<double>   adouble;
     typedef AD<adouble> a2double;
     typedef vector< std::set<size_t> > pattern;
     double eps = 10. * std::numeric_limits<double>::epsilon();
     size_t i, j;

     // start recording with x = (u , v)
     size_t nu = 10;
     size_t nv = 5;
     size_t n  = nu + nv;
     vector<adouble> ax(n);
     for(j = 0; j < n; j++)
          ax[j] = adouble(j + 2);
     CppAD::Independent(ax);

     // extract u as independent variables
     vector<a2double> a2u(nu);
     for(j = 0; j < nu; j++)
          a2u[j] = a2double(j + 2);
     CppAD::Independent(a2u);

     // extract v as parameters
     vector<a2double> a2v(nv);
     for(j = 0; j < nv; j++)
          a2v[j] = ax[nu+j];

     // record g(u)
     vector<a2double> a2y(1);
     a2y[0] = f(a2u, a2v);
     CppAD::ADFun<adouble> g;
     g.Dependent(a2u, a2y);

     // compute sparsity pattern for Hessian of g(u)
     pattern r(nu), s(1);
     for(j = 0; j < nu; j++)
          r[j].insert(j);
     g.ForSparseJac(nu, r);
     s[0].insert(0);
     pattern p = g.RevSparseHes(nu, s);

     // Row and column indices for non-zeros in lower triangle of Hessian
     vector<size_t> row, col;
     for(i = 0; i < nu; i++)
     {     std::set<size_t>::const_iterator itr;
          for(itr = p[i].begin(); itr != p[i].end(); itr++)
          {     j = *itr;
               if( j <= i )
               {     row.push_back(i);
                    col.push_back(j);
               }
          }
     }
     size_t K = row.size();
     CppAD::sparse_hessian_work work;
     vector<adouble> au(nu), ahes(K), aw(1);
     aw[0] = 1.0;
     for(j = 0; j < nu; j++)
          au[j] = ax[j];
     size_t n_sweep = g.SparseHessian(au, aw, p, row, col, ahes, work);

     // The Hessian w.r.t u is diagonal
     ok &= n_sweep == 1;

     // record H(u, v) = Hessian of f w.r.t u
     CppAD::ADFun<double> H(ax, ahes);

     // remove unnecessary operations
     H.optimize();

     // Now evaluate the Hessian at a particular value for u, v
     vector<double> x(n);
     for(j = 0; j < n; j++)
          x[j] = double(j + 2);
     vector<double> hes = H.Forward(0, x);

     // Now check the Hessian
     double sum_v = 0.0;
     for(j = 0; j < nv; j++)
          sum_v += x[nu + j];
     for(size_t k = 0; k < K; k++)
     {     i     = row[k];
          j     = col[k];
          ok   &= i == j;
          double check = sum_v * x[i];
          ok &= CppAD::NearEqual(hes[k], check, eps, eps);
     }
     return ok;
}

Input File: example/sparse/sub_sparse_hes.cpp
5.6.4.3: Subset of a Sparse Hessian: Example and Test

5.6.4.3.a: Purpose
This example uses a 5.6.4.f.c: column subset of the sparsity pattern to compute a subset of the Hessian.

5.6.4.3.b: See Also
5.6.4.2: sub_sparse_hes.cpp
# include <cppad/cppad.hpp>
bool sparse_sub_hes(void)
{     bool ok = true;
     using CppAD::AD;
     typedef CPPAD_TESTVECTOR(size_t)     SizeVector;
     typedef CPPAD_TESTVECTOR(double)     DoubleVector;
     typedef CppAD::sparse_rc<SizeVector> sparsity;
     //
     // domain space vector
     size_t n = 4;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     for(size_t j = 0; j < n; j++)
          ax[j] = double(j);

     // declare independent variables and start recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     ay[0] = 0.0;
     for(size_t j = 0; j < n; j++)
          ay[0] += double(j+1) * ax[0] * ax[j];

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);

     // sparsity pattern for the identity matrix
     size_t nr     = n;
     size_t nc     = n;
     size_t nnz_in = n;
     sparsity pattern_in(nr, nc, nnz_in);
     for(size_t k = 0; k < nnz_in; k++)
     {     size_t r = k;
          size_t c = k;
          pattern_in.set(k, r, c);
     }
     // compute sparsity pattern for J(x) = f'(x)
     bool transpose       = false;
     bool dependency      = false;
     bool internal_bool   = false;
     sparsity pattern_out;
     f.for_jac_sparsity(
          pattern_in, transpose, dependency, internal_bool, pattern_out
     );
     //
     // compute sparsity pattern for H(x) = f''(x)
     CPPAD_TESTVECTOR(bool) select_range(m);
     select_range[0]      = true;
     CppAD::sparse_hes_work work;
     f.rev_hes_sparsity(
          select_range, transpose, internal_bool, pattern_out
     );
     size_t nnz = pattern_out.nnz();
     ok        &= nnz == 7;
     ok        &= pattern_out.nr() == n;
     ok        &= pattern_out.nc() == n;
     {     // check results
          const SizeVector& row( pattern_out.row() );
          const SizeVector& col( pattern_out.col() );
          SizeVector row_major = pattern_out.row_major();
          //
          ok &= row[ row_major[0] ] ==  0  && col[ row_major[0] ] ==  0;
          ok &= row[ row_major[1] ] ==  0  && col[ row_major[1] ] ==  1;
          ok &= row[ row_major[2] ] ==  0  && col[ row_major[2] ] ==  2;
          ok &= row[ row_major[3] ] ==  0  && col[ row_major[3] ] ==  3;
          //
          ok &= row[ row_major[4] ] ==  1  && col[ row_major[4] ] ==  0;
          ok &= row[ row_major[5] ] ==  2  && col[ row_major[5] ] ==  0;
          ok &= row[ row_major[6] ] ==  3  && col[ row_major[6] ] ==  0;
     }
     //
     // Only interested in cross-terms. Since we are not computing row 0,
     // we do not need sparsity entries in row 0.
     CppAD::sparse_rc<SizeVector> subset_pattern(n, n, 3);
     for(size_t k = 0; k < 3; k++)
          subset_pattern.set(k, k+1, 0);
     CppAD::sparse_rcv<SizeVector, DoubleVector> subset( subset_pattern );
     //
     // argument and weight values for computation
     CPPAD_TESTVECTOR(double) x(n), w(m);
     for(size_t j = 0; j < n; j++)
          x[j] = double(n) / double(j+1);
     w[0] = 1.0;
     //
     std::string coloring = "cppad.general";
     size_t n_sweep = f.sparse_hes(
          x, w, subset, subset_pattern, coloring, work
     );
     ok &= n_sweep == 1;
     for(size_t k = 0; k < 3; k++)
     {     size_t i = k + 1;
          ok &= subset.val()[k] == double(i + 1);
     }
     //
     // convert subset from lower triangular to upper triangular
     for(size_t k = 0; k < 3; k++)
          subset_pattern.set(k, 0, k+1);
     subset = CppAD::sparse_rcv<SizeVector, DoubleVector>( subset_pattern );
     //
     // This will require more work because the Hessian is computed
     // column by column (not row by row).
     work.clear();
     n_sweep = f.sparse_hes(
          x, w, subset, subset_pattern, coloring, work
     );
     ok &= n_sweep == 3;
     //
     // but it will get the right answer
     for(size_t k = 0; k < 3; k++)
     {     size_t i = k + 1;
          ok &= subset.val()[k] == double(i + 1);
     }
     return ok;
}

Input File: example/sparse/sparse_sub_hes.cpp
5.6.5: Compute Sparse Jacobians Using Subgraphs

5.6.5.a: Syntax
f.subgraph_jac_rev(x, subset)
f.subgraph_jac_rev(
     select_domain, select_range, x, matrix_out
)


5.6.5.b: Purpose
We use @(@ F : \B{R}^n \rightarrow \B{R}^m @)@ to denote the function corresponding to f . Here n is the 5.1.5.d: domain size and m is the 5.1.5.e: range size of f . The syntax above takes advantage of sparsity when computing the Jacobian @[@ J(x) = F^{(1)} (x) @]@ The second syntax computes the sparsity pattern and the value of the Jacobian at the same time. If one only wants the sparsity pattern, it should be faster to use 5.5.11: subgraph_sparsity .

5.6.5.c: Method
This routine uses a subgraph technique. To be specific, for each dependent variable, it uses a subgraph of the operation sequence to determine which independent variables affect it. This avoids the overhead of performing set operations that is inherent in other methods for computing sparsity patterns.

5.6.5.d: BaseVector
The type BaseVector is a 8.9: SimpleVector class with 8.9.b: elements of type Base .

5.6.5.e: SizeVector
The type SizeVector is a 8.9: SimpleVector class with 8.9.b: elements of type size_t.

5.6.5.f: BoolVector
The type BoolVector is a 8.9: SimpleVector class with 8.9.b: elements of type bool.

5.6.5.g: f
This object has prototype
     ADFun<Base> f
Note that the Taylor coefficients stored in f are affected by this operation; see 5.6.5.i: Uses Forward below.

5.6.5.h: x
This argument has prototype
     const BaseVector& x
It is the value of x at which we are computing the Jacobian.

5.6.5.i: Uses Forward
After each call to 5.3: Forward , the object f contains the corresponding 12.4.l: Taylor coefficients . After a call to subgraph_jac_rev, the zero order coefficients correspond to
     f.Forward(0, x)
All the other forward mode coefficients are unspecified.

5.6.5.j: subset
This argument has prototype
     sparse_rcv<SizeVector, BaseVector>& subset
Its row size is subset.nr() == m , and its column size is subset.nc() == n . It specifies which elements of the Jacobian are computed. The input elements in its value vector subset.val() do not matter. Upon return it contains the value of the corresponding elements of the Jacobian.
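For example, the following sketch (not part of the interface above; it assumes f , x , m , and n are as in this section) requests only the diagonal elements of the Jacobian:

     // sketch: request only the diagonal elements of the Jacobian
     typedef CPPAD_TESTVECTOR(size_t) s_vector;
     typedef CPPAD_TESTVECTOR(double) d_vector;
     size_t nnz = m < n ? m : n;
     CppAD::sparse_rc<s_vector> pattern(m, n, nnz);
     for(size_t k = 0; k < nnz; k++)
          pattern.set(k, k, k);   // k-th possibly non-zero entry is (k, k)
     CppAD::sparse_rcv<s_vector, d_vector> subset( pattern );
     f.subgraph_jac_rev(x, subset); // subset.val()[k] is the (k, k) entry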

5.6.5.k: select_domain
The argument select_domain has prototype
     const BoolVector& select_domain
It has size @(@ n @)@ and specifies which independent variables to include.

5.6.5.l: select_range
The argument select_range has prototype
     const BoolVector& select_range
It has size @(@ m @)@ and specifies which components of the range to include in the calculation. A subgraph is built for each dependent variable and the selected set of independent variables.

5.6.5.m: matrix_out
This argument has prototype
     sparse_rcv<SizeVector, BaseVector>& matrix_out
The input value of matrix_out does not matter. Upon return matrix_out is a 8.28: sparse matrix representation of @(@ F^{(1)} (x) @)@. The matrix has @(@ m @)@ rows and @(@ n @)@ columns. If select_domain[j] is true, select_range[i] is true, and @(@ F_i (x) @)@ depends on @(@ x_j @)@, then the pair @(@ (i, j) @)@ is in matrix_out . For each k = 0 , ..., matrix_out.nnz()-1 , let
     i = matrix_out.row()[k]
     j = matrix_out.col()[k]
     v = matrix_out.val()[k]
It follows that the partial of @(@ F_i (x) @)@ with respect to @(@ x_j @)@ is equal to @(@ v @)@.
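As an illustration, the following sketch (assuming the typedef d_vector for a SimpleVector with elements of type double) scatters the triplets into a dense row-major matrix:

     // scatter the sparse triplets into a dense row-major matrix
     d_vector dense(m * n);
     for(size_t ell = 0; ell < m * n; ell++)
          dense[ell] = 0.0;   // entries not in the pattern are zero
     for(size_t k = 0; k < matrix_out.nnz(); k++)
     {     size_t i = matrix_out.row()[k];
          size_t j = matrix_out.col()[k];
          dense[i * n + j] = matrix_out.val()[k]; // partial of F_i w.r.t. x_j
     }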

5.6.5.n: Example
The files 5.6.5.1: subgraph_jac_rev.cpp and 5.6.5.2: subgraph_hes2jac.cpp are examples and tests using subgraph_jac_rev. They return true for success and false for failure.
Input File: cppad/core/subgraph_jac_rev.hpp
5.6.5.1: Computing Sparse Jacobian Using Reverse Mode: Example and Test
# include <cppad/cppad.hpp>
bool subgraph_jac_rev(void)
{     bool ok = true;
     //
     using CppAD::AD;
     using CppAD::NearEqual;
     using CppAD::sparse_rc;
     using CppAD::sparse_rcv;
     //
     typedef CPPAD_TESTVECTOR(AD<double>) a_vector;
     typedef CPPAD_TESTVECTOR(double)     d_vector;
     typedef CPPAD_TESTVECTOR(size_t)     s_vector;
     typedef CPPAD_TESTVECTOR(bool)       b_vector;
     //
     // domain space vector
     size_t n = 4;
     a_vector  a_x(n);
     for(size_t j = 0; j < n; j++)
          a_x[j] = AD<double> (0);
     //
     // declare independent variables and starting recording
     CppAD::Independent(a_x);
     //
     size_t m = 3;
     a_vector  a_y(m);
     a_y[0] = a_x[0] + a_x[1];
     a_y[1] = a_x[2] + a_x[3];
     a_y[2] = a_x[0] + a_x[1] + a_x[2] + a_x[3] * a_x[3] / 2.;
     //
     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(a_x, a_y);
     //
     // new value for the independent variable vector
     d_vector x(n);
     for(size_t j = 0; j < n; j++)
          x[j] = double(j);
     /*
            [ 1 1 0 0  ]
     J(x) = [ 0 0 1 1  ]
            [ 1 1 1 x_3]
     */
     //
     // row-major order values of J(x)
     size_t nnz = 8;
     s_vector check_row(nnz), check_col(nnz);
     d_vector check_val(nnz);
     for(size_t k = 0; k < nnz; k++)
     {     // check_val
          if( k < 7 )
               check_val[k] = 1.0;
          else
               check_val[k] = x[3];
          //
          // check_row and check_col
          check_col[k] = k;
          if( k < 2 )
               check_row[k] = 0;
          else if( k < 4 )
               check_row[k] = 1;
          else
          {     check_row[k] = 2;
               check_col[k] = k - 4;
          }
     }
     //
     // select all components of the domain and range
     b_vector select_domain(n), select_range(m);
     for(size_t j = 0; j < n; ++j)
          select_domain[j] = true;
     for(size_t i = 0; i < m; ++i)
          select_range[i] = true;
     // -----------------------------------------------------------------------
     // Compute Jacobian using f.subgraph_jac_rev(x, subset)
     // -----------------------------------------------------------------------
     //
     // get sparsity pattern
     bool transpose     = false;
     sparse_rc<s_vector> pattern_jac;
     f.subgraph_sparsity(
          select_domain, select_range, transpose, pattern_jac
     );
     // f.subgraph_jac_rev(x, subset)
     sparse_rcv<s_vector, d_vector> subset( pattern_jac );
     f.subgraph_jac_rev(x, subset);
     //
     // check result
     ok  &= subset.nnz() == nnz;
     s_vector row_major = subset.row_major();
     for(size_t k = 0; k < nnz; k++)
     {     ok &= subset.row()[ row_major[k] ] == check_row[k];
          ok &= subset.col()[ row_major[k] ] == check_col[k];
          ok &= subset.val()[ row_major[k] ] == check_val[k];
     }
     // -----------------------------------------------------------------------
     // f.subgraph_jac_rev(select_domain, select_range, x, matrix_out)
     // -----------------------------------------------------------------------
     sparse_rcv<s_vector, d_vector>  matrix_out;
     f.subgraph_jac_rev(select_domain, select_range, x, matrix_out);
     //
     // check result
     ok  &= matrix_out.nnz() == nnz;
     row_major = matrix_out.row_major();
     for(size_t k = 0; k < nnz; k++)
     {     ok &= matrix_out.row()[ row_major[k] ] == check_row[k];
          ok &= matrix_out.col()[ row_major[k] ] == check_col[k];
          ok &= matrix_out.val()[ row_major[k] ] == check_val[k];
     }
     //
     return ok;
}

Input File: example/sparse/subgraph_jac_rev.cpp
5.6.5.2: Sparse Hessian Using Subgraphs and Jacobian: Example and Test
# include <cppad/cppad.hpp>
bool subgraph_hes2jac(void)
{     bool ok = true;
     using CppAD::NearEqual;
     typedef CppAD::AD<double>                      a1double;
     typedef CppAD::AD<a1double>                    a2double;
     typedef CPPAD_TESTVECTOR(double)               d_vector;
     typedef CPPAD_TESTVECTOR(a1double)             a1vector;
     typedef CPPAD_TESTVECTOR(a2double)             a2vector;
     typedef CPPAD_TESTVECTOR(size_t)               s_vector;
     typedef CPPAD_TESTVECTOR(bool)                 b_vector;
     typedef CppAD::sparse_rcv<s_vector, d_vector>  sparse_matrix;
     //
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();
     //
     // double version of x
     size_t n = 12;
     d_vector x(n);
     for(size_t j = 0; j < n; j++)
          x[j] = double(j + 2);
     //
     // a1double version of x
     a1vector a1x(n);
     for(size_t j = 0; j < n; j++)
          a1x[j] = x[j];
     //
     // a2double version of x
     a2vector a2x(n);
     for(size_t j = 0; j < n; j++)
          a2x[j] = a1x[j];
     //
     // declare independent variables and starting recording
     CppAD::Independent(a2x);
     //
     // a2double version of y = f(x) = 5 * x0 * x1 + sum_j xj^3
     size_t m = 1;
     a2vector a2y(m);
     a2y[0] = 5.0 * a2x[0] * a2x[1];
     for(size_t j = 0; j < n; j++)
          a2y[0] += a2x[j] * a2x[j] * a2x[j];
     //
     // create a1double version of f: x -> y and stop tape recording
     // (without executing zero order forward calculation)
     CppAD::ADFun<a1double> a1f;
     a1f.Dependent(a2x, a2y);
     //
     // Optimize this function to reduce future computations.
     // Perhaps only one optimization at the end would be faster.
     a1f.optimize();
     //
     // declare independent variables and start recording g(x) = f'(x)
     Independent(a1x);
     //
     // Use one reverse mode pass to compute z = f'(x)
     a1vector a1w(m), a1z(n);
     a1w[0] = 1.0;
     a1f.Forward(0, a1x);
     a1z = a1f.Reverse(1, a1w);
     //
     // create double version of g : x -> f'(x)
     CppAD::ADFun<double> g;
     g.Dependent(a1x, a1z);
     //
     // Optimize this function to reduce future computations.
     // Perhaps no optimization would be faster.
     g.optimize();
     //
     // compute f''(x) = g'(x)
     b_vector select_domain(n), select_range(n);
     for(size_t j = 0; j < n; ++j)
     {     select_domain[j] = true;
          select_range[j]  = true;
     }
     sparse_matrix hessian;
     g.subgraph_jac_rev(select_domain, select_range, x, hessian);
     // -------------------------------------------------------------------
     // check number of non-zeros in the Hessian
     // (only x0 * x1 generates off diagonal terms)
     ok &= hessian.nnz() == n + 2;
     //
     for(size_t k = 0; k < hessian.nnz(); ++k)
     {     size_t r = hessian.row()[k];
          size_t c = hessian.col()[k];
          double v = hessian.val()[k];
          //
          if( r == c )
          {     // a diagonal element
               double check = 6.0 * x[r];
               ok          &= NearEqual(v, check, eps, eps);
          }
          else
          {     // off diagonal element
               ok   &= (r == 0 && c == 1) || (r == 1 && c == 0);
               double check = 5.0;
               ok          &= NearEqual(v, check, eps, eps);
          }
     }
     return ok;
}

Input File: example/sparse/subgraph_hes2jac.cpp
5.7: Optimize an ADFun Object Tape

5.7.a: Syntax
f.optimize()
f.optimize(options)

5.7.b: Purpose
The operation sequence corresponding to an 5: ADFun object can be very large and involve many operations; see the size functions in 5.1.5: seq_property . The f.optimize procedure reduces the number of operations, and thereby the time and the memory, required to compute function and derivative values.
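For example, a minimal sketch (assuming ax and ay have been recorded as in the examples below) is:

     CppAD::ADFun<double> f(ax, ay);     // create f : x -> y, stop recording
     size_t n_var_before = f.size_var(); // variables before optimization
     f.optimize();                       // reduce operations and variables
     size_t n_var_after  = f.size_var(); // typically smaller than before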

5.7.c: f
The object f has prototype
     ADFun<Base> f

5.7.d: options
This argument has prototype
     const std::string& options
The default for options is the empty string. If it is present, it must consist of one or more of the options below separated by a single space character.

5.7.d.a: no_conditional_skip
The optimize function can create conditional skip operators to improve the speed of conditional expressions; see 4.4.4.j: optimize . If the sub-string no_conditional_skip appears in options , conditional skip operations are not generated. This may make the optimize routine use significantly less memory and take less time to optimize f . If conditional skip operations are generated, it may save a significant amount of time when using f for 5.3: forward or 5.4: reverse mode calculations; see 5.3.9: number_skip .

5.7.d.b: no_compare_op
If the sub-string no_compare_op appears in options , comparison operators will be removed from the optimized function. These operators are necessary for the 5.3.7: compare_change functions to be meaningful. On the other hand, they are not necessary, and take extra time, when the compare_change functions are not used.

5.7.d.c: no_print_for_op
If the sub-string no_print_for_op appears in options , 4.3.6: PrintFor operations will be removed from the optimized function. These operators are useful for reporting problems evaluating derivatives at independent variable values different from those used to record a function.
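More than one of the options above can be requested by separating them with a single space character; for example:

     // request both of these optimizer options in one string
     std::string options = "no_conditional_skip no_compare_op";
     f.optimize(options);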

5.7.e: Examples
5.7.1: forward_active.cpp Example Optimization and Forward Activity Analysis
5.7.2: reverse_active.cpp Example Optimization and Reverse Activity Analysis
5.7.3: compare_op.cpp Example Optimization and Comparison Operators
5.7.4: print_for_op.cpp Example Optimization and Print Forward Operators
5.7.5: conditional_skip.cpp Example Optimization and Conditional Expressions
5.7.6: nest_conditional.cpp Example Optimization and Nested Conditional Expressions
5.7.7: cumulative_sum.cpp Example Optimization and Cumulative Sum Operations

5.7.f: Efficiency
If a 5.3.1: zero order forward calculation is done during the construction of f , it will require more memory and time than required after the optimization procedure. In addition, it will need to be redone. For this reason, it is more efficient to use
     ADFun<Base> f;
     f.Dependent(x, y);
     f.optimize();
instead of
     ADFun<Base> f(x, y);
     f.optimize();
See the discussion about 5.1.2.g: sequence constructors .

5.7.g: Speed Testing
You can run the CppAD 11.1: speed tests and see the corresponding changes in number of variables and execution time. Note that there is an interaction between using 11.1.f.c: optimize and 11.1.f.a: onetape . If onetape is true and optimize is true, the optimized tape will be reused many times. If onetape is false and optimize is true, the tape will be re-optimized for each test.

5.7.h: Atomic Functions
There are some subtle issues with optimized 4.4.7: atomic functions @(@ v = g(u) @)@:

5.7.h.a: rev_sparse_jac
The 4.4.7.2.7: atomic_rev_sparse_jac function is used to determine which components of u affect the dependent variables of f . For each atomic operation, the current 4.4.7.2.2.b: atomic_sparsity setting determines whether pack_sparsity_enum, bool_sparsity_enum, or set_sparsity_enum is used to determine dependency relations between argument and result variables.

5.7.h.b: nan
If u[i] does not affect the value of the dependent variables for f , the value of u[i] is set to 8.11: nan .

5.7.i: Checking Optimization
If 12.1.j.a: NDEBUG is not defined, and 5.3.6: f.size_order() is greater than zero, a 5.3.1: forward_zero calculation is done using the optimized version of f and the results are checked to see that they are the same as before. If they are not the same, the 8.1: ErrorHandler is called with a known error message related to f.optimize() .
Input File: cppad/core/optimize.hpp
5.7.1: Example Optimization and Forward Activity Analysis
# include <cppad/cppad.hpp>
namespace {
     struct tape_size { size_t n_var; size_t n_op; };

     template <class Vector> void fun(
          const Vector& x, Vector& y, tape_size& before, tape_size& after
     )
     {     typedef typename Vector::value_type scalar;

          // phantom variable with index 0 and independent variables
          // begin operator, independent variable operators and end operator
          before.n_var = 1 + x.size(); before.n_op  = 2 + x.size();
          after.n_var  = 1 + x.size(); after.n_op   = 2 + x.size();

          // adding the constant zero does not take any operations
          scalar zero   = 0.0 + x[0];
          before.n_var += 0; before.n_op  += 0;
          after.n_var  += 0; after.n_op   += 0;

          // multiplication by the constant one does not take any operations
          scalar one    = 1.0 * x[1];
          before.n_var += 0; before.n_op  += 0;
          after.n_var  += 0; after.n_op   += 0;

          // multiplication by the constant zero does not take any operations
          // and results in the constant zero.
          scalar two    = 0.0 * x[0];

          // calculations that only involve constants do not add any operations
          scalar three  = (1.0 + two) * 3.0;
          before.n_var += 0; before.n_op  += 0;
          after.n_var  += 0; after.n_op   += 0;

          // The optimizer will recognize that zero + one = one + zero
          // for all values of x.
          scalar four   = zero + one;
          scalar five   = one  + zero;
          before.n_var += 2; before.n_op  += 2;
          after.n_var  += 1; after.n_op   += 1;

          // The optimizer will recognize that sin(x[2]) = sin(x[2])
          // for all values of x. Note that, for computation of derivatives,
          // sin(x[2]) and cos(x[2]) are stored on the tape as a pair.
          scalar six    = sin(x[2]);
          scalar seven  = sin(x[2]);
          before.n_var += 4; before.n_op  += 2;
          after.n_var  += 2; after.n_op   += 1;

          // If we used addition here, five + seven = zero + one + seven
          // which would get converted to a cumulative summation operator.
          scalar eight = five * seven;
          before.n_var += 1; before.n_op  += 1;
          after.n_var  += 1; after.n_op   += 1;

          // Use two, three, four and six in order to avoid a compiler warning
          // Note that addition of two and three does not take any operations.
          // Also note that the optimizer recognizes four * six == five * seven.
          scalar nine  = eight + four * six * (two + three);
          before.n_var += 3; before.n_op  += 3;
          after.n_var  += 2; after.n_op   += 2;

          // results for this operation sequence
          y[0] = nine;
          before.n_var += 0; before.n_op  += 0;
          after.n_var  += 0; after.n_op   += 0;
     }
}

bool forward_active(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps10 = 10.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 3;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.5;
     ax[1] = 1.5;
     ax[2] = 2.0;

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     tape_size before, after;
     fun(ax, ay, before, after);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);
     ok &= f.size_var() == before.n_var;
     ok &= f.size_op()  == before.n_op;

     // Optimize the operation sequence
     // Note that, for this case, all the optimization was done during
     // the recording and there is no benefit to the optimization.
     f.optimize();
     ok &= f.size_var() == after.n_var;
     ok &= f.size_op()  == after.n_op;

     // check zero order forward with different argument value
     CPPAD_TESTVECTOR(double) x(n), y(m), check(m);
     for(size_t i = 0; i < n; i++)
          x[i] = double(i + 2);
     y    = f.Forward(0, x);
     fun(x, check, before, after);
     ok &= NearEqual(y[0], check[0], eps10, eps10);

     return ok;
}

Input File: example/optimize/forward_active.cpp
5.7.2: Example Optimization and Reverse Activity Analysis
# include <cppad/cppad.hpp>
namespace {
     struct tape_size { size_t n_var; size_t n_op; };

     template <class Vector> void fun(
          const Vector& x, Vector& y, tape_size& before, tape_size& after
     )
     {     typedef typename Vector::value_type scalar;

          // phantom variable with index 0 and independent variables
          // begin operator, independent variable operators and end operator
          before.n_var = 1 + x.size(); before.n_op  = 2 + x.size();
          after.n_var  = 1 + x.size(); after.n_op   = 2 + x.size();

          // initialize product of even and odd indexed variables
          scalar prod_even = x[0];
          scalar prod_odd  = x[1];
          before.n_var += 0; before.n_op  += 0;
          after.n_var  += 0; after.n_op   += 0;
          //
          // compute product of even and odd variables
          for(size_t i = 2; i < size_t( x.size() ); i++)
          {     if( i % 2 == 0 )
               {     // prod_even will affect dependent variable
                    prod_even = prod_even * x[i];
                    before.n_var += 1; before.n_op += 1;
                    after.n_var  += 1; after.n_op  += 1;
               }
               else
               {     // prod_odd will not affect dependent variable
                    prod_odd  = prod_odd * x[i];
                    before.n_var += 1; before.n_op += 1;
                    after.n_var  += 0; after.n_op  += 0;
               }
          }

          // dependent variable for this operation sequence
          y[0] = prod_even;
          before.n_var += 0; before.n_op  += 0;
          after.n_var  += 0; after.n_op   += 0;
     }
}

bool reverse_active(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps10 = 10.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 6;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     for(size_t i = 0; i < n; i++)
          ax[i] = AD<double>(i + 1);

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     tape_size before, after;
     fun(ax, ay, before, after);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);
     ok &= f.size_var() == before.n_var;
     ok &= f.size_op()  == before.n_op;

     // Optimize the operation sequence
     f.optimize();
     ok &= f.size_var() == after.n_var;
     ok &= f.size_op()  == after.n_op;

     // check zero order forward with different argument value
     CPPAD_TESTVECTOR(double) x(n), y(m), check(m);
     for(size_t i = 0; i < n; i++)
          x[i] = double(i + 2);
     y    = f.Forward(0, x);
     fun(x, check, before, after);
     ok &= NearEqual(y[0], check[0], eps10, eps10);

     return ok;
}

Input File: example/optimize/reverse_active.cpp
5.7.3: Example Optimization and Comparison Operators

5.7.3.a: See Also
4.4.4.1: cond_exp.cpp
# include <cppad/cppad.hpp>
namespace {
     struct tape_size { size_t n_var; size_t n_op; };

     template <class Vector> void fun(
          const std::string& options ,
          const Vector& x, Vector& y, tape_size& before, tape_size& after
     )
     {     typedef typename Vector::value_type scalar;

          // phantom variable with index 0 and independent variables
          // begin operator, independent variable operators and end operator
          before.n_var = 1 + x.size(); before.n_op  = 2 + x.size();
          after.n_var  = 1 + x.size(); after.n_op   = 2 + x.size();

          // Create a variable that is only used in the comparison operation.
          // It is not needed when the comparison operator is not included.
          scalar one = 1. / x[0];
          before.n_var += 1; before.n_op += 1;
          after.n_var  += 0; after.n_op  += 0;
          // If we keep comparison operators, we must compute their operands
          if( options.find("no_compare_op") == std::string::npos )
          {     after.n_var += 1;  after.n_op += 1;
          }

          // Create a variable that is used by the result
          scalar two = x[0] * 5.;
          before.n_var += 1; before.n_op += 1;
          after.n_var  += 1; after.n_op += 1;

          // Only one variable created for this comparison operation
          // but the value depends on which branch is taken.
          scalar three;
          if( one < x[0] )        // comparison operator
               three = two / 2.0;  // division operator
          else
               three = 2.0 * two;  // multiplication operator
          // comparison and either division or multiplication operator
          before.n_var += 1; before.n_op += 2;
          // comparison operator depends on optimization options
          after.n_var += 1;  after.n_op += 1;
          // check if we are keeping the comparison operator
          if( options.find("no_compare_op") == std::string::npos )
               after.n_op += 1;

          // results for this operation sequence
          y[0] = three;
          before.n_var += 0; before.n_op  += 0;
          after.n_var  += 0; after.n_op   += 0;
     }
}

bool compare_op(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps10 = 10.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.5;

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);

     for(size_t k = 0; k < 2; k++)
     {     // optimization options
          std::string options = "";
          if( k == 0 )
               options = "no_compare_op";

          // declare independent variables and start tape recording
          CppAD::Independent(ax);

          // compute function value
          tape_size before, after;
          fun(options, ax, ay, before, after);

          // create f: x -> y and stop tape recording
          CppAD::ADFun<double> f(ax, ay);
          ok &= f.size_var() == before.n_var;
          ok &= f.size_op() == before.n_op;

          // Optimize the operation sequence
          f.optimize(options);
          ok &= f.size_var() == after.n_var;
          ok &= f.size_op() == after.n_op;

          // Check result for a zero order calculation for a different x,
          // where the result of the comparison is the same.
          CPPAD_TESTVECTOR(double) x(n), y(m), check(m);
          x[0] = 0.75;
          y    = f.Forward(0, x);
          if ( options == "" )
               ok  &= f.compare_change_number() == 0;
          fun(options, x, check, before, after);
          ok &= NearEqual(y[0], check[0], eps10, eps10);

          // Check case where the result of the comparison is different
          // (hence one needs to re-tape to get the correct result)
          x[0] = 2.0;
          y    = f.Forward(0, x);
          if ( options == "" )
               ok  &= f.compare_change_number() == 1;
          fun(options, x, check, before, after);
          ok  &= std::fabs(y[0] - check[0]) > 0.5;
     }
     return ok;
}

Input File: example/optimize/compare_op.cpp
5.7.4: Example Optimization and Print Forward Operators
# include <cppad/cppad.hpp>

namespace {
     struct tape_size { size_t n_var; size_t n_op; };

     void PrintFor(
          double pos, const char* before, double var, const char* after
     )
     {     if( pos <= 0.0 )
               std::cout << before << var << after;
          return;
     }
     template <class Vector> void fun(
          const std::string& options ,
          const Vector& x, Vector& y, tape_size& before, tape_size& after
     )
     {     typedef typename Vector::value_type scalar;

          // phantom variable with index 0 and independent variables
          // begin operator, independent variable operators and end operator
          before.n_var = 1 + x.size(); before.n_op  = 2 + x.size();
          after.n_var  = 1 + x.size(); after.n_op   = 2 + x.size();

          // Argument to PrintFor is only needed
          // if we are keeping print forward operators
          scalar minus_one = x[0] - 1.0;
          before.n_var += 1; before.n_op += 1;
          if( options.find("no_print_for_op") == std::string::npos )
          {     after.n_var += 1;  after.n_op += 1;
          }

          // print argument to log function minus one, if it is <= 0
          PrintFor(minus_one, "minus_one == ", minus_one , " is <=  0\n");
          before.n_var += 0; before.n_op += 1;
          if( options.find("no_print_for_op") == std::string::npos )
          {     after.n_var += 0;  after.n_op += 1;
          }

          // now compute log
          y[0] = log( x[0] );
          before.n_var += 1; before.n_op += 1;
          after.n_var  += 1; after.n_op  += 1;
     }
}

bool print_for(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps10 = 10.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 1.5;

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);

     for(size_t k = 0; k < 2; k++)
     {     // optimization options
          std::string options = "";
          if( k == 0 )
               options = "no_print_for_op";

          // declare independent variables and start tape recording
          CppAD::Independent(ax);

          // compute function value
          tape_size before, after;
          fun(options, ax, ay, before, after);

          // create f: x -> y and stop tape recording
          CppAD::ADFun<double> f(ax, ay);
          ok &= f.size_var() == before.n_var;
          ok &= f.size_op() == before.n_op;

          // Optimize the operation sequence
          f.optimize(options);
          ok &= f.size_var() == after.n_var;
          ok &= f.size_op() == after.n_op;

          // Check result for a zero order calculation for a different x
          CPPAD_TESTVECTOR(double) x(n), y(m), check(m);
          x[0] = 2.75;
          y    = f.Forward(0, x);
          fun(options, x, check, before, after);
          ok &= NearEqual(y[0], check[0], eps10, eps10);
     }
     return ok;
}

Input File: example/optimize/print_for.cpp
5.7.5: Example Optimization and Conditional Expressions

5.7.5.a: See Also
4.4.4.1: cond_exp.cpp
# include <cppad/cppad.hpp>
namespace {
     struct tape_size { size_t n_var; size_t n_op; };

     template <class Vector> void fun(
          const std::string& options ,
          const Vector& x, Vector& y, tape_size& before, tape_size& after
     )
     {     typedef typename Vector::value_type scalar;


          // phantom variable with index 0 and independent variables
          // begin operator, independent variable operators and end operator
          before.n_var = 1 + x.size(); before.n_op  = 2 + x.size();
          after.n_var  = 1 + x.size(); after.n_op   = 2 + x.size();

          // Create a variable that is only used as the left operand
          // in the comparison operation
          scalar left = 1. / x[0];
          before.n_var += 1; before.n_op += 1;
          after.n_var  += 1; after.n_op  += 1;

          // right operand in comparison operation
          scalar right  = x[0];
          before.n_var += 0; before.n_op  += 0;
          after.n_var  += 0; after.n_op   += 0;

          // Note that the left and right operand in the CondExpLt comparison
          // are determined at this point. Hence the conditional skip operator
          // will be inserted here so that the operations mentioned below can
          // also be skipped during zero order forward mode.
          if( options.find("no_conditional_skip") == std::string::npos )
               after.n_op += 1; // for conditional skip operation

          // Create a variable that is only used when comparison result is true
          // (can be skipped when the comparison result is false)
          scalar if_true = x[0] * 5.0;
          before.n_var += 1; before.n_op += 1;
          after.n_var  += 1; after.n_op  += 1;

          // Create two variables only used when the comparison result is false
          // (can be skipped when the comparison result is true)
          scalar temp      = 5.0 + x[0];
          scalar if_false  = temp * 3.0;
          before.n_var += 2; before.n_op += 2;
          after.n_var  += 2; after.n_op  += 2;

          // conditional comparison is 1 / x[0] < x[0]
          scalar value = CppAD::CondExpLt(left, right, if_true, if_false);
          before.n_var += 1; before.n_op += 1;
          after.n_var  += 1; after.n_op  += 1;

          // results for this operation sequence
          y[0] = value;
          before.n_var += 0; before.n_op  += 0;
          after.n_var  += 0; after.n_op   += 0;
     }
}

bool conditional_skip(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps10 = 10.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 1;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.5;

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);

     for(size_t k = 0; k < 2; k++)
     {     // optimization options
          std::string options = "";
          if( k == 0 )
               options = "no_conditional_skip";

          // declare independent variables and start tape recording
          CppAD::Independent(ax);

          // compute function value
          tape_size before, after;
          fun(options, ax, ay, before, after);

          // create f: x -> y and stop tape recording
          CppAD::ADFun<double> f(ax, ay);
          ok &= f.size_var() == before.n_var;
          ok &= f.size_op()  == before.n_op;

          // Optimize the operation sequence
          f.optimize(options);
          ok &= f.size_var() == after.n_var;
          ok &= f.size_op()  == after.n_op;

          // Check case where result of the comparison is true (x[0] > 1.0).
          CPPAD_TESTVECTOR(double) x(n), y(m), check(m);
          x[0] = 1.75;
          y    = f.Forward(0, x);
          fun(options, x, check, before, after);
          ok &= NearEqual(y[0], check[0], eps10, eps10);
          if( options == "" )
               ok  &= f.number_skip() == 2;
          else
               ok &= f.number_skip() == 0;

          // Check case where the result of the comparison is false (x[0] <= 1.0)
          x[0] = 0.5;
          y    = f.Forward(0, x);
          fun(options, x, check, before, after);
          ok &= NearEqual(y[0], check[0], eps10, eps10);
          if( options == "" )
               ok  &= f.number_skip() == 1;
          else
               ok &= f.number_skip() == 0;
     }
     return ok;
}

Input File: example/optimize/conditional_skip.cpp
5.7.6: Example Optimization and Nested Conditional Expressions

5.7.6.a: See Also
4.4.4.1: cond_exp.cpp
# include <cppad/cppad.hpp>
namespace {
     struct tape_size { size_t n_var; size_t n_op; };

     template <class Vector> void fun(
          const std::string& options ,
          const Vector& x, Vector& y, tape_size& before, tape_size& after
     )
     {     typedef typename Vector::value_type scalar;

          // phantom variable with index 0 and independent variables
          // begin operator, independent variable operators and end operator
          before.n_var = 1 + x.size(); before.n_op  = 2 + x.size();
          after.n_var  = 1 + x.size(); after.n_op   = 2 + x.size();

          // Create a variable that is only used in the second comparison
          scalar two = 1. + x[0];
          before.n_var += 1; before.n_op += 1;
          after.n_var  += 1; after.n_op  += 1;

          // Conditional skip for second comparison will be inserted here.
          if( options.find("no_conditional_skip") == std::string::npos )
               after.n_op += 1; // for conditional skip operation

          // Create a variable that is only used in the first comparison
          // (can be skipped when the second comparison result is false)
          scalar one = 1. / x[0];
          before.n_var += 1; before.n_op += 1;
          after.n_var  += 1; after.n_op  += 1;

          // Conditional skip for first comparison will be inserted here.
          if( options.find("no_conditional_skip") == std::string::npos )
               after.n_op += 1; // for conditional skip operation

          // value when the first comparison is false
          scalar one_false = 5.0;

          // Create a variable that is only used when second comparison is true
          // (can be skipped when it is false)
          scalar one_true = x[0] / 5.0;
          before.n_var += 1; before.n_op += 1;
          after.n_var  += 1; after.n_op  += 1;

          // value when second comparison is false
          scalar two_false = 3.0;

          // The first conditional comparison is 1 / x[0] < x[0].
          // It is only used when the second conditional expression is true
          // (can be skipped when it is false)
          scalar two_true  = CppAD::CondExpLt(one, x[0], one_true, one_false);
          before.n_var += 1; before.n_op += 1;
          after.n_var  += 1; after.n_op  += 1;

          // The second conditional comparison is 1 + x[0] < x[1]
          scalar two_value = CppAD::CondExpLt(two, x[1], two_true, two_false);
          before.n_var += 1; before.n_op += 1;
          after.n_var  += 1; after.n_op  += 1;

          // results for this operation sequence
          y[0] = two_value;
          before.n_var += 0; before.n_op  += 0;
          after.n_var  += 0; after.n_op   += 0;
     }
}

bool nest_conditional(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps10 = 10.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 2;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.5;
     ax[1] = 0.5;

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);

     for(size_t k = 0; k < 2; k++)
     {     // optimization options
          std::string options = "";
          if( k == 0 )
               options = "no_conditional_skip";

          // declare independent variables and start tape recording
          CppAD::Independent(ax);

          // compute function value
          tape_size before, after;
          fun(options, ax, ay, before, after);

          // create f: x -> y and stop tape recording
          CppAD::ADFun<double> f(ax, ay);
          ok &= f.size_var() == before.n_var;
          ok &= f.size_op()  == before.n_op;

          // Optimize the operation sequence
          f.optimize(options);
          ok &= f.size_var() == after.n_var;
          ok &= f.size_op()  == after.n_op;

          // Check case where result of the second comparison is true
          // and first comparison is true
          CPPAD_TESTVECTOR(double) x(n), y(m), check(m);
          x[0] = 1.75;
          x[1] = 4.0;
          y    = f.Forward(0, x);
          fun(options, x, check, before, after);
          ok &= NearEqual(y[0], check[0], eps10, eps10);
          ok  &= f.number_skip() == 0;

          // Check case where result of the second comparison is true
          // and first comparison is false
          x[0] = 0.5;
          x[1] = 4.0;
          y    = f.Forward(0, x);
          fun(options, x, check, before, after);
          ok &= NearEqual(y[0], check[0], eps10, eps10);
          if( options == "" )
               ok  &= f.number_skip() == 1;
          else
               ok &= f.number_skip() == 0;

          // Check case where result of the second comparison is false
          // and first comparison is true
          x[0] = 1.75;
          x[1] = 0.0;
          y    = f.Forward(0, x);
          fun(options, x, check, before, after);
          ok &= NearEqual(y[0], check[0], eps10, eps10);
          if( options == "" )
               ok  &= f.number_skip() == 3;
          else
               ok &= f.number_skip() == 0;

          // Check case where result of the second comparison is false
          // and first comparison is false
          x[0] = 0.5;
          x[1] = 0.0;
          y    = f.Forward(0, x);
          fun(options, x, check, before, after);
          ok &= NearEqual(y[0], check[0], eps10, eps10);
          if( options == "" )
               ok  &= f.number_skip() == 3;
          else
               ok &= f.number_skip() == 0;
     }
     return ok;
}

Input File: example/optimize/nest_conditional.cpp
5.7.7: Example Optimization and Cumulative Sum Operations
# include <cppad/cppad.hpp>

namespace {
     struct tape_size { size_t n_var; size_t n_op; };

     template <class Vector> void fun(
          const Vector& x, Vector& y, tape_size& before, tape_size& after
     )
     {     typedef typename Vector::value_type scalar;

          // phantom variable with index 0 and independent variables
          // begin operator, independent variable operators and end operator
          before.n_var = 1 + x.size(); before.n_op  = 2 + x.size();
          after.n_var  = 1 + x.size(); after.n_op   = 2 + x.size();

          // operators that are identical, and that will be made part of the
          // cumulative summation. Make sure we do not replace the second
          // variable using the first and then remove the first as part of
          // the cumulative summation.
          scalar first  = x[0] + x[1];
          scalar second = x[0] + x[1];
          before.n_var += 2; before.n_op  += 2;
          after.n_var  += 0; after.n_op   += 0;

          // test that subtractions are also included in cumulative summations
          scalar third = x[1] - 2.0;
          before.n_var += 1; before.n_op  += 1;
          after.n_var  += 0; after.n_op   += 0;

          // the final summation is converted to a cumulative summation
          // and the other summation is removed.
          scalar csum = first + second + third;
          before.n_var += 2; before.n_op  += 2;
          after.n_var  += 1; after.n_op   += 1;

          // results for this operation sequence
          y[0] = csum;
          before.n_var += 0; before.n_op  += 0;
          after.n_var  += 0; after.n_op   += 0;
     }
}
bool cumulative_sum(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps10 = 10.0 * std::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n  = 2;
     CPPAD_TESTVECTOR(AD<double>) ax(n);
     ax[0] = 0.5;
     ax[1] = 1.5;

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     CPPAD_TESTVECTOR(AD<double>) ay(m);
     tape_size before, after;
     fun(ax, ay, before, after);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(ax, ay);
     ok &= f.size_var() == before.n_var;
     ok &= f.size_op()  == before.n_op;

     // Optimize the operation sequence
     f.optimize();
     ok &= f.size_var() == after.n_var;
     ok &= f.size_op()  == after.n_op;

     // Check result for a zero order calculation for a different x,
     CPPAD_TESTVECTOR(double) x(n), y(m), check(m);
     x[0] = 0.75;
     x[1] = 2.25;
     y    = f.Forward(0, x);
     fun(x, check, before, after);
     ok  &= CppAD::NearEqual(y[0], check[0], eps10, eps10);

     return ok;
}

Input File: example/optimize/cumulative_sum.cpp
5.8: Abs-normal Representation of Non-Smooth Functions

5.8.a: Reference
Andreas Griewank, Jens-Uwe Bernt, Manuel Radons, Tom Streubel, Solving piecewise linear systems in abs-normal form, Linear Algebra and its Applications, vol. 471 (2015), pages 500-530.

5.8.b: Contents
abs_normal_fun: 5.8.1: Create An Abs-normal Representation of a Function
abs_print_mat: 5.8.2: abs_normal: Print a Vector or Matrix
abs_eval: 5.8.3: abs_normal: Evaluate First Order Approximation
simplex_method: 5.8.4: abs_normal: Solve a Linear Program Using Simplex Method
lp_box: 5.8.5: abs_normal: Solve a Linear Program With Box Constraints
abs_min_linear: 5.8.6: abs_normal: Minimize a Linear Abs-normal Approximation
min_nso_linear: 5.8.7: Non-Smooth Optimization Using Abs-normal Linear Approximations
qp_interior: 5.8.8: Solve a Quadratic Program Using Interior Point Method
qp_box: 5.8.9: abs_normal: Solve a Quadratic Program With Box Constraints
abs_min_quad: 5.8.10: abs_normal: Minimize a Quadratic Abs-normal Approximation
min_nso_quad: 5.8.11: Non-Smooth Optimization Using Abs-normal Quadratic Approximations

Input File: example/abs_normal/abs_normal.omh
5.8.1: Create An Abs-normal Representation of a Function

5.8.1.a: Syntax
f.abs_normal_fun(g, a)

5.8.1.b: f
The object f has prototype
     const ADFun<Base>& f
It represents a function @(@ f : \B{R}^n \rightarrow \B{R}^m @)@. We assume that the only non-smooth terms in the representation are absolute value functions and use @(@ s \in \B{Z}_+ @)@ to represent the number of these terms.

5.8.1.b.a: n
We use n to denote the dimension of the domain space for f .

5.8.1.b.b: m
We use m to denote the dimension of the range space for f .

5.8.1.b.c: s
We use s to denote the number of absolute value terms in f .

5.8.1.c: a
The object a has prototype
     ADFun<Base> a
The initial function representation in a is lost. Upon return it represents the result of the absolute terms @(@ a : \B{R}^n \rightarrow \B{R}^s @)@; see @(@ a(x) @)@ defined below. Note that a is constructed by copying f and then changing the dependent variables. There may be many calculations in this representation that are not necessary and can be removed using
     a.optimize()
This optimization is not done automatically by abs_normal_fun because it may take a significant amount of time.
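A minimal sketch of this construction and optimization (assuming f has already been recorded as an ADFun<double>) is:

     CppAD::ADFun<double> g, a;  // any previous representations are lost
     f.abs_normal_fun(g, a);     // g is smooth, a computes the absolute terms
     a.optimize();               // remove calculations not needed for a(x)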

5.8.1.c.a: zeta
Let @(@ \zeta_0 ( x ) @)@ denote the argument for the first absolute value term in @(@ f(x) @)@, @(@ \zeta_1 ( x , |\zeta_0 (x)| ) @)@ for the second term, and so on.

5.8.1.c.b: a(x)
For @(@ i = 0 , \ldots , {s-1} @)@ define @[@ a_i (x) = | \zeta_i ( x , a_0 (x) , \ldots , a_{i-1} (x ) ) | @]@ This defines @(@ a : \B{R}^n \rightarrow \B{R}^s @)@.

5.8.1.d: g
The object g has prototype
     ADFun<Base> g
The initial function representation in g is lost. Upon return it represents the smooth function @(@ g : \B{R}^{n + s} \rightarrow \B{R}^{m + s} @)@ defined by @[@ g( x , u ) = \left[ \begin{array}{c} y(x, u) \\ z(x, u) \end{array} \right] @]@ where @(@ y(x, u) @)@ and @(@ z(x, u) @)@ are defined below.

5.8.1.d.a: z(x, u)
Define the smooth function @(@ z : \B{R}^{n + s} \rightarrow \B{R}^s @)@ by @[@ z_i ( x , u ) = \zeta_i ( x , u_0 , \ldots , u_{i-1} ) @]@ Note that the partial of @(@ z_i @)@ with respect to @(@ u_j @)@ is zero for @(@ j \geq i @)@.

5.8.1.d.b: y(x, u)
There is a smooth function @(@ y : \B{R}^{n + s} \rightarrow \B{R}^m @)@ such that @(@ y( x , u ) = f(x) @)@ whenever @(@ u = a(x) @)@.

5.8.1.e: Affine Approximation
We define the affine approximations @[@ \begin{array}{rcl} y[ \hat{x} ]( x , u ) & = & y ( \hat{x}, a( \hat{x} ) ) + \partial_x y ( \hat{x}, a( \hat{x} ) ) ( x - \hat{x} ) + \partial_u y ( \hat{x}, a( \hat{x} ) ) ( u - a( \hat{x} ) ) \\ z[ \hat{x} ]( x , u ) & = & z ( \hat{x}, a( \hat{x} ) ) + \partial_x z ( \hat{x}, a( \hat{x} ) ) ( x - \hat{x} ) + \partial_u z ( \hat{x}, a( \hat{x} ) ) ( u - a( \hat{x} ) ) \end{array} @]@ It follows that @[@ \begin{array}{rcl} y( x , u ) & = & y[ \hat{x} ]( x , u ) + o ( x - \hat{x}, u - a( \hat{x} ) ) \\ z( x , u ) & = & z[ \hat{x} ]( x , u ) + o ( x - \hat{x}, u - a( \hat{x} ) ) \end{array} @]@

5.8.1.f: Abs-normal Approximation

5.8.1.f.a: Approximating a(x)
The function @(@ a(x) @)@ is not smooth, but it is equal to @(@ | z(x, u) | @)@ when @(@ u = a(x) @)@. Furthermore @[@ z[ \hat{x} ]( x , u ) = z ( \hat{x}, a( \hat{x} ) ) + \partial_x z ( \hat{x}, a( \hat{x} ) ) ( x - \hat{x} ) + \partial_u z ( \hat{x}, a( \hat{x} ) ) ( u - a( \hat{x} ) ) @]@ The partial of @(@ z_i @)@ with respect to @(@ u_j @)@ is zero for @(@ j \geq i @)@. It follows that @[@ z_i [ \hat{x} ]( x , u ) = z_i ( \hat{x}, a( \hat{x} ) ) + \partial_x z_i ( \hat{x}, a( \hat{x} ) ) ( x - \hat{x} ) + \sum_{j < i} \partial_{u(j)} z_i ( \hat{x}, a( \hat{x} ) ) ( u_j - a_j ( \hat{x} ) ) @]@ Considering the case @(@ i = 0 @)@ we define @[@ a_0 [ \hat{x} ]( x ) = | z_0 [ \hat{x} ]( x , u ) | = \left| z_0 ( \hat{x}, a( \hat{x} ) ) + \partial_x z_0 ( \hat{x}, a( \hat{x} ) ) ( x - \hat{x} ) \right| @]@ It follows that @[@ a_0 (x) = a_0 [ \hat{x} ]( x ) + o ( x - \hat{x} ) @]@ In general, we define @(@ a_i [ \hat{x} ] @)@ using @(@ a_j [ \hat{x} ] @)@ for @(@ j < i @)@ as follows: @[@ a_i [ \hat{x} ]( x ) = \left | z_i ( \hat{x}, a( \hat{x} ) ) + \partial_x z_i ( \hat{x}, a( \hat{x} ) ) ( x - \hat{x} ) + \sum_{j < i} \partial_{u(j)} z_i ( \hat{x}, a( \hat{x} ) ) ( a_j [ \hat{x} ] ( x ) - a_j ( \hat{x} ) ) \right| @]@ It follows that @[@ a (x) = a[ \hat{x} ]( x ) + o ( x - \hat{x} ) @]@ Note that in the case where @(@ z(x, u) @)@ and @(@ y(x, u) @)@ are affine, @[@ a[ \hat{x} ]( x ) = a( x ) @]@

5.8.1.f.b: Approximating f(x)
@[@ f(x) = y ( x , a(x ) ) = y [ \hat{x} ] ( x , a[ \hat{x} ] ( x ) ) + o( \Delta x ) @]@
5.8.1.g: Correspondence to Literature
Using the notation @(@ Z = \partial_x z(\hat{x}, \hat{u}) @)@, @(@ L = \partial_u z(\hat{x}, \hat{u}) @)@, @(@ J = \partial_x y(\hat{x}, \hat{u}) @)@, @(@ Y = \partial_u y(\hat{x}, \hat{u}) @)@, the approximation for @(@ z @)@ and @(@ y @)@ are @[@ \begin{array}{rcl} z[ \hat{x} ]( x , u ) & = & z ( \hat{x}, a( \hat{x} ) ) + Z ( x - \hat{x} ) + L ( u - a( \hat{x} ) ) \\ y[ \hat{x} ]( x , u ) & = & y ( \hat{x}, a( \hat{x} ) ) + J ( x - \hat{x} ) + Y ( u - a( \hat{x} ) ) \end{array} @]@ Moving the terms with @(@ \hat{x} @)@ together, we have @[@ \begin{array}{rcl} z[ \hat{x} ]( x , u ) & = & z ( \hat{x}, a( \hat{x} ) ) - Z \hat{x} - L a( \hat{x} ) + Z x + L u \\ y[ \hat{x} ]( x , u ) & = & y ( \hat{x}, a( \hat{x} ) ) - J \hat{x} - Y a( \hat{x} ) + J x + Y u \end{array} @]@ Using the notation @(@ c = z ( \hat{x}, \hat{u} ) - Z \hat{x} - L \hat{u} @)@, @(@ b = y ( \hat{x}, \hat{u} ) - J \hat{x} - Y \hat{u} @)@, we have @[@ \begin{array}{rcl} z[ \hat{x} ]( x , u ) & = & c + Z x + L u \\ y[ \hat{x} ]( x , u ) & = & b + J x + Y u \end{array} @]@ Considering the affine case, where the approximations are exact, and choosing @(@ u = a(x) = |z(x, u)| @)@, we obtain @[@ \begin{array}{rcl} z( x , a(x ) ) & = & c + Z x + L |z( x , a(x ) )| \\ y( x , a(x ) ) & = & b + J x + Y |z( x , a(x ) )| \end{array} @]@ This is Equation (2) of the 5.8.a: reference .

5.8.1.h: Example
The file 5.8.1.1: abs_get_started.cpp contains an example and test using this operation.
Input File: cppad/core/abs_normal_fun.hpp
5.8.1.1: abs_normal Getting Started: Example and Test

5.8.1.1.a: Purpose
Creates an 5.8: abs_normal representation @(@ g @)@ for the function @(@ f : \B{R}^3 \rightarrow \B{R} @)@ defined by @[@ f( x_0, x_1, x_2 ) = | x_0 + x_1 | + | x_1 + x_2 | @]@ The corresponding 5.8.1.d: g @(@ : \B{R}^5 \rightarrow \B{R}^3 @)@ is given by @[@ \begin{array}{rclrcl} g_0 ( x_0, x_1, x_2, u_0, u_1 ) & = & u_0 + u_1 & = & y_0 (x, u) \\ g_1 ( x_0, x_1, x_2, u_0, u_1 ) & = & x_0 + x_1 & = & z_0 (x, u) \\ g_2 ( x_0, x_1, x_2, u_0, u_1 ) & = & x_1 + x_2 & = & z_1 (x, u) \end{array} @]@

5.8.1.1.b: Source

# include <cppad/cppad.hpp>
namespace {
     CPPAD_TESTVECTOR(double) join(
          const CPPAD_TESTVECTOR(double)& x ,
          const CPPAD_TESTVECTOR(double)& u )
     {     size_t n = x.size();
          size_t s = u.size();
          CPPAD_TESTVECTOR(double) xu(n + s);
          for(size_t j = 0; j < n; j++)
               xu[j] = x[j];
          for(size_t j = 0; j < s; j++)
               xu[n + j] = u[j];
          return xu;
     }
}
bool get_started(void)
{     bool ok = true;
     //
     using CppAD::AD;
     using CppAD::ADFun;
     //
     size_t n = 3; // size of x
     size_t m = 1; // size of y
     size_t s = 2; // size of u and z
     //
     // record the function f(x)
     CPPAD_TESTVECTOR( AD<double> ) ax(n), ay(m);
     for(size_t j = 0; j < n; j++)
          ax[j] = double(j + 1);
     Independent( ax );
     // for this example, we ensure first absolute value is | x_0 + x_1 |
     AD<double> a0 = abs( ax[0] + ax[1] );
     // and second absolute value is | x_1 + x_2 |
     AD<double> a1 = abs( ax[1] + ax[2] );
     ay[0]         = a0 + a1;
     ADFun<double> f(ax, ay);

     // create its abs_normal representation in g, a
     ADFun<double> g, a;
     f.abs_normal_fun(g, a);

     // check dimension of domain and range space for g
     ok &= g.Domain() == n + s;
     ok &= g.Range() == m + s;

     // check dimension of domain and range space for a
     ok &= a.Domain() == n;
     ok &= a.Range() == s;

     // --------------------------------------------------------------------
     // a(x) has all the operations used to compute f(x), but the sum of the
     // absolute values is not needed for a(x), so optimize it out.
     size_t n_op = f.size_op();
     ok         &= a.size_op() == n_op;
     a.optimize();
     ok         &= a.size_op() < n_op;

     // --------------------------------------------------------------------
     // zero order forward mode calculation using g(x, u)
     CPPAD_TESTVECTOR(double) x(n), u(s), xu(n+s), yz(m+s);
     for(size_t j = 0; j < n; j++)
          x[j] = double(j + 2);
     for(size_t j = 0; j < s; j++)
          u[j] = double(j + n + 2);
     xu = join(x, u);
     yz = g.Forward(0, xu);

     // check y_0(x, u)
     double y0 = u[0] + u[1];
     ok       &= y0 == yz[0];

     // check z_0 (x, u)
     double z0 = x[0] + x[1];
     ok       &= z0 == yz[1];

     // check z_1 (x, u)
     double z1 = x[1] + x[2];
     ok       &= z1 == yz[2];


     // --------------------------------------------------------------------
     // check that y(x, a(x) ) == f(x)
     CPPAD_TESTVECTOR(double) y(m);
     y  = f.Forward(0, x);  // y  = f(x)
     u  = a.Forward(0, x);  // u  = a(x)
     xu = join(x, u);       // xu = ( x, a(x) )
     yz = g.Forward(0, xu); // yz = ( y(x, a(x)), z(x, a(x)) )
     ok &= yz[0] == y[0];

     return ok;
}

Input File: example/abs_normal/get_started.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.2: abs_normal: Print a Vector or Matrix

5.8.2.a: Syntax
abs_print_mat(name, nr, nc, mat)

5.8.2.b: Prototype

template <class Vector>
void abs_print_mat(
     const std::string& name ,
     size_t             nr   ,
     size_t             nc   ,
     const Vector&      mat  )

5.8.2.c: Purpose
This routine is used by the 5.8: abs_normal examples to print vectors and matrices. A new-line is printed at the end of this output.

5.8.2.d: name
This is a name that is printed before the vector or matrix.

5.8.2.e: nr
This is the number of rows in the matrix. Use nr = 1 for row vectors.

5.8.2.f: nc
This is the number of columns in the matrix. Use nc = 1 for column vectors.

5.8.2.g: mat
This is a 12.4.i: row-major representation of the matrix (hence a 8.9: SimpleVector ). The syntax
     std::cout << mat[i]
must output the i-th element of the simple vector mat .
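
As a minimal usage sketch (ours, not one of the distributed examples; it assumes the example header abs_print_mat.hpp is on the include path), the following prints a 2 by 2 matrix:

# include <cppad/utility/vector.hpp>
# include "abs_print_mat.hpp"

void print_mat_sketch(void)
{     // 2 x 2 matrix stored in row-major order
     CppAD::vector<double> mat(4);
     mat[0] = 1.0;  mat[1] = 2.0;  // first row
     mat[2] = 3.0;  mat[3] = 4.0;  // second row
     // name = "mat", nr = 2, nc = 2
     CppAD::abs_print_mat("mat", 2, 2, mat);
}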
Input File: example/abs_normal/abs_print_mat.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.3: abs_normal: Evaluate First Order Approximation

5.8.3.a: Syntax
g_tilde = abs_eval(n, m, s, g_hat, g_jac, delta_x)

5.8.3.b: Prototype

template <class Vector>
Vector abs_eval(
     size_t        n       ,
     size_t        m       ,
     size_t        s       ,
     const Vector& g_hat   ,
     const Vector& g_jac   ,
     const Vector& delta_x )

5.8.3.c: Source
The following is a link to the source code for this example: 5.8.3.2: abs_eval.hpp .

5.8.3.d: Purpose
Given the abs-normal representation of a function at a point @(@ \hat{x} \in \B{R}^n @)@, and a step @(@ \Delta x \in \B{R}^n @)@, this routine evaluates the abs-normal 5.8.1.f.b: approximation for f(x) where @(@ x = \hat{x} + \Delta x @)@.
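
The following summary of the computation is ours; it is inferred from the source code 5.8.3.2: abs_eval.hpp using the notation @(@ Z, L, J, Y @)@ of the 5.8.1.g: literature correspondence . Because @(@ L @)@ is strictly lower triangular, the approximate switching values can be computed in order: @[@ \begin{array}{rcl} \tilde{z}_i & = & \hat{z}_i + Z_{i,:} \Delta x + \sum_{j < i} L_{i,j} ( | \tilde{z}_j | - | \hat{z}_j | ) \\ \tilde{y} & = & \hat{y} + J \Delta x + \sum_j Y_{:,j} ( | \tilde{z}_j | - | \hat{z}_j | ) \end{array} @]@ where @(@ \hat{y} @)@ and @(@ \hat{z} @)@ denote the components of g_hat .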

5.8.3.e: Vector
The type Vector is a simple vector with elements of type double.

5.8.3.f: f
We use the notation f for the original function; see 5.8.1.b: f .

5.8.3.g: n
This is the dimension of the domain space for f ; see 5.8.1.b.a: n .

5.8.3.h: m
This is the dimension of the range space for f ; see 5.8.1.b.b: m .

5.8.3.i: s
This is the number of absolute value terms in f ; see 5.8.1.b.c: s .

5.8.3.j: g
We use the notation g for the abs-normal representation of f ; see 5.8.1.d: g .

5.8.3.k: g_hat
This vector has size m + s and is the value of g(x, u) at @(@ x = \hat{x} @)@ and @(@ u = a( \hat{x} ) @)@.

5.8.3.l: g_jac
This vector has size (m + s) * (n + s) and is the Jacobian of @(@ g(x, u) @)@ at @(@ x = \hat{x} @)@ and @(@ u = a( \hat{x} ) @)@.

5.8.3.m: delta_x
This vector has size n and is the difference @(@ \Delta x = x - \hat{x} @)@, where @(@ x @)@ is the point that we are approximating @(@ f(x) @)@.

5.8.3.n: g_tilde
This vector has size m + s and is the first order approximation for 5.8.1.d: g that corresponds to the point @(@ x = \hat{x} + \Delta x @)@ and @(@ u = a(x) @)@.

5.8.3.o: Example
The file 5.8.3.1: abs_eval.cpp contains an example and test of abs_eval. It returns true if the test passes and false otherwise.
Input File: example/abs_normal/abs_eval.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.3.1: abs_eval: Example and Test

5.8.3.1.a: Purpose
The function @(@ f : \B{R}^3 \rightarrow \B{R} @)@ defined by @[@ f( x_0, x_1, x_2 ) = | x_0 + x_1 | + | x_1 + x_2 | @]@ is affine, except for its absolute value terms. For this case, the abs_normal approximation should be equal to the function itself.

5.8.3.1.b: Source

# include <cppad/cppad.hpp>
# include "abs_eval.hpp"

namespace {
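     // helper for this example: concatenate x and u into a single vector xu = (x, u)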
     CPPAD_TESTVECTOR(double) join(
          const CPPAD_TESTVECTOR(double)& x ,
          const CPPAD_TESTVECTOR(double)& u )
     {     size_t n = x.size();
          size_t s = u.size();
          CPPAD_TESTVECTOR(double) xu(n + s);
          for(size_t j = 0; j < n; j++)
               xu[j] = x[j];
          for(size_t j = 0; j < s; j++)
               xu[n + j] = u[j];
          return xu;
     }
}
bool abs_eval(void)
{     bool ok = true;
     //
     using CppAD::AD;
     using CppAD::ADFun;
     //
     typedef CPPAD_TESTVECTOR(double)       d_vector;
     typedef CPPAD_TESTVECTOR( AD<double> ) ad_vector;
     //
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
     //
     size_t n = 3; // size of x
     size_t m = 1; // size of y
     size_t s = 2; // number of absolute value terms
     //
     // record the function f(x)
     ad_vector ad_x(n), ad_y(m);
     for(size_t j = 0; j < n; j++)
          ad_x[j] = double(j + 1);
     Independent( ad_x );
     // for this example, we ensure the first absolute value is | x_0 + x_1 |
     AD<double> ad_0 = abs( ad_x[0] + ad_x[1] );
     // and the second absolute value is | x_1 + x_2 |
     AD<double> ad_1 = abs( ad_x[1] + ad_x[2] );
     ad_y[0]         = ad_0 + ad_1;
     ADFun<double> f(ad_x, ad_y);

     // create its abs_normal representation in g, a
     ADFun<double> g, a;
     f.abs_normal_fun(g, a);

     // check dimension of domain and range space for g
     ok &= g.Domain() == n + s;
     ok &= g.Range()  == m + s;

     // check dimension of domain and range space for a
     ok &= a.Domain() == n;
     ok &= a.Range()  == s;

     // --------------------------------------------------------------------
     // Choose a point x_hat
     d_vector x_hat(n);
     for(size_t j = 0; j < n; j++)
          x_hat[j] = double(j) - 1.0; // x_hat = (-1, 0, 1); avoids size_t underflow

     // value of a_hat = a(x_hat)
     d_vector a_hat = a.Forward(0, x_hat);

     // (x_hat, a_hat)
     d_vector xu_hat = join(x_hat, a_hat);

     // value of g[ x_hat, a_hat ]
     d_vector g_hat = g.Forward(0, xu_hat);

     // Jacobian of g[ x_hat, a_hat ]
     d_vector g_jac = g.Jacobian(xu_hat);

     // value of delta_x
     d_vector delta_x(n);
     delta_x[0] =  1.0;
     delta_x[1] = -2.0;
     delta_x[2] = +2.0;

     // value of x
     d_vector x(n);
     for(size_t j = 0; j < n; j++)
          x[j] = x_hat[j] + delta_x[j];

     // value of f(x)
     d_vector y = f.Forward(0, x);

     // value of g_tilde
     d_vector g_tilde = CppAD::abs_eval(n, m, s, g_hat, g_jac, delta_x);

     // should be equal because f is affine, except for abs terms
     ok &= CppAD::NearEqual(y[0], g_tilde[0], eps99, eps99);

     return ok;
}

Input File: example/abs_normal/abs_eval.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.3.2: abs_eval Source Code
namespace CppAD { // BEGIN_CPPAD_NAMESPACE
// BEGIN PROTOTYPE
template <class Vector>
Vector abs_eval(
     size_t        n       ,
     size_t        m       ,
     size_t        s       ,
     const Vector& g_hat   ,
     const Vector& g_jac   ,
     const Vector& delta_x )
// END PROTOTYPE
{     using std::fabs;
     //
     CPPAD_ASSERT_KNOWN(
          size_t(delta_x.size()) == n,
          "abs_eval: size of delta_x not equal to n"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(g_hat.size()) == m + s,
          "abs_eval: size of g_hat not equal to m + s"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(g_jac.size()) == (m + s) * (n + s),
          "abs_eval: size of g_jac not equal to (m + s)*(n + s)"
     );
# ifndef NDEBUG
     // Check that partial_u z(x, u) is strictly lower triangular
     for(size_t i = 0; i < s; i++)
     {     for(size_t j = i; j < s; j++)
          {     // index in g_jac of partial of z_i w.r.t u_j
               // (note that g_jac has n + s elements in each row)
               size_t index = (m + i) * (n + s) + (n + j);
               CPPAD_ASSERT_KNOWN(
                    g_jac[index] == 0.0,
                    "abs_eval: partial z_i w.r.t u_j non-zero for i <= j"
               );
          }
     }
# endif
     // return value
     Vector g_tilde(m + s);
     //
     // compute z_tilde, the last s components of g_tilde
     for(size_t i = 0; i < s; i++)
     {     // start at z_hat_i
          g_tilde[m + i] = g_hat[m + i];
          // contribution from the change in x
          for(size_t j = 0; j < n; j++)
          {     // index in g_jac of partial of z_i w.r.t x_j
               size_t index = (m + i) * (n + s) + j;
               // add contribution for delta_x_j to z_tilde_i
               g_tilde[m + i] += g_jac[index] * delta_x[j];
          }
          // contribution for change in u_j for j < i
          for(size_t j = 0; j < i; j++)
          {     // approximation for change in absolute value
               double delta_a_j = fabs(g_tilde[m + j]) - fabs(g_hat[m + j]);
               // index in g_jac of partial of z_i w.r.t u_j
               size_t index = (m + i) * (n + s) + n + j;
               // add contribution for delta_a_j to z_tilde_i
               g_tilde[m + i] += g_jac[index] * delta_a_j;
          }
     }
     //
     // compute y_tilde, the first m components of g_tilde
     for(size_t i = 0; i < m; i++)
     {     // start at y_hat_i
          g_tilde[i] = g_hat[i];
          // contribution from the change in x
          for(size_t j = 0; j < n; j++)
          {     // index in g_jac of partial of y_i w.r.t x_j
               size_t index = i * (n + s) + j;
               // add contribution for delta_x_j to y_tilde_i
               g_tilde[i] += g_jac[index] * delta_x[j];
          }
          // contribution for change in u_j
          for(size_t j = 0; j < s; j++)
          {     // approximation for change in absolute value
               double delta_a_j = fabs(g_tilde[m + j]) - fabs(g_hat[m + j]);
               // index in g_jac of partial of y_i w.r.t u_j
               size_t index = i * (n + s) + n + j;
               // add contribution for delta_a_j to y_tilde_i
               g_tilde[i] += g_jac[index] * delta_a_j;
          }
     }
     return g_tilde;
}
} // END_CPPAD_NAMESPACE

Input File: example/abs_normal/abs_eval.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.4: abs_normal: Solve a Linear Program Using Simplex Method

5.8.4.a: Syntax
ok = simplex_method(level, A, b, c, maxitr, xout)

5.8.4.b: Prototype

template <class Vector>
bool simplex_method(
     size_t        level   ,
     const Vector& A       ,
     const Vector& b       ,
     const Vector& c       ,
     size_t        maxitr  ,
     Vector&       xout    )

5.8.4.c: Source
The following is a link to the source code for this example: 5.8.4.2: simplex_method.hpp .

5.8.4.d: Problem
We are given @(@ A \in \B{R}^{m \times n} @)@, @(@ b \in \B{R}^m @)@, @(@ c \in \B{R}^n @)@. This routine solves the problem @[@ \begin{array}{rl} \R{minimize} & c^T x \; \R{w.r.t} \; x \in \B{R}_+^n \\ \R{subject \; to} & A x + b \leq 0 \end{array} @]@
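
As a one-constraint sketch (ours, not part of the distributed tests; the include of the example header simplex_method.hpp is an assumption), minimizing @(@ x @)@ subject to @(@ x \geq 1 @)@ corresponds to @(@ A = (-1) @)@, @(@ b = (1) @)@, @(@ c = (1) @)@:

# include <cmath>
# include <limits>
# include <cppad/utility/vector.hpp>
# include "simplex_method.hpp"

bool one_constraint_lp(void)
{     typedef CppAD::vector<double> vector;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
     //
     // one constraint: - x + 1 <= 0 is the same as x >= 1
     vector A(1), b(1), c(1), xout(1);
     A[0] = -1.0;
     b[0] =  1.0;
     c[0] =  1.0; // objective is c^T x = x
     //
     size_t level  = 0;  // no tracing
     size_t maxitr = 10; // more than enough iterations for this problem
     bool ok = CppAD::simplex_method(level, A, b, c, maxitr, xout);
     //
     // the minimizer is x = 1
     ok &= std::fabs( xout[0] - 1.0 ) < eps99;
     return ok;
}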

5.8.4.e: Vector
The type Vector is a simple vector with elements of type double.

5.8.4.f: level
This value must be less than or equal to two. If level == 0 , no tracing is printed. If level >= 1 , a trace of @(@ x @)@ and the corresponding objective @(@ z @)@ is printed at each iteration. If level == 2 , a trace of the simplex tableau is printed at each iteration.

5.8.4.g: A
This is a 12.4.i: row-major representation of the matrix @(@ A @)@ in the problem.

5.8.4.h: b
This is the vector @(@ b @)@ in the problem.

5.8.4.i: c
This is the vector @(@ c @)@ in the problem.

5.8.4.j: maxitr
This is the maximum number of simplex iterations to try before giving up on convergence.

5.8.4.k: xout
This argument has size n and the input value of its elements does not matter. Upon return it contains the primal variables corresponding to the problem solution.

5.8.4.l: ok
If the return value ok is true, a solution has been found.

5.8.4.m: Example
The file 5.8.4.1: simplex_method.cpp contains an example and test of simplex_method. It returns true if the test passes and false otherwise.
Input File: example/abs_normal/simplex_method.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.4.1: abs_normal simplex_method: Example and Test

5.8.4.1.a: Problem
Our original problem is @[@ \R{minimize} \; | u - 1| \; \R{w.r.t} \; u \in \B{R} @]@ We reformulate this as the following problem @[@ \begin{array}{rlr} \R{minimize} & v & \R{w.r.t} \; (u,v) \in \B{R}^2 \\ \R{subject \; to} & u - 1 \leq v \\ & 1 - u \leq v \end{array} @]@ We know that the value of @(@ v @)@ at the solution is greater than or equal to zero. Hence, writing @(@ u = u_+ - u_- @)@ with @(@ u_+ , u_- \geq 0 @)@, we can reformulate this problem as @[@ \begin{array}{rlr} \R{minimize} & v & \R{w.r.t} \; ( u_+ , u_- , v) \in \B{R}_+^3 \\ \R{subject \; to} & u_+ - u_- - 1 \leq v \\ & 1 - u_+ + u_- \leq v \end{array} @]@ This is equivalent to @[@ \begin{array}{rlr} \R{minimize} & (0, 0, 1) \cdot ( u_+, u_- , v)^T & \R{w.r.t} \; ( u_+ , u_- , v ) \in \B{R}_+^3 \\ \R{subject \; to} & \left( \begin{array}{ccc} +1 & -1 & -1 \\ -1 & +1 & -1 \end{array} \right) \left( \begin{array}{c} u_+ \\ u_- \\ v \end{array} \right) + \left( \begin{array}{c} -1 \\ 1 \end{array} \right) \leq 0 \end{array} @]@ which is in the form expected by 5.8.4: simplex_method .

5.8.4.1.b: Source

# include <limits>
# include <cppad/utility/vector.hpp>
# include "simplex_method.hpp"

bool simplex_method(void)
{     bool ok = true;
     typedef CppAD::vector<double> vector;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
     //
     size_t n = 3;
     size_t m = 2;
     vector A(m * n), b(m), c(n), xout(n);
     A[ 0 * n + 0 ] =  1.0; // A(0,0)
     A[ 0 * n + 1 ] = -1.0; // A(0,1)
     A[ 0 * n + 2 ] = -1.0; // A(0,2)
     //
     A[ 1 * n + 0 ] = -1.0; // A(1,0)
     A[ 1 * n + 1 ] = +1.0; // A(1,1)
     A[ 1 * n + 2 ] = -1.0; // A(1,2)
     //
     b[0]           = -1.0;
     b[1]           =  1.0;
     //
     c[0]           =  0.0;
     c[1]           =  0.0;
     c[2]           =  1.0;
     //
     size_t maxitr  = 10;
     size_t level   = 0;
     //
     ok &= CppAD::simplex_method(level, A, b, c,  maxitr, xout);
     //
     // check optimal value for u_+ (u = u_+ - u_- = 1 at the solution)
     ok &= std::fabs( xout[0] - 1.0 ) < eps99;
     //
     // check optimal value for u_- (zero at the solution)
     ok &= std::fabs( xout[1] ) < eps99;
     //
     return ok;
}

Input File: example/abs_normal/simplex_method.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.4.2: simplex_method Source Code
namespace CppAD { // BEGIN_CPPAD_NAMESPACE

// BEGIN PROTOTYPE
template <class Vector>
bool simplex_method(
     size_t        level   ,
     const Vector& A       ,
     const Vector& b       ,
     const Vector& c       ,
     size_t        maxitr  ,
     Vector&       xout    )
// END PROTOTYPE
{     // number of equations
     size_t ne  = b.size();
     // number of x variables
     size_t nx = c.size();
     CPPAD_ASSERT_UNKNOWN( size_t(A.size()) == ne * nx );
     CPPAD_ASSERT_UNKNOWN( level <= 2 );
     //
     if( level > 0 )
     {     std::cout << "start simplex_method\n";
          CppAD::abs_print_mat("A", ne, nx, A);
          CppAD::abs_print_mat("b", ne,  1, b);
          CppAD::abs_print_mat("c", nx, 1, c);
     }
     //
     // variables (columns) in the Tableau:
     // x: the original primary variables with size n
     // s: slack variables, one for each equation
     // a: auxiliary variables, one for each negative right hand side
     // r: right hand side for the equations
     //
     // Determine number of auxiliary variables
     size_t na = 0;
     for(size_t i = 0; i < ne; i++)
     {     if( b[i] > 0.0 )
               ++na;
     }
     // number of columns in the tableau
     size_t nc = nx + ne + na + 1;

     // number of rows in the tableau: the equations plus two objectives
     size_t nr = ne + 2;

     // Initialize the tableau as zero
     Vector T(nr * nc);
     for(size_t i = 0; i < nr * nc; i++)
          T[i] = 0.0;

     // initialize basic variable flag as false
     CppAD::vector<size_t> basic(nc);
     for(size_t j = 0; j < nc; j++)
          basic[j] = false;

     // For i = 0 , ... , m-1, place the Equations
     // sum_j A_{i,j} * x_j + b_i <= 0 in Tableau
     na = 0; // use as index of next auxiliary variable
     for(size_t i = 0; i < ne; i++)
     {     if( b[i] > 0.0)
          {     // convert to - sum_j A_{i,j} x_j - b_i >= 0
               for(size_t j = 0; j < nx; j++)
                    T[i * nc + j] = - A[i * nx + j];
               // slack variable has negative coefficient
               T[i * nc + (nx + i)] = -1.0;
               // auxiliary variable is basic for this constraint
               T[i * nc + (nx + ne + na)] = 1.0;
               basic[nx + ne + na]        = true;
               // right hand side
               T[i * nc + (nc - 1)] = b[i];
               //
               ++na;
          }
          else
          {     // sum_j A_{i,j} x_j + b_i <= 0
               for(size_t j = 0; j < nx; j++)
                    T[i * nc + j] = A[i * nx + j];
               //  slack variable is also basic
               T[ i * nc + (nx + i) ]  = 1.0;
               basic[nx + i]           = true;
               // right hand side for equations
               T[ i * nc + (nc - 1) ] = - b[i];
          }
     }
     // na is back to its original value
     CPPAD_ASSERT_UNKNOWN( nc == nx + ne + na + 1 );
     //
     // place the original objective equation in the tableau
     // row ne corresponds to the equation z - sum_j c_j x_j = 0
     // column index for z is nx + ne + na
     for(size_t j = 0; j < nx; j++)
          T[ne * nc + j] = - c[j];
     //
     // row ne+1 corresponds to the equation w - a_0 - ... - a_{na-1} = 0
     // column index for w is nx + ne + na
     for(size_t j = 0; j < na; j++)
          T[(ne + 1) * nc + (nx + ne + j)] = -1.0;
     //
     // fix auxiliary objective so coefficients in w
     // for auxiliary variables are zero
     for(size_t k = 0; k < na; k++)
     {     size_t ja  = nx + ne + k;
          size_t ia  = ne;
          for(size_t i = 0; i < ne; i++)
          {     if( T[i * nc + ja] != 0.0 )
               {     CPPAD_ASSERT_UNKNOWN( T[i * nc + ja] == 1.0 );
                    CPPAD_ASSERT_UNKNOWN( T[(ne + 1) * nc + ja] == -1.0 )
                    CPPAD_ASSERT_UNKNOWN( ia == ne );
                    ia = i;
               }
          }
          CPPAD_ASSERT_UNKNOWN( ia < ne );
          for(size_t j = 0; j < nc; j++)
               T[(ne + 1) * nc + j] += T[ia * nc + j];
          // The result in column ja is zero, avoid roundoff
          T[(ne + 1) * nc + ja] = 0.0;
     }
     //
     // index of current objective
     size_t iobj = ne;  // original objective z
     if( na > 0 )
          iobj = ne + 1; // auxiliary objective w
     //
     // simplex iterations
     for(size_t itr = 0; itr < maxitr; itr++)
     {     // current value for xout
          for(size_t j = 0; j < nx; j++)
          {     xout[j] = 0.0;
               if( basic[j] )
               {     // determine which row of column j is non-zero
                    xout[j] = std::numeric_limits<double>::quiet_NaN();
                    for(size_t i = 0; i < ne; i++)
                    {     double T_ij = T[i * nc + j];
                         CPPAD_ASSERT_UNKNOWN( T_ij == 0.0 || T_ij == 1.0 );
                         if( T_ij == 1.0 )
                         {     // corresponding value in right hand side
                              xout[j] = T[ i * nc + (nc-1) ];
                         }
                    }
               }
          }
          if( level > 1 )
               CppAD::abs_print_mat("T", nr, nc, T);
          if( level > 0 )
          {     CppAD::abs_print_mat("x", nx, 1, xout);
               std::cout << "itr = " << itr;
               if( iobj > ne )
                    std::cout << ", auxillary objective w = ";
               else
                    std::cout << ", objective z = ";
               std::cout << T[iobj * nc + (nc - 1)] << "\n";
          }
          //
          // number of variables depends on objective
          size_t nv = nx + ne;   // (x, s)
          if( iobj == ne + 1 )
          {     // check if we have solved the auxiliary problem
               bool done = true;
               for(size_t k = 0; k < na; k++)
                    if( basic[nx + ne + k] )
                         done = false;
               if( done )
               {     // switch to optimizing the original objective
                    iobj = ne;
               }
               else
                    nv = nx + ne + na; // (x, s, a)
          }
          //
          // determine variable with maximum coefficient in objective row
          double cmax = 0.0;
          size_t jmax = nv;
          for(size_t j = 0; j < nv; j++)
          {     if( T[iobj * nc + j] > cmax )
               {     CPPAD_ASSERT_UNKNOWN( ! basic[j] );
                    cmax = T[ iobj * nc + j];
                    jmax = j;
               }
          }
          // check for solution
          if( jmax == nv )
          {     if( iobj == ne )
               {     if( level > 0 )
                         std::cout << "end simplex_method\n";
                    return true;
               }
               if( level > 0 )
                    std::cout << "end_simples_method: no feasible solution\n";
               return false;
          }
          //
          // We will increase the j-th variable.
          // Determine which row will be the pivot row.
          double rmin = std::numeric_limits<double>::infinity();
          size_t imin = ne;
          for(size_t i = 0; i < ne; i++)
          {     if( T[i * nc + jmax] > 0.0 )
               {     double r =     T[i * nc + (nc-1) ] / T[i * nc + jmax];
                    if( r < rmin )
                    {     rmin = r;
                         imin = i;
                    }
               }
          }
          if( imin == ne )
          {     // unbounded can only happen for the original objective
               CPPAD_ASSERT_UNKNOWN( iobj == ne );
               if( level > 0 ) std::cout
                    << "end simplex_method: objective is unbounded below\n";
               return false;
          }
          double pivot = T[imin * nc + jmax];
          //
          // Which variable is changing from basic to non-basic.
          // Initialize as not yet determined.
          size_t basic2not = nc;
          //
          // Divide row imin by pivot element
          for(size_t j = 0; j < nc; j++)
          {     if( basic[j] && T[imin * nc + j] == 1.0 )
               {     CPPAD_ASSERT_UNKNOWN( basic2not == nc );
                    basic2not = j;
               }
               T[imin * nc + j] /= pivot;
          }
          // The result in column jmax is one, avoid roundoff
          T[imin * nc + jmax ] = 1.0;
          //
          // Check that we found the variable going from basic to non-basic
          CPPAD_ASSERT_UNKNOWN( basic2not < nv && basic2not != jmax );
          //
          // convert variable for column jmax to basic
          // and for column basic2not to non-basic
          for(size_t i = 0; i < nr; i++) if( i != imin )
          {     double r =     T[i * nc + jmax ] / T[imin * nc + jmax];
               // row_i = row_i - r * row_imin
               for(size_t j = 0; j < nc; j++)
                    T[i * nc + j] -= r * T[imin * nc + j];
               // The result in column jmax is zero, avoid roundoff
               T[i * nc + jmax] = 0.0;
          }
          // update flag for basic variables
          basic[ basic2not ] = false;
          basic[ jmax ]      = true;
     }
     if( level > 0 ) std::cout
          << "end simplex_method: maximum # iterations without solution\n";
     return false;
}
} // END_CPPAD_NAMESPACE

Input File: example/abs_normal/simplex_method.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.5: abs_normal: Solve a Linear Program With Box Constraints

5.8.5.a: Syntax
ok = lp_box(level, A, b, c, d, maxitr, xout)


5.8.5.b: Prototype

template <class Vector>
bool lp_box(
     size_t        level   ,
     const Vector& A       ,
     const Vector& b       ,
     const Vector& c       ,
     const Vector& d       ,
     size_t        maxitr  ,
     Vector&       xout    )

5.8.5.c: Source
The following is a link to the source code for this example: 5.8.5.2: lp_box.hpp .

5.8.5.d: Problem
We are given @(@ A \in \B{R}^{m \times n} @)@, @(@ b \in \B{R}^m @)@, @(@ c \in \B{R}^n @)@, and @(@ d \in \B{R}^n @)@. This routine solves the problem @[@ \begin{array}{rl} \R{minimize} & c^T x \; \R{w.r.t} \; x \in \B{R}^n \\ \R{subject \; to} & A x + b \leq 0 \; \R{and} \; - d \leq x \leq d \end{array} @]@

5.8.5.e: Vector
The type Vector is a simple vector with elements of type double.

5.8.5.f: level
This value must be less than or equal to three. If level == 0 , no tracing is printed. If level >= 1 , a trace of the lp_box operations is printed. If level >= 2 , the objective and primal variables @(@ x @)@ are printed at each 5.8.4: simplex_method iteration. If level == 3 , the simplex tableau is printed at each simplex iteration.

5.8.5.g: A
This is a 12.4.i: row-major representation of the matrix @(@ A @)@ in the problem.

5.8.5.h: b
This is the vector @(@ b @)@ in the problem.

5.8.5.i: c
This is the vector @(@ c @)@ in the problem.

5.8.5.j: d
This is the vector @(@ d @)@ in the problem. If @(@ d_j @)@ is infinity, there is no limit for the size of @(@ x_j @)@.
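
For instance (a hypothetical fragment, not taken from the example below), the following bounds @(@ x_0 @)@ while leaving @(@ x_1 @)@ unbounded:

     double inf = std::numeric_limits<double>::infinity();
     d[0] = 2.0; // require | x_0 | <= 2
     d[1] = inf; // no limit on the size of x_1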

5.8.5.k: maxitr
This is the maximum number of simplex iterations to try before giving up on convergence.

5.8.5.l: xout
This argument has size n and the input value of its elements does not matter. Upon return it contains the primal variables @(@ x @)@ corresponding to the problem solution.

5.8.5.m: ok
If the return value ok is true, an optimal solution was found.

5.8.5.n: Example
The file 5.8.5.1: lp_box.cpp contains an example and test of lp_box. It returns true if the test passes and false otherwise.
Input File: example/abs_normal/lp_box.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.5.1: abs_normal lp_box: Example and Test

5.8.5.1.a: Problem
Our original problem is @[@ \begin{array}{rl} \R{minimize} & x_0 - x_1 \; \R{w.r.t} \; x \in \B{R}^2 \\ \R{subject \; to} & -2 \leq x_0 \leq +2 \; \R{and} \; -2 \leq x_1 \leq +2 \end{array} @]@

5.8.5.1.b: Source

# include <limits>
# include <cppad/utility/vector.hpp>
# include "lp_box.hpp"

bool lp_box(void)
{     bool ok = true;
     typedef CppAD::vector<double> vector;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
     //
     size_t n = 2;
     size_t m = 0;
     vector A(m), b(m), c(n), d(n), xout(n);
     c[0] = +1.0;
     c[1] = -1.0;
     //
     d[0] = +2.0;
     d[1] = +2.0;
     //
     size_t level   = 0;
     size_t maxitr  = 20;
     //
     ok &= CppAD::lp_box(level, A, b, c, d, maxitr, xout);
     //
     // check optimal value for x
     ok &= std::fabs( xout[0] + 2.0 ) < eps99;
     ok &= std::fabs( xout[1] - 2.0 ) < eps99;
     //
     return ok;
}

Input File: example/abs_normal/lp_box.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.5.2: lp_box Source Code
namespace CppAD { // BEGIN_CPPAD_NAMESPACE

// BEGIN PROTOTYPE
template <class Vector>
bool lp_box(
     size_t        level   ,
     const Vector& A       ,
     const Vector& b       ,
     const Vector& c       ,
     const Vector& d       ,
     size_t        maxitr  ,
     Vector&       xout    )
// END PROTOTYPE
{     double inf = std::numeric_limits<double>::infinity();
     //
     size_t m = b.size();
     size_t n = c.size();
     //
     CPPAD_ASSERT_KNOWN(
          level <= 3, "lp_box: level is greater than 3");
     CPPAD_ASSERT_KNOWN(
          size_t(A.size()) == m * n, "lp_box: size of A is not m * n"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(d.size()) == n, "lp_box: size of d is not n"
     );
     if( level > 0 )
     {     std::cout << "start lp_box\n";
          CppAD::abs_print_mat("A", m, n, A);
          CppAD::abs_print_mat("b", m, 1, b);
          CppAD::abs_print_mat("c", n, 1, c);
          CppAD::abs_print_mat("d", n, 1, d);
     }
     //
     // count number of limits
     size_t n_limit = 0;
     for(size_t j = 0; j < n; j++)
     {     if( d[j] < inf )
               n_limit += 1;
     }
     //
     // A_simplex and b_simplex define the extended constraints
     Vector A_simplex((m + 2 * n_limit) * (2 * n) ), b_simplex(m + 2 * n_limit);
     for(size_t i = 0; i < size_t(A_simplex.size()); i++)
          A_simplex[i] = 0.0;
     //
     // put A * x + b <= 0 in A_simplex, b_simplex
     for(size_t i = 0; i < m; i++)
     {     b_simplex[i] = b[i];
          for(size_t j = 0; j < n; j++)
          {     // x_j^+ coefficient (positive component)
               A_simplex[i * (2 * n) + 2 * j]     =   A[i * n + j];
               // x_j^- coefficient (negative component)
               A_simplex[i * (2 * n) + 2 * j + 1] = - A[i * n + j];
          }
     }
     //
     // put | x_j | <= d_j in A_simplex, b_simplex
     size_t i_limit = 0;
     for(size_t j = 0; j < n; j++) if( d[j] < inf )
     {
          // x_j^+ <= d_j constraint
          b_simplex[ m + 2 * i_limit]                         = - d[j];
          A_simplex[(m + 2 * i_limit) * (2 * n) + 2 * j]      = 1.0;
          //
          // x_j^- <= d_j constraint
          b_simplex[ m + 2 * i_limit + 1]                         = - d[j];
          A_simplex[(m + 2 * i_limit + 1) * (2 * n) + 2 * j + 1]  = 1.0;
          //
          ++i_limit;
     }
     //
     // c_simplex
     Vector c_simplex(2 * n);
     for(size_t j = 0; j < n; j++)
     {     // x_j+ component
          c_simplex[2 * j]     = c[j];
          // x_j^- component
          c_simplex[2 * j + 1] = - c[j];
     }
     size_t level_simplex = 0;
     if( level >= 2 )
          level_simplex = level - 1;
     //
     Vector x_simplex(2 * n);
     bool ok = CppAD::simplex_method(
          level_simplex, A_simplex, b_simplex, c_simplex, maxitr, x_simplex
     );
     for(size_t j = 0; j < n; j++)
          xout[j] = x_simplex[2 * j] - x_simplex[2 * j + 1];
     if( level > 0 )
     {     CppAD::abs_print_mat("xout", n, 1, xout);
          if( ok )
               std::cout << "end lp_box: ok = true\n";
          else
               std::cout << "end lp_box: ok = false\n";
     }
     return ok;
}

} // END_CPPAD_NAMESPACE

Input File: example/abs_normal/lp_box.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.6: abs_normal: Minimize a Linear Abs-normal Approximation

5.8.6.a: Syntax
ok = abs_min_linear(level, n, m, s, g_hat, g_jac, bound, epsilon, maxitr, delta_x)


5.8.6.b: Prototype

template <class DblVector, class SizeVector>
bool abs_min_linear(
     size_t            level   ,
     size_t            n       ,
     size_t            m       ,
     size_t            s       ,
     const DblVector&  g_hat   ,
     const DblVector&  g_jac   ,
     const DblVector&  bound   ,
     const DblVector&  epsilon ,
     const SizeVector& maxitr  ,
     DblVector&        delta_x )

5.8.6.c: Source
The following is a link to the source code for this example: 5.8.6.2: abs_min_linear.hpp .

5.8.6.d: Purpose
We are given a point @(@ \hat{x} \in \B{R}^n @)@ and use the notation @(@ \tilde{f} (x) @)@ for the abs-normal 5.8.1.f.b: approximation for f(x) near @(@ \hat{x} @)@. We are also given a vector @(@ b \in \B{R}_+^n @)@. This routine solves the problem @[@ \begin{array}{lll} \R{minimize} & \tilde{f}(x) & \R{w.r.t} \; x \in \B{R}^n \\ \R{subject \; to} & | x_j - \hat{x}_j | \leq b_j & j = 0 , \ldots , n-1 \end{array} @]@

5.8.6.e: DblVector
is a 8.9: SimpleVector class with elements of type double.

5.8.6.f: SizeVector
is a 8.9: SimpleVector class with elements of type size_t.

5.8.6.g: f
We use the notation f for the original function; see 5.8.1.b: f .

5.8.6.h: level
This value must be less than or equal to four. If level == 0 , no tracing of the optimization is printed. If level >= 1 , a trace of each iteration of abs_min_linear is printed. If level >= 2 , a trace of the 5.8.5: lp_box sub-problem is printed. If level >= 3 , a trace of the objective and primal variables @(@ x @)@ is printed at each 5.8.4: simplex_method iteration. If level == 4 , the simplex tableau is printed at each simplex iteration.

5.8.6.i: n
This is the dimension of the domain space for f ; see 5.8.1.b.a: n .

5.8.6.j: m
This is the dimension of the range space for f ; see 5.8.1.b.b: m . This must be one so that @(@ f @)@ is an objective function.

5.8.6.k: s
This is the number of absolute value terms in f ; see 5.8.1.b.c: s .

5.8.6.l: g
We use the notation g for the abs-normal representation of f ; see 5.8.1.d: g .

5.8.6.m: g_hat
This vector has size m + s and is the value of g(x, u) at @(@ x = \hat{x} @)@ and @(@ u = a( \hat{x} ) @)@.

5.8.6.n: g_jac
This vector has size (m + s) * (n + s) and is the Jacobian of @(@ g(x, u) @)@ at @(@ x = \hat{x} @)@ and @(@ u = a( \hat{x} ) @)@.

5.8.6.o: bound
This vector has size n and we denote its value by @(@ b \in \B{R}^n @)@. The trust region is defined as the set of @(@ x @)@ such that @[@ | x_j - \hat{x}_j | \leq b_j @]@ for @(@ j = 0 , \ldots , n-1 @)@, where @(@ x @)@ is the point that we are approximating @(@ f(x) @)@.

5.8.6.p: epsilon
The value epsilon[0] is the convergence criterion in terms of the infinity norm of the change in delta_x between iterations. The value epsilon[1] is the convergence criterion in terms of the derivative of the objective @(@ \tilde{f}(x) @)@.

5.8.6.q: maxitr
This is a vector with size 2. The value maxitr[0] is the maximum number of abs_min_linear iterations to try before giving up on convergence. The value maxitr[1] is the maximum number of iterations in the 5.8.4.j: simplex_method sub-problems.

5.8.6.r: delta_x
This vector @(@ \Delta x @)@ has size n . The input value of its elements does not matter. Upon return, the approximate minimizer of @(@ \tilde{f}(x) @)@ with respect to the trust region is @(@ x = \hat{x} + \Delta x @)@.

5.8.6.s: Method

5.8.6.s.a: sigma
We use the notation @[@ \sigma (x) = \R{sign} ( z[ x , a(x) ] ) @]@ where 5.8.1.c.b: a(x) and 5.8.1.d.a: z(x, u) are as defined in the abs-normal representation of @(@ f(x) @)@.
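
For example (this instance is ours), for the function in 5.8.1.1: abs_get_started.cpp , @(@ f(x) = | x_0 + x_1 | + | x_1 + x_2 | @)@ and @[@ \sigma (x) = ( \R{sign} ( x_0 + x_1 ) , \R{sign} ( x_1 + x_2 ) ) @]@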

5.8.6.s.b: Cutting Planes
At each iteration, we are given affine functions @(@ p_k (x) @)@ such that @(@ p_k ( x_k ) = \tilde{f}( x_k ) @)@ and @(@ p_k^{(1)} ( x_k ) @)@ is the derivative @(@ \tilde{f}^{(1)} ( x_k ) @)@ corresponding to @(@ \sigma ( x_k ) @)@.
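
Written out (this restatement is ours, matching the cutting planes built in the source code below), each affine function has the form @[@ p_k (x) = \tilde{f} ( x_k ) + \tilde{f}^{(1)} ( x_k ) ( x - x_k ) @]@ where the derivative is the one determined by the sign vector @(@ \sigma ( x_k ) @)@.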

5.8.6.s.c: Iteration
At iteration @(@ k @)@, we solve the problem @[@ \begin{array}{lll} \R{minimize} & \max \{ p_k (x) \W{:} k = 0 , \ldots , K-1 \} & \R{w.r.t} \; x \\ \R{subject \; to} & - b \leq x \leq + b \end{array} @]@ The solution is the new point @(@ x_K @)@ at which the new affine approximation @(@ p_K (x) @)@ is constructed. This process is iterated until the difference @(@ x_K - x_{K-1} @)@ is small enough.
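
In the source code below this sub-problem is restated (our description) as a linear program in the variables @(@ ( \Delta x , w ) @)@: @[@ \begin{array}{lll} \R{minimize} & w & \R{w.r.t} \; ( \Delta x , w ) \\ \R{subject \; to} & p_k ( \hat{x} + \Delta x ) - w \leq 0 & k = 0 , \ldots , K-1 \\ & - b \leq \Delta x \leq + b \end{array} @]@ which is in the form expected by 5.8.5: lp_box .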

5.8.6.t: Example
The file 5.8.6.1: abs_min_linear.cpp contains an example and test of abs_min_linear. It returns true if the test passes and false otherwise.
Input File: example/abs_normal/abs_min_linear.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.6.1: abs_min_linear: Example and Test

5.8.6.1.a: Purpose
The function @(@ f : \B{R}^2 \rightarrow \B{R} @)@ defined by @[@ \begin{array}{rcl} f( x_0, x_1 ) & = & | d_0 - x_0 | + | d_1 - x_0 | + | d_2 - x_0 | \\ & + & | d_3 - x_1 | + | d_4 - x_1 | + | d_5 - x_1 | \\ \end{array} @]@ is affine, except for its absolute value terms. For this case, the abs_normal approximation should be equal to the function itself. In addition, the function is convex and 5.8.6: abs_min_linear should find its global minimizer. Because a sum of absolute deviations is minimized at the median of the data, the minimizer of this function is @(@ x_0 = \R{median}( d_0, d_1, d_2 ) @)@ and @(@ x_1 = \R{median}( d_3, d_4, d_5 ) @)@.

5.8.6.1.b: Source

# include <cppad/cppad.hpp>
# include "abs_min_linear.hpp"

namespace {
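     // helper for this example: concatenate x and u into a single vector xu = (x, u)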
     CPPAD_TESTVECTOR(double) join(
          const CPPAD_TESTVECTOR(double)& x ,
          const CPPAD_TESTVECTOR(double)& u )
     {     size_t n = x.size();
          size_t s = u.size();
          CPPAD_TESTVECTOR(double) xu(n + s);
          for(size_t j = 0; j < n; j++)
               xu[j] = x[j];
          for(size_t j = 0; j < s; j++)
               xu[n + j] = u[j];
          return xu;
     }
}
bool abs_min_linear(void)
{     bool ok = true;
     //
     using CppAD::AD;
     using CppAD::ADFun;
     //
     typedef CPPAD_TESTVECTOR(size_t)       s_vector;
     typedef CPPAD_TESTVECTOR(double)       d_vector;
     typedef CPPAD_TESTVECTOR( AD<double> ) ad_vector;
     //
     size_t dpx   = 3;          // number of data points per x variable
     size_t level = 0;          // level of tracing
     size_t n     = 2;          // size of x
     size_t m     = 1;          // size of y
     size_t s     = dpx * n;    // number of data points and absolute values
     // data points
     d_vector  data(s);
     for(size_t i = 0; i < s; i++)
          data[i] = double(s - i) + 5.0 - double(i % 2) / 2.0;
     //
     // record the function f(x)
     ad_vector ad_x(n), ad_y(m);
     for(size_t j = 0; j < n; j++)
          ad_x[j] = double(j + 1);
     Independent( ad_x );
     AD<double> sum = 0.0;
     for(size_t j = 0; j < n; j++)
          for(size_t k = 0; k < dpx; k++)
               sum += abs( data[j * dpx + k] - ad_x[j] );
     ad_y[0] = sum;
     ADFun<double> f(ad_x, ad_y);

     // create its abs_normal representation in g, a
     ADFun<double> g, a;
     f.abs_normal_fun(g, a);

     // check dimension of domain and range space for g
     ok &= g.Domain() == n + s;
     ok &= g.Range()  == m + s;

     // check dimension of domain and range space for a
     ok &= a.Domain() == n;
     ok &= a.Range()  == s;

     // --------------------------------------------------------------------
     // Choose a point x_hat
     d_vector x_hat(n);
     for(size_t j = 0; j < n; j++)
          x_hat[j] = 0.0;

     // value of a_hat = a(x_hat)
     d_vector a_hat = a.Forward(0, x_hat);

     // (x_hat, a_hat)
     d_vector xu_hat = join(x_hat, a_hat);

     // value of g[ x_hat, a_hat ]
     d_vector g_hat = g.Forward(0, xu_hat);

     // Jacobian of g[ x_hat, a_hat ]
     d_vector g_jac = g.Jacobian(xu_hat);

     // trust region bound (make large enough to include the solution)
     d_vector bound(n);
     for(size_t j = 0; j < n; j++)
          bound[j] = 10.0;

     // convergence criteria
     d_vector epsilon(2);
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
     epsilon[0]   = eps99;
     epsilon[1]   = eps99;

     // maximum number of iterations
     s_vector maxitr(2);
     maxitr[0] = 10; // maximum number of abs_min_linear iterations
     maxitr[1] = 35; // maximum number of simplex_method iterations

     // minimize the approximation for f, which is equal to f because
     // f is affine, except for absolute value terms
     d_vector delta_x(n);
     ok &= CppAD::abs_min_linear(
          level, n, m, s, g_hat, g_jac, bound, epsilon, maxitr, delta_x
     );

     // number of data points per variable is odd
     ok &= dpx % 2 == 1;

     // check that the solution is the median of the corresponding data
     for(size_t j = 0; j < n; j++)
     {     // data[j * dpx + 0] , ... , data[j * dpx + dpx - 1] corresponds to x[j]
          // the median of this data has index j * dpx + dpx / 2
          size_t j_median = j * dpx + (dpx / 2);
          //
          ok &= CppAD::NearEqual( delta_x[j], data[j_median], eps99, eps99 );
     }

     return ok;
}

Input File: example/abs_normal/abs_min_linear.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.6.2: abs_min_linear Source Code
namespace CppAD { // BEGIN_CPPAD_NAMESPACE

// BEGIN PROTOTYPE
template <class DblVector, class SizeVector>
bool abs_min_linear(
     size_t            level   ,
     size_t            n       ,
     size_t            m       ,
     size_t            s       ,
     const DblVector&  g_hat   ,
     const DblVector&  g_jac   ,
     const DblVector&  bound   ,
     const DblVector&  epsilon ,
     const SizeVector& maxitr  ,
     DblVector&        delta_x )
// END PROTOTYPE
{     using std::fabs;
     bool ok    = true;
     double inf = std::numeric_limits<double>::infinity();
     //
     CPPAD_ASSERT_KNOWN(
          level <= 4,
          "abs_min_linear: level is not less that or equal 4"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(epsilon.size()) == 2,
          "abs_min_linear: size of epsilon not equal to 2"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(maxitr.size()) == 2,
          "abs_min_linear: size of maxitr not equal to 2"
     );
     CPPAD_ASSERT_KNOWN(
          m == 1,
          "abs_min_linear: m is not equal to 1"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(delta_x.size()) == n,
          "abs_min_linear: size of delta_x not equal to n"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(bound.size()) == n,
          "abs_min_linear: size of bound not equal to n"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(g_hat.size()) == m + s,
          "abs_min_linear: size of g_hat not equal to m + s"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(g_jac.size()) == (m + s) * (n + s),
          "abs_min_linear: size of g_jac not equal to (m + s)*(n + s)"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(bound.size()) == n,
          "abs_min_linear: size of bound is not equal to n"
     );
     if( level > 0 )
     {     std::cout << "start abs_min_linear\n";
          CppAD::abs_print_mat("bound", n, 1, bound);
          CppAD::abs_print_mat("g_hat", m + s, 1, g_hat);
          CppAD::abs_print_mat("g_jac", m + s, n + s, g_jac);

     }
     // partial y(x, u) w.r.t x (J in reference)
     DblVector py_px(n);
     for(size_t j = 0; j < n; j++)
          py_px[ j ] = g_jac[ j ];
     //
     // partial y(x, u) w.r.t u (Y in reference)
     DblVector py_pu(s);
     for(size_t j = 0; j < s; j++)
          py_pu[ j ] = g_jac[ n + j ];
     //
     // partial z(x, u) w.r.t x (Z in reference)
     DblVector pz_px(s * n);
     for(size_t i = 0; i < s; i++)
     {     for(size_t j = 0; j < n; j++)
          {     pz_px[ i * n + j ] = g_jac[ (n + s) * (i + m) + j ];
          }
     }
     // partial z(x, u) w.r.t u (L in reference)
     DblVector pz_pu(s * s);
     for(size_t i = 0; i < s; i++)
     {     for(size_t j = 0; j < s; j++)
          {     pz_pu[ i * s + j ] = g_jac[ (n + s) * (i + m) + n + j ];
          }
     }
     // initialize delta_x
     for(size_t j = 0; j < n; j++)
          delta_x[j] = 0.0;
     //
     // value of approximation for g(x, u) at current delta_x
     DblVector g_tilde = CppAD::abs_eval(n, m, s, g_hat, g_jac, delta_x);
     //
     // value of sigma at delta_x = 0; i.e., sign( z(x, u) )
     CppAD::vector<double> sigma(s);
     for(size_t i = 0; i < s; i++)
          sigma[i] = CppAD::sign( g_tilde[m + i] );
     //
     // current set of cutting planes
     DblVector C(maxitr[0] * n), c(maxitr[0]);
     //
     //
     size_t n_plane = 0;
     for(size_t itr = 0; itr < maxitr[0]; itr++)
     {
          // Equation (5), Proposition 3.1 of the reference
          // dy_dx = py_px + py_pu * Sigma * (I - pz_pu * Sigma)^-1 * pz_px
          //
          // tmp_ss = I - pz_pu * Sigma
          DblVector tmp_ss(s * s);
          for(size_t i = 0; i < s; i++)
          {     for(size_t j = 0; j < s; j++)
                    tmp_ss[i * s + j] = - pz_pu[i * s + j] * sigma[j];
               tmp_ss[i * s + i] += 1.0;
          }
          // tmp_sn = (I - pz_pu * Sigma)^-1 * pz_px
          double logdet;
          DblVector tmp_sn(s * n);
          LuSolve(s, n, tmp_ss, pz_px, tmp_sn, logdet);
          //
          // tmp_sn = Sigma * (I - pz_pu * Sigma)^-1 * pz_px
          for(size_t i = 0; i < s; i++)
          {     for(size_t j = 0; j < n; j++)
                    tmp_sn[i * n + j] *= sigma[i];
          }
          // dy_dx = py_px + py_pu * Sigma * (I - pz_pu * Sigma)^-1 * pz_px
          DblVector dy_dx(n);
          for(size_t j = 0; j < n; j++)
          {     dy_dx[j] = py_px[j];
               for(size_t k = 0; k < s; k++)
                    dy_dx[j] += py_pu[k] * tmp_sn[ k * n + j];
          }
          //
          // check for case where derivative of hyperplane is zero
          // (in convex case, this is the minimizer)
          bool near_zero = true;
          for(size_t j = 0; j < n; j++)
               near_zero &= std::fabs( dy_dx[j] ) < epsilon[1];
          if( near_zero )
          {     if( level > 0 )
                    std::cout << "end abs_min_linear: local derivative near zero\n";
               return true;
          }

          // value of hyperplane at delta_x
          double plane_at_zero = g_tilde[0];
          // value of hyperplane at 0
          for(size_t j = 0; j < n; j++)
               plane_at_zero -= dy_dx[j] * delta_x[j];
          //
          // add a cutting plane with value g_tilde[0] at delta_x
          // and derivative dy_dx
          c[n_plane] = plane_at_zero;
          for(size_t j = 0; j < n; j++)
               C[n_plane * n + j] = dy_dx[j];
          ++n_plane;
          //
          // variables for cutting plane problem are (dx, w)
          // c[i] + C[i,:]*dx <= w
          DblVector b_box(n_plane), A_box(n_plane * (n + 1));
          for(size_t i = 0; i < n_plane; i++)
          {     b_box[i] = c[i];
               for(size_t j = 0; j < n; j++)
                    A_box[i * (n+1) + j] = C[i * n + j];
               A_box[i *(n+1) + n] = -1.0;
          }
          // w is the objective
          DblVector c_box(n + 1);
          for(size_t i = 0; i < size_t(c_box.size()); i++)
               c_box[i] = 0.0;
          c_box[n] = 1.0;
          //
          // d_box
          DblVector d_box(n+1);
          for(size_t j = 0; j < n; j++)
               d_box[j] = bound[j];
          d_box[n] = inf;
          //
          // solve the cutting plane problem
          DblVector xout_box(n + 1);
          size_t level_box = 0;
          if( level > 0 )
               level_box = level - 1;
          ok &= CppAD::lp_box(
               level_box,
               A_box,
               b_box,
               c_box,
               d_box,
               maxitr[1],
               xout_box
          );
          if( ! ok )
          {     if( level > 0 )
               {     CppAD::abs_print_mat("delta_x", n, 1, delta_x);
                    std::cout << "end abs_min_linear: lp_box failed\n";
               }
               return false;
          }
          //
          // check for convergence
          double max_diff = 0.0;
          for(size_t j = 0; j < n; j++)
          {     double diff = delta_x[j] - xout_box[j];
               max_diff    = std::max( max_diff, std::fabs(diff) );
          }
          //
          // check for descent in value of approximation objective
          DblVector delta_new(n);
          for(size_t j = 0; j < n; j++)
               delta_new[j] = xout_box[j];
          DblVector g_new = CppAD::abs_eval(n, m, s, g_hat, g_jac, delta_new);
          if( level > 0 )
          {     std::cout << "itr = " << itr << ", max_diff = " << max_diff
                    << ", y_cur = " << g_tilde[0] << ", y_new = " << g_new[0]
                    << "\n";
               CppAD::abs_print_mat("delta_new", n, 1, delta_new);
          }
          //
          g_tilde = g_new;
          delta_x = delta_new;
          //
          // value of sigma at new delta_x; i.e., sign( z(x, u) )
          for(size_t i = 0; i < s; i++)
               sigma[i] = CppAD::sign( g_tilde[m + i] );
          //
          if( max_diff < epsilon[0] )
          {     if( level > 0 )
                    std::cout << "end abs_min_linear: change in delta_x near zero\n";
               return true;
          }
     }
     if( level > 0 )
          std::cout << "end abs_min_linear: maximum number of iterations exceeded\n";
     return false;
}
} // END_CPPAD_NAMESPACE

Input File: example/abs_normal/abs_min_linear.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.7: Non-Smooth Optimization Using Abs-normal Linear Approximations

5.8.7.a: Syntax
ok = min_nso_linear(level, g, a, epsilon, maxitr, b_in, x_in, x_out)


5.8.7.b: Prototype

template <class DblVector, class SizeVector>
bool min_nso_linear(
     size_t           level     ,
     ADFun<double>&   g         ,
     ADFun<double>&   a         ,
     const DblVector& epsilon   ,
     SizeVector       maxitr    ,
     double           b_in      ,
     const DblVector& x_in      ,
     DblVector&       x_out     )

5.8.7.c: Source
The following is a link to the source code for this example: 5.8.7.2: min_nso_linear.hpp .

5.8.7.d: Purpose
Given an abs-normal representation 5.8.1.d: g , 5.8.1.c: a for a function @(@ f(x) @)@, this routine minimizes @(@ f(x) @)@.

5.8.7.e: DblVector
is a 8.9: SimpleVector class with elements of type double.

5.8.7.f: SizeVector
is a 8.9: SimpleVector class with elements of type size_t.

5.8.7.g: f
We use the notation f for the original function; see 5.8.1.b: f .

5.8.7.g.a: n
We use n to denote the dimension of the domain for f ; i.e., f.Domain() .

5.8.7.g.b: m
We use m to denote the dimension of the range for f ; i.e., f.Range() . This must be equal to one.

5.8.7.g.c: s
We use 5.8.1.b.c: s to denote the number of absolute value terms in f .

5.8.7.h: level
This value must be less than or equal to five. If level == 0 , no tracing of the optimization is printed. If level >= 1 , a trace of each iteration of min_nso_linear is printed. If level >= 2 , a trace of each iteration of the abs_min_linear sub-problem is printed. If level >= 3 , a trace of the 5.8.5: lp_box sub-problem is printed. If level >= 4 , a trace of the objective and primal variables @(@ x @)@ is printed at each 5.8.4: simplex_method iteration. If level == 5 , the simplex tableau is printed at each simplex iteration.

5.8.7.i: g
This is the function 5.8.1.d: g in the abs-normal representation of f .

5.8.7.j: a
This is the function 5.8.1.c: a in the abs-normal representation of f .

5.8.7.k: epsilon
This is a vector with size 2. The value epsilon[0] is the convergence criterion in terms of the infinity norm of the change in x_out between iterations. The value epsilon[1] is the convergence criterion in terms of the derivative of @(@ f(x) @)@. This derivative is actually the average of the directional derivative in the direction of the sub-problem minimizer.

5.8.7.l: maxitr
This is a vector with size 3. The value maxitr[0] is the maximum number of min_nso_linear iterations to try before giving up on convergence. The value maxitr[1] is the maximum number of iterations in the abs_min_linear sub-problem. The value maxitr[2] is the maximum number of iterations in the 5.8.4.j: simplex_method sub-problems.

5.8.7.m: b_in
This is the initial bound on the trust region size. To be specific, if @(@ b @)@ is the current trust region size, at each iteration the affine approximation is minimized with respect to @(@ \Delta x @)@ and subject to @[@ -b \leq \Delta x_j \leq b @]@ for @(@ j = 0 , \ldots , n-1 @)@. It must hold that b_in > epsilon[0] .

5.8.7.n: x_in
This vector x_in has size n . It is the starting point for the optimization procedure; i.e., the min_nso_linear iterations.

5.8.7.o: x_out
This vector x_out has size n . The input value of its elements does not matter. Upon return, it is the approximate minimizer of @(@ f(x) @)@ found by the min_nso_linear iterations.

5.8.7.p: Example
The file 5.8.7.1: min_nso_linear.cpp contains an example and test of min_nso_linear. It returns true if the test passes and false otherwise.
Input File: example/abs_normal/min_nso_linear.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.7.1: abs_normal min_nso_linear: Example and Test

5.8.7.1.a: Purpose
We minimize the function @(@ f : \B{R}^3 \rightarrow \B{R} @)@ defined by @[@ \begin{array}{rcl} f( x_0, x_1, x_2 ) & = & x_0^2 + 2 (x_0 + x_1)^2 + | x_2 | \end{array} @]@
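
Every term in this sum is non-negative and vanishes at the origin, so the global minimizer is @(@ x = 0 @)@; the test below checks that each component of x_out is within eps of zero.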

5.8.7.1.b: Discussion
This routine uses 5.8.6: abs_min_linear , which in turn uses 5.8.5: lp_box , a linear programming algorithm. It is meant to be compared with 5.8.11.1: min_nso_quad.cpp , which uses a quadratic programming algorithm for the same problem. To see this comparison, set level = 1 in both examples.

5.8.7.1.c: Source

# include <cppad/cppad.hpp>
# include "min_nso_linear.hpp"

bool min_nso_linear(void)
{     bool ok = true;
     //
     using CppAD::AD;
     using CppAD::ADFun;
     //
     typedef CPPAD_TESTVECTOR(size_t)       s_vector;
     typedef CPPAD_TESTVECTOR(double)       d_vector;
     typedef CPPAD_TESTVECTOR( AD<double> ) ad_vector;
     //
     size_t level = 0;    // level of tracing
     size_t n     = 3;    // size of x
     size_t m     = 1;    // size of y
     size_t s     = 1;    // number of absolute value terms
     //
     // start recording the function f(x)
     ad_vector ax(n), ay(m);
     for(size_t j = 0; j < n; j++)
          ax[j] = double(j + 1);
     Independent( ax );
     //
     ay[0]  =  ax[0] * ax[0];
     ay[0] += 2.0 * (ax[0] + ax[1]) * (ax[0] + ax[1]);
     ay[0] += fabs( ax[2] );
     ADFun<double> f(ax, ay);
     //
     // create its abs_normal representation in g, a
     ADFun<double> g, a;
     f.abs_normal_fun(g, a);

     // check dimension of domain and range space for g
     ok &= g.Domain() == n + s;
     ok &= g.Range()  == m + s;

     // check dimension of domain and range space for a
     ok &= a.Domain() == n;
     ok &= a.Range()  == s;

     // epsilon
     d_vector epsilon(2);
     double eps = 1e-3;
     epsilon[0] = eps;
     epsilon[1] = eps;

     // maxitr
     s_vector maxitr(3);
     maxitr[0] = 100;
     maxitr[1] = 20;
     maxitr[2] = 20;

     // b_in
     double b_in = 1.0;

     // call min_nso_linear
     d_vector x_in(n), x_out(n);
     for(size_t j = 0; j < n; j++)
          x_in[j]  = double(j + 1);

     //
     ok &= CppAD::min_nso_linear(
          level, g, a, epsilon, maxitr, b_in, x_in, x_out
     );
     //
     for(size_t j = 0; j < n; j++)
          ok &= std::fabs( x_out[j] ) < eps;

     return ok;
}

Input File: example/abs_normal/min_nso_linear.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.7.2: min_nso_linear Source Code
namespace {
     CPPAD_TESTVECTOR(double) min_nso_linear_join(
          const CPPAD_TESTVECTOR(double)& x ,
          const CPPAD_TESTVECTOR(double)& u )
     {     size_t n = x.size();
          size_t s = u.size();
          CPPAD_TESTVECTOR(double) xu(n + s);
          for(size_t j = 0; j < n; j++)
               xu[j] = x[j];
          for(size_t j = 0; j < s; j++)
               xu[n + j] = u[j];
          return xu;
     }
}

// BEGIN C++
namespace CppAD { // BEGIN_CPPAD_NAMESPACE

// BEGIN PROTOTYPE
template <class DblVector, class SizeVector>
bool min_nso_linear(
     size_t           level     ,
     ADFun<double>&   g         ,
     ADFun<double>&   a         ,
     const DblVector& epsilon   ,
     SizeVector       maxitr    ,
     double           b_in      ,
     const DblVector& x_in      ,
     DblVector&       x_out     )
// END PROTOTYPE
{
     using std::fabs;
     //
     // number of absolute value terms
     size_t s  = a.Range();
     //
     // size of domain for f
     size_t n  = g.Domain() - s;
     //
     // size of range space for f
     size_t m = g.Range() - s;
     //
     CPPAD_ASSERT_KNOWN(
          level <= 5,
          "min_nso_linear: level is not less that or equal 5"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(epsilon.size()) == 2,
          "min_nso_linear: size of epsilon not equal to 2"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(maxitr.size()) == 3,
          "min_nso_linear: size of maxitr not equal to 3"
     );
     CPPAD_ASSERT_KNOWN(
          g.Domain() > s && g.Range() > s,
          "min_nso_linear: g, a is not an abs-normal representation"
     );
     CPPAD_ASSERT_KNOWN(
          m == 1,
          "min_nso_linear: m is not equal to 1"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(x_in.size()) == n,
          "min_nso_linear: size of x_in not equal to n"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(x_out.size()) == n,
          "min_nso_linear: size of x_out not equal to n"
     );
     CPPAD_ASSERT_KNOWN(
          epsilon[0] < b_in,
          "min_nso_linear: b_in <= epsilon[0]"
     );
     if( level > 0 )
     {     std::cout << "start min_nso_linear\n";
          std::cout << "b_in = " << b_in << "\n";
          CppAD::abs_print_mat("x_in", n, 1, x_in);
     }
     // level in abs_min_linear sub-problem
     size_t level_tilde = 0;
     if( level > 0 )
          level_tilde = level - 1;
     //
     // maxitr in abs_min_linear sub-problem
     SizeVector maxitr_tilde(2);
     maxitr_tilde[0] = maxitr[1];
     maxitr_tilde[1] = maxitr[2];
     //
     // epsilon in abs_min_linear sub-problem
     DblVector eps_tilde(2);
     eps_tilde[0] = epsilon[0] / 10.;
     eps_tilde[1] = epsilon[1] / 10.;
     //
     // current bound
     double b_cur = b_in;
     //
     // initialize the current x
     x_out = x_in;
     //
     // value of a(x) at current x
     DblVector a_cur = a.Forward(0, x_out);
     //
     // (x_out, a_cur)
     DblVector xu_cur = min_nso_linear_join(x_out, a_cur);
     //
     // value of g[ x_cur, a_cur ]
     DblVector g_cur = g.Forward(0, xu_cur);
     //
     for(size_t itr = 0; itr < maxitr[0]; itr++)
     {
          // Jacobian of g[ x_cur, a_cur ]
          DblVector g_jac = g.Jacobian(xu_cur);
          //
          // bound in abs_min_linear sub-problem
          DblVector bound_tilde(n);
          for(size_t j = 0; j < n; j++)
               bound_tilde[j] = b_cur;
          //
          DblVector delta_x(n);
          bool ok = abs_min_linear(
               level_tilde, n, m, s,
               g_cur, g_jac, bound_tilde, eps_tilde, maxitr_tilde, delta_x
          );
          if( ! ok )
          {     if( level > 0 )
                    std::cout << "end min_nso_linear: abs_min_linear failed\n";
               return false;
          }
          //
          // new candidate value for x
          DblVector x_new(n);
          double max_delta_x = 0.0;
          for(size_t j = 0; j < n; j++)
          {     x_new[j] = x_out[j] + delta_x[j];
               max_delta_x = std::max(max_delta_x, std::fabs( delta_x[j] ) );
          }
          //
          if( max_delta_x < b_cur && max_delta_x < epsilon[0] )
          {     if( level > 0 )
                    std::cout << "end min_nso_linear: delta_x is near zero\n";
               return true;
          }
          // value of abs-normal approximation at minimizer
          DblVector g_tilde = CppAD::abs_eval(n, m, s, g_cur, g_jac, delta_x);
          //
          double derivative = (g_tilde[0] - g_cur[0]) / max_delta_x;
          CPPAD_ASSERT_UNKNOWN( derivative <= 0.0 )
          if( - epsilon[1] < derivative )
          {     if( level > 0 )
                    std::cout << "end min_nso_linear: derivative near zero\n";
               return true;
          }
          //
          // value of a(x) at new x
          DblVector a_new = a.Forward(0, x_new);
          //
          // (x_new, a_new)
          DblVector xu_new = min_nso_linear_join(x_new, a_new);
          //
          // value of g[ x_new, a_new ]
          DblVector g_new = g.Forward(0, xu_new);
          //
          //
          // check for descent of objective
          double rate_new = (g_new[0] - g_cur[0]) / max_delta_x;
          if( - epsilon[1] < rate_new )
          {     // did not get sufficient descent
               b_cur /= 2.0;
               if( level > 0 )
                    std::cout << "itr = " << itr
                    << ", rate_new = " << rate_new
                    << ", b_cur = " << b_cur << "\n";
               //
          }
          else
          {     // got sufficient descent so accept candidate for x
               x_out  = x_new;
               a_cur  = a_new;
               g_cur  = g_new;
               xu_cur = xu_new;
               //
               if( level >  0 )
               {     std::cout << "itr = " << itr
                    << ", derivative = "<< derivative
                    << ", max_delta_x = "<< max_delta_x
                    << ", objective = " << g_cur[0] << "\n";
                    abs_print_mat("x_out", n, 1, x_out);
               }
          }
     }
     if( level > 0 )
          std::cout << "end min_nso_linear: maximum number of iterations exceeded\n";
     return false;
}
} // END_CPPAD_NAMESPACE

Input File: example/abs_normal/min_nso_linear.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.8: Solve a Quadratic Program Using Interior Point Method

5.8.8.a: Syntax
ok = qp_interior(
     level, c, C, g, G, epsilon, maxitr, xin, xout, yout, sout
)


5.8.8.b: Prototype

template <class Vector>
bool qp_interior(
     size_t        level   ,
     const Vector& c       ,
     const Vector& C       ,
     const Vector& g       ,
     const Vector& G       ,
     double        epsilon ,
     size_t        maxitr  ,
     const Vector& xin     ,
     Vector&       xout    ,
     Vector&       yout    ,
     Vector&       sout    )

5.8.8.c: Source
The following is a link to the source code for this example: 5.8.8.2: qp_interior.hpp .

5.8.8.d: Purpose
This routine could be used to create a version of 5.8.6: abs_min_linear that solves quadratic programs (instead of linear programs).

5.8.8.e: Problem
We are given @(@ C \in \B{R}^{m \times n} @)@, @(@ c \in \B{R}^m @)@, @(@ G \in \B{R}^{n \times n} @)@, @(@ g \in \B{R}^n @)@, where @(@ G @)@ is positive semi-definite and @(@ G + C^T C @)@ is positive definite. This routine solves the problem @[@ \begin{array}{rl} \R{minimize} & \frac{1}{2} x^T G x + g^T x \; \R{w.r.t} \; x \in \B{R}^n \\ \R{subject \; to} & C x + c \leq 0 \end{array} @]@

5.8.8.f: Vector
The type Vector is a simple vector with elements of type double.

5.8.8.g: level
This value is zero or one. If level == 0 , no tracing is printed. If level == 1 , a trace of the qp_interior optimization is printed.

5.8.8.h: c
This is the vector @(@ c @)@ in the problem.

5.8.8.i: C
This is a 12.4.i: row-major representation of the matrix @(@ C @)@ in the problem.

5.8.8.j: g
This is the vector @(@ g @)@ in the problem.

5.8.8.k: G
This is a 12.4.i: row-major representation of the matrix @(@ G @)@ in the problem.
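
As a reminder of the row-major convention, the @(@ (i, j) @)@ element of an @(@ m \times n @)@ matrix is stored at index i * n + j . For example (the helper C_value below is hypothetical):

     // row-major storage of the m x n matrix C
     Vector C(m * n);
     for(size_t i = 0; i < m; i++)
          for(size_t j = 0; j < n; j++)
               C[i * n + j] = C_value(i, j);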

5.8.8.l: epsilon
This argument is the convergence criterion; see the 5.8.8.s: KKT conditions below. It must be greater than zero.

5.8.8.m: maxitr
This is the maximum number of Newton iterations to try before giving up on convergence.

5.8.8.n: xin
This argument has size n and is the initial point for the algorithm. It must strictly satisfy the constraints; i.e., @(@ C x + c < 0 @)@ for x = xin .

5.8.8.o: xout
This argument has size n and the input value of its elements does not matter. Upon return it contains the primal variables corresponding to the problem solution.

5.8.8.p: yout
This argument has size m and the input value of its elements does not matter. Upon return the components of yout are all positive and they are the dual variables corresponding to the problem solution.

5.8.8.q: sout
This argument has size m and the input value of its elements does not matter. Upon return the components of sout are all positive and they are the slack variables corresponding to the problem solution.

5.8.8.r: ok
If the return value ok is true, convergence is obtained; i.e., @[@ | F_0 (xout , yout, sout) |_\infty \leq epsilon @]@ where @(@ | v |_\infty @)@ is the maximum absolute element for the vector @(@ v @)@ and @(@ F_\mu (x, y, s) @)@ is defined below.

5.8.8.s: KKT Conditions
Given a vector @(@ v \in \B{R}^m @)@ we define @(@ D(v) \in \B{R}^{m \times m} @)@ as the corresponding diagonal matrix. We also define @(@ 1_m \in \B{R}^m @)@ as the vector of ones. We define @(@ F_\mu : \B{R}^{n + m + m } \rightarrow \B{R}^{n + m + m} @)@ by @[@ F_\mu ( x , y , s ) = \left( \begin{array}{c} g + G x + y^T C \\ C x + c + s \\ D(s) D(y) 1_m - \mu 1_m \end{array} \right) @]@ The KKT conditions for a solution of this problem are @(@ 0 \leq y @)@, @(@ 0 \leq s @)@, and @(@ F_0 (x , y, s) = 0 @)@.
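
Note that for @(@ \mu > 0 @)@, the condition @(@ F_\mu (x, y, s) = 0 @)@ replaces exact complementarity by the perturbed condition @[@ s_i y_i = \mu \W{,} i = 0 , \ldots , m-1 @]@ This defines the central path; the iteration below applies Newton's method to @(@ F_\mu @)@ and decreases @(@ \mu @)@ as the residual @(@ F_0 @)@ becomes small, so the iterates approach a point satisfying the KKT conditions.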

5.8.8.t: Newton Step
The derivative of @(@ F_\mu @)@ is given by @[@ F_\mu^{(1)} (x, y, s) = \left( \begin{array}{ccc} G & C^T & 0_{n,m} \\ C & 0 & I_{m,m} \\ 0_{m,m} & D(s) & D(y) \end{array} \right) @]@ The Newton step solves the following equation for @(@ \Delta x @)@, @(@ \Delta y @)@, and @(@ \Delta s @)@ @[@ F_\mu^{(1)} (x, y, s) \left( \begin{array}{c} \Delta x \\ \Delta y \\ \Delta s \end{array} \right) = - F_\mu (x, y, s) @]@ To simplify notation, we define @[@ \begin{array}{rcl} r_x (x, y, s) & = & g + G x + y^T C \\ r_y (x, y, s) & = & C x + c + s \\ r_s (x, y, s) & = & D(s) D(y) 1_m - \mu 1_m \end{array} @]@ It follows that @[@ \left( \begin{array}{ccc} G & C^T & 0_{n,m} \\ C & 0 & I_{m,m} \\ 0_{m,m} & D(s) & D(y) \end{array} \right) \left( \begin{array}{c} \Delta x \\ \Delta y \\ \Delta s \end{array} \right) = - \left( \begin{array}{c} r_x (x, y, s) \\ r_y (x, y, s) \\ r_s (x, y, s) \end{array} \right) @]@

5.8.8.t.a: Elementary Row Reduction
Subtracting @(@ D(y) @)@ times the second row from the third row we obtain: @[@ \left( \begin{array}{ccc} G & C^T & 0_{n,m} \\ C & 0 & I_{m,m} \\ - D(y) C & D(s) & 0_{m,m} \end{array} \right) \left( \begin{array}{c} \Delta x \\ \Delta y \\ \Delta s \end{array} \right) = - \left( \begin{array}{c} r_x (x, y, s) \\ r_y (x, y, s) \\ r_s (x, y, s) - D(y) r_y(x, y, s) \end{array} \right) @]@ Multiplying the third row by @(@ D(s)^{-1} @)@ we obtain: @[@ \left( \begin{array}{ccc} G & C^T & 0_{n,m} \\ C & 0 & I_{m,m} \\ - D(y/s) C & I_{m,m} & 0_{m,m} \end{array} \right) \left( \begin{array}{c} \Delta x \\ \Delta y \\ \Delta s \end{array} \right) = - \left( \begin{array}{c} r_x (x, y, s) \\ r_y (x, y, s) \\ D(s)^{-1} r_s (x, y, s) - D(y/s) r_y(x, y, s) \end{array} \right) @]@ where @(@ y/s @)@ is the vector in @(@ \B{R}^m @)@ defined by @(@ (y/s)_i = y_i / s_i @)@. Subtracting @(@ C^T @)@ times the third row from the first row we obtain: @[@ \left( \begin{array}{ccc} G + C^T D(y/s) C & 0_{n,m} & 0_{n,m} \\ C & 0 & I_{m,m} \\ - D(y/s) C & I_{m,m} & 0_{m,m} \end{array} \right) \left( \begin{array}{c} \Delta x \\ \Delta y \\ \Delta s \end{array} \right) = - \left( \begin{array}{c} r_x (x, y, s) - C^T D(s)^{-1} \left[ r_s (x, y, s) - D(y) r_y(x, y, s) \right] \\ r_y (x, y, s) \\ D(s)^{-1} r_s (x, y, s) - D(y/s) r_y(x, y, s) \end{array} \right) @]@

5.8.8.u: Solution
It follows that @(@ G + C^T D(y/s) C @)@ is invertible and we can determine @(@ \Delta x @)@ by solving the equation @[@ [ G + C^T D(y/s) C ] \Delta x = C^T D(s)^{-1} \left[ r_s (x, y, s) - D(y) r_y(x, y, s) \right] - r_x (x, y, s) @]@ Given @(@ \Delta x @)@ we have that @[@ \Delta s = - r_y (x, y, s) - C \Delta x @]@ @[@ \Delta y = D(s)^{-1}[ D(y) r_y(x, y, s) - r_s (x, y, s) + D(y) C \Delta x ] @]@
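
These formulas correspond to the following steps in the source code below (the comments restate the algebra using the source variable names):

     // tmp_m   = D(s)^{-1} * [ r_s - D(y) * r_y ]
     // right_x = C^T * tmp_m - r_x
     // Left_x  = G + C^T * D(y/s) * C
     // LuSolve solves Left_x * delta_x = right_x for delta_x
     // delta_y = D(s)^{-1} * [ D(y) * r_y - r_s + D(y) * C * delta_x ]
     // delta_s = - r_y - C * delta_x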

5.8.8.v: Example
The file 5.8.8.1: qp_interior.cpp contains an example and test of qp_interior. It returns true if the test passes and false otherwise.
Input File: example/abs_normal/qp_interior.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.8.1: abs_normal qp_interior: Example and Test

5.8.8.1.a: Problem
Our original problem is @[@ \R{minimize} \; | u - 1| \; \R{w.r.t} \; u \in \B{R} @]@ We reformulate this as the following problem @[@ \begin{array}{rlr} \R{minimize} & v & \R{w.r.t} \; (u,v) \in \B{R}^2 \\ \R{subject \; to} & u - 1 \leq v \\ & 1 - u \leq v \end{array} @]@ This is equivalent to @[@ \begin{array}{rlr} \R{minimize} & (0, 1) \cdot (u, v)^T & \R{w.r.t} \; (u,v) \in \B{R}^2 \\ \R{subject \; to} & \left( \begin{array}{cc} 1 & -1 \\ -1 & -1 \end{array} \right) \left( \begin{array}{c} u \\ v \end{array} \right) + \left( \begin{array}{c} -1 \\ 1 \end{array} \right) \leq 0 \end{array} @]@ which is in the form expected by 5.8.8: qp_interior .
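
As a check, at the solution @(@ (u, v) = (1, 0) @)@ both inequality constraints hold with equality: @[@ \left( \begin{array}{cc} 1 & -1 \\ -1 & -1 \end{array} \right) \left( \begin{array}{c} 1 \\ 0 \end{array} \right) + \left( \begin{array}{c} -1 \\ 1 \end{array} \right) = \left( \begin{array}{c} 0 \\ 0 \end{array} \right) @]@ and the optimal objective is @(@ v = | u - 1 | = 0 @)@; the test below checks xout against this solution.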

5.8.8.1.b: Source

# include <limits>
# include <cppad/utility/vector.hpp>
# include "qp_interior.hpp"

bool qp_interior(void)
{     bool ok = true;
     typedef CppAD::vector<double> vector;
     //
     size_t n = 2;
     size_t m = 2;
     vector C(m*n), c(m), G(n*n), g(n), xin(n), xout(n), yout(m), sout(m);
     C[ 0 * n + 0 ] =  1.0; // C(0,0)
     C[ 0 * n + 1 ] = -1.0; // C(0,1)
     C[ 1 * n + 0 ] = -1.0; // C(1,0)
     C[ 1 * n + 1 ] = -1.0; // C(1,1)
     //
     c[0]           = -1.0;
     c[1]           =  1.0;
     //
     g[0]           =  0.0;
     g[1]           =  1.0;
     //
     // G = 0
     for(size_t i = 0; i < n * n; i++)
          G[i] = 0.0;
     //
     // If (u, v) = (0, 2), C * (u, v)^T + c = (-2, -2)^T + (-1, 1)^T = (-3, -1)^T < 0
     // Hence (0, 2) is feasible.
     xin[0] = 0.0;
     xin[1] = 2.0;
     //
     double epsilon = 99.0 * std::numeric_limits<double>::epsilon();
     size_t maxitr  = 10;
     size_t level   = 0;
     //
     ok &= CppAD::qp_interior(
          level, c, C, g, G, epsilon, maxitr, xin, xout, yout, sout
     );
     //
     // check optimal value for u
     ok &= std::fabs( xout[0] - 1.0 ) < epsilon;
     // check optimal value for v
     ok &= std::fabs( xout[1] ) < epsilon;
     //
     return ok;
}

Input File: example/abs_normal/qp_interior.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.8.2: qp_interior Source Code
namespace {
     // ------------------------------------------------------------------------
     template <class Vector>
     double qp_interior_max_abs(const Vector& v)
     {     double max_abs = 0.0;
          for(size_t j = 0; j < size_t(v.size()); j++)
               max_abs = std::max( max_abs, std::fabs(v[j]) );
          return max_abs;
     }
     // ------------------------------------------------------------------------
     template <class Vector>
     void qp_interior_split(
          const Vector& v, Vector& v_x, Vector& v_y, Vector& v_s
     )
     {     size_t n = v_x.size();
          size_t m = v_y.size();
          CPPAD_ASSERT_UNKNOWN( size_t(v_s.size()) == m );
          CPPAD_ASSERT_UNKNOWN( size_t(v.size()) == n + m + m );
          for(size_t i = 0; i < n; i++)
               v_x[i] = v[i];
          for(size_t i = 0; i < m; i++)
          {     v_y[i] = v[n + i];
               v_s[i] = v[n + m + i];
          }
          return;
     }
     // ------------------------------------------------------------------------
     template <class Vector>
     void qp_interior_join(
          Vector& v, const Vector& v_x, const Vector& v_y, const Vector& v_s
     )
     {     size_t n = v_x.size();
          size_t m = v_y.size();
          CPPAD_ASSERT_UNKNOWN( size_t(v_s.size()) == m );
          CPPAD_ASSERT_UNKNOWN( size_t(v.size()) == n + m + m );
          for(size_t i = 0; i < n; i++)
               v[i] = v_x[i];
          for(size_t i = 0; i < m; i++)
               v[n + i] = v_y[i];
          for(size_t i = 0; i < m; i++)
               v[n + m + i] = v_s[i];
          return;
     }
     // ------------------------------------------------------------------------
     template <class Vector>
     Vector qp_interior_F_0(
          const Vector& c       ,
          const Vector& C       ,
          const Vector& g       ,
          const Vector& G       ,
          const Vector& x       ,
          const Vector& y       ,
          const Vector& s       )
     {     size_t n = g.size();
          size_t m = c.size();
          // compute r_x(x, y, s) = g + G x + y^T C
          Vector r_x(n);
          for(size_t j = 0; j < n; j++)
          {     r_x[j] = g[j];
               for(size_t i = 0; i < n; i++)
                    r_x[j] += G[j * n + i] * x[i];
               for(size_t i = 0; i < m; i++)
                    r_x[j] += y[i] * C[i * n + j];
          }
          // compute r_y(x, y, s) = C x + c + s
          Vector r_y(m);
          for(size_t i = 0; i < m; i++)
          {     r_y[i] = c[i] + s[i];
               for(size_t j = 0; j < n; j++)
                    r_y[i] += C[i * n + j] * x[j];
          }
          // compute r_s(x, y, s) = D(s) * D(y) * 1_m - mu * 1_m
          // where mu = 0
          Vector r_s(m);
          for(size_t i = 0; i < m; i++)
               r_s[i] = s[i] * y[i];
          //
          // combine into one vector
          Vector F_0(n + m + m);
          qp_interior_join(F_0, r_x, r_y, r_s);
          //
          return F_0;
     }
}
// BEGIN C++
namespace CppAD { // BEGIN_CPPAD_NAMESPACE

// BEGIN PROTOTYPE
template <class Vector>
bool qp_interior(
     size_t        level   ,
     const Vector& c       ,
     const Vector& C       ,
     const Vector& g       ,
     const Vector& G       ,
     double        epsilon ,
     size_t        maxitr  ,
     const Vector& xin     ,
     Vector&       xout    ,
     Vector&       yout    ,
     Vector&       sout    )
// END PROTOTYPE
{     size_t m = c.size();
     size_t n = g.size();
     CPPAD_ASSERT_KNOWN(
          level <= 1,
          "qp_interior: level is greater than one"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(C.size()) == m * n,
          "qp_interior: size of C is not m * n"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(G.size()) == n * n,
          "qp_interior: size of G is not n * n"
     );
     if( level > 0 )
     {     std::cout << "start qp_interior\n";
          CppAD::abs_print_mat("c", m, 1, c);
          CppAD::abs_print_mat("C", m, n, C);
          CppAD::abs_print_mat("g", n, 1, g);
          CppAD::abs_print_mat("G", n, n, G);
          CppAD::abs_print_mat("xin", n, 1, xin);
     }
     //
     // compute the maximum absolute element of the problem vectors and matrices
     double max_element = 0.0;
     for(size_t i = 0; i < size_t(C.size()); i++)
          max_element = std::max(max_element , std::fabs(C[i]) );
     for(size_t i = 0; i < size_t(c.size()); i++)
          max_element = std::max(max_element , std::fabs(c[i]) );
     for(size_t i = 0; i < size_t(G.size()); i++)
          max_element = std::max(max_element , std::fabs(G[i]) );
     for(size_t i = 0; i < size_t(g.size()); i++)
          max_element = std::max(max_element , std::fabs(g[i]) );
     //
     double mu = 1e-1 * max_element;
     //
     if( max_element == 0.0 )
     {     if( level > 0 )
               std::cout << "end qp_interior: line_search failed\n";
          return false;
     }
     //
     // initialize x, y, s
     xout = xin;
     for(size_t i = 0; i < m; i++)
     {     double sum = c[i];
          for(size_t j = 0; j < n; j++)
               sum += C[ i * n + j ] * xout[j];
          if( sum > 0.0 )
          {     if( level > 0 ) std::cout <<
                    "end qp_interior: xin is not in interior of feasible set\n";
               return false;
          }
          //
          sout[i] = std::sqrt(mu);
          yout[i] = std::sqrt(mu);
     }
     // ----------------------------------------------------------------------
     // initialize F_0(xout, yout, sout)
     Vector F_0       = qp_interior_F_0(c, C, g, G, xout, yout, sout);
     double F_max_abs = qp_interior_max_abs( F_0 );
     for(size_t itr = 0; itr <= maxitr; itr++)
     {
          // check for convergence
          if( F_max_abs <= epsilon )
          {     if( level > 0 )
                    std::cout << "end qp_interior: ok = true\n";
               return true;
          }
          if( itr == maxitr )
          {     if( level > 0 ) std::cout <<
                    "end qp_interior: max # iterations without convergence\n";
               return false;
          }
          //
          // compute F_mu(xout, yout, sout)
          Vector F_mu  = F_0;
          for(size_t i = 0; i < m; i++)
               F_mu[n + m + i] -= mu;
          //
          // r_x, r_y, r_s (xout, yout, sout)
          Vector r_x(n), r_y(m), r_s(m);
          qp_interior_split(F_mu, r_x, r_y, r_s);
          //
          // tmp_m = D(s)^{-1} * [ r_s - D(y) r_y ]
          Vector tmp_m(m);
          for(size_t i = 0; i < m; i++)
               tmp_m[i]  = ( r_s[i] - yout[i] * r_y[i] ) / sout[i];
          //
          // right_x = C^T * D(s)^{-1} * [ r_s - D(y) r_y ] - r_x
          Vector right_x(n);
          for(size_t j = 0; j < n; j++)
          {     right_x[j] = 0.0;
               for(size_t i = 0; i < m; i++)
                    right_x[j] += C[ i * n + j ] * tmp_m[i];
               right_x[j] -= r_x[j];
          }
          //
          // Left_x = G + C^T * D(y / s) * C
          Vector Left_x = G;
          for(size_t i = 0; i < n; i++)
          {     for(size_t j = 0; j < n; j++)
               {     for(size_t k = 0; k < m; k++)
                    {     double y_s = yout[k] / sout[k];
                         Left_x[ i * n + j] += C[k * n + j] * y_s * C[k * n + i];
                    }
               }
          }
          // delta_x
          Vector delta_x(n);
          double logdet;
          LuSolve(n, 1, Left_x, right_x, delta_x, logdet);
          //
          // C_delta_x = C * delta_x
          Vector C_delta_x(m);
          for(size_t i = 0; i < m; i++)
          {     C_delta_x[i] = 0.0;
               for(size_t j = 0; j < n; j++)
                    C_delta_x[i] += C[ i * n + j ] * delta_x[j];
          }
          //
          // delta_y = D(s)^-1 * [D(y) * r_y - r_s + D(y) * C * delta_x]
          Vector delta_y(m);
          for(size_t i = 0; i < m; i++)
          {     delta_y[i] = yout[i] * r_y[i] - r_s[i] + yout[i] * C_delta_x[i];
               delta_y[i] /= sout[i];
          }
          // delta_s = - r_y - C * delta_x
          Vector delta_s(m);
          for(size_t i = 0; i < m; i++)
               delta_s[i] = - r_y[i] - C_delta_x[i];
          //
          // delta_xys
          Vector delta_xys(n + m + m);
          qp_interior_join(delta_xys, delta_x, delta_y, delta_s);
          // -------------------------------------------------------------------
          //
          // The initial derivative in direction  Delta_xys is equal to
          // the negative of the norm square of F_mu
          //
          // line search parameter lam
          Vector x(n), y(m), s(m);
          double  lam = 2.0;
          bool lam_ok = false;
          while( ! lam_ok && lam > 1e-5 )
          {     lam = lam / 2.0;
               for(size_t j = 0; j < n; j++)
                    x[j] = xout[j] + lam * delta_xys[j];
               lam_ok = true;
               for(size_t i = 0; i < m; i++)
               {     y[i] = yout[i] + lam * delta_xys[n + i];
                    s[i] = sout[i] + lam * delta_xys[n + m + i];
                    lam_ok &= s[i] > 0.0 && y[i] > 0.0;
               }
               if( lam_ok )
               {     Vector F_mu_tmp = qp_interior_F_0(c, C, g, G, x, y, s);
                    for(size_t i = 0; i < m; i++)
                         F_mu_tmp[n + m + i] -= mu;
                    // avoid cancellation roundoff in difference of norm squared
                    // |v + dv|^2         = v^T * v + 2 * v^T * dv + dv^T * dv
                    // |v + dv|^2 - |v|^2 =           2 * v^T * dv + dv^T * dv
                    double F_norm_sq    = 0.0;
                    double diff_norm_sq = 0.0;
                    for(size_t i = 0; i < n + m + m; i++)
                    {     double dv     = F_mu_tmp[i] - F_mu[i];
                         F_norm_sq    += F_mu[i] * F_mu[i];
                         diff_norm_sq += 2.0 * F_mu[i] * dv + dv * dv;
                    }
                    lam_ok &= diff_norm_sq < - lam * F_norm_sq / 4.0;
               }
          }
          if( ! lam_ok )
          {     if( level > 0 )
                    std::cout << "end qp_interior: line search failed\n";
               return false;
          }
          //
          // update current solution
          xout = x;
          yout = y;
          sout = s;
          //
          // update F_0
          F_0       = qp_interior_F_0(c, C, g, G, xout, yout, sout);
          F_max_abs = qp_interior_max_abs( F_0 );
          //
          // update mu
          if( F_max_abs <= 1e1 *  mu )
               mu = mu / 1e2;
          if( level > 0 )
          {     std::cout << "itr = " << itr
                    << ", mu = " << mu
                    << ", lam = " << lam
                    << ", F_max_abs = " << F_max_abs << "\n";
               abs_print_mat("xout", 1, n, xout);
          }
     }
     if( level > 0 )
          std::cout << "end qp_interior: progam error\n";
     return false;
}
} // END_CPPAD_NAMESPACE

Input File: example/abs_normal/qp_interior.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.9: abs_normal: Solve a Quadratic Program With Box Constraints

5.8.9.a: Syntax
ok = qp_box(
     level, a, b, c, C, g, G, epsilon, maxitr, xin, xout
)


5.8.9.b: Prototype

template <class Vector>
bool qp_box(
     size_t        level   ,
     const Vector& a       ,
     const Vector& b       ,
     const Vector& c       ,
     const Vector& C       ,
     const Vector& g       ,
     const Vector& G       ,
     double        epsilon ,
     size_t        maxitr  ,
     const Vector& xin     ,
     Vector&       xout    )

5.8.9.c: Source
The following is a link to the source code for this example: 5.8.9.2: qp_box.hpp .

5.8.9.d: Purpose
This routine could be used to create a version of 5.8.6: abs_min_linear that solves quadratic programs (instead of linear programs).

5.8.9.e: Problem
We are given @(@ a \in \B{R}^n @)@, @(@ b \in \B{R}^n @)@, @(@ c \in \B{R}^m @)@, @(@ C \in \B{R}^{m \times n} @)@, @(@ g \in \B{R}^n @)@, @(@ G \in \B{R}^{n \times n} @)@, where @(@ G @)@ is positive semi-definite. This routine solves the problem @[@ \begin{array}{rl} \R{minimize} & \frac{1}{2} x^T G x + g^T x \; \R{w.r.t} \; x \in \B{R}^n \\ \R{subject \; to} & C x + c \leq 0 \; \R{and} \; a \leq x \leq b \end{array} @]@ The matrix @(@ G + C^T C @)@ must be positive definite on the components of the vector @(@ x @)@ where the lower limit is minus infinity and the upper limit is plus infinity; see a and b below.
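
The source code below reduces this problem to the 5.8.8: qp_interior form by stacking the finite box constraints onto the inequality constraints; i.e., it applies qp_interior to @[@ \left( \begin{array}{c} C \\ I \\ - I \end{array} \right) x + \left( \begin{array}{c} c \\ - b \\ a \end{array} \right) \leq 0 @]@ where only the rows corresponding to finite limits @(@ a_j @)@, @(@ b_j @)@ are included.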

5.8.9.f: Vector
The type Vector is a simple vector with elements of type double.

5.8.9.g: level
This value must be less than or equal to two. If level == 0 , no tracing is printed. If level >= 1 , a trace of the qp_box operations is printed. If level == 2 , a trace of the 5.8.8: qp_interior sub-problem is printed.

5.8.9.h: a
This is the vector of lower limits for @(@ x @)@ in the problem. If a[j] is minus infinity, there is no lower limit for @(@ x_j @)@.

5.8.9.i: b
This is the vector of upper limits for @(@ x @)@ in the problem. If b[j] is plus infinity, there is no upper limit for @(@ x_j @)@.

5.8.9.j: c
This is the value of the inequality constraint function at @(@ x = 0 @)@.

5.8.9.k: C
This is a 12.4.i: row-major representation of the inequality constraint matrix @(@ C @)@.

5.8.9.l: g
This is the gradient of the objective function.

5.8.9.m: G
This is a row-major representation of the Hessian of the objective function. For each @(@ j = 0 , \ldots , n-1 @)@, it must hold that @(@ - \infty < a_j @)@, or @(@ b_j < + \infty @)@, or @(@ G_{j,j} > 0.0 @)@.

5.8.9.n: epsilon
This argument is the convergence criterion; see the 5.8.9.s: KKT conditions below. It must be greater than zero.

5.8.9.o: maxitr
This is the maximum number of 5.8.8: qp_interior iterations to try before giving up on convergence.

5.8.9.p: xin
This argument has size n and is the initial point for the algorithm. It must strictly satisfy the constraints; i.e.,
     
a < xin,  xin < b,  C * xin + c < 0

5.8.9.q: xout
This argument has size n and the input value of its elements does not matter. Upon return it contains the primal variables @(@ x @)@ corresponding to the problem solution.

5.8.9.r: ok
If the return value ok is true, convergence is obtained; i.e., @[@ | F ( x , y_a, s_a, y_b, s_b, y_c, s_c ) |_\infty < \varepsilon @]@ where @(@ |v|_\infty @)@ is the infinity norm of the vector @(@ v @)@, @(@ \varepsilon @)@ is epsilon , @(@ x @)@ is equal to xout , @(@ y_a, s_a \in \B{R}_+^n @)@, @(@ y_b, s_b \in \B{R}_+^n @)@ and @(@ y_c, s_c \in \B{R}_+^m @)@.

5.8.9.s: KKT Conditions
Given a vector @(@ v \in \B{R}^m @)@ we define @(@ D(v) \in \B{R}^{m \times m} @)@ as the corresponding diagonal matrix. We also define @(@ 1_m \in \B{R}^m @)@ as the vector of ones. We define @[@ F ( x , y_a, s_a, y_b, s_b, y_c, s_c ) = \left( \begin{array}{c} g + G x - y_a + y_b + y_c^T C \\ a + s_a - x \\ x + s_b - b \\ C x + c + s_c \\ D(s_a) D(y_a) 1_m \\ D(s_b) D(y_b) 1_m \\ D(s_c) D(y_c) 1_m \end{array} \right) @]@ where @(@ x \in \B{R}^n @)@, @(@ y_a, s_a \in \B{R}_+^n @)@, @(@ y_b, s_b \in \B{R}_+^n @)@ and @(@ y_c, s_c \in \B{R}_+^m @)@. The KKT conditions for a solution of this problem are @[@ F ( x , y_a, s_a, y_b, s_b, y_c, s_c ) = 0 @]@

5.8.9.t: Example
The file 5.8.9.1: qp_box.cpp contains an example and test of qp_box. It returns true if the test passes and false otherwise.
Input File: example/abs_normal/qp_box.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.9.1: abs_normal qp_box: Example and Test

5.8.9.1.a: Problem
Our original problem is @[@ \begin{array}{rl} \R{minimize} & x_0 - x_1 \; \R{w.r.t} \; x \in \B{R}^2 \\ \R{subject \; to} & -2 \leq x_0 \leq +2 \; \R{and} \; -2 \leq x_1 \leq +2 \end{array} @]@

5.8.9.1.b: Source

# include <limits>
# include <cppad/utility/vector.hpp>
# include "qp_box.hpp"

bool qp_box(void)
{     bool ok = true;
     typedef CppAD::vector<double> vector;
     //
     size_t n = 2;
     size_t m = 0;
     vector a(n), b(n), c(m), C(m), g(n), G(n*n), xin(n), xout(n);
     a[0] = -2.0;
     a[1] = -2.0;
     b[0] = +2.0;
     b[1] = +2.0;
     g[0] = +1.0;
     g[1] = -1.0;
     for(size_t i = 0; i < n * n; i++)
          G[i] = 0.0;
     //
     // (0, 0) is feasible.
     xin[0] = 0.0;
     xin[1] = 0.0;
     //
     size_t level   = 0;
     double epsilon = 99.0 * std::numeric_limits<double>::epsilon();
     size_t maxitr  = 20;
     //
     ok &= CppAD::qp_box(
          level, a, b, c, C, g, G, epsilon, maxitr, xin, xout
     );
     //
     // check optimal value for x
     ok &= std::fabs( xout[0] + 2.0 ) < epsilon;
     ok &= std::fabs( xout[1] - 2.0 ) < epsilon;
     //
     return ok;
}

Input File: example/abs_normal/qp_box.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.9.2: qp_box Source Code
namespace CppAD { // BEGIN_CPPAD_NAMESPACE

// BEGIN PROTOTYPE
template <class Vector>
bool qp_box(
     size_t        level   ,
     const Vector& a       ,
     const Vector& b       ,
     const Vector& c       ,
     const Vector& C       ,
     const Vector& g       ,
     const Vector& G       ,
     double        epsilon ,
     size_t        maxitr  ,
     const Vector& xin     ,
     Vector&       xout    )
// END PROTOTYPE
{     double inf = std::numeric_limits<double>::infinity();
     //
     size_t n = a.size();
     size_t m = c.size();
     //
     CPPAD_ASSERT_KNOWN(level <= 2, "qp_box: level is greater than 2");
     CPPAD_ASSERT_KNOWN(
          size_t(b.size()) == n, "qp_box: size of b is not n"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(C.size()) == m * n, "qp_box: size of C is not m * n"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(g.size()) == n, "qp_box: size of g is not n"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(G.size()) == n * n, "qp_box: size of G is not n * n"
     );
     if( level > 0 )
     {     std::cout << "start qp_box\n";
          CppAD::abs_print_mat("a", n, 1, a);
          CppAD::abs_print_mat("b", n, 1, b);
          CppAD::abs_print_mat("c", m, 1, c);
          CppAD::abs_print_mat("C", m, n, C);
          CppAD::abs_print_mat("g", 1, n, g);
          CppAD::abs_print_mat("G", n, n, G);
          CppAD::abs_print_mat("xin", n, 1, xin);
     }
     //
     // count number of lower and upper limits
     size_t n_limit = 0;
     for(size_t j = 0; j < n; j++)
     {     CPPAD_ASSERT_KNOWN(G[j * n + j] >= 0.0, "qp_box: G_{j,j} < 0.0");
          if( -inf < a[j] )
               ++n_limit;
          if( b[j] < inf )
               ++n_limit;
     }
     //
     // C_int and c_int define the extended constraints
     Vector C_int((m + n_limit) * n ), c_int(m + n_limit);
     for(size_t i = 0; i < size_t(C_int.size()); i++)
          C_int[i] = 0.0;
     //
     // put C * x + c <= 0 in C_int, c_int
     for(size_t i = 0; i < m; i++)
     {     c_int[i] = c[i];
          for(size_t j = 0; j < n; j++)
               C_int[i * n + j] = C[i * n + j];
     }
     //
     // put I * x - b <= 0 in C_int, c_int
     size_t i_limit = 0;
     for(size_t j = 0; j < n; j++) if( b[j] < inf )
     {     c_int[m + i_limit]            = - b[j];
          C_int[(m + i_limit) * n + j]  = 1.0;
          ++i_limit;
     }
     //
     // put a - I * x <= 0 in C_int, c_int
     for(size_t j = 0; j < n; j++) if( -inf < a[j] )
     {     c_int[m + i_limit]           = a[j];
          C_int[(m + i_limit) * n + j] = -1.0;
          ++i_limit;
     }
     Vector yout(m + n_limit), sout(m + n_limit);
     size_t level_int = 0;
     if( level == 2 )
          level_int = 1;
     bool ok = qp_interior( level_int,
          c_int, C_int, g, G, epsilon, maxitr, xin, xout, yout, sout
     );
     if( level > 0 )
     {     if( level < 2 )
               CppAD::abs_print_mat("xout", n, 1, xout);
          if( ok )
               std::cout << "end q_box: ok = true\n";
          else
               std::cout << "end q_box: ok = false\n";
     }
     return ok;
}

} // END_CPPAD_NAMESPACE

Input File: example/abs_normal/qp_box.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.10: abs_normal: Minimize a Quadratic Abs-normal Approximation

5.8.10.a: Syntax
ok = abs_min_quad(
     level, n, m, s,
     g_hat, g_jac, hessian, bound, epsilon, maxitr, delta_x
)


5.8.10.b: Prototype

template <class DblVector, class SizeVector>
bool abs_min_quad(
     size_t            level   ,
     size_t            n       ,
     size_t            m       ,
     size_t            s       ,
     const DblVector&  g_hat   ,
     const DblVector&  g_jac   ,
     const DblVector&  hessian ,
     const DblVector&  bound   ,
     const DblVector&  epsilon ,
     const SizeVector& maxitr  ,
     DblVector&        delta_x )

5.8.10.c: Source
The following is a link to the source code for this example: 5.8.10.2: abs_min_quad.hpp .

5.8.10.d: Purpose
We are given a point @(@ \hat{x} \in \B{R}^n @)@ and use the notation @(@ \tilde{f} (x) @)@ for the abs-normal 5.8.1.f.b: approximation for f(x) near @(@ \hat{x} @)@. We are also given a vector @(@ b \in \B{R}_+^n @)@ and a positive definite matrix @(@ H \in \B{R}^{n \times n} @)@. This routine solves the problem @[@ \begin{array}{lll} \R{minimize} & \Delta x^T H \Delta x / 2 + \tilde{f}( \hat{x} + \Delta x ) & \R{w.r.t} \; \Delta x \in \B{R}^n \\ \R{subject \; to} & | \Delta x_j | \leq b_j & j = 0 , \ldots , n-1 \end{array} @]@

5.8.10.e: DblVector
is a 8.9: SimpleVector class with elements of type double.

5.8.10.f: SizeVector
is a 8.9: SimpleVector class with elements of type size_t.

5.8.10.g: f
We use the notation f for the original function; see 5.8.1.b: f .

5.8.10.h: level
This value must be less than or equal to 3. If level == 0 , no tracing of the optimization is printed. If level >= 1 , a trace of each iteration of abs_min_quad is printed. If level >= 2 , a trace of the 5.8.9: qp_box sub-problem is printed. If level >= 3 , a trace of the 5.8.8: qp_interior sub-problem is printed.

5.8.10.i: n
This is the dimension of the domain space for f ; see 5.8.1.b.a: n .

5.8.10.j: m
This is the dimension of the range space for f ; see 5.8.1.b.b: m . This must be one so that @(@ f @)@ is an objective function.

5.8.10.k: s
This is the number of absolute value terms in f ; see 5.8.1.b.c: s .

5.8.10.l: g
We use the notation g for the abs-normal representation of f ; see 5.8.1.d: g .

5.8.10.m: g_hat
This vector has size m + s and is the value of g(x, u) at @(@ x = \hat{x} @)@ and @(@ u = a( \hat{x} ) @)@.

5.8.10.n: g_jac
This vector has size (m + s) * (n + s) and is the Jacobian of @(@ g(x, u) @)@ at @(@ x = \hat{x} @)@ and @(@ u = a( \hat{x} ) @)@.

5.8.10.o: hessian
This vector has size n * n . It is a 12.4.i: row-major representation of the matrix @(@ H \in \B{R}^{n \times n} @)@.

5.8.10.p: bound
This vector has size n and is the vector @(@ b \in \B{R}^n @)@. The trust region is defined as the set of @(@ \Delta x @)@ such that @[@ | \Delta x_j | \leq b_j @]@ for @(@ j = 0 , \ldots , n-1 @)@.

5.8.10.q: epsilon
This is a vector with size 2. The value epsilon[0] is the convergence criterion in terms of the infinity norm of the change in delta_x between iterations. The value epsilon[1] is the convergence criterion in terms of the derivative of the objective; i.e., @[@ \Delta x^T H \Delta x / 2 + \tilde{f}( \hat{x} + \Delta x ) @]@

5.8.10.r: maxitr
This is a vector with size 2. The value maxitr[0] is the maximum number of abs_min_quad iterations to try before giving up on convergence. The value maxitr[1] is the maximum number of iterations in the 5.8.8.m: qp_interior sub-problems.

5.8.10.s: delta_x
This vector @(@ \Delta x @)@ has size n . The input value of its elements does not matter. Upon return, it is the approximate minimizer of the objective over the trust region.

5.8.10.t: Method

5.8.10.t.a: sigma
We use the notation @[@ \sigma (x) = \R{sign} ( z[ x , a(x) ] ) @]@ where 5.8.1.c.b: a(x) and 5.8.1.d.a: z(x, u) are as defined in the abs-normal representation of @(@ f(x) @)@.

5.8.10.t.b: Cutting Planes
At each iteration, we are given affine functions @(@ p_k (x) @)@ such that @(@ p_k ( x_k ) = \tilde{f}( x_k ) @)@ and @(@ p_k^{(1)} ( x_k ) @)@ is the derivative @(@ \tilde{f}^{(1)} ( x_k ) @)@ corresponding to @(@ \sigma ( x_k ) @)@.
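
In other words, each cutting plane is the first order approximation of @(@ \tilde{f} @)@ at the corresponding iterate; i.e., @[@ p_k (x) = \tilde{f}( x_k ) + \tilde{f}^{(1)} ( x_k ) ( x - x_k ) @]@ where the derivative is the one corresponding to the sign vector @(@ \sigma ( x_k ) @)@.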

5.8.10.t.c: Iteration
At iteration @(@ k @)@, we solve the problem @[@ \begin{array}{lll} \R{minimize} & \Delta x^T H \Delta x / 2 + \max \{ p_k ( \hat{x} + \Delta x) \W{:} k = 0 , \ldots , K-1 \} & \R{w.r.t} \; \Delta x \\ \R{subject \; to} & - b \leq \Delta x \leq + b \end{array} @]@ The solution is the new point @(@ x_K @)@ at which the new affine approximation @(@ p_K (x) @)@ is constructed. This process is iterated until the difference @(@ x_K - x_{K-1} @)@ is small enough.
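
The source code below solves this sub-problem in epigraph form; i.e., introducing an auxiliary scalar @(@ w @)@, it solves @[@ \begin{array}{lll} \R{minimize} & \Delta x^T H \Delta x / 2 + w & \R{w.r.t} \; ( \Delta x , w ) \\ \R{subject \; to} & p_k ( \hat{x} + \Delta x ) \leq w \W{,} k = 0 , \ldots , K-1 \\ & - b \leq \Delta x \leq + b \end{array} @]@ which is a quadratic program with box constraints; see 5.8.9: qp_box .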

5.8.10.u: Example
The file 5.8.10.1: abs_min_quad.cpp contains an example and test of abs_min_quad. It returns true if the test passes and false otherwise.
Input File: example/abs_normal/abs_min_quad.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.10.1: abs_min_quad: Example and Test

5.8.10.1.a: Purpose
We minimize the function @(@ f : \B{R}^2 \rightarrow \B{R} @)@ defined by @[@ f( x_0, x_1 ) = ( x_0^2 + x_1^2 ) / 2 + | x_0 - 5 | + | x_1 + 5 | @]@ For this case, the 5.8.10: abs_min_quad objective should be equal to the function itself. In addition, the function is convex and 5.8.10: abs_min_quad should find its global minimizer. The minimizer of this function is @(@ x_0 = 1 @)@, @(@ x_1 = -1 @)@.
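
The minimizer can be checked coordinate by coordinate because the problem is separable. For example, for @(@ x_0 < 5 @)@ @[@ \frac{ {\rm d} }{ {\rm d} x_0 } \left[ x_0^2 / 2 + | x_0 - 5 | \right] = x_0 - 1 @]@ which vanishes at @(@ x_0 = 1 @)@; the corresponding calculation for @(@ x_1 > -5 @)@ yields @(@ x_1 = -1 @)@.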

5.8.10.1.b: Source

# include <cppad/cppad.hpp>
# include "abs_min_quad.hpp"

namespace {
     CPPAD_TESTVECTOR(double) join(
          const CPPAD_TESTVECTOR(double)& x ,
          const CPPAD_TESTVECTOR(double)& u )
     {     size_t n = x.size();
          size_t s = u.size();
          CPPAD_TESTVECTOR(double) xu(n + s);
          for(size_t j = 0; j < n; j++)
               xu[j] = x[j];
          for(size_t j = 0; j < s; j++)
               xu[n + j] = u[j];
          return xu;
     }
}
bool abs_min_quad(void)
{     bool ok = true;
     //
     using CppAD::AD;
     using CppAD::ADFun;
     //
     typedef CPPAD_TESTVECTOR(size_t)       s_vector;
     typedef CPPAD_TESTVECTOR(double)       d_vector;
     typedef CPPAD_TESTVECTOR( AD<double> ) ad_vector;
     //
     size_t level = 0;     // level of tracing
     size_t n     = 2;     // size of x
     size_t m     = 1;     // size of y
     size_t s     = 2;    // number of absolute value terms
     //
     // record the function f(x)
     ad_vector ad_x(n), ad_y(m);
     for(size_t j = 0; j < n; j++)
          ad_x[j] = double(j + 1);
     Independent( ad_x );
     AD<double> sum = 0.0;
     sum += ad_x[0] * ad_x[0] / 2.0 + abs( ad_x[0] - 5 );
     sum += ad_x[1] * ad_x[1] / 2.0 + abs( ad_x[1] + 5 );
     ad_y[0] = sum;
     ADFun<double> f(ad_x, ad_y);

     // create its abs_normal representation in g, a
     ADFun<double> g, a;
     f.abs_normal_fun(g, a);

     // check dimension of domain and range space for g
     ok &= g.Domain() == n + s;
     ok &= g.Range()  == m + s;

     // check dimension of domain and range space for a
     ok &= a.Domain() == n;
     ok &= a.Range()  == s;

     // --------------------------------------------------------------------
     // Choose the point x_hat = 0
     d_vector x_hat(n);
     for(size_t j = 0; j < n; j++)
          x_hat[j] = 0.0;

     // value of a_hat = a(x_hat)
     d_vector a_hat = a.Forward(0, x_hat);

     // (x_hat, a_hat)
     d_vector xu_hat = join(x_hat, a_hat);

     // value of g[ x_hat, a_hat ]
     d_vector g_hat = g.Forward(0, xu_hat);

     // Jacobian of g[ x_hat, a_hat ]
     d_vector g_jac = g.Jacobian(xu_hat);

     // trust region bound
     d_vector bound(n);
     for(size_t j = 0; j < n; j++)
          bound[j] = 10.0;

     // convergence criteria
     d_vector epsilon(2);
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
     epsilon[0]   = eps99;
     epsilon[1]   = eps99;

     // maximum number of iterations
     s_vector maxitr(2);
     maxitr[0] = 10; // maximum number of abs_min_quad iterations
     maxitr[1] = 35; // maximum number of qp_interior iterations

     // set Hessian equal to identity matrix I
     d_vector hessian(n * n);
     for(size_t i = 0; i < n; i++)
     {     for(size_t j = 0; j < n; j++)
               hessian[i * n + j] = 0.0;
          hessian[i * n + i] = 1.0;
     }

     // minimize the approximation for f (which is equal to f for this case)
     d_vector delta_x(n);
     ok &= CppAD::abs_min_quad(
          level, n, m, s,
          g_hat, g_jac, hessian, bound, epsilon, maxitr, delta_x
     );

     // check the solution
     ok &= CppAD::NearEqual( delta_x[0], +1.0, eps99, eps99 );
     ok &= CppAD::NearEqual( delta_x[1], -1.0, eps99, eps99 );

     return ok;
}

Input File: example/abs_normal/abs_min_quad.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.10.2: abs_min_quad Source Code
namespace CppAD { // BEGIN_CPPAD_NAMESPACE

// BEGIN PROTOTYPE
template <class DblVector, class SizeVector>
bool abs_min_quad(
     size_t            level   ,
     size_t            n       ,
     size_t            m       ,
     size_t            s       ,
     const DblVector&  g_hat   ,
     const DblVector&  g_jac   ,
     const DblVector&  hessian ,
     const DblVector&  bound   ,
     const DblVector&  epsilon ,
     const SizeVector& maxitr  ,
     DblVector&        delta_x )
// END PROTOTYPE
{     using std::fabs;
     bool ok    = true;
     double inf = std::numeric_limits<double>::infinity();
     //
     CPPAD_ASSERT_KNOWN(
          level <= 3,
          "abs_min_quad: level is not less than or equal to 3"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(epsilon.size()) == 2,
          "abs_min_quad: size of epsilon not equal to 2"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(maxitr.size()) == 2,
          "abs_min_quad: size of maxitr not equal to 2"
     );
     CPPAD_ASSERT_KNOWN(
          m == 1,
          "abs_min_quad: m is not equal to 1"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(delta_x.size()) == n,
          "abs_min_quad: size of delta_x not equal to n"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(bound.size()) == n,
          "abs_min_quad: size of bound not equal to n"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(g_hat.size()) == m + s,
          "abs_min_quad: size of g_hat not equal to m + s"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(g_jac.size()) == (m + s) * (n + s),
          "abs_min_quad: size of g_jac not equal to (m + s)*(n + s)"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(hessian.size()) == n * n,
          "abs_min_quad: size of hessian not equal to n * n"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(bound.size()) == n,
          "abs_min_quad: size of bound is not equal to n"
     );
     if( level > 0 )
     {     std::cout << "start abs_min_quad\n";
          CppAD::abs_print_mat("g_hat", m + s, 1, g_hat);
          CppAD::abs_print_mat("g_jac", m + s, n + s, g_jac);
          CppAD::abs_print_mat("hessian", n, n, hessian);
          CppAD::abs_print_mat("bound", n, 1, bound);
     }
     // partial y(x, u) w.r.t x (J in reference)
     DblVector py_px(n);
     for(size_t j = 0; j < n; j++)
          py_px[ j ] = g_jac[ j ];
     //
     // partial y(x, u) w.r.t u (Y in reference)
     DblVector py_pu(s);
     for(size_t j = 0; j < s; j++)
          py_pu[ j ] = g_jac[ n + j ];
     //
     // partial z(x, u) w.r.t x (Z in reference)
     DblVector pz_px(s * n);
     for(size_t i = 0; i < s; i++)
     {     for(size_t j = 0; j < n; j++)
          {     pz_px[ i * n + j ] = g_jac[ (n + s) * (i + m) + j ];
          }
     }
     // partial z(x, u) w.r.t u (L in reference)
     DblVector pz_pu(s * s);
     for(size_t i = 0; i < s; i++)
     {     for(size_t j = 0; j < s; j++)
          {     pz_pu[ i * s + j ] = g_jac[ (n + s) * (i + m) + n + j ];
          }
     }
     // initialize delta_x
     for(size_t j = 0; j < n; j++)
          delta_x[j] = 0.0;
     //
     // current set of cutting planes
     DblVector C(maxitr[0] * n), c(maxitr[0]);
     //
     // value of abs-normal approximation at x_hat + delta_x
     DblVector g_tilde = CppAD::abs_eval(n, m, s, g_hat, g_jac, delta_x);
     //
     // value of sigma at delta_x = 0; i.e., sign( z(x, u) )
     CppAD::vector<double> sigma(s);
     for(size_t i = 0; i < s; i++)
          sigma[i] = CppAD::sign( g_tilde[m + i] );
     //
     // initial value of the objective
     double obj_cur =  g_tilde[0];
     //
     // initial number of cutting planes
     size_t n_plane = 0;
     //
     if( level > 0 )
     {     std::cout << "obj = " << obj_cur << "\n";
          CppAD::abs_print_mat("delta_x", n, 1, delta_x);
     }
     for(size_t itr = 0; itr < maxitr[0]; itr++)
     {
          // Equation (5), Proposition 3.1 of reference
          // dy_dx = py_px + py_pu * Sigma * (I - pz_pu * Sigma)^-1 * pz_px
          //
          // tmp_ss = I - pz_pu * Sigma
          DblVector tmp_ss(s * s);
          for(size_t i = 0; i < s; i++)
          {     for(size_t j = 0; j < s; j++)
                    tmp_ss[i * s + j] = - pz_pu[i * s + j] * sigma[j];
               tmp_ss[i * s + i] += 1.0;
          }
          // tmp_sn = (I - pz_pu * Sigma)^-1 * pz_px
          double logdet;
          DblVector tmp_sn(s * n);
          LuSolve(s, n, tmp_ss, pz_px, tmp_sn, logdet);
          //
          // tmp_sn = Sigma * (I - pz_pu * Sigma)^-1 * pz_px
          for(size_t i = 0; i < s; i++)
          {     for(size_t j = 0; j < n; j++)
                    tmp_sn[i * n + j] *= sigma[i];
          }
          // dy_dx = py_px + py_pu * Sigma * (I - pz_pu * Sigma)^-1 * pz_px
          DblVector dy_dx(n);
          for(size_t j = 0; j < n; j++)
          {     dy_dx[j] = py_px[j];
               for(size_t k = 0; k < s; k++)
                    dy_dx[j] += py_pu[k] * tmp_sn[ k * n + j];
          }
          //
          // compute derivative of the quadratic term
          DblVector dq_dx(n);
          for(size_t j = 0; j < n; j++)
          {     dq_dx[j] = 0.0;
               for(size_t i = 0; i < n; i++)
                    dq_dx[j] += delta_x[i] * hessian[i * n + j];
          }
          //
          // check for case where derivative of objective is zero
          // (in convex case, this is the minimizer)
          bool near_zero = true;
          for(size_t j = 0; j < n; j++)
               near_zero &= std::fabs( dq_dx[j] + dy_dx[j] ) < epsilon[1];
          if( near_zero )
          {     if( level > 0 )
                    std::cout << "end abs_min_quad: local derivative near zero\n";
               return true;
          }
          // value of hyperplane at delta_x
          double plane_at_zero = g_tilde[0];
          //
          // value of hyperplane at 0
          for(size_t j = 0; j < n; j++)
               plane_at_zero -= dy_dx[j] * delta_x[j];
          //
          // add a cutting plane with value g_tilde[0] at delta_x
          // and derivative dy_dx
          c[n_plane] = plane_at_zero;
          for(size_t j = 0; j < n; j++)
               C[n_plane * n + j] = dy_dx[j];
          ++n_plane;
          //
          // variables for cutting plane problem are (dx, w)
          // c[i] + C[i,:] * dx <= w
          DblVector c_box(n_plane), C_box(n_plane * (n + 1));
          for(size_t i = 0; i < n_plane; i++)
          {     c_box[i] = c[i];
               for(size_t j = 0; j < n; j++)
                    C_box[i * (n+1) + j] = C[i * n + j];
               C_box[i * (n+1) + n] = -1.0;
          }
          //
          // w is the objective
          DblVector g_box(n + 1);
          for(size_t i = 0; i < size_t(g_box.size()); i++)
               g_box[i] = 0.0;
          g_box[n] = 1.0;
          //
          // a_box, b_box
          DblVector a_box(n+1), b_box(n+1);
          for(size_t j = 0; j < n; j++)
          {     a_box[j] = - bound[j];
               b_box[j] = + bound[j];
          }
          a_box[n] = - inf;
          b_box[n] = + inf;
          //
          // initial delta_x in qp_box is zero
          DblVector xin_box(n + 1);
          for(size_t j = 0; j < n; j++)
               xin_box[j] = 0.0;
          // initial w in qp_box is 1 + max_i c[i]
          xin_box[n] = 1.0 + c_box[0];
          for(size_t i = 1; i < n_plane; i++)
               xin_box[n] = std::max( xin_box[n], 1.0 + c_box[i] );
          //
          DblVector hessian_box( (n+1) * (n+1) );
          for(size_t i = 0; i < n+1; i++)
          {     for(size_t j = 0; j < n+1; j++)
               {     if( i == n || j == n )
                         hessian_box[i * (n+1) + j] = 0.0;
                    else
                         hessian_box[i * (n+1) + j] = hessian[i * n + j];
               }
          }
          //
          // solve the cutting plane problem
          DblVector xout_box(n + 1);
          size_t level_box = 0;
          if( level > 0 )
               level_box = level - 1;
          ok &= CppAD::qp_box(
               level_box,
               a_box,
               b_box,
               c_box,
               C_box,
               g_box,
               hessian_box,
               epsilon[1],
               maxitr[1],
               xin_box,
               xout_box
          );
          if( ! ok )
          {     if( level > 0 )
               {     CppAD::abs_print_mat("delta_x", n, 1, delta_x);
                    std::cout << "end abs_min_quad: qp_box failed\n";
               }
               return false;
          }
          DblVector delta_new(n);
          for(size_t j = 0; j < n; j++)
               delta_new[j] = xout_box[j];
          //
          // check for convergence
          double max_diff = 0.0;
          for(size_t j = 0; j < n; j++)
          {     double diff = delta_x[j] - delta_new[j];
               max_diff    = std::max( max_diff, std::fabs(diff) );
          }
          //
          // new value of the objective
          DblVector g_new   = CppAD::abs_eval(n, m, s, g_hat, g_jac, delta_new);
          double    obj_new = g_new[0];
          for(size_t i = 0; i < n; i++)
          {     for(size_t j = 0; j < n; j++)
                    obj_new += delta_new[i] * hessian[i * n + j] * delta_new[j];
          }
          g_tilde = g_new;
          obj_cur = obj_new;
          delta_x = delta_new;
          //
          if( level > 0 )
          {     std::cout << "itr = " << itr << ", max_diff = " << max_diff
                    << ", obj_cur = " << obj_cur << "\n";
               CppAD::abs_print_mat("delta_x", n, 1, delta_x);
          }
          //
          // value of sigma at new delta_x; i.e., sign( z(x, u) )
          for(size_t i = 0; i < s; i++)
               sigma[i] = CppAD::sign( g_tilde[m + i] );
          //
          if( max_diff < epsilon[0] )
          {     if( level > 0 )
                    std::cout << "end abs_min_quad: change in delta_x near zero\n";
               return true;
          }
     }
     if( level > 0 )
          std::cout << "end abs_min_quad: maximum number of iterations exceeded\n";
     return false;
}
} // END_CPPAD_NAMESPACE

Input File: example/abs_normal/abs_min_quad.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.11: Non-Smooth Optimization Using Abs-normal Quadratic Approximations

5.8.11.a: Syntax
ok = min_nso_quad(
     level, f, g, a, epsilon, maxitr, b_in, x_in, x_out
)


5.8.11.b: Prototype

template <class DblVector, class SizeVector>
bool min_nso_quad(
     size_t           level     ,
     ADFun<double>&   f         ,
     ADFun<double>&   g         ,
     ADFun<double>&   a         ,
     const DblVector& epsilon   ,
     SizeVector       maxitr    ,
     double           b_in      ,
     const DblVector& x_in      ,
     DblVector&       x_out     )

5.8.11.c: Source
The following is a link to the source code for this example: 5.8.11.2: min_nso_quad.hpp .

5.8.11.d: Purpose
Given an abs-normal representation 5.8.1.d: g , 5.8.1.c: a , for a function @(@ f(x) @)@, this routine minimizes @(@ f(x) @)@.

5.8.11.e: DblVector
is a 8.9: SimpleVector class with elements of type double.

5.8.11.f: SizeVector
is a 8.9: SimpleVector class with elements of type size_t.

5.8.11.g: level
This value must be less than or equal to 5. If level == 0 , no tracing of the optimization is printed. If level >= 1 , a trace of each iteration of min_nso_quad is printed. If level >= 2 , a trace of each iteration of the abs_min_quad sub-problem is printed. If level >= 3 , a trace of the 5.8.9: qp_box sub-problem is printed. If level >= 4 , a trace of the 5.8.8: qp_interior sub-problem is printed.

5.8.11.h: f
This is the original function for the abs-normal form; see 5.8.1.b: f .

5.8.11.h.a: n
We use n to denote the dimension of the domain for f ; i.e., f.Domain() .

5.8.11.h.b: m
We use m to denote the dimension of the range for f ; i.e., f.Range() . This must be equal to one.

5.8.11.h.c: s
We use 5.8.1.b.c: s to denote the number of absolute value terms in f .

5.8.11.i: g
This is the function 5.8.1.d: g in the abs-normal representation of f .

5.8.11.j: a
This is the function 5.8.1.c: a in the abs-normal representation of f .

5.8.11.k: epsilon
This is a vector with size 2. The value epsilon[0] is the convergence criterion in terms of the infinity norm of the difference in x_out between iterations. The value epsilon[1] is the convergence criterion in terms of the derivative of @(@ f(x) @)@. This derivative is actually the average directional derivative in the direction of the sub-problem minimizer.

5.8.11.l: maxitr
This is a vector with size 3. The value maxitr[0] is the maximum number of min_nso_quad iterations to try before giving up on convergence. The value maxitr[1] is the maximum number of iterations in the abs_min_quad sub-problem. The value maxitr[2] is the maximum number of iterations in the 5.8.8.m: qp_interior sub-problems.

5.8.11.m: b_in
This is the initial bound on the trust region size. To be specific, if @(@ b @)@ is the current trust region size, at each iteration the quadratic approximation is minimized with respect to @(@ \Delta x @)@ and subject to @[@ -b \leq \Delta x_j \leq b @]@ for j = 0 , ..., n-1 . It must hold that b_in > epsilon[0] .

5.8.11.n: x_in
This vector x_in has size n . It is the starting point for the optimization procedure; i.e., the min_nso_quad iterations.

5.8.11.o: x_out
This vector x_out has size n . The input value of its elements does not matter. Upon return, it is the approximate minimizer of the abs-normal approximation for @(@ f(x) @)@ over the trust region.

5.8.11.p: Example
The file 5.8.11.1: min_nso_quad.cpp contains an example and test of min_nso_quad. It returns true if the test passes and false otherwise.
Input File: example/abs_normal/min_nso_quad.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.11.1: abs_normal min_nso_quad: Example and Test

5.8.11.1.a: Purpose
We minimize the function @(@ f : \B{R}^3 \rightarrow \B{R} @)@ defined by @[@ \begin{array}{rcl} f( x_0, x_1, x_2 ) & = & x_0^2 + 2 (x_0 + x_1)^2 + | x_2 | \end{array} @]@

5.8.11.1.b: Discussion
This routine uses 5.8.10: abs_min_quad which uses 5.8.9: qp_box , a quadratic programming algorithm. It is meant to be compared with 5.8.7.1: min_nso_linear.cpp which uses a linear programming algorithm for the same problem. To see this comparison, set level = 1 in both examples.

5.8.11.1.c: Source

# include <cppad/cppad.hpp>
# include "min_nso_quad.hpp"

bool min_nso_quad(void)
{     bool ok = true;
     //
     using CppAD::AD;
     using CppAD::ADFun;
     //
     typedef CPPAD_TESTVECTOR(size_t)       s_vector;
     typedef CPPAD_TESTVECTOR(double)       d_vector;
     typedef CPPAD_TESTVECTOR( AD<double> ) ad_vector;
     //
     size_t level = 0;    // level of tracing
     size_t n     = 3;    // size of x
     size_t m     = 1;    // size of y
     size_t s     = 1;    // number of absolute value terms
     //
     // start recording the function f(x)
     ad_vector ax(n), ay(m);
     for(size_t j = 0; j < n; j++)
          ax[j] = double(j + 1);
     Independent( ax );
     //
     ay[0]  =  ax[0] * ax[0];
     ay[0] += 2.0 * (ax[0] + ax[1]) * (ax[0] + ax[1]);
     ay[0] += fabs( ax[2] );
     ADFun<double> f(ax, ay);
     //
     // create its abs_normal representation in g, a
     ADFun<double> g, a;
     f.abs_normal_fun(g, a);

     // check dimension of domain and range space for g
     ok &= g.Domain() == n + s;
     ok &= g.Range()  == m + s;

     // check dimension of domain and range space for a
     ok &= a.Domain() == n;
     ok &= a.Range()  == s;

     // epsilon
     d_vector epsilon(2);
     double eps = 1e-3;
     epsilon[0] = eps;
     epsilon[1] = eps;

     // maxitr
     s_vector maxitr(3);
     maxitr[0] = 100;
     maxitr[1] = 20;
     maxitr[2] = 20;

     // b_in
     double b_in = 1.0;

     // call min_nso_quad
     d_vector x_in(n), x_out(n);
     for(size_t j = 0; j < n; j++)
          x_in[j]  = double(j + 1);

     //
     ok &= CppAD::min_nso_quad(
          level, f, g, a, epsilon, maxitr, b_in, x_in, x_out
     );
     //
     for(size_t j = 0; j < n; j++)
          ok &= std::fabs( x_out[j] ) < eps;

     return ok;
}

Input File: example/abs_normal/min_nso_quad.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.8.11.2: min_nso_quad Source Code
namespace {
     CPPAD_TESTVECTOR(double) min_nso_quad_join(
          const CPPAD_TESTVECTOR(double)& x ,
          const CPPAD_TESTVECTOR(double)& u )
     {     size_t n = x.size();
          size_t s = u.size();
          CPPAD_TESTVECTOR(double) xu(n + s);
          for(size_t j = 0; j < n; j++)
               xu[j] = x[j];
          for(size_t j = 0; j < s; j++)
               xu[n + j] = u[j];
          return xu;
     }
}

// BEGIN C++
namespace CppAD { // BEGIN_CPPAD_NAMESPACE

// BEGIN PROTOTYPE
template <class DblVector, class SizeVector>
bool min_nso_quad(
     size_t           level     ,
     ADFun<double>&   f         ,
     ADFun<double>&   g         ,
     ADFun<double>&   a         ,
     const DblVector& epsilon   ,
     SizeVector       maxitr    ,
     double           b_in      ,
     const DblVector& x_in      ,
     DblVector&       x_out     )
// END PROTOTYPE
{
     using std::fabs;
     //
     // number of absolute value terms
     size_t s  = a.Range();
     //
     // size of domain for f
     size_t n  = f.Domain();
     //
     // size of range space for f
     size_t m = f.Range();
     //
     CPPAD_ASSERT_KNOWN( g.Domain() == n + s,
          "min_nso_quad: (g, a) is not an abs-normal form for f"
     );
     CPPAD_ASSERT_KNOWN( g.Range() == m + s,
          "min_nso_quad: (g, a) is not an abs-normal form for f"
     );
     CPPAD_ASSERT_KNOWN(
          level <= 5,
          "min_nso_quad: level is not less that or equal 5"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(epsilon.size()) == 2,
          "min_nso_quad: size of epsilon not equal to 2"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(maxitr.size()) == 3,
          "min_nso_quad: size of maxitr not equal to 3"
     );
     CPPAD_ASSERT_KNOWN(
          g.Domain() > s && g.Range() > s,
          "min_nso_quad: g, a is not an abs-normal representation"
     );
     CPPAD_ASSERT_KNOWN(
          m == 1,
          "min_nso_quad: m is not equal to 1"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(x_in.size()) == n,
          "min_nso_quad: size of x_in not equal to n"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(x_out.size()) == n,
          "min_nso_quad: size of x_out not equal to n"
     );
     CPPAD_ASSERT_KNOWN(
          epsilon[0] < b_in,
          "min_nso_quad: b_in <= epsilon[0]"
     );
     if( level > 0 )
     {     std::cout << "start min_nso_quad\n";
          std::cout << "b_in = " << b_in << "\n";
          CppAD::abs_print_mat("x_in", n, 1, x_in);
     }
     // level in abs_min_quad sub-problem
     size_t level_tilde = 0;
     if( level > 0 )
          level_tilde = level - 1;
     //
     // maxitr in abs_min_quad sub-problem
     SizeVector maxitr_tilde(2);
     maxitr_tilde[0] = maxitr[1];
     maxitr_tilde[1] = maxitr[2];
     //
     // epsilon in abs_min_quad sub-problem
     DblVector eps_tilde(2);
     eps_tilde[0] = epsilon[0] / 10.;
     eps_tilde[1] = epsilon[1] / 10.;
     //
     // current bound
     double b_cur = b_in;
     //
     // initialize the current x
     x_out = x_in;
     //
     // value of a(x) at current x
     DblVector a_cur = a.Forward(0, x_out);
     //
     // (x_out, a_cur)
     DblVector xu_cur = min_nso_quad_join(x_out, a_cur);
     //
     // value of g[ x_cur, a_cur ]
     DblVector g_cur = g.Forward(0, xu_cur);
     //
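     // Each iteration: form the abs-normal quadratic model at the current
     // point, minimize it over the trust region using abs_min_quad, and
     // either accept the step or shrink the trust region.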
     for(size_t itr = 0; itr < maxitr[0]; itr++)
     {
          // Jacobian of g[ x_cur, a_cur ]
          DblVector g_jac = g.Jacobian(xu_cur);
          //
          // Hessian at x_cur
          DblVector f_hes = f.Hessian(x_out, 0);
          //
          // bound in abs_min_quad sub-problem
          DblVector bound_tilde(n);
          for(size_t j = 0; j < n; j++)
               bound_tilde[j] = b_cur;
          //
          DblVector delta_x(n);
          bool ok = abs_min_quad(
               level_tilde, n, m, s,
               g_cur, g_jac, f_hes, bound_tilde, eps_tilde, maxitr_tilde, delta_x
          );
          if( ! ok )
          {     if( level > 0 )
                    std::cout << "end min_nso_quad: abs_min_quad failed\n";
               return false;
          }
          //
          // new candidate value for x
          DblVector x_new(n);
          double max_delta_x = 0.0;
          for(size_t j = 0; j < n; j++)
          {     x_new[j] = x_out[j] + delta_x[j];
               max_delta_x = std::max(max_delta_x, std::fabs( delta_x[j] ) );
          }
          //
          if( max_delta_x < 0.75 * b_cur && max_delta_x < epsilon[0] )
          {     if( level > 0 )
                    std::cout << "end min_nso_quad: delta_x is near zero\n";
               return true;
          }
          // value of abs-normal approximation at minimizer
          DblVector g_tilde = CppAD::abs_eval(n, m, s, g_cur, g_jac, delta_x);
          //
          double derivative = (g_tilde[0] - g_cur[0]) / max_delta_x;
          CPPAD_ASSERT_UNKNOWN( derivative <= 0.0 );
          if( - epsilon[1] < derivative )
          {     if( level > 0 )
                    std::cout << "end min_nso_quad: derivative near zero\n";
               return true;
          }
          //
          // value of a(x) at new x
          DblVector a_new = a.Forward(0, x_new);
          //
          // (x_new, a_new)
          DblVector xu_new = min_nso_quad_join(x_new, a_new);
          //
          // value of g[ x_new, a_new ]
          DblVector g_new = g.Forward(0, xu_new);
          //
          // check for descent of objective
          double rate_new = (g_new[0] - g_cur[0]) / max_delta_x;
          if( - epsilon[1] < rate_new )
          {     // did not get sufficient descent
               b_cur /= 2.0;
               if( level > 0 )
                    std::cout << "itr = " << itr
                    << ", rate_new = " << rate_new
                    << ", b_cur = " << b_cur << "\n";
               //
          }
          else
          {     // got sufficient descent so accept candidate for x
               x_out  = x_new;
               a_cur  = a_new;
               g_cur  = g_new;
               xu_cur = xu_new;
               //
               if( level >  0 )
               {     std::cout << "itr = " << itr
                    << ", derivative = "<< derivative
                    << ", max_delta_x = "<< max_delta_x
                    << ", objective = " << g_cur[0] << "\n";
                    abs_print_mat("x_out", n, 1, x_out);
               }
          }
     }
     if( level > 0 )
          std::cout << "end min_nso_quad: maximum number of iterations exceeded\n";
     return false;
}
} // END_CPPAD_NAMESPACE

Input File: example/abs_normal/min_nso_quad.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.9: Check an ADFun Sequence of Operations

5.9.a: Syntax
ok = FunCheck(f, g, x, r, a)
See Also 12.8.3: CompareChange

5.9.b: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . We use @(@ G : B^n \rightarrow B^m @)@ to denote the function corresponding to the C++ function object g . This routine checks whether @[@ F(x) = G(x) @]@ If @(@ F(x) \neq G(x) @)@, the 12.4.g.b: operation sequence corresponding to f does not represent the algorithm used by g to calculate values for @(@ G @)@ (see 5.9.l: Discussion below).

5.9.c: f
The FunCheck argument f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.9.k: Forward below).

5.9.d: g
The FunCheck argument g has prototype
     Fun &g
( Fun is defined by the properties of g ). The C++ function object g supports the syntax
     y = g(x)
which computes @(@ y = G(x) @)@.

5.9.d.a: x
The g argument x has prototype
     const Vector &x
(see 5.9.j: Vector below) and its size must be equal to n , the dimension of the 5.1.5.d: domain space for f .

5.9.e: y
The g result y has prototype
     Vector y
and its value is @(@ G(x) @)@. The size of y is equal to m , the dimension of the 5.1.5.e: range space for f .

5.9.f: x
The FunCheck argument x has prototype
     const Vector &x
and its size must be equal to n , the dimension of the 5.1.5.d: domain space for f . This specifies the point at which to compare the values calculated by f and G .

5.9.g: r
The FunCheck argument r has prototype
     const Base &r
It specifies the relative error for the element-by-element comparison of the value of @(@ F(x) @)@ and @(@ G(x) @)@.

5.9.h: a
The FunCheck argument a has prototype
     const Base &a
It specifies the absolute error for the element-by-element comparison of the value of @(@ F(x) @)@ and @(@ G(x) @)@.

5.9.i: ok
The FunCheck result ok has prototype
     bool ok
It is true, if for @(@ i = 0 , \ldots , m-1 @)@ either the relative error bound is satisfied @[@ | F_i (x) - G_i (x) | \leq r ( | F_i (x) | + | G_i (x) | ) @]@ or the absolute error bound is satisfied @[@ | F_i (x) - G_i (x) | \leq a @]@ It is false if for some @(@ i @)@ neither of these bounds is satisfied.
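
The following is a minimal sketch (not part of the CppAD API) of the comparison above for one component, where Fi and Gi denote @(@ F_i (x) @)@ and @(@ G_i (x) @)@:

# include <cmath>
// one component of the FunCheck comparison described above
bool component_ok(double Fi, double Gi, double r, double a)
{     // relative error bound
     bool rel_ok = std::fabs(Fi - Gi) <= r * ( std::fabs(Fi) + std::fabs(Gi) );
     // absolute error bound
     bool abs_ok = std::fabs(Fi - Gi) <= a;
     return rel_ok || abs_ok;
}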

5.9.j: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Base . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

5.9.k: FunCheck Uses Forward
After each call to 5.3: Forward , the object f contains the corresponding 12.4.l: Taylor coefficients . After FunCheck, the previous calls to 5.3: Forward are undefined.

5.9.l: Discussion
Suppose that the algorithm corresponding to g contains
     if( x >= 0 )
          y = exp(x)
     else
          y = exp(-x)
where x and y are AD<double> objects. It follows that the AD of double 12.4.g.b: operation sequence depends on the value of x . If the sequence of operations stored in f corresponds to g with @(@ x \geq 0 @)@, the function values computed using f when @(@ x < 0 @)@ will not agree with the function values computed by @(@ g @)@. This is because the operation sequence corresponding to g changed (and hence the object f does not represent the function @(@ G @)@ for this value of x ). In this case, you probably want to re-tape the calculations performed by g with the 12.4.k.c: independent variables equal to the values in x (so the AD operation sequence properly represents the algorithm for this value of the independent variables).
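
The following hedged sketch illustrates this re-tape pattern; here G is a hypothetical copy of the algorithm that operates on AD<double> objects and ADVector is a simple vector of AD<double>:

// if the stored operation sequence does not agree with g at x, re-tape
if( ! FunCheck(f, g, x, r, a) )
{     ADVector aX(n);
     for(size_t j = 0; j < n; j++)
          aX[j] = x[j];
     CppAD::Independent(aX);   // start a new recording at this x
     ADVector aY = G(aX);      // run the algorithm with AD<double>
     f.Dependent(aX, aY);      // store the new operation sequence in f
}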

5.9.m: Example
The file 5.9.1: fun_check.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/core/fun_check.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.9.1: ADFun Check and Re-Tape: Example and Test
# include <cppad/cppad.hpp>

namespace { // -----------------------------------------------------------
// define the template function object Fun<Type,Vector> in empty namespace
template <class Type, class Vector>
class Fun {
private:
     size_t n;
public:
     // function constructor
     Fun(size_t n_) : n(n_)
     { }
     // function evaluator
     Vector operator() (const Vector &x)
     {     Vector y(n);
          size_t i;
          for(i = 0; i < n; i++)
          {     // This operation sequence depends on x
               if( x[i] >= 0 )
                    y[i] = exp(x[i]);
               else     y[i] = exp(-x[i]);
          }
          return y;
     }
};
// template function FunCheckCases<Vector, ADVector> in empty namespace
template <class Vector, class ADVector>
bool FunCheckCases(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::ADFun;
     using CppAD::Independent;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // use the ADFun default constructor
     ADFun<double> f;

     // domain space vector
     size_t n = 2;
     ADVector X(n);
     X[0] = -1.;
     X[1] = 1.;

     // declare independent variables and starting recording
     Independent(X);

     // create function object to use with AD<double>
     Fun< AD<double>, ADVector > G(n);

     // range space vector
     size_t m = n;
     ADVector Y(m);
     Y = G(X);

     // stop tape and store operation sequence in f : X -> Y
     f.Dependent(X, Y);
     ok &= (f.size_order() == 0);  // no implicit forward operation

     // create function object to use with double
     Fun<double, Vector> g(n);

     // function values should agree when the independent variable
     // values are the same as during recording
     Vector x(n);
     size_t j;
     for(j = 0; j < n; j++)
          x[j] = Value(X[j]);
     double r = eps99;
     double a = eps99;
     ok      &= FunCheck(f, g, x, a, r);

     // function values should not agree when the independent variable
     // values are the negative of values during recording
     for(j = 0; j < n; j++)
          x[j] = - Value(X[j]);
     ok      &= ! FunCheck(f, g, x, a, r);

     // re-tape to obtain the new AD of double operation sequence
     for(j = 0; j < n; j++)
          X[j] = x[j];
     Independent(X);
     Y = G(X);

     // stop tape and store operation sequence in f : X -> Y
     f.Dependent(X, Y);
     ok &= (f.size_order() == 0);  // no implicit forward with this x

     // function values should agree now
     ok      &= FunCheck(f, g, x, a, r);

     return ok;
}
} // End empty namespace
# include <vector>
# include <valarray>
bool FunCheck(void)
{     bool ok = true;
     typedef CppAD::vector<double>                Vector1;
     typedef CppAD::vector< CppAD::AD<double> > ADVector1;
     typedef   std::vector<double>                Vector2;
     typedef   std::vector< CppAD::AD<double> > ADVector2;
     typedef std::valarray<double>                Vector3;
     typedef std::valarray< CppAD::AD<double> > ADVector3;
     // Run with Vector and ADVector equal to three different cases
     // all of which are Simple Vectors with elements of type
     // double and AD<double> respectively.
     ok &= FunCheckCases< Vector1, ADVector2 >();
     ok &= FunCheckCases< Vector2, ADVector3 >();
     ok &= FunCheckCases< Vector3, ADVector1 >();
     return ok;
}

Input File: example/general/fun_check.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.10: Check an ADFun Object For Nan Results

5.10.a: Syntax
f.check_for_nan(b)
b = f.check_for_nan()
get_check_for_nan(vec, file)

5.10.b: Debugging
If NDEBUG is not defined, and the result of a 5.3.4: forward or 5.4.3: reverse calculation contains a 8.11: nan , CppAD can halt with an error message.

5.10.c: f
For the syntax where b is an argument, f has prototype
     ADFun<Base> f
(see ADFun<Base> 5.1.2: constructor ). For the syntax where b is the result, f has prototype
     const ADFun<Base> f

5.10.d: b
This argument or result has prototype
     bool b
Future calls to f.Forward will (will not) check for nan, depending on whether b is true (false).

5.10.e: Default
The value for this setting after construction of f is true. The value of this setting is not affected by calling 5.1.3: Dependent for this function object.
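
For example (a minimal sketch, assuming f is an ADFun<double> object that has already been recorded):

     bool b = f.check_for_nan();   // true by default after construction
     f.check_for_nan(false);       // future calls to f.Forward do not check
     f.check_for_nan(true);        // restore checking for nan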

5.10.f: Error Message
If this error is detected during zero order forward mode, the values of the independent variables that resulted in the nan are written to a temporary binary file. This is so that you can run the original source code with those values to see what is causing the nan.

5.10.f.a: vector_size
The error message will contain the text vector_size = vector_size followed by the newline character '\n'. The value of vector_size is the number of elements in the independent vector.

5.10.f.b: file_name
The error message will contain the text file_name = file_name followed by the newline character '\n'. The value of file_name is the name of the temporary file that contains the independent variable values.

5.10.f.c: index
The error message will contain the text index = index followed by the newline character '\n'. The value of index is the lowest dependent variable index that has the value nan.

5.10.g: get_check_for_nan
This routine can be used to get the independent variable values that result in a nan.

5.10.g.a: vec
This argument has prototype
     CppAD::vector<Base>& vec
Its size must be equal to the corresponding value of 5.10.f.a: vector_size in the corresponding error message. The input value of its elements does not matter. Upon return, it will contain the values for the independent variables, in the corresponding call to 5.1.1: Independent , that resulted in the nan. (Note that the call to Independent uses a vector with elements of type AD<Base> and vec has elements of type Base .)

5.10.g.b: file
This argument has prototype
     const std::string& file
It must be the value of 5.10.f.b: file_name in the corresponding error message.

5.10.h: Example
The file 5.10.1: check_for_nan.cpp contains an example and test of these operations. It returns true if it succeeds and false otherwise.
Input File: cppad/core/check_for_nan.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
5.10.1: ADFun Checking For Nan: Example and Test
# include <cppad/cppad.hpp>
# include <cctype>
# include <cstdlib>

namespace {
     void myhandler(
          bool known       ,
          int  line        ,
          const char *file ,
          const char *exp  ,
          const char *msg  )
     {     // error handler must not return, so throw an exception
          std::string message = msg;
          throw message;
     }
}

bool check_for_nan(void)
{     bool ok = true;
     using CppAD::AD;
     using std::string;
     double eps = 10. * std::numeric_limits<double>::epsilon();

     // replace the default CppAD error handler
     CppAD::ErrorHandler info(myhandler);

     CPPAD_TESTVECTOR(AD<double>) ax(2), ay(2);
     ax[0] = 2.0;
     ax[1] = 1.0;
     CppAD::Independent(ax);
     ay[0] = sqrt( ax[0] );
     ay[1] = sqrt( ax[1] );
     CppAD::ADFun<double> f(ax, ay);

     CPPAD_TESTVECTOR(double) x(2), y(2);
     x[0] = 2.0;
     x[1] = -1.0;

     // use try / catch because this causes an exception
     // (assuming that NDEBUG is not defined)
     f.check_for_nan(true);
     try {
          y = f.Forward(0, x);

# ifndef NDEBUG
          // When compiled with NDEBUG defined,
          // CppAD does not spend time checking for nan.
          ok = false;
# endif
     }
     catch(std::string msg)
     {
          // get and check size of the independent variable vector
          string pattern = "vector_size = ";
          size_t start   = msg.find(pattern) + pattern.size();
          string number;
          for(size_t i = start; msg[i] != '\n'; i++)
               number += msg[i];
          size_t vector_size = std::atoi(number.c_str());
          ok &= vector_size == 2;

          // get and check first dependent variable index that is nan
          pattern = "index = ";
          start   = msg.find(pattern) + pattern.size();
          number  = "";
          for(size_t i = start; msg[i] != '\n'; i++)
               number += msg[i];
          size_t index = std::atoi(number.c_str());
          ok &= index == 1;

          // get the name of the file
          pattern = "file_name = ";
          start   = msg.find(pattern) + pattern.size();
          string file_name;
          for(size_t i = start; msg[i] != '\n'; i++)
               file_name += msg[i];

          // get and check independent variable vector that resulted in the nan
          CppAD::vector<double> vec(vector_size);
          CppAD::get_check_for_nan(vec, file_name);
          for(size_t i = 0; i < vector_size; i++)
               ok &= vec[i] == x[i];
     }

     // now do calculation without an exception
     f.check_for_nan(false);
     y = f.Forward(0, x);
     ok &= CppAD::NearEqual(y[0], std::sqrt(x[0]), eps, eps);
     ok &= CppAD::isnan( y[1] );

     return ok;
}

Input File: example/general/check_for_nan.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
6: CppAD API Preprocessor Symbols

6.a: Purpose
The CppAD include files define preprocessor symbols, all of which begin with CPPAD_. Note that there are some old, deprecated preprocessor symbols that begin with CppAD. In this section we list all of the CppAD preprocessor symbols that are part of the CppAD Application Programming Interface (API).

6.b: Documented Here

6.b.a: CPPAD_DEBUG_AND_RELEASE
This flag is an exception because it is defined (or not) by the user when compiling programs that include CppAD source code. If it is defined, less error checking is done and the debug and release versions of CppAD can be mixed in the same program. Of particular note is that 8.23: thread_alloc does less error checking. For programs that do a lot of memory allocation, this can be a significant time savings when NDEBUG is defined.

6.b.b: CPPAD_NULL
Is the null pointer used by CppAD. In C++98 the value zero was often used for this purpose; in C++11 it is the value nullptr.

6.b.c: CPPAD_PACKAGE_STRING
Is a const char* representation of this version of CppAD.

6.b.d: CPPAD_USE_CPLUSPLUS_2011
This preprocessor symbol has the value 1 if C++11 features are being used by CppAD. Otherwise it has the value zero.
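
The following minimal sketch exercises the symbols documented above:

# include <cppad/cppad.hpp>
# include <iostream>

int main(void)
{     // version string for this copy of CppAD
     std::cout << "version = " << CPPAD_PACKAGE_STRING << "\n";
     // one if C++11 features are in use, zero otherwise
     std::cout << "c++11   = " << CPPAD_USE_CPLUSPLUS_2011 << "\n";
     // null pointer: nullptr in C++11, zero in C++98
     double* ptr = CPPAD_NULL;
     return ptr == CPPAD_NULL ? 0 : 1;
}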

6.c: Documented Elsewhere
4.5.3.k: CPPAD_BOOL_BINARY
4.5.3.g: CPPAD_BOOL_UNARY
4.4.5: CPPAD_DISCRETE_FUNCTION
7.b: CPPAD_MAX_NUM_THREADS
4.7.6.b: CPPAD_NUMERIC_LIMITS
4.7.5.c: CPPAD_STANDARD_MATH_UNARY
2.2.r: CPPAD_TAPE_ADDR_TYPE
2.2.q: CPPAD_TAPE_ID_TYPE
10.5: CPPAD_TESTVECTOR
4.7.7.b: CPPAD_TO_STRING

6.d: Deprecated
4.4.5.n: CppADCreateDiscrete
12.8.9.a: CppADvector
12.8.9: CPPAD_TEST_VECTOR
12.8.5.k.a: CPPAD_TRACK_NEW_VEC
12.8.5.l.a: CPPAD_TRACK_DEL_VEC
12.8.5.m.a: CPPAD_TRACK_EXTEND
12.8.5.n.a: CPPAD_TRACK_COUNT
12.8.11: CPPAD_USER_ATOMIC
12.8.5.k.b: CppADTrackNewVec
12.8.5.l.b: CppADTrackDelVec
12.8.5.m.b: CppADTrackExtend
12.8.5.n.b: CppADTrackCount

Input File: omh/preprocessor.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
7: Using CppAD in a Multi-Threading Environment

7.a: Purpose
Extra steps and care must be taken to use CppAD in 8.23.4: parallel execution mode. This section collects this information in one place.

7.b: CPPAD_MAX_NUM_THREADS
The value CPPAD_MAX_NUM_THREADS is an absolute maximum for the number of threads that CppAD should support. If this preprocessor symbol is defined before including any CppAD header files, it must be an integer greater than or equal to one. Otherwise, 2.2.p: cppad_max_num_threads is used to define this preprocessor symbol. Note that the minimum allowable value for cppad_max_num_threads is 4; i.e., you can only get smaller values for CPPAD_MAX_NUM_THREADS by defining it before including the CppAD header files.
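
For example, a program could lower this maximum as follows (a sketch; the value 8 is an arbitrary choice greater than or equal to one):

     // must come before including any CppAD header file
     # define CPPAD_MAX_NUM_THREADS 8
     # include <cppad/cppad.hpp>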

7.c: parallel_setup
Using any of the following routines in a multi-threading environment requires that 8.23.2: thread_alloc::parallel_setup has been completed: 8.22.n: CppAD::vector , 8.10.f: CheckSimpleVector , 8.8.d: CheckNumericType , 7.1: parallel_ad .

7.d: hold_memory
Memory allocation should be much faster after calling hold_memory with 8.23.9.c: value equal to true. This may even be true if there is only one thread.

7.e: Parallel AD
One must first call 8.23.2: thread_alloc::parallel_setup and then call 7.1: parallel_ad before using AD types in 8.23.4: parallel execution mode.
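
The following is a minimal OpenMP-based sketch of this setup; in_parallel and thread_number are user-supplied callbacks (compare with the simple_ad examples below):

# include <cppad/cppad.hpp>
# include <omp.h>

namespace {
     // informs CppAD whether execution is currently in parallel mode
     bool in_parallel(void)
     {     return omp_in_parallel() != 0; }
     // informs CppAD of the current thread number
     size_t thread_number(void)
     {     return static_cast<size_t>( omp_get_thread_num() ); }
}
void setup(size_t num_threads)
{     // must precede any use of AD<double> in parallel mode
     CppAD::thread_alloc::parallel_setup(
          num_threads, in_parallel, thread_number
     );
     // hold onto memory so that allocation is fast (see hold_memory above)
     CppAD::thread_alloc::hold_memory(true);
     // extra tape setup for the AD<double> type
     CppAD::parallel_ad<double>();
}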

7.f: Initialization
The following routines must be called at least once before being used in parallel mode: 8.10.f: CheckSimpleVector , 8.8.d: CheckNumericType , 4.4.5.l: discrete functions , 8.18.m: Rosen34 , 8.17.n: Runge45 .

7.g: Same Thread
Some operations must be performed by the same thread: 5.1.2.j: ADFun , 5.1.1: Independent , 5.1.3: Dependent .

7.h: Parallel Prohibited
The following routine cannot be called in parallel mode: 8.1.b.a: ErrorHandler constructor .

7.i: Contents
parallel_ad: 7.1: Enable AD Calculations During Parallel Mode
thread_test.cpp: 7.2: Run Multi-Threading Examples and Speed Tests

Input File: omh/multi_thread.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
7.1: Enable AD Calculations During Parallel Mode

7.1.a: Syntax
parallel_ad<Base>()

7.1.b: Purpose
The function parallel_ad<Base>() must be called before any AD<Base> objects are used in 8.23.4: parallel mode. In addition, if this routine is called after one is done using parallel mode, it will free extra memory used to keep track of the multiple AD<Base> tapes required for parallel execution.

7.1.c: Discussion
By default, for each AD<Base> class there is only one tape that records 12.4.b: AD of Base operations. This tape is a global variable and hence it cannot be used by multiple threads at the same time. The 8.23.2: parallel_setup function informs CppAD of the maximum number of threads that can be active in parallel mode. This routine does extra setup (and teardown) for the particular Base type.

7.1.d: CheckSimpleVector
This routine has the side effect of calling the routines
     CheckSimpleVector< Type, CppAD::vector<Type> >()
where Type is Base and AD<Base> .

7.1.e: Example
The files 7.2.11.1: team_openmp.cpp , 7.2.11.2: team_bthread.cpp , and 7.2.11.3: team_pthread.cpp , contain examples and tests that implement this function.

7.1.f: Restriction
This routine cannot be called in parallel mode or while there is a tape recording AD<Base> operations.
Input File: cppad/core/parallel_ad.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
7.2: Run Multi-Threading Examples and Speed Tests

7.2.a: Purpose
Runs the CppAD multi-threading examples and timing tests:

7.2.b: build
We use build for the directory where you run the 2.2: cmake command.

7.2.c: threading
If the 2.2: cmake command output indicates that bthread, pthread, or openmp is available, you can run the program below with threading equal to bthread, pthread, or openmp respectively.

7.2.d: program
We use the notation program for
      example_multi_thread_threading

7.2.e: Running Tests
You can build this program and run the default version of its test parameters by executing the following commands:
     cd build
     make check_program
After this operation, in the directory
     build/example/multi_thread/threading
you can execute the following commands:
     ./program a11c
     ./program simple_ad
     ./program team_example
     ./program harmonic     test_time max_threads mega_sum
     ./program multi_newton test_time max_threads \
          num_zero num_sub num_sum use_ad

7.2.f: a11c
The examples 7.2.1: a11c_openmp.cpp , 7.2.2: a11c_bthread.cpp , and 7.2.3: a11c_pthread.cpp demonstrate simple multi-threading, without algorithmic differentiation.

7.2.g: simple_ad
The examples 7.2.4: simple_ad_openmp.cpp , 7.2.5: simple_ad_bthread.cpp , and 7.2.6: simple_ad_pthread.cpp demonstrate simple multi-threading, with algorithmic differentiation, using OpenMP, boost threads and pthreads respectively.

7.2.h: team_example
The 7.2.7: team_example.cpp routine demonstrates simple multi-threading with algorithmic differentiation and using a 7.2.11: team of threads .

7.2.i: harmonic
The 7.2.8.6: harmonic_time routine performs a timing test for a multi-threading example without algorithmic differentiation using a team of threads.

7.2.i.a: test_time
Is the minimum amount of wall clock time that the test should take. The number of repeats for the test will be increased until this time is reached. The reported time is the total wall clock time divided by the number of repeats.

7.2.i.b: max_threads
The argument max_threads is a non-negative integer specifying the maximum number of threads to use for the test. The specified test is run with the following number of threads:
     num_threads = 0 , ... , max_threads
The value of zero corresponds to not using the multi-threading system.

7.2.i.c: mega_sum
The command line argument mega_sum is an integer greater than or equal to one and has the same meaning as in 7.2.8.6.h: harmonic_time .

7.2.j: multi_newton
The 7.2.10.6: multi_newton_time routine performs a timing test for a multi-threading example with algorithmic differentiation using a team of threads.

7.2.j.a: test_time
Is the minimum amount of wall clock time that the test should take. The number of repeats for the test will be increased until this time is reached. The reported time is the total wall clock time divided by the number of repeats.

7.2.j.b: max_threads
The argument max_threads is a non-negative integer specifying the maximum number of threads to use for the test. The specified test is run with the following number of threads:
     num_threads = 0 , ... , max_threads
The value of zero corresponds to not using the multi-threading system.

7.2.j.c: num_zero
The command line argument num_zero is an integer greater than or equal to two and has the same meaning as in 7.2.10.6.h: multi_newton_time .

7.2.j.d: num_sub
The command line argument num_sub is an integer greater than or equal to one and has the same meaning as in 7.2.10.6.i: multi_newton_time .

7.2.j.e: num_sum
The command line argument num_sum is an integer greater than or equal to one and has the same meaning as in 7.2.10.6.j: multi_newton_time .

7.2.j.f: use_ad
The command line argument use_ad is either true or false and has the same meaning as in 7.2.10.6.k: multi_newton_time .

7.2.k: Team Implementations
The following routines are used to implement the specific threading systems through the common interface 7.2.11: team_thread.hpp :
7.2.11.1: team_openmp.cpp OpenMP Implementation of a Team of AD Threads
7.2.11.2: team_bthread.cpp Boost Thread Implementation of a Team of AD Threads
7.2.11.3: team_pthread.cpp Pthread Implementation of a Team of AD Threads

7.2.l: Source


# include <cppad/cppad.hpp>
# include <cmath>
# include <cstdlib>
# include <cstring>
# include <ctime>
# include "team_thread.hpp"
# include "team_example.hpp"
# include "harmonic.hpp"
# include "multi_atomic.hpp"
# include "multi_newton.hpp"

extern bool a11c(void);
extern bool simple_ad(void);

namespace {
     size_t arg2size_t(
          const char* arg       ,
          int limit             ,
          const char* error_msg )
     {     int i = std::atoi(arg);
          if( i >= limit )
               return size_t(i);
          std::cerr << "value = " << i << std::endl;
          std::cerr << error_msg << std::endl;
          exit(1);
     }
     double arg2double(
          const char* arg       ,
          double limit          ,
          const char* error_msg )
     {     double d = std::atof(arg);
          if( d >= limit )
               return d;
          std::cerr << "value = " << d << std::endl;
          std::cerr << error_msg << std::endl;
          exit(1);
     }
}

int main(int argc, char *argv[])
{     using CppAD::thread_alloc;
     bool ok         = true;
     using std::cout;
     using std::endl;

     // command line usage message
     const char* usage =
     "./<thread>_test a11c\n"
     "./<thread>_test simple_ad\n"
     "./<thread>_test team_example\n"
     "./<thread>_test harmonic     test_time max_threads mega_sum\n"
     "./<thread>_test multi_atomic test_time max_threads num_solve\n"
     "./<thread>_test multi_newton test_time max_threads \\\n"
     "     num_zero num_sub num_sum use_ad\\\n"
     "where <thread> is bthread, openmp, or pthread";

     // command line argument values (assign values to avoid compiler warnings)
     size_t num_zero=0, num_sub=0, num_sum=0;
     bool use_ad=true;

     // put the date and time in the output file
     std::time_t rawtime;
     std::time( &rawtime );
     const char* gmt = std::asctime( std::gmtime( &rawtime ) );
     size_t len = size_t( std::strlen(gmt) );
     cout << "gmtime        = '";
     for(size_t i = 0; i < len; i++)
          if( gmt[i] != '\n' ) cout << gmt[i];
     cout << "';" << endl;

     // CppAD version number
     cout << "cppad_version = '" << CPPAD_PACKAGE_STRING << "';" << endl;

     // put the team name in the output file
     cout << "team_name     = '" << team_name() << "';" << endl;

     // print command line as valid matlab/octave
     cout << "command       = '" << argv[0];
     for(int i = 1; i < argc; i++)
          cout << " " << argv[i];
     cout << "';" << endl;

     ok = false;
     const char* test_name = "";
     if( argc > 1 )
          test_name = *++argv;
     bool run_a11c         = std::strcmp(test_name, "a11c")         == 0;
     bool run_simple_ad    = std::strcmp(test_name, "simple_ad")    == 0;
     bool run_team_example = std::strcmp(test_name, "team_example") == 0;
     bool run_harmonic     = std::strcmp(test_name, "harmonic")     == 0;
     bool run_multi_atomic = std::strcmp(test_name, "multi_atomic") == 0;
     bool run_multi_newton = std::strcmp(test_name, "multi_newton") == 0;
     if( run_a11c || run_simple_ad || run_team_example )
          ok = (argc == 2);
     else if( run_harmonic || run_multi_atomic )
          ok = (argc == 5);
     else if( run_multi_newton )
          ok = (argc == 8);
     if( ! ok )
     {     std::cerr << "test_name     = " << test_name << endl;
          std::cerr << "argc          = " << argc      << endl;
          std::cerr << usage << endl;
          exit(1);
     }
     if( run_a11c || run_simple_ad || run_team_example )
     {     if( run_a11c )
               ok        = a11c();
          else if( run_simple_ad )
               ok        = simple_ad();
          else     ok        = team_example();
          if( thread_alloc::free_all() )
               cout << "free_all      = true;"  << endl;
          else
          {     ok = false;
               cout << "free_all      = false;" << endl;
          }
          if( ok )
               cout << "OK            = true;"  << endl;
          else cout << "OK            = false;" << endl;
          return ! ok;
     }

     // test_time
     double test_time = arg2double( *++argv, 0.,
          "run: test_time is less than zero"
     );

     // max_threads
     size_t max_threads = arg2size_t( *++argv, 0,
          "run: max_threads is less than zero"
     );

     size_t mega_sum  = 0; // assignment to avoid compiler warning
     size_t num_solve = 0;
     if( run_harmonic )
     {     // mega_sum
          mega_sum = arg2size_t( *++argv, 1,
               "run: mega_sum is less than one"
          );
     }
     else if( run_multi_atomic )
     {     // num_solve
          num_solve = arg2size_t( *++argv, 1,
               "run: num_solve is less than one"
          );
     }
     else
     {     ok &= run_multi_newton;

          // num_zero
          num_zero = arg2size_t( *++argv, 2,
               "run: num_zero is less than two"
          );

          // num_sub
          num_sub = arg2size_t( *++argv, 1,
               "run: num_sub is less than one"
          );

          // num_sum
          num_sum = arg2size_t( *++argv, 1,
               "run: num_sum is less than one"
          );

          // use_ad
          ++argv;
          if( std::strcmp(*argv, "true") == 0 )
               use_ad = true;
          else if( std::strcmp(*argv, "false") == 0 )
               use_ad = false;
          else
          {     std::cerr << "run: use_ad = '" << *argv;
               std::cerr << "' is not true or false" << endl;
               exit(1);
          }
     }

     // run the test for each number of threads
     cout << "time_all  = [" << endl;
     for(size_t num_threads = 0; num_threads <= max_threads; num_threads++)
     {     double time_out;

          // run the requested test
          if( run_harmonic ) ok &=
               harmonic_time(time_out, test_time, num_threads, mega_sum);
          else if( run_multi_atomic ) ok &=
               multi_atomic_time(time_out, test_time, num_threads, num_solve);
          else
          {     ok &= run_multi_newton;
               ok &= multi_newton_time(
                    time_out                ,
                    test_time               ,
                    num_threads             ,
                    num_zero                ,
                    num_sub                 ,
                    num_sum                 ,
                    use_ad
               );
          }
          // time_out
          cout << "\t" << time_out << " % ";
          // num_threads
          if( num_threads == 0 )
               cout << "no threading" << endl;
          else     cout << num_threads << " threads" << endl;
     }
     cout << "];" << endl;
     //
     if( thread_alloc::free_all() )
          cout << "free_all      = true;"  << endl;
     else
     {     ok = false;
          cout << "free_all      = false;" << endl;
     }
     if( ok )
          cout << "OK            = true;"  << endl;
     else cout << "OK            = false;" << endl;

     return  ! ok;
}

Input File: example/multi_thread/thread_test.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
7.2.1: A Simple OpenMP Example and Test

7.2.1.a: Purpose
This example just demonstrates OpenMP and does not use CppAD at all.

7.2.1.b: Source Code

# include <omp.h>
# include <limits>
# include <cmath>
# include <cassert>
# define NUMBER_THREADS 4

namespace {
     // Beginning of Example A.1.1.1c of OpenMP 2.5 standard document ---------
     void a1(int n, float *a, float *b)
     {     int i;
     # pragma omp parallel for
          for(i = 1; i < n; i++) /* i is private by default */
          {     assert( omp_get_num_threads() == NUMBER_THREADS );
               b[i] = (a[i] + a[i-1]) / float(2);
          }
     }
     // End of Example A.1.1.1c of OpenMP 2.5 standard document ---------------
}
bool a11c(void)
{     bool ok = true;

     // Test setup
     int i, n = 1000;
     float *a = new float[n];
     float *b = new float[n];
     for(i = 0; i < n; i++)
          a[i] = float(i);

     int n_thread = NUMBER_THREADS;   // number of threads in parallel regions
     omp_set_dynamic(0);              // turn off dynamic thread adjustment
     omp_set_num_threads(n_thread);   // set the number of threads

     a1(n, a, b);

     // check the result
     float eps = float(100) * std::numeric_limits<float>::epsilon();
     for(i = 1; i < n ; i++)
          ok &= std::fabs( (float(2) * b[i] - a[i] - a[i-1]) / b[i] ) <= eps;

     delete [] a;
     delete [] b;

     return ok;
}

Input File: example/multi_thread/openmp/a11c_openmp.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
7.2.2: A Simple Boost Thread Example and Test

7.2.2.a: Purpose
This example just demonstrates Boost threads and does not use CppAD at all.

7.2.2.b: Source Code

# include <boost/thread.hpp>
# include <limits>
# include <cmath>
# include <cassert>
# define NUMBER_THREADS 4

namespace { // Begin empty namespace
     class worker_t
     {
     private:
          int    n_;
          float* a_;
          float* b_;
     public:
          void setup(size_t n, float* a, float* b)
          {     n_ = static_cast<int>(n);
               a_ = a;
               b_ = b;
          }
          // Beginning of Example A.1.1.1c of OpenMP 2.5 standard document
          void a1(int n, float *a, float *b)
          {     int i;
               // for some reason this function is missing on some systems
               // assert( bthread_is_multithreaded_np() > 0 );
               for(i = 1; i < n; i++)
                    b[i] = (a[i] + a[i-1]) / 2.0f;
               return;
          }
          // End of Example A.1.1.1c of OpenMP 2.5 standard document
          void operator()()
          {     a1(n_, a_, b_); }
     };
}

bool a11c(void)
{     bool ok = true;

     // Test setup
     size_t i, j, n_total = 10;
     float *a = new float[n_total];
     float *b = new float[n_total];
     for(i = 0; i < n_total; i++)
          a[i] = float(i);

     // number of threads
     size_t number_threads = NUMBER_THREADS;

     // set of workers
     worker_t worker[NUMBER_THREADS];
     // threads for each worker
     boost::thread* bthread[NUMBER_THREADS];

     // Break the work up into sub work for each thread
     size_t  n     = n_total / number_threads;
     size_t  n_tmp = n;
     float*  a_tmp = a;
     float*  b_tmp = b;
     worker[0].setup(n_tmp, a_tmp, b_tmp);
     for(j = 1; j < number_threads; j++)
     {     n_tmp = n + 1;
          a_tmp = a_tmp + n - 1;
          b_tmp = b_tmp + n - 1;
          if( j == (number_threads - 1) )
               n_tmp = n_total - j * n + 1;

          worker[j].setup(n_tmp, a_tmp, b_tmp);

          // create this thread
          bthread[j] = new boost::thread(worker[j]);
     }

     // do this thread's portion of the work
     worker[0]();

     // wait for other threads to finish
     for(j = 1; j < number_threads; j++)
     {     bthread[j]->join();
          delete bthread[j];
     }

     // check the result
     float eps = 100.f * std::numeric_limits<float>::epsilon();
     for(i = 1; i < n ; i++)
          ok &= std::fabs( (2. * b[i] - a[i] - a[i-1]) / b[i] ) <= eps;

     delete [] a;
     delete [] b;

     return ok;
}

Input File: example/multi_thread/bthread/a11c_bthread.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
7.2.3: A Simple Parallel Pthread Example and Test

7.2.3.a: Purpose
This example just demonstrates pthreads and does not use CppAD at all.

7.2.3.b: Source Code

# include <pthread.h>
# include <limits>
# include <cmath>
# include <cassert>

// define CPPAD_NULL
# include <cppad/configure.hpp>
# if CPPAD_USE_CPLUSPLUS_2011
# define CPPAD_NULL nullptr
# else
# define CPPAD_NULL 0
# endif
//
# define NUMBER_THREADS 4

# ifdef NDEBUG
# define CHECK_ZERO(expression) expression
# else
# define CHECK_ZERO(expression) assert( expression == 0 );
# endif
namespace {
     // Beginning of Example A.1.1.1c of OpenMP 2.5 standard document ---------
     void a1(int n, float *a, float *b)
     {     int i;
          // for some reason this function is missing on some systems
          // assert( pthread_is_multithreaded_np() > 0 );
          for(i = 1; i < n; i++)
               b[i] = (a[i] + a[i-1]) / 2.0f;
          return;
     }
     // End of Example A.1.1.1c of OpenMP 2.5 standard document ---------------
     struct start_arg { int  n; float* a; float* b; };
     void* start_routine(void* arg_vptr)
     {     start_arg* arg = static_cast<start_arg*>( arg_vptr );
          a1(arg->n, arg->a, arg->b);

          void* no_status = CPPAD_NULL;
          pthread_exit(no_status);

          return no_status;
     }
}

bool a11c(void)
{     bool ok = true;

     // Test setup
     int i, j, n_total = 10;
     float *a = new float[n_total];
     float *b = new float[n_total];
     for(i = 0; i < n_total; i++)
          a[i] = float(i);

     // number of threads
     int n_thread = NUMBER_THREADS;
     // the threads
     pthread_t thread[NUMBER_THREADS];
     // arguments to start_routine
     struct start_arg arg[NUMBER_THREADS];
     // attr
     pthread_attr_t attr;
     CHECK_ZERO( pthread_attr_init( &attr ) );
     CHECK_ZERO( pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE) );
     //
     // Break the work up into sub work for each thread
     int n = n_total / n_thread;
     arg[0].n = n;
     arg[0].a = a;
     arg[0].b = b;
     for(j = 1; j < n_thread; j++)
     {     arg[j].n = n + 1;
          arg[j].a = arg[j-1].a + n - 1;
          arg[j].b = arg[j-1].b + n - 1;
          if( j == (n_thread - 1) )
               arg[j].n = n_total - j * n + 1;
     }
     for(j = 0; j < n_thread; j++)
     {     // inform each thread of which block it is working on
          void* arg_vptr = static_cast<void*>( &arg[j] );
          CHECK_ZERO( pthread_create(
               &thread[j], &attr, start_routine, arg_vptr
          ) );
     }
     for(j = 0; j < n_thread; j++)
     {     void* no_status = CPPAD_NULL;
          CHECK_ZERO( pthread_join(thread[j], &no_status) );
     }

     // check the result
     float eps = 100.0f * std::numeric_limits<float>::epsilon();
     for(i = 1; i < n ; i++)
          ok &= std::fabs( (2. * b[i] - a[i] - a[i-1]) / b[i] ) <= eps;

     delete [] a;
     delete [] b;

     return ok;
}

Input File: example/multi_thread/pthread/a11c_pthread.cpp
7.2.4: A Simple OpenMP AD: Example and Test

7.2.4.a: Purpose
This example demonstrates how CppAD can be used in an OpenMP multi-threading environment.
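Before any worker threads execute, CppAD must be told how to detect parallel mode and how to identify the current thread, and the AD<double> static data must be initialized. A minimal sketch of that call sequence, distilled from the run_all_workers routine in the source below:

     // in_parallel and thread_number are the routines defined below
     thread_alloc::parallel_setup(num_threads, in_parallel, thread_number);
     // have thread_alloc hold onto memory for fast re-allocation
     thread_alloc::hold_memory(true);
     // initialize AD<double> static data while still in sequential mode
     CppAD::parallel_ad<double>();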

7.2.4.b: Source Code

# include <cppad/cppad.hpp>
# include <omp.h>
# define NUMBER_THREADS  4

namespace {
     // structure with problem specific information
     typedef struct {
          // function argument (worker input)
          double          x;
          // This structure would also have return information in it,
          // but this example only returns the ok flag
     } problem_specific;
     // =====================================================================
     // General purpose code you can copy to your application
     // =====================================================================
     using CppAD::thread_alloc;
     // ------------------------------------------------------------------
     // used to inform CppAD when we are in parallel execution mode
     bool in_parallel(void)
     {     return omp_in_parallel() != 0; }
     // ------------------------------------------------------------------
     // used to inform CppAD of the current thread number
     size_t thread_number(void)
     {     return static_cast<size_t>( omp_get_thread_num() ); }
     // ------------------------------------------------------------------
     // structure with information for one thread
     typedef struct {
          // false if an error occurs, true otherwise (worker output)
          bool               ok;
     } thread_one_t;
     // vector with information for all threads
     thread_one_t thread_all_[NUMBER_THREADS];
     // ------------------------------------------------------------------
     // function that calls all the workers
     bool worker(problem_specific* info);
     bool run_all_workers(size_t num_threads, problem_specific* info_all[])
     {     bool ok = true;

          // initialize thread_all_
          int thread_num, int_num_threads = int(num_threads);
          for(thread_num = 0; thread_num < int_num_threads; thread_num++)
          {     // initialize as false to make sure worker gets called for all threads
               thread_all_[thread_num].ok         = false;
          }

          // turn off dynamic thread adjustment
          omp_set_dynamic(0);

          // set the number of OpenMP threads
          omp_set_num_threads( int_num_threads );

          // setup for using CppAD::AD<double> in parallel
          thread_alloc::parallel_setup(
               num_threads, in_parallel, thread_number
          );
          thread_alloc::hold_memory(true);
          CppAD::parallel_ad<double>();

          // execute worker in parallel
# pragma omp parallel for
     for(thread_num = 0; thread_num < int_num_threads; thread_num++)
          thread_all_[thread_num].ok = worker(info_all[thread_num]);
// end omp parallel for

          // set the number of OpenMP threads to one
          omp_set_num_threads(1);

          // now inform CppAD that there is only one thread
          thread_alloc::parallel_setup(1, CPPAD_NULL, CPPAD_NULL);
          thread_alloc::hold_memory(false);
          CppAD::parallel_ad<double>();

          // check the ok flag returned by the calls to worker by the other threads
          for(thread_num = 1; thread_num < int_num_threads; thread_num++)
               ok &= thread_all_[thread_num].ok;

          return ok;
     }
     // =====================================================================
     // End of General purpose code
     // =====================================================================
     // function that does the work for one thread
     bool worker(problem_specific* info)
     {     using CppAD::NearEqual;
          using CppAD::AD;
          bool ok = true;

          // CppAD::vector uses the CppAD fast multi-threading allocator
          CppAD::vector< AD<double> > ax(1), ay(1);
          ax[0] = info->x;
          Independent(ax);
          ay[0] = sqrt( ax[0] * ax[0] );
          CppAD::ADFun<double> f(ax, ay);

          // Check function value corresponds to the identity
          double eps = 10. * CppAD::numeric_limits<double>::epsilon();
          ok        &= NearEqual(ay[0], ax[0], eps, eps);

          // Check derivative value corresponds to the identity.
          CppAD::vector<double> d_x(1), d_y(1);
          d_x[0] = 1.;
          d_y    = f.Forward(1, d_x);
          ok    &= NearEqual(d_y[0], 1., eps, eps);

          return ok;
     }
}
bool simple_ad(void)
{     bool ok = true;
     size_t num_threads = NUMBER_THREADS;

     // Check that no memory is in use or available at start
     // (using thread_alloc in sequential mode)
     size_t thread_num;
     for(thread_num = 0; thread_num < num_threads; thread_num++)
     {     ok &= thread_alloc::inuse(thread_num) == 0;
          ok &= thread_alloc::available(thread_num) == 0;
     }

     // initialize info_all
     problem_specific *info, *info_all[NUMBER_THREADS];
     for(thread_num = 0; thread_num < num_threads; thread_num++)
     {     // problem specific information
          size_t min_bytes(sizeof(info)), cap_bytes;
          void*  v_ptr = thread_alloc::get_memory(min_bytes, cap_bytes);
          info         = static_cast<problem_specific*>(v_ptr);
          info->x      = double(thread_num) + 1.;
          info_all[thread_num] = info;
     }

     ok &= run_all_workers(num_threads, info_all);

     // go down so that we free memory for the other threads before memory for the master
     thread_num = num_threads;
     while(thread_num--)
     {     // delete problem specific information
          void* v_ptr = static_cast<void*>( info_all[thread_num] );
          thread_alloc::return_memory( v_ptr );
          // check that there is no longer any memory inuse by this thread
          ok &= thread_alloc::inuse(thread_num) == 0;
          // return all memory being held for future use by this thread
          thread_alloc::free_available(thread_num);
     }

     return ok;
}

Input File: example/multi_thread/openmp/simple_ad_openmp.cpp
7.2.5: A Simple Boost Threading AD: Example and Test

7.2.5.a: Purpose
This example demonstrates how CppAD can be used in a boost multi-threading environment.

7.2.5.b: Source Code

# include <cppad/cppad.hpp>
# include <boost/thread.hpp>
# define NUMBER_THREADS  4

namespace {
     // structure with problem specific information
     typedef struct {
          // function argument (worker input)
          double          x;
          // This structure would also have return information in it,
          // but this example only returns the ok flag
     } problem_specific;
     // =====================================================================
     // General purpose code you can copy to your application
     // =====================================================================
     using CppAD::thread_alloc;
     // ------------------------------------------------------------------
     // thread specific pointer to the thread number (initialize as null)
     void cleanup(size_t*)
     {     return; }
     boost::thread_specific_ptr<size_t> thread_num_ptr_(cleanup);

     // Are we in sequential mode; i.e., are the other threads waiting for
     // the master thread to set up the next job?
     bool sequential_execution_ = true;

     // used to inform CppAD when we are in parallel execution mode
     bool in_parallel(void)
     {     return ! sequential_execution_; }

     // thread_number() is used to inform CppAD of the current thread number
     size_t thread_number(void)
     {     // return thread_all_[thread_num].thread_num
          return *thread_num_ptr_.get();
     }
     // ---------------------------------------------------------------------
     // structure with information for one thread
     typedef struct {
          // number for this thread (thread specific points here)
          size_t            thread_num;
          // pointer to this boost thread
          boost::thread*    bthread;
          // false if an error occurs, true otherwise
          bool              ok;
          // pointer to problem specific information
          problem_specific* info;
     } thread_one_t;
     // vector with information for all threads
     thread_one_t thread_all_[NUMBER_THREADS];
     // --------------------------------------------------------------------
     // function that initializes the thread and then calls actual worker
     bool worker(size_t thread_num, problem_specific* info);
     void run_one_worker(size_t thread_num)
     {     bool ok = true;

          // The master thread should call worker directly
          ok &= thread_num != 0;

          // This is not the master thread, so thread specific information
          // has not yet been set. We use it to inform other routines
          // of this thread's number.
          // We must do this before calling thread_alloc::thread_num().
          thread_num_ptr_.reset(& thread_all_[thread_num].thread_num);

          // Check the value of thread_alloc::thread_num().
          ok &= thread_num == thread_alloc::thread_num();

          // Now do the work
          ok &= worker(thread_num, thread_all_[thread_num].info);

          // pass back ok information for this thread
          thread_all_[thread_num].ok = ok;

          // no return value
          return;
     }
     // ----------------------------------------------------------------------
     // function that calls all the workers
     bool run_all_workers(size_t num_threads, problem_specific* info_all[])
     {     bool ok = true;

          // initialize thread_all_ (except for bthread)
          size_t thread_num;
          for(thread_num = 0; thread_num < num_threads; thread_num++)
          {     // pointed to by thread specific info for this thread
               thread_all_[thread_num].thread_num = thread_num;
               // initialize as false to make sure worker gets called by other
               // threads. Note that thread_all_[0].ok does not get used
               thread_all_[thread_num].ok         = false;
               // problem specific information
               thread_all_[thread_num].info       = info_all[thread_num];
          }

          // master bthread number
          thread_num_ptr_.reset(& thread_all_[0].thread_num);

          // Now thread_number() has necessary information for this thread
          // (number zero), and while still in sequential mode,
          // call setup for using CppAD::AD<double> in parallel mode.
          thread_alloc::parallel_setup(
               num_threads, in_parallel, thread_number
          );
          thread_alloc::hold_memory(true);
          CppAD::parallel_ad<double>();

          // inform CppAD that we now may be in parallel execution mode
          sequential_execution_ = false;

          // The master thread is already running; we need to create
          // num_threads - 1 more threads
          thread_all_[0].bthread = CPPAD_NULL;
          for(thread_num = 1; thread_num < num_threads; thread_num++)
          {     // Create the thread with thread number equal to thread_num
               thread_all_[thread_num].bthread =
                    new boost::thread(run_one_worker, thread_num);
          }

          // now call worker for the master thread
          thread_num = thread_alloc::thread_num();
          ok &= thread_num == 0;
          ok &= worker(thread_num, thread_all_[thread_num].info);

          // now wait for the other threads to finish
          for(thread_num = 1; thread_num < num_threads; thread_num++)
          {     thread_all_[thread_num].bthread->join();
               delete thread_all_[thread_num].bthread;
               thread_all_[thread_num].bthread = CPPAD_NULL;
          }

          // Inform CppAD that we now are definitely back to sequential mode
          sequential_execution_ = true;

          // now inform CppAD that there is only one thread
          thread_alloc::parallel_setup(1, CPPAD_NULL, CPPAD_NULL);
          thread_alloc::hold_memory(false);
          CppAD::parallel_ad<double>();

          // check the ok flag returned by the calls to worker by the other threads
          for(thread_num = 1; thread_num < num_threads; thread_num++)
               ok &= thread_all_[thread_num].ok;

          return ok;
     }
     // =====================================================================
     // End of General purpose code
     // =====================================================================
     // function that does the work for one thread
     bool worker(size_t thread_num, problem_specific* info)
     {     bool ok = true;

          // CppAD::vector uses the CppAD fast multi-threading allocator
          CppAD::vector< CppAD::AD<double> > ax(1), ay(1);
          ax[0] = info->x;
          Independent(ax);
          ay[0] = sqrt( ax[0] * ax[0] );
          CppAD::ADFun<double> f(ax, ay);

          // Check function value corresponds to the identity
          double eps = 10. * CppAD::numeric_limits<double>::epsilon();
          ok        &= CppAD::NearEqual(ay[0], ax[0], eps, eps);

          // Check derivative value corresponds to the identity.
          CppAD::vector<double> d_x(1), d_y(1);
          d_x[0] = 1.;
          d_y    = f.Forward(1, d_x);
          ok    &= CppAD::NearEqual(d_y[0], 1., eps, eps);

          return ok;
     }
}
bool simple_ad(void)
{     bool ok = true;
     size_t num_threads = NUMBER_THREADS;

     // Check that no memory is in use or available at start
     // (using thread_alloc in sequential mode)
     size_t thread_num;
     for(thread_num = 0; thread_num < num_threads; thread_num++)
     {     ok &= thread_alloc::inuse(thread_num) == 0;
          ok &= thread_alloc::available(thread_num) == 0;
     }

     // initialize info_all
     problem_specific *info, *info_all[NUMBER_THREADS];
     for(thread_num = 0; thread_num < num_threads; thread_num++)
     {     // problem specific information
          size_t min_bytes(sizeof(info)), cap_bytes;
          void*  v_ptr = thread_alloc::get_memory(min_bytes, cap_bytes);
          info         = static_cast<problem_specific*>(v_ptr);
          info->x      = double(thread_num) + 1.;
          info_all[thread_num] = info;
     }

     ok &= run_all_workers(num_threads, info_all);

     // go down so that we free memory for the other threads before memory for the master
     thread_num = num_threads;
     while(thread_num--)
     {     // delete problem specific information
          void* v_ptr = static_cast<void*>( info_all[thread_num] );
          thread_alloc::return_memory( v_ptr );
          // check that there is no longer any memory inuse by this thread
          ok &= thread_alloc::inuse(thread_num) == 0;
          // return all memory being held for future use by this thread
          thread_alloc::free_available(thread_num);
     }

     return ok;
}

Input File: example/multi_thread/bthread/simple_ad_bthread.cpp
7.2.6: A Simple pthread AD: Example and Test

7.2.6.a: Purpose
This example demonstrates how CppAD can be used in a pthread multi-threading environment.

7.2.6.b: Source Code

# include <cppad/cppad.hpp>
# include <pthread.h>
# define NUMBER_THREADS  4

namespace {
     // structure with problem specific information
     typedef struct {
          // function argument (worker input)
          double          x;
          // This structure would also have return information in it,
          // but this example only returns the ok flag
     } problem_specific;
     // =====================================================================
     // General purpose code you can copy to your application
     // =====================================================================
     using CppAD::thread_alloc;
     // ------------------------------------------------------------------
     // key for accessing thread specific information
     pthread_key_t thread_specific_key_;

     // no need to destroy thread specific information
     void thread_specific_destructor(void* thread_num_vptr)
     {     return; }

     // Are we in sequential mode; i.e., are the other threads waiting for
     // the master thread to set up the next job?
     bool sequential_execution_ = true;

     // used to inform CppAD when we are in parallel execution mode
     bool in_parallel(void)
     {     return ! sequential_execution_; }

     // thread_number() is used to inform CppAD of the current thread number
     size_t thread_number(void)
     {     // get thread specific information
          void*   thread_num_vptr = pthread_getspecific(thread_specific_key_);
          size_t* thread_num_ptr  = static_cast<size_t*>(thread_num_vptr);
          size_t  thread_num      = *thread_num_ptr;
          return thread_num;
     }
     // ---------------------------------------------------------------------
     // structure with information for one thread
     typedef struct {
          // number for this thread (thread specific points here)
          size_t            thread_num;
          // pthread unique identifier for this thread
          pthread_t         pthread_id;
          // false if an error occurs, true otherwise
          bool              ok;
          // pointer to problem specific information
          problem_specific* info;
     } thread_one_t;
     // vector with information for all threads
     thread_one_t thread_all_[NUMBER_THREADS];
     // --------------------------------------------------------------------
     // function that initializes the thread and then calls the actual worker
     bool worker(size_t thread_num, problem_specific* info);
     void* run_one_worker(void* thread_num_vptr)
     {     bool ok = true;

          // thread_num for this thread
          size_t thread_num = *static_cast<size_t*>(thread_num_vptr);

          // The master thread should call worker directly
          ok &= thread_num != 0;

          // This is not the master thread, so thread specific information
          // has not yet been set. We use it to inform other routines
          // of this thread's number.
          // We must do this before calling thread_alloc::thread_num().
          int rc = pthread_setspecific(
               thread_specific_key_,
               thread_num_vptr
          );
          ok &= rc == 0;

          // check the value of thread_alloc::thread_num().
          ok &= thread_num == thread_alloc::thread_num();

          // Now do the work
          ok &= worker(thread_num, thread_all_[thread_num].info);

          // pass back ok information for this thread
          thread_all_[thread_num].ok = ok;

          // no return value
          return CPPAD_NULL;
     }
     // --------------------------------------------------------------------
     // function that calls all the workers
     bool run_all_workers(size_t num_threads, problem_specific* info_all[])
     {     bool ok = true;

          // initialize thread_all_ (except for pthread_id)
          size_t thread_num;
          for(thread_num = 0; thread_num < num_threads; thread_num++)
          {     // pointed to by thread specific info for this thread
               thread_all_[thread_num].thread_num = thread_num;
               // initialize as false to make sure worker gets called by other
               // threads. Note that thread_all_[0].ok does not get used
               thread_all_[thread_num].ok         = false;
               // problem specific information
               thread_all_[thread_num].info       = info_all[thread_num];
          }

          // master pthread_id
          thread_all_[0].pthread_id = pthread_self();

          // error flag for calls to pthread library
          int rc;

          // create a key for thread specific information
          rc = pthread_key_create(
               &thread_specific_key_, thread_specific_destructor
          );
          ok &= (rc == 0);

          // set thread specific information for this (master thread)
          void* thread_num_vptr = static_cast<void*>(
               &(thread_all_[0].thread_num)
          );
          rc = pthread_setspecific(thread_specific_key_, thread_num_vptr);
          ok &= (rc == 0);

          // Now thread_number() has necessary information for this thread
          // (number zero), and while still in sequential mode,
          // call setup for using CppAD::AD<double> in parallel mode.
          thread_alloc::parallel_setup(
               num_threads, in_parallel, thread_number
          );
          thread_alloc::hold_memory(true);
          CppAD::parallel_ad<double>();

          // inform CppAD that we now may be in parallel execution mode
          sequential_execution_ = false;

          // structure used to create the threads
          pthread_t       pthread_id;
          // default for pthread_attr_setdetachstate is PTHREAD_CREATE_JOINABLE
          pthread_attr_t* no_attr = CPPAD_NULL;

          // The master thread is already running; we need to create
          // num_threads - 1 more threads
          for(thread_num = 1; thread_num < num_threads; thread_num++)
          {     // Create the thread with thread number equal to thread_num
               thread_num_vptr = static_cast<void*> (
                    &(thread_all_[thread_num].thread_num)
               );
               rc = pthread_create(
                         &pthread_id ,
                         no_attr     ,
                         run_one_worker,
                         thread_num_vptr
               );
               thread_all_[thread_num].pthread_id = pthread_id;
               ok &= (rc == 0);
          }

          // now call worker for the master thread
          thread_num = thread_alloc::thread_num();
          ok &= thread_num == 0;
          ok &= worker(thread_num, thread_all_[thread_num].info);

          // now wait for the other threads to finish
          for(thread_num = 1; thread_num < num_threads; thread_num++)
          {     void* no_status = CPPAD_NULL;
               rc      = pthread_join(
                    thread_all_[thread_num].pthread_id, &no_status
               );
               ok &= (rc == 0);
          }

          // Inform CppAD that we now are definitely back to sequential mode
          sequential_execution_ = true;

          // now inform CppAD that there is only one thread
          thread_alloc::parallel_setup(1, CPPAD_NULL, CPPAD_NULL);
          thread_alloc::hold_memory(false);
          CppAD::parallel_ad<double>();

          // destroy the key for thread specific data
          pthread_key_delete(thread_specific_key_);

          // check the ok flag returned by the calls to worker by the other threads
          for(thread_num = 1; thread_num < num_threads; thread_num++)
               ok &= thread_all_[thread_num].ok;

          return ok;
     }
     // =====================================================================
     // End of General purpose code
     // =====================================================================
     // function that does the work for one thread
     bool worker(size_t thread_num, problem_specific* info)
     {     bool ok = true;

          // CppAD::vector uses the CppAD fast multi-threading allocator
          CppAD::vector< CppAD::AD<double> > ax(1), ay(1);
          ax[0] = info->x;
          Independent(ax);
          ay[0] = sqrt( ax[0] * ax[0] );
          CppAD::ADFun<double> f(ax, ay);

          // Check function value corresponds to the identity
          double eps = 10. * CppAD::numeric_limits<double>::epsilon();
          ok        &= CppAD::NearEqual(ay[0], ax[0], eps, eps);

          // Check derivative value corresponds to the identity.
          CppAD::vector<double> d_x(1), d_y(1);
          d_x[0] = 1.;
          d_y    = f.Forward(1, d_x);
          ok    &= CppAD::NearEqual(d_y[0], 1., eps, eps);

          return ok;
     }
}
bool simple_ad(void)
{     bool ok = true;
     size_t num_threads = NUMBER_THREADS;

     // Check that no memory is in use or available at start
     // (using thread_alloc in sequential mode)
     size_t thread_num;
     for(thread_num = 0; thread_num < num_threads; thread_num++)
     {     ok &= thread_alloc::inuse(thread_num) == 0;
          ok &= thread_alloc::available(thread_num) == 0;
     }

     // initialize info_all
     problem_specific *info, *info_all[NUMBER_THREADS];
     for(thread_num = 0; thread_num < num_threads; thread_num++)
     {     // problem specific information
          size_t min_bytes(sizeof(info)), cap_bytes;
          void*  v_ptr = thread_alloc::get_memory(min_bytes, cap_bytes);
          info         = static_cast<problem_specific*>(v_ptr);
          info->x      = double(thread_num) + 1.;
          info_all[thread_num] = info;
     }

     ok &= run_all_workers(num_threads, info_all);

     // go down so that we free memory for the other threads before memory for the master
     thread_num = num_threads;
     while(thread_num--)
     {     // delete problem specific information
          void* v_ptr = static_cast<void*>( info_all[thread_num] );
          thread_alloc::return_memory( v_ptr );
          // check that there is no longer any memory inuse by this thread
          ok &= thread_alloc::inuse(thread_num) == 0;
          // return all memory being held for future use by this thread
          thread_alloc::free_available(thread_num);
     }

     return ok;
}

Input File: example/multi_thread/pthread/simple_ad_pthread.cpp
7.2.7: Using a Team of AD Threads: Example and Test

7.2.7.a: Purpose
This example demonstrates how to use a team of threads with CppAD.

7.2.7.b: thread_team
The following three implementations of the 7.2.11: team_thread.hpp specifications are included; all of them support the team calling pattern sketched after this list:
7.2.11.1: team_openmp.cpp OpenMP Implementation of a Team of AD Threads
7.2.11.2: team_bthread.cpp Boost Thread Implementation of a Team of AD Threads
7.2.11.3: team_pthread.cpp Pthread Implementation of a Team of AD Threads
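
A minimal sketch of that pattern, as used by team_example in the source below:

     ok &= team_create(num_threads); // start num_threads - 1 worker threads
     ok &= team_work(worker);        // all threads, including master, run worker
     ok &= team_destroy();           // wait for and delete the worker threads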

7.2.7.c: Source Code

# include <cppad/cppad.hpp>
# include "team_thread.hpp"
# define NUMBER_THREADS  4

namespace {
     using CppAD::thread_alloc;

     // structure with information for one thread
     typedef struct {
          // function argument (worker input)
          double          x;
          // false if an error occurs, true otherwise (worker output)
          bool            ok;
     } work_one_t;
     // vector with information for all threads
     // (use pointers instead of values to avoid false sharing)
     work_one_t* work_all_[NUMBER_THREADS];
     // --------------------------------------------------------------------
     // function that does the work for one thread
     void worker(void)
     {     using CppAD::NearEqual;
          using CppAD::AD;
          bool ok = true;
          size_t thread_num = thread_alloc::thread_num();

          // CppAD::vector uses the CppAD fast multi-threading allocator
          CppAD::vector< AD<double> > ax(1), ay(1);
          ax[0] = work_all_[thread_num]->x;
          Independent(ax);
          ay[0] = sqrt( ax[0] * ax[0] );
          CppAD::ADFun<double> f(ax, ay);

          // Check function value corresponds to the identity
          double eps = 10. * CppAD::numeric_limits<double>::epsilon();
          ok        &= NearEqual(ay[0], ax[0], eps, eps);

          // Check derivative value corresponds to the identity.
          CppAD::vector<double> d_x(1), d_y(1);
          d_x[0] = 1.;
          d_y    = f.Forward(1, d_x);
          ok    &= NearEqual(d_y[0], 1., eps, eps);

          // pass back ok information for this thread
          work_all_[thread_num]->ok = ok;
     }
}

// This test routine is only called by the master thread (thread_num = 0).
bool team_example(void)
{     bool ok = true;

     size_t num_threads = NUMBER_THREADS;

     // Check that no memory is in use or available at start
     // (using thread_alloc in sequential mode)
     size_t thread_num;
     for(thread_num = 0; thread_num < num_threads; thread_num++)
     {     ok &= thread_alloc::inuse(thread_num) == 0;
          ok &= thread_alloc::available(thread_num) == 0;
     }

     // initialize work_all_
     for(thread_num = 0; thread_num < num_threads; thread_num++)
     {     // allocate separate memory for this thread to avoid false sharing
          size_t min_bytes(sizeof(work_one_t)), cap_bytes;
          void*  v_ptr = thread_alloc::get_memory(min_bytes, cap_bytes);
          work_all_[thread_num]     = static_cast<work_one_t*>(v_ptr);
          // in case this thread's worker does not get called
          work_all_[thread_num]->ok = false;
          // parameter that defines the work for this thread
          work_all_[thread_num]->x  = double(thread_num) + 1.;
     }

     ok &= team_create(num_threads);
     ok &= team_work(worker);
     ok &= team_destroy();

     // go down so that we free memory for the other threads before memory for the master
     thread_num = num_threads;
     while(thread_num--)
     {     // check that this thread was ok with the work it did
          ok &= work_all_[thread_num]->ok;
          // delete problem specific information
          void* v_ptr = static_cast<void*>( work_all_[thread_num] );
          thread_alloc::return_memory( v_ptr );
          // check that there is no longer any memory inuse by this thread
          // (for general applications, the master might still be using memory)
          ok &= thread_alloc::inuse(thread_num) == 0;
          // return all memory being held for future use by this thread
          thread_alloc::free_available(thread_num);
     }
     return ok;
}

Input File: example/multi_thread/team_example.cpp
7.2.8: Multi-Threading Harmonic Summation Example / Test

7.2.8.a: Source File
All of the routines below are located in the file
 
     example/multi_thread/harmonic.cpp

7.2.8.b: Contents
harmonic_common: 7.2.8.1: Common Variables Used by Multi-threading Sum of 1/i
harmonic_setup: 7.2.8.2: Set Up Multi-threading Sum of 1/i
harmonic_worker: 7.2.8.3: Do One Thread's Work for Sum of 1/i
harmonic_takedown: 7.2.8.4: Take Down Multi-threading Sum of 1/i
harmonic_sum: 7.2.8.5: Multi-Threaded Implementation of Summation of 1/i
harmonic_time: 7.2.8.6: Timing Test of Multi-Threaded Summation of 1/i

Input File: example/multi_thread/harmonic.omh
7.2.8.1: Common Variables Used by Multi-threading Sum of 1/i

7.2.8.1.a: Purpose
This source code contains the common include files, macro definitions, and variables that are used by the summation that defines the harmonic series @[@ 1 + 1/2 + 1/3 + ... + 1/n @]@

7.2.8.1.b: Source



# include <cppad/cppad.hpp>
# include "harmonic.hpp"
# include "team_thread.hpp"
# define MAX_NUMBER_THREADS 48

namespace {
     using CppAD::thread_alloc; // fast multi-threading memory allocator

     // Number of threads, set by previous call to harmonic_time
     // (zero means one thread with no multi-threading setup)
     size_t num_threads_ = 0;

     // value of mega_sum, set by previous call to harmonic_time.
     size_t mega_sum_;

     // structure with information for one thread
     typedef struct {
          // index to start summation at (worker input)
          // set by previous call to harmonic_setup
          size_t start;
          // index to end summation at (worker input)
          // set by previous call to harmonic_setup
          size_t stop;
          // summation for this thread
          // set by worker
          double sum;
          // false if an error occurs, true otherwise
          // set by worker
          bool   ok;
     } work_one_t;

     // vector with information for all threads
     // (use pointers instead of values to avoid false sharing)
     work_one_t* work_all_[MAX_NUMBER_THREADS];
}

Input File: example/multi_thread/harmonic.cpp
7.2.8.2: Set Up Multi-threading Sum of 1/i

7.2.8.2.a: Syntax
ok = harmonic_setup(num_sum)

7.2.8.2.b: Purpose
This routine does the setup for splitting the summation that defines the harmonic series @[@ 1 + 1/2 + 1/3 + ... + 1/n @]@ into separate parts for each thread. For example, with num_sum equal to 10 and four threads, the split points (num_sum * thread_num) / num_threads fall at indices 2, 5, and 7, so the threads sum over the index ranges [1, 2), [2, 5), [5, 7), and [7, 11).

7.2.8.2.c: Thread
It is assumed that this function is called by thread zero, and all the other threads are blocked (waiting).

7.2.8.2.d: num_sum
The argument num_sum has prototype
     size_t num_sum
It specifies the value of @(@ n @)@ in the summation.

7.2.8.2.e: Source

namespace {
bool harmonic_setup(size_t num_sum)
{     // sum = 1/num_sum + 1/(num_sum-1) + ... + 1
     size_t num_threads  = std::max(num_threads_, size_t(1));
     bool ok             = num_threads == thread_alloc::num_threads();
     ok                 &= thread_alloc::thread_num() == 0;
     ok                 &= num_sum >= num_threads;
     //
     for(size_t thread_num = 0; thread_num < num_threads; thread_num++)
     {     // allocate separate memory for this thread to avoid false sharing
          size_t min_bytes(sizeof(work_one_t)), cap_bytes;
          void* v_ptr = thread_alloc::get_memory(min_bytes, cap_bytes);
          work_all_[thread_num] = static_cast<work_one_t*>(v_ptr);
          //
          // in case this thread's worker does not get called
          work_all_[thread_num]->ok = false;
          // parameters that define the work for this and previous thread
          if( thread_num == 0 )
               work_all_[0]->start = 1;
          else
          {     size_t index  = (num_sum * thread_num) / num_threads;
               work_all_[thread_num-1]->stop = index;
               work_all_[thread_num]->start  = index;
          }
     }
     work_all_[num_threads-1]->stop = num_sum + 1;
     return ok;
}
}

Input File: example/multi_thread/harmonic.cpp
7.2.8.3: Do One Thread's Work for Sum of 1/i

7.2.8.3.a: Syntax
harmonic_worker()

7.2.8.3.b: Purpose
This routine computes one thread's part of the summation that defines the harmonic series:
     1/start + 1/(start+1) + ... + 1/(stop-1)

7.2.8.3.c: start
This is the value of the 7.2.8.1: harmonic_common information
     start = work_all_[thread_num]->start

7.2.8.3.d: stop
This is the value of the 7.2.8.1: harmonic_common information
     stop = work_all_[thread_num]->stop

7.2.8.3.e: thread_num
This is the number for the current thread; see 8.23.5: thread_num .

7.2.8.3.f: Source

namespace {
void harmonic_worker(void)
{     // sum =  1/(stop-1) + 1/(stop-2) + ... + 1/start
     size_t thread_num  = thread_alloc::thread_num();
     size_t num_threads = std::max(num_threads_, size_t(1));
     bool   ok          = thread_num < num_threads;
     size_t start       = work_all_[thread_num]->start;
     size_t stop        = work_all_[thread_num]->stop;
     double sum         = 0.;

     ok &= stop > start;
     size_t i = stop;
     while( i > start )
     {     i--;
          sum += 1. / double(i);
     }

     work_all_[thread_num]->sum = sum;
     work_all_[thread_num]->ok  = ok;
}
}

Input File: example/multi_thread/harmonic.cpp
7.2.8.4: Take Down Multi-threading Sum of 1/i

7.2.8.4.a: Syntax
ok = harmonic_takedown(sum)

7.2.8.4.b: Purpose
This routine does the takedown for splitting the summation that defines the harmonic series @[@ s = 1 + 1/2 + 1/3 + ... + 1/n @]@ into separate parts for each thread; see 7.2.8.2: harmonic_setup .

7.2.8.4.c: Thread
It is assumed that this function is called by thread zero, and all the other threads have completed their work and are blocked (waiting).

7.2.8.4.d: sum
This argument has prototype
     double& sum
The input value of the argument does not matter. Upon return it is the value of the summation; i.e. @(@ s @)@.

7.2.8.4.e: Source

namespace {
bool harmonic_takedown(double& sum)
{     // sum = 1/num_sum + 1/(num_sum-1) + ... + 1
     bool ok            = true;
     ok                &= thread_alloc::thread_num() == 0;
     size_t num_threads = std::max(num_threads_, size_t(1));
     sum                = 0.;
     //
     // go down so that we free memory for the other threads before memory for the master
     size_t thread_num = num_threads;
     while(thread_num--)
     {     // check that this thread was ok with the work it did
          ok  &= work_all_[thread_num]->ok;
          //
          // add this threads contribution to the sum
          sum += work_all_[thread_num]->sum;
          //
          // delete problem specific information
          void* v_ptr = static_cast<void*>( work_all_[thread_num] );
          thread_alloc::return_memory( v_ptr );
          //
          // check that there is no longer any memory inuse by this thread
          // (for general applications, the master might still be using memory)
          ok &= thread_alloc::inuse(thread_num) == 0;
          //
          // return all memory being held for future use by this thread
          thread_alloc::free_available(thread_num);
     }
     return ok;
}
}

Input File: example/multi_thread/harmonic.cpp
7.2.8.5: Multi-Threaded Implementation of Summation of 1/i

7.2.8.5.a: Syntax
ok = harmonic_sum(sum, num_sum)

7.2.8.5.b: Purpose
Multi-threaded computation of the summation that defines the harmonic series @[@ s = 1 + 1/2 + 1/3 + ... + 1/n @]@

7.2.8.5.c: Thread
It is assumed that this function is called by thread zero, and all the other threads are blocked (waiting).

7.2.8.5.d: ok
This return value has prototype
     bool ok
If this return value is false, an error occurred during harmonic_sum.

7.2.8.5.e: sum
This argument has prototype
     double& sum
The input value of the argument does not matter. Upon return it is the value of the summation; i.e., @(@ s @)@.

7.2.8.5.f: num_sum
This argument has prototype
     size_t num_sum
It specifies the number of terms in the summation; i.e., @(@ n @)@.
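
As a minimal usage sketch (this assumes thread zero is the caller and that num_threads_ and the thread team have already been set up, as harmonic_time below does):

     double sum;
     bool   ok = harmonic_sum(sum, 1000000); // sum = 1 + 1/2 + ... + 1/10^6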

7.2.8.5.g: Source

namespace {
bool harmonic_sum(double& sum, size_t num_sum)
{     // sum = 1/num_sum + 1/(num_sum-1) + ... + 1
     bool ok = true;
     ok     &= thread_alloc::thread_num() == 0;

     // setup the work for multi-threading
     ok &= harmonic_setup(num_sum);

     // now do the work for each thread
     if( num_threads_ > 0 )
          team_work( harmonic_worker );
     else     harmonic_worker();

     // combine the result for each thread and takedown the multi-threading.
     ok &= harmonic_takedown(sum);

     return ok;
}
}

Input File: example/multi_thread/harmonic.cpp
7.2.8.6: Timing Test of Multi-Threaded Summation of 1/i

7.2.8.6.a: Syntax
ok = harmonic_time(time_out, test_time, num_threads, mega_sum)


7.2.8.6.b: Purpose
Runs a correctness and timing test for a multi-threaded computation of the summation that defines the harmonic series @[@ 1 + 1/2 + 1/3 + ... + 1/n @]@

7.2.8.6.c: Thread
It is assumed that this function is called by thread zero in sequential mode; i.e., not 8.23.4: in_parallel .

7.2.8.6.d: ok
This return value has prototype
     bool ok
If it is true, harmonic_time passed the correctness test. Otherwise it is false.

7.2.8.6.e: time_out
This argument has prototype
     double& time_out
The input value of the argument does not matter. Upon return it is the number of wall clock seconds required to compute the summation.

7.2.8.6.f: test_time
Is the minimum amount of wall clock time that the test should take. The number of repeats for the test will be increased until this time is reached. The reported time_out is the total wall clock time divided by the number of repeats.
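
The following is a minimal sketch of this repeat logic. It is only an illustration of the idea, not the actual CppAD::time_test implementation; it assumes the CppAD::elapsed_seconds wall clock timer:

     # include <cstddef>
     # include <cppad/utility/elapsed_seconds.hpp>

     // increase repeat until one timing run takes at least test_time seconds
     double time_test_sketch(void test(size_t repeat), double test_time)
     {     size_t repeat  = 1;
          double elapsed = 0.;
          while( elapsed < test_time )
          {     repeat *= 2;
               double start = CppAD::elapsed_seconds();
               test(repeat);
               elapsed = CppAD::elapsed_seconds() - start;
          }
          return elapsed / double(repeat); // reported time_out
     }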

7.2.8.6.g: num_threads
This argument has prototype
     size_t num_threads
It specifies the number of threads that are available for this test. If it is zero, the test is run without the multi-threading environment, and
     1 == thread_alloc::num_threads()
when harmonic_time is called. If it is non-zero, the test is run with the multi-threading environment, and
     num_threads == thread_alloc::num_threads()
when harmonic_time is called.

7.2.8.6.h: mega_sum
This argument has prototype
     size_t mega_sum
and is greater than zero. The value @(@ n @)@ in the summation is equal to @(@ 10^6 @)@ times mega_sum .

7.2.8.6.i: Source

# include <cstring>
# include <limits>
# include <iostream>
# include <cstdlib>
# include <algorithm>

// Note there is no mention of parallel mode in the documentation for
// speed_test (so it is safe to use without special consideration).
# include <cppad/utility/time_test.hpp>

namespace {
     // value of sum resulting from most recent call to test_once
     double sum_ = 0.;
     //
     void test_once(void)
     {     if( mega_sum_ < 1 )
          {     std::cerr << "harmonic_time: mega_sum < 1" << std::endl;
               exit(1);
          }
          size_t num_sum = mega_sum_ * 1000000;
          bool ok = harmonic_sum(sum_, num_sum);
          if( ! ok )
          {     std::cerr << "harmonic: error" << std::endl;
               exit(1);
          }
          return;
     }
     //
     void test_repeat(size_t repeat)
     {     size_t i;
          for(i = 0; i < repeat; i++)
               test_once();
          return;
     }
}

// This is the only routine that is accessible outside of this file
bool harmonic_time(
     double& time_out, double test_time, size_t num_threads, size_t mega_sum)
{     bool ok  = true;
     ok      &= thread_alloc::thread_num() == 0;

     // arguments passed to harmonic_sum
     num_threads_ = num_threads;
     mega_sum_    = mega_sum;

     // create team of threads
     ok &= thread_alloc::in_parallel() == false;
     if( num_threads > 0 )
     {     team_create(num_threads);
          ok &= num_threads == thread_alloc::num_threads();
     }
     else
     {     ok &= 1 == thread_alloc::num_threads();
     }

     // run the test case and set the time return value
     time_out = CppAD::time_test(test_repeat, test_time);

     // destroy team of threads
     if( num_threads > 0 )
          team_destroy();
     ok &= thread_alloc::in_parallel() == false;

     // Correctness check
     double eps1000 =
          double(mega_sum_) * 1e3 * std::numeric_limits<double>::epsilon();
     size_t i       = mega_sum_ * 1000000;
     double check = 0.;
     while(i)
          check += 1. / double(i--);
     ok &= std::fabs(sum_ - check) <= eps1000;

     return ok;
}

Input File: example/multi_thread/harmonic.cpp
7.2.9: Multi-Threading User Atomic Example / Test

7.2.9.a: Source File
All of the routines below are located in the file
 
     example/multi_thread/multi_atomic.cpp

7.2.9.b: Contents
multi_atomic_user: 7.2.9.1: Defines a User Atomic Operation that Computes Square Root
multi_atomic_common: 7.2.9.2: Multi-Threaded User Atomic Common Information
multi_atomic_setup: 7.2.9.3: Multi-Threaded User Atomic Set Up
multi_atomic_worker: 7.2.9.4: Multi-Threaded User Atomic Worker
multi_atomic_takedown: 7.2.9.5: Multi-Threaded User Atomic Take Down
multi_atomic_run: 7.2.9.6: Run Multi-Threaded User Atomic Calculation
multi_atomic_time: 7.2.9.7: Timing Test for Multi-Threaded User Atomic Calculation

Input File: example/multi_thread/multi_atomic.omh
7.2.9.1: Defines a User Atomic Operation that Computes Square Root

7.2.9.1.a: Syntax
atomic_user a_square_root
a_square_root(au, ay)

7.2.9.1.b: Purpose
This user atomic operation computes a square root using Newton's method. It is meant to be very inefficient in order to demonstrate timing results.
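Each iteration applies the Newton update for solving @(@ y^2 = x @)@, where @(@ x @)@ is y_squared ; i.e., @[@ y_{k+1} = y_k + \frac{x - y_k^2}{2 y_k} @]@ This is the step computed in the forward routine below.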

7.2.9.1.c: au
This argument has prototype
     const ADvector& au
where ADvector is a 8.9: simple vector class with elements of type AD<double>. The size of au is three.

7.2.9.1.c.a: num_itr
We use the notation
     
num_itr = size_t( Integer( au[0] ) )
for the number of Newton iterations in the computation of the square root function. The component au[0] must be a 12.4.h: parameter .

7.2.9.1.c.b: y_initial
We use the notation
     
y_initial = au[1]
for the initial value of the Newton iterate.

7.2.9.1.c.c: y_squared
We use the notation
     
y_squared = au[2]
for the value we are taking the square root of.

7.2.9.1.d: ay
This argument has prototype
     ADvector& ay
The size of ay is one and ay[0] is the square root of y_squared .

7.2.9.1.e: Limitations
Only zero order forward mode is implemented for the atomic_user class.

7.2.9.1.f: Source

// includes used by all source code in multi_atomic.cpp file
# include <cppad/cppad.hpp>
# include "multi_atomic.hpp"
# include "team_thread.hpp"
//
namespace {
using CppAD::thread_alloc; // fast multi-threading memory allocator
using CppAD::vector;       // uses thread_alloc

class atomic_user : public CppAD::atomic_base<double> {
public:
     // ctor
     atomic_user(void)
     : CppAD::atomic_base<double>("atomic_square_root")
     { }
private:
     // forward mode routine called by CppAD
     virtual bool forward(
          size_t                   p   ,
          size_t                   q   ,
          const vector<bool>&      vu  ,
          vector<bool>&            vy  ,
          const vector<double>&    tu  ,
          vector<double>&          ty  )
     {
# ifndef NDEBUG
          size_t n = tu.size() / (q + 1);
          size_t m = ty.size() / (q + 1);
          assert( n == 3 );
          assert( m == 1 );
# endif
          // only implementing zero order forward for this example
          if( q != 0 )
               return false;

          // extract components of argument vector
          size_t num_itr    = size_t( tu[0] );
          double y_initial  = tu[1];
          double y_squared  = tu[2];

          // check for setting variable information
          if( vu.size() > 0 )
          {     if( vu[0] )
                    return false;
               vy[0] = vu[1] || vu[2];
          }

          // Use Newton's method to solve f(y) = y^2 = y_squared
          double y_itr = y_initial;
          for(size_t itr = 0; itr < num_itr; itr++)
          {     // solve (y - y_itr) * f'(y_itr) = y_squared - y_itr^2
               double fp_itr = 2.0 * y_itr;
               y_itr         = y_itr + (y_squared - y_itr * y_itr) / fp_itr;
          }

          // return the Newton approximation for f(y) = y_squared
          ty[0] = y_itr;
          return true;
     }
};
}

Input File: example/multi_thread/multi_atomic.cpp
7.2.9.2: Multi-Threaded User Atomic Common Information

7.2.9.2.a: Purpose
This source code defines the common variables that are used by the multi_atomic_name functions.

7.2.9.2.b: Source

namespace {
     // Number of threads, set by multi_atomic_time
     // (zero means one thread with no multi-threading setup)
     size_t num_threads_ = 0;

     // Number of Newton iterations, set by multi_atomic_time
     size_t num_itr_;

     // We can use one atomic_user function for all threads because
     // there is no member data that gets changed during the worker calls.
     // This needs to stay in scope for as long as a recording will use it.
     // We cannot be in parallel mode when this object is created or deleted.
     // We use a pointer so that there is no left over memory in thread zero.
     atomic_user* a_square_root_ = 0;

     // structure with information for one thread
     typedef struct {
          // used by worker to compute the square root, set by multi_atomic_setup
          CppAD::ADFun<double>* fun;
          //
          // value we are computing square root of, set by multi_atomic_setup
          vector<double>* y_squared;
          //
          // square root, set by worker
          vector<double>* square_root;
          //
          // false if an error occurs, true otherwise, set by worker
          bool ok;
     } work_one_t;
     //
     // Vector with information for all threads
     // (uses pointers instead of values to avoid false sharing)
     // allocated by multi_atomic_setup, freed by multi_atomic_takedown
     work_one_t* work_all_[CPPAD_MAX_NUM_THREADS];
}

Input File: example/multi_thread/multi_atomic.cpp
7.2.9.3: Multi-Threaded User Atomic Set Up

7.2.9.3.a: Syntax
ok = multi_atomic_setup(y_squared)

7.2.9.3.b: Purpose
This routine splits up the computation into the individual threads. Each thread is assigned at most per_thread = (size + num_threads - 1) / num_threads of the y_squared values; for example, 10 values on four threads are assigned as 3, 3, 3, and 1.

7.2.9.3.c: Thread
It is assumed that this function is called by thread zero and all the other threads are blocked (waiting).

7.2.9.3.d: y_squared
This argument has prototype
     const vector<double>& y_squared
and its size is equal to the number of equations to solve. It contains the values that we are computing the square roots of.

7.2.9.3.e: ok
This return value has prototype
     bool ok
If it is false, multi_atomic_setup detected an error.

7.2.9.3.f: Source

namespace {
bool multi_atomic_setup(const vector<double>& y_squared)
{     using CppAD::AD;
     size_t num_threads = std::max(num_threads_, size_t(1));
     bool   ok          = num_threads == thread_alloc::num_threads();
     ok                &= thread_alloc::thread_num() == 0;
     //
     // declare independent variable vector
     vector< AD<double> > ax(1);
     ax[0] = 2.0;
     CppAD::Independent(ax);
     //
     // argument and result for atomic function
     vector< AD<double> > au(3), ay(1);
     au[0] = AD<double>( num_itr_ ); // num_itr
     au[1] = ax[0];                  // y_initial
     au[2] = ax[0];                  // y_squared
     // put user atomic operation in recording
     (*a_square_root_)(au, ay);
     //
     // f(u) = sqrt(u)
     CppAD::ADFun<double> fun(ax, ay);
     //
     // number of square roots for each thread
     size_t per_thread = (y_squared.size() + num_threads - 1) / num_threads;
     size_t y_index    = 0;
     //
     for(size_t thread_num = 0; thread_num < num_threads; thread_num++)
     {     // allocate separate memory for each thread to avoid false sharing
          size_t min_bytes(sizeof(work_one_t)), cap_bytes;
          void* v_ptr = thread_alloc::get_memory(min_bytes, cap_bytes);
          work_all_[thread_num] = static_cast<work_one_t*>(v_ptr);
          //
          // Run constructor on work_all_[thread_num]->fun
          work_all_[thread_num]->fun = new CppAD::ADFun<double>;
          //
          // Run constructor on work_all_[thread_num] vectors
          work_all_[thread_num]->y_squared = new vector<double>;
          work_all_[thread_num]->square_root = new vector<double>;
          //
          // Each worker gets a separate copy of fun. This is necessary because
          // the Taylor coefficients will be set by each thread.
          *(work_all_[thread_num]->fun) = fun;
          //
          // values we are computing square root of for this thread
          ok &=  0 == work_all_[thread_num]->y_squared->size();
          for(size_t i = 0; i < per_thread; i++)
          if( y_index < y_squared.size() )
               work_all_[thread_num]->y_squared->push_back(y_squared[y_index++]);
          //
          // set to false in case this thread's worker does not get called
          work_all_[thread_num]->ok = false;
     }
     ok &= y_index == y_squared.size();
     //
     return ok;
}
}

Input File: example/multi_thread/multi_atomic.cpp
7.2.9.4: Multi-Threaded User Atomic Worker

7.2.9.4.a: Purpose
This routine does the computation for one thread.

7.2.9.4.b: Source

namespace {
void multi_atomic_worker(void)
{     size_t thread_num  = thread_alloc::thread_num();
     size_t num_threads = std::max(num_threads_, size_t(1));
     bool   ok          = thread_num < num_threads;
     //
     vector<double> x(1), y(1);
     size_t n = work_all_[thread_num]->y_squared->size();
     work_all_[thread_num]->square_root->resize(n);
     for(size_t i = 0; i < n; i++)
     {     x[0] = (* work_all_[thread_num]->y_squared )[i];
          y    = work_all_[thread_num]->fun->Forward(0, x);
          //
          (* work_all_[thread_num]->square_root )[i] = y[0];
     }
     work_all_[thread_num]->ok             = ok;
}
}

Input File: example/multi_thread/multi_atomic.cpp
7.2.9.5: Multi-Threaded User Atomic Take Down

7.2.9.5.a: Syntax
ok = multi_atomic_takedown(square_root)

7.2.9.5.b: Purpose
This routine gathers up the results for each thread and frees memory that was allocated by 7.2.9.3: multi_atomic_setup .

7.2.9.5.c: Thread
It is assumed that this function is called by thread zero and all the other threads are blocked (waiting).

7.2.9.5.d: square_root
This argument has prototype
     vector<double>& square_root
The input value of square_root does not matter. Upon return, it has the same size as, and is the element-by-element square root of, 7.2.9.3.d: y_squared .

7.2.9.5.e: ok
This return value has prototype
     bool ok
If it is false, multi_atomic_takedown detected an error.

7.2.9.5.f: Source

namespace {
bool multi_atomic_takedown(vector<double>& square_root)
{     bool ok            = true;
     ok                &= thread_alloc::thread_num() == 0;
     size_t num_threads = std::max(num_threads_, size_t(1));
     //
     // extract square roots in original order
     square_root.resize(0);
     for(size_t thread_num = 0; thread_num < num_threads; thread_num++)
     {     // results for this thread
          size_t n = work_all_[thread_num]->square_root->size();
          for(size_t i = 0; i < n; i++)
               square_root.push_back((* work_all_[thread_num]->square_root )[i]);
     }
     //
     // count down so that we free memory for the other threads before memory for the master
     size_t thread_num = num_threads;
     while(thread_num--)
     {     // check that this thread was ok with the work it did
          ok  &= work_all_[thread_num]->ok;
          //
          // run destructor on vector object for this thread
          delete work_all_[thread_num]->y_squared;
          delete work_all_[thread_num]->square_root;
          //
          // run destructor on function object for this thread
          delete work_all_[thread_num]->fun;
          //
          // delete problem specific information
          void* v_ptr = static_cast<void*>( work_all_[thread_num] );
          thread_alloc::return_memory( v_ptr );
          //
          // check that there is no longer any memory inuse by this thread
          if( thread_num > 0 )
          {     ok &= 0 == thread_alloc::inuse(thread_num);
               //
               // return all memory being held for future use by this thread
               thread_alloc::free_available(thread_num);
          }
     }
     return ok;
}
}

Input File: example/multi_thread/multi_atomic.cpp
7.2.9.6: Run Multi-Threaded User Atomic Calculation

7.2.9.6.a: Syntax
ok = multi_atomic_run(y_squared, square_root)

7.2.9.6.b: Thread
It is assumed that this function is called by thread zero and all the other threads are blocked (waiting).

7.2.9.6.c: y_squared
This argument has prototype
     const vector<double>& y_squared
and its size is equal to the number of equations to solve. It contains the values whose square roots we are computing.

7.2.9.6.d: square_root
This argument has prototype
     vector<double>& square_root
The input value of square_root does not matter. Upon return, it has the same size as, and is the element-by-element square root of, y_squared .

7.2.9.6.e: ok
This return value has prototype
     bool ok
If it is false, multi_atomic_run detected an error.
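As a usage sketch (hypothetical values; this assumes the atomic function a_square_root_ and num_threads_ have already been initialized as in the sections above, and is not part of the distributed example):

     vector<double> y_squared(4), square_root;
     for(size_t i = 0; i < y_squared.size(); i++)
          y_squared[i] = double(i + 1); // values to take square roots of
     bool ok = multi_atomic_run(y_squared, square_root);
     // on success, square_root[i] is approximately std::sqrt( y_squared[i] )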

7.2.9.6.f: Source

namespace {
bool multi_atomic_run(
     const vector<double>& y_squared  ,
     vector<double>&      square_root )
{
     bool ok = true;
     ok     &= thread_alloc::thread_num() == 0;

     // setup the work for multi-threading
     ok &= multi_atomic_setup(y_squared);

     // now do the work for each thread
     if( num_threads_ > 0 )
          team_work( multi_atomic_worker );
     else     multi_atomic_worker();

     // combine the result for each thread and takedown the multi-threading.
     ok &= multi_atomic_takedown(square_root);

     return ok;
}
}

Input File: example/multi_thread/multi_atomic.cpp
7.2.9.7: Timing Test for Multi-Threaded User Atomic Calculation

7.2.9.7.a: Syntax
ok = multi_atomic_time(time_out, test_time, num_threads, num_solve)


7.2.9.7.b: Thread
It is assumed that this function is called by thread zero in sequential mode; i.e., not 8.23.4: in_parallel .

7.2.9.7.c: time_out
This argument has prototype
     double& time_out
The input value of this argument does not matter. Upon return it is the number of wall clock seconds used by 7.2.9.6: multi_atomic_run .

7.2.9.7.d: test_time
This argument has prototype
     double test_time
and is the minimum amount of wall clock time that the test should take. The number of repeats for the test will be increased until this time is reached. The reported time_out is the total wall clock time divided by the number of repeats.

7.2.9.7.e: num_threads
This argument has prototype
     size_t num_threads
It specifies the number of threads that are available for this test. If it is zero, the test is run without the multi-threading environment and
     1 == thread_alloc::num_threads()
If it is non-zero, the test is run with the multi-threading environment and
     num_threads == thread_alloc::num_threads()

7.2.9.7.f: num_solve
This specifies the number of square roots that will be solved for.

7.2.9.7.g: ok
The return value has prototype
     bool ok
If it is true, multi_atomic_time passed the correctness test and did not detect an error. Otherwise it is false.
Input File: example/multi_thread/multi_atomic.cpp
7.2.10: Multi-Threaded Newton Method Example / Test

7.2.10.a: Source File
All of the routines below are located in the file
 
     example/multi_thread/multi_newton.cpp

7.2.10.b: Contents
multi_newton_common: 7.2.10.1 Common Variables Used by Multi-Threaded Newton Method
multi_newton_setup: 7.2.10.2 Set Up Multi-Threaded Newton Method
multi_newton_worker: 7.2.10.3 Do One Thread's Work for Multi-Threaded Newton Method
multi_newton_takedown: 7.2.10.4 Take Down Multi-threaded Newton Method
multi_newton_run: 7.2.10.5 A Multi-Threaded Newton's Method
multi_newton_time: 7.2.10.6 Timing Test of Multi-Threaded Newton Method

Input File: example/multi_thread/multi_newton.omh
7.2.10.1: Common Variables Used by Multi-Threaded Newton Method

7.2.10.1.a: Purpose
This source code defines the common include files, preprocessor defines, and variables that are used by the multi-threaded Newton method.

7.2.10.1.b: Source

# include <cppad/cppad.hpp>
# include <cppad/utility/time_test.hpp>
# include <cmath>
# include <cstring>
# include "multi_newton.hpp"
# include "team_thread.hpp"
# define USE_THREAD_ALLOC_FOR_WORK_ALL 1

namespace {
     using CppAD::thread_alloc; // fast multi-threading memory allocator
     using CppAD::vector;       // uses thread_alloc

     // number of threads, set by multi_newton_time.
     size_t num_threads_ = 0;

     // function we are finding zeros of, set by multi_newton_time
     void (*fun_)(double x, double& f, double& df) = 0;

     // convergence criteria, set by multi_newton_setup
     double epsilon_ = 0.;

     // maximum number of iterations, set by  multi_newton_setup
     size_t max_itr_ = 0;

     // length for all sub-intervals
     double sub_length_ = 0.;

     // structure with information for one thread
     typedef struct {
          // number of sub intervals (worker input)
          size_t num_sub;
          // beginning of interval (worker input)
          double xlow;
          // end of interval (worker input)
          double xup;
          // vector of zero candidates (worker output)
          // after call to multi_newton_setup:    x.size() == 0
          // after call to multi_newton_work:     x.size() is number of zeros
          // after call to multi_newton_takedown: x.size() == 0
          vector<double> x;
          // false if an error occurs, true otherwise (worker output)
          bool   ok;
     } work_one_t;
     // vector with information for all threads
     // after call to multi_newton_setup:    work_all.size() == num_threads
     // after call to multi_newton_takedown: work_all.size() == 0
     // (use pointers instead of values to avoid false sharing)
     vector<work_one_t*> work_all_;
}

Input File: example/multi_thread/multi_newton.cpp
7.2.10.2: Set Up Multi-Threaded Newton Method

7.2.10.2.a: Syntax
ok = multi_newton_setup(num_sub, xlow, xup, epsilon, max_itr, num_threads)


7.2.10.2.b: Purpose
This routine does the setup for splitting the job of finding all the zeros in an interval into separate groups of sub-intervals, one group for each thread.

7.2.10.2.c: Thread
It is assumed that this function is called by thread zero, and all the other threads are blocked (waiting).

7.2.10.2.d: num_sub
See num_sub in 7.2.10.5.h: multi_newton_run .

7.2.10.2.e: xlow
See xlow in 7.2.10.5.i: multi_newton_run .

7.2.10.2.f: xup
See xup in 7.2.10.5.j: multi_newton_run .

7.2.10.2.g: epsilon
See epsilon in 7.2.10.5.k: multi_newton_run .

7.2.10.2.h: max_itr
See max_itr in 7.2.10.5.l: multi_newton_run .

7.2.10.2.i: num_threads
See num_threads in 7.2.10.5.m: multi_newton_run .

7.2.10.2.j: Source

namespace {
bool multi_newton_setup(
     size_t num_sub                              ,
     double xlow                                 ,
     double xup                                  ,
     double epsilon                              ,
     size_t max_itr                              ,
     size_t num_threads                          )
{
     num_threads  = std::max(num_threads_, size_t(1));
     bool ok      = num_threads == thread_alloc::num_threads();
     ok          &= thread_alloc::thread_num() == 0;

     // inputs that are same for all threads
     epsilon_ = epsilon;
     max_itr_ = max_itr;

     // resize the work vector to accommodate the number of threads
     ok &= work_all_.size() == 0;
     work_all_.resize(num_threads);

     // length of each sub interval
     sub_length_ = (xup - xlow) / double(num_sub);

     // determine values that are specific to each thread
     size_t num_min   = num_sub / num_threads; // minimum num_sub
     size_t num_more  = num_sub % num_threads; // number that have one more
     size_t sum_num   = 0;  // sum with respect to thread of num_sub
     size_t thread_num, num_sub_thread;
     for(thread_num = 0; thread_num < num_threads; thread_num++)
     {
# if  USE_THREAD_ALLOC_FOR_WORK_ALL
          // allocate separate memory for this thread to avoid false sharing
          size_t min_bytes(sizeof(work_one_t)), cap_bytes;
          void* v_ptr = thread_alloc::get_memory(min_bytes, cap_bytes);
          work_all_[thread_num] = static_cast<work_one_t*>(v_ptr);

          // thread_alloc is a raw memory allocator; i.e., it does not call
          // the constructor for the objects it creates. The vector
          // class requires its constructor to be called so we do it here
          new(& (work_all_[thread_num]->x) ) vector<double>();
# else
          work_all_[thread_num] = new work_one_t;
# endif

          // number of sub-intervals for this thread
          if( thread_num < num_more  )
               num_sub_thread = num_min + 1;
          else     num_sub_thread = num_min;

          // when thread_num == 0, xlow_thread == xlow
          double xlow_thread = xlow + double(sum_num) * sub_length_;

          // when thread_num == num_threads - 1, xup_thread = xup
          double xup_thread =
               xlow + double(sum_num + num_sub_thread) * sub_length_;
          if( thread_num == num_threads - 1 )
               xup_thread = xup;

          // update sum_num for next time through loop
          sum_num += num_sub_thread;

          // input information specific to this thread
          work_all_[thread_num]->num_sub = num_sub_thread;
          work_all_[thread_num]->xlow    = xlow_thread;
          work_all_[thread_num]->xup     = xup_thread;
          ok &= work_all_[thread_num]->x.size() == 0;

          // in case this thread does not get called
          work_all_[thread_num]->ok = false;
     }
     ok &= sum_num == num_sub;
     return ok;
}
}

Input File: example/multi_thread/multi_newton.cpp
7.2.10.3: Do One Thread's Work for Multi-Threaded Newton Method

7.2.10.3.a: Syntax
multi_newton_worker()

7.2.10.3.b: Purpose
This function finds all the zeros in the interval [ low , up ].

7.2.10.3.c: low
This is the value of the 7.2.10.1: multi_newton_common information
     low = work_all_[thread_num]->xlow

7.2.10.3.d: up
This is the value of the 7.2.10.1: multi_newton_common information
     up = work_all_[thread_num]->xup

7.2.10.3.e: thread_num
This is the number for the current thread; see 8.23.5: thread_num .

7.2.10.3.f: Source

namespace {
void multi_newton_worker(void)
{
     // Split [xlow, xup] into num_sub intervales and
     // look for one zero in each sub-interval.
     size_t thread_num    = thread_alloc::thread_num();
     size_t num_threads   = std::max(num_threads_, size_t(1));
     bool   ok            = thread_num < num_threads;
     size_t num_sub       = work_all_[thread_num]->num_sub;
     double xlow          = work_all_[thread_num]->xlow;
     double xup           = work_all_[thread_num]->xup;
     vector<double>& x    = work_all_[thread_num]->x;

     // check arguments
     ok &= max_itr_ > 0;
     ok &= num_sub > 0;
     ok &= xlow < xup;
     ok &= x.size() == 0;

     // check for special case where there is nothing for this thread to do
     if( num_sub == 0 )
     {     work_all_[thread_num]->ok = ok;
          return;
     }

     // check for a zero on each sub-interval
     size_t i;
     double xlast = xlow - 2.0 * sub_length_; // more than sub_length_ away from xlow
     double flast = 2.0 * epsilon_;           // any value > epsilon_ would do
     for(i = 0; i < num_sub; i++)
     {
          // note that when i == 0, xlow_i == xlow (exactly)
          double xlow_i = xlow + double(i) * sub_length_;

          // note that when i == num_sub - 1, xup_i = xup (exactly)
          double xup_i  = xup  - double(num_sub - i - 1) * sub_length_;

          // initial point for Newton iterations
          double xcur = (xup_i + xlow_i) / 2.;

          // Newton iterations
          bool more_itr = true;
          size_t itr    = 0;
          // initialize these values to avoid MSC C++ warning
          double fcur=0.0, dfcur=0.0;
          while( more_itr )
          {     fun_(xcur, fcur, dfcur);

               // check end of iterations
               if( fabs(fcur) <= epsilon_ )
                    more_itr = false;
               if( (xcur == xlow_i ) & (fcur * dfcur > 0.) )
                    more_itr = false;
               if( (xcur == xup_i)   & (fcur * dfcur < 0.) )
                    more_itr = false;

               // next Newton iterate
               if( more_itr )
               {     xcur = xcur - fcur / dfcur;
                    // keep in bounds
                    xcur = std::max(xcur, xlow_i);
                    xcur = std::min(xcur, xup_i);

                    more_itr = ++itr < max_itr_;
               }
          }
          if( fabs( fcur ) <= epsilon_ )
          {     // check for case where xcur is lower bound for this
               // sub-interval and upper bound for previous sub-interval
               if( fabs(xcur - xlast) >= sub_length_ )
               {     x.push_back( xcur );
                    xlast = xcur;
                    flast = fcur;
               }
               else if( fabs(fcur) < fabs(flast) )
               {     x[ x.size() - 1] = xcur;
                    xlast            = xcur;
                    flast            = fcur;
               }
          }
     }
     work_all_[thread_num]->ok = ok;
}
}

Input File: example/multi_thread/multi_newton.cpp
7.2.10.4: Take Down Multi-threaded Newton Method

7.2.10.4.a: Syntax
ok = multi_newton_takedown(xout)

7.2.10.4.b: Purpose
This routine does the takedown for splitting the Newton method into sub-intervals.

7.2.10.4.c: Thread
It is assumed that this function is called by thread zero, and all the other threads have completed their work and are blocked (waiting).

7.2.10.4.d: xout
See 7.2.10.5.f: multi_newton_run .

7.2.10.4.e: Source

namespace {
bool multi_newton_takedown(vector<double>& xout)
{     // number of threads in the calculation
     size_t num_threads  = std::max(num_threads_, size_t(1));

     // remove duplicates and points that are not solutions
     xout.resize(0);
     bool   ok = true;
     ok       &= thread_alloc::thread_num() == 0;

     // initialize as more than sub_length_ / 2 from any possible solution
     double xlast = - sub_length_;
     for(size_t thread_num = 0; thread_num < num_threads; thread_num++)
     {     vector<double>& x = work_all_[thread_num]->x;

          size_t i;
          for(i = 0; i < x.size(); i++)
          {     // check for case where this point is lower limit for this
               // thread and upper limit for previous thread
               if( fabs(x[i] - xlast) >= sub_length_ )
               {     xout.push_back( x[i] );
                    xlast = x[i];
               }
               else
               {     double fcur, flast, df;
                    fun_(x[i],   fcur, df);
                    fun_(xlast, flast, df);
                    if( fabs(fcur) < fabs(flast) )
                    {     xout[ xout.size() - 1] = x[i];
                         xlast                  = x[i];
                    }
               }
          }
          // check that this thread was ok with the work it did
          ok &= work_all_[thread_num]->ok;
     }

     // count down so that we free memory for the other threads before memory for the master
     size_t thread_num = num_threads;
     while(thread_num--)
     {
# if USE_THREAD_ALLOC_FOR_WORK_ALL
          // call the destructor for the vector object
          work_all_[thread_num]->x.~vector<double>();
          // delete the raw memory allocation
          void* v_ptr = static_cast<void*>( work_all_[thread_num] );
          thread_alloc::return_memory( v_ptr );
# else
          delete work_all_[thread_num];
# endif
          // Note that xout corresponds to memory that is in use by the master
          // (so we can only check that the other threads have freed all their memory).
          if( thread_num > 0 )
          {     // check that there is no longer any memory inuse by this thread
               ok &= thread_alloc::inuse(thread_num) == 0;
               // return all memory being held for future use by this thread
               thread_alloc::free_available(thread_num);
          }
     }
     // now we are done with the work_all_ vector so free its memory
     // (because it is a static variable)
     work_all_.clear();

     return ok;
}
}

Input File: example/multi_thread/multi_newton.cpp
7.2.10.5: A Multi-Threaded Newton's Method

7.2.10.5.a: Syntax
ok = multi_newton_run(xout, fun, num_sub, xlow, xup, epsilon, max_itr, num_threads)


7.2.10.5.b: Purpose
Multi-threaded determination of the argument values @(@ x @)@, in the interval @(@ [a, b] @)@ (where @(@ a < b @)@), such that @(@ f(x) = 0 @)@.

7.2.10.5.c: Thread
It is assumed that this function is called by thread zero, and all the other threads are blocked (waiting).

7.2.10.5.d: Method
For @(@ i = 0 , \ldots , n @)@, we define the i-th grid point @(@ g_i @)@ by @[@ g_i = a \frac{n - i}{n} + b \frac{i}{n} @]@ For @(@ i = 0 , \ldots , n-1 @)@, we define the i-th sub-interval of @(@ [a, b] @)@ by @[@ I_i = [ g_i , g_{i+1} ] @]@ Newton's method is applied starting at the center of each of the sub-intervals @(@ I_i @)@ for @(@ i = 0 , \ldots , n-1 @)@ and at most one zero is found for each sub-interval.
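For example, with @(@ a = 0 @)@, @(@ b = 1 @)@, and @(@ n = 4 @)@, the grid points are @(@ 0, 1/4, 1/2, 3/4, 1 @)@, the sub-intervals are @(@ [0, 1/4], [1/4, 1/2], [1/2, 3/4], [3/4, 1] @)@, and Newton's method starts at the midpoints @(@ 1/8, 3/8, 5/8, 7/8 @)@.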

7.2.10.5.e: ok
The return value ok has prototype
     bool ok
If an error occurs, it is false, otherwise it is true.

7.2.10.5.f: xout
The argument xout has the prototype
     vector<double>& xout
The input size and value of the elements of xout do not matter. Upon return from multi_newton, the size of xout is less than or equal the number of sub-intervals @(@ n @)@ and @[@ | f( xout[i] ) | \leq epsilon @]@ for each valid index 0 <= i < xout.size() . Two @(@ x @)@ solutions are considered equal (and joined as one) if the absolute difference between the solutions is less than @(@ (b - a) / n @)@.
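For example, if @(@ [a, b] = [0, 1] @)@ and @(@ n = 10 @)@, two candidate solutions whose absolute difference is less than @(@ 0.1 @)@ are joined and reported as a single zero.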

7.2.10.5.g: fun
The argument fun has prototype
     void fun(double x, double& f, double& df)
This function must evaluate @(@ f(x) @)@, and its derivative @(@ f^{(1)} (x) @)@, using the syntax
     fun(x, f, df)
where the arguments to fun have the prototypes
     double    x
     double&   f
     double&   df
The input values of f and df do not matter. Upon return they are @(@ f(x) @)@ and @(@ f^{(1)} (x) @)@ respectively.
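For instance, a routine satisfying this prototype for @(@ f(x) = \sin(x) @)@ (a hypothetical user function, not part of the source below) could be written as

     # include <cmath>
     void my_fun(double x, double& f, double& df)
     {     f  = std::sin(x);
          df = std::cos(x);
     }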

7.2.10.5.h: num_sub
The argument num_sub has prototype
     size_t num_sub
It specifies the number of sub-intervals; i.e., @(@ n @)@.

7.2.10.5.i: xlow
The argument xlow has prototype
     double xlow
It specifies the lower limit for the entire search interval; i.e., @(@ a @)@.

7.2.10.5.j: xup
The argument xup has prototype
     double xup
It specifies the upper limit for the entire search interval; i.e., @(@ b @)@.

7.2.10.5.k: epsilon
The argument epsilon has prototype
     double epsilon
It specifies the convergence criteria for Newton's method in terms of how small the function value must be.

7.2.10.5.l: max_itr
The argument max_itr has prototype
     size_t max_itr
It specifies the maximum number of iterations of Newton's method to try before giving up on convergence (on each sub-interval).

7.2.10.5.m: num_threads
This argument has prototype
     size_t num_threads
It specifies the number of threads that are available for this test. If it is zero, the test is run without the multi-threading environment.
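Putting the arguments together, a hypothetical call (using the my_fun sketch above and illustrative values; num_threads is zero, so no multi-threading environment is required) would be

     vector<double> xout;
     double pi = 4. * std::atan(1.);
     bool ok = multi_newton_run(
          xout,    // zeros found
          my_fun,  // evaluates sin(x) and its derivative
          20,      // num_sub
          0.,      // xlow
          4. * pi, // xup
          1e-10,   // epsilon
          20,      // max_itr
          0        // num_threads
     );
     // on success, xout holds approximations for 0, pi, 2*pi, 3*pi, 4*pi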

7.2.10.5.n: Source

namespace {
bool multi_newton_run(
     vector<double>& xout                       ,
     void fun(double x, double& f, double& df)  ,
     size_t num_sub                             ,
     double xlow                                ,
     double xup                                 ,
     double epsilon                             ,
     size_t max_itr                             ,
     size_t num_threads                         )
{
     bool ok = true;
     ok     &= thread_alloc::thread_num() == 0;

     // setup the work for num_threads threads
     ok &= multi_newton_setup(
          num_sub, xlow, xup, epsilon, max_itr, num_threads
     );

     // now do the work for each thread
     if( num_threads > 0 )
          team_work( multi_newton_worker );
     else     multi_newton_worker();

     // now combine the results for all the threads
     ok &= multi_newton_takedown(xout);

     return ok;
}
}

Input File: example/multi_thread/multi_newton.cpp
7.2.10.6: Timing Test of Multi-Threaded Newton Method

7.2.10.6.a: Syntax
ok = multi_newton_time(time_out, test_time, num_threads, num_zero, num_sub, num_sum, use_ad)


7.2.10.6.b: Purpose
Runs correctness and timing test for a multi-threaded Newton method. This test uses Newton's method to determine all the zeros of the sine function on an interval. CppAD, or hand coded derivatives, can be used to calculate the derivatives used by Newton's method. The calculation can be done in parallel on the different sub-intervals. In addition, the calculation can be done without multi-threading.

7.2.10.6.c: Thread
It is assumed that this function is called by thread zero in sequential mode; i.e., not 8.23.4: in_parallel .

7.2.10.6.d: ok
This return value has prototype
     bool ok
If it is true, multi_newton_time passed the correctness test. Otherwise it is false.

7.2.10.6.e: time_out
This argument has prototype
     double& time_out
The input value of the argument does not matter. Upon return it is the number of wall clock seconds required for the multi-threaded Newton method to compute all the zeros.

7.2.10.6.f: test_time
Is the minimum amount of wall clock time that the test should take. The number of repeats for the test will be increased until this time is reached. The reported time_out is the total wall clock time divided by the number of repeats.

7.2.10.6.g: num_threads
This argument has prototype
     size_t num_threads
It specifies the number of threads that are available for this test. If it is zero, the test is run without multi-threading and
     1 == thread_alloc::num_threads()
when multi_newton_time is called. If it is non-zero, the test is run with multi-threading and
     num_threads == thread_alloc::num_threads()
when multi_newton_time is called.

7.2.10.6.h: num_zero
This argument has prototype
     size_t num_zero
and it must be greater than one. It specifies the actual number of zeros in the test function @(@ \sin(x) @)@. To be specific, multi_newton_time will attempt to determine all of the values of @(@ x @)@ for which @(@ \sin(x) = 0 @)@ and @(@ x @)@ is in the interval
     [ 0 , (num_zero - 1) * pi ]

7.2.10.6.i: num_sub
This argument has prototype
     size_t num_sub
It specifies the number of sub-intervals to divide the total interval into. It must be greater than num_zero (so that the correctness test can check we have found all the zeros).

7.2.10.6.j: num_sum
This argument has prototype
     size_t num_sum
and must be greater than zero. The actual function used by the Newton method is @[@ f(x) = \frac{1}{n} \sum_{i=1}^{n} \sin (x) @]@ where @(@ n @)@ is equal to num_sum . Larger values of num_sum simulate a case where the evaluation of the function @(@ f(x) @)@ takes more time.

7.2.10.6.k: use_ad
This argument has prototype
     bool use_ad
If use_ad is true, then derivatives will be computed using CppAD. Note that this derivative computation includes re-taping the function for each value of @(@ x @)@ (even though re-taping is not necessary).

If use_ad is false, derivatives will be computed using a hand coded routine.
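A hypothetical invocation (the argument values are illustrative only):

     double time_out;
     bool ok = multi_newton_time(
          time_out, // filled in: seconds per call to multi_newton_run
          1.0,      // test_time: repeat for about one second of wall clock
          4,        // num_threads
          20,       // num_zero
          40,       // num_sub (must be greater than num_zero)
          1,        // num_sum (cheapest version of f)
          true      // use_ad: compute derivatives using CppAD
     );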

7.2.10.6.l: Source


namespace { // empty namespace

     // values correspond to arguments in previous call to multi_newton_time
     size_t num_zero_;   // number of zeros of f(x) in the total interval
     size_t num_sub_;    // number of sub-intervals to split calculation into
     size_t num_sum_;    // larger values make f(x) take longer to calculate

     // value of xout corresponding to most recent call to test_once
     vector<double> xout_;

     // A version of the sine function that can be made as slow as we like
     template <class Float>
     Float f_eval(Float x)
     {     Float sum = 0.;
          size_t i;
          for(i = 0; i < num_sum_; i++)
               sum += sin(x);

          return sum / Float(num_sum_);
     }

     // Direct calculation of derivative with same number of floating point
     // operations as for f_eval.
     double df_direct(double x)
     {     double sum = 0.;
          size_t i;
          for(i = 0; i < num_sum_; i++)
               sum += cos(x);

          return sum / double(num_sum_);
     }

     // AD calculation of derivative
     void fun_ad(double x, double& f, double& df)
     {     using CppAD::AD;

          // use vector because it uses fast multi-threaded memory alloc
          vector< AD<double> > X(1), Y(1);
          X[0] = x;
          CppAD::Independent(X);
          Y[0] = f_eval(X[0]);
          CppAD::ADFun<double> F(X, Y);
          vector<double> dx(1), dy(1);
          dx[0] = 1.;
          dy    = F.Forward(1, dx);
          f     = Value( Y[0] );
          df    = dy[0];
          return;
     }

     // evaluate the function and its derivative
     void fun_no(double x, double& f, double& df)
     {     f  = f_eval(x);
          df = df_direct(x);
          return;
     }


     // Run computation of all the zeros once
     void test_once(void)
     {     if(  num_zero_ == 0 )
          {     std::cerr << "multi_newton_time: num_zero == 0" << std::endl;
               exit(1);
          }
          double pi      = 4. * std::atan(1.);
          double xlow    = 0.;
          double xup     = double(num_zero_ - 1) * pi;
          double eps     =
               xup * 100. * CppAD::numeric_limits<double>::epsilon();
          size_t max_itr = 20;

          // note that fun_ is set to fun_ad or fun_no by multi_newton_time
          bool ok = multi_newton_run(
               xout_       ,
               fun_        ,
               num_sub_    ,
               xlow        ,
               xup         ,
               eps         ,
               max_itr     ,
               num_threads_
          );
          if( ! ok )
          {     std::cerr << "multi_newton: error" << std::endl;
               exit(1);
          }
          return;
     }

     // Repeat computation of all the zeros a specified number of times
     void test_repeat(size_t repeat)
     {     size_t i;
          for(i = 0; i < repeat; i++)
               test_once();
          return;
     }
} // end empty namespace


// This is the only routine that is accessible outside of this file
bool multi_newton_time(
     double& time_out      ,
     double  test_time     ,
     size_t  num_threads   ,
     size_t  num_zero      ,
     size_t  num_sub       ,
     size_t  num_sum       ,
     bool    use_ad
)
{
     bool ok = true;
     ok     &= thread_alloc::thread_num() == 0;
     ok     &= num_sub > num_zero;

     // Set local namespace environment variables
     num_threads_  = num_threads;
     if( use_ad )
          fun_ = fun_ad;
     else     fun_ = fun_no;
     //
     num_zero_     = num_zero;
     num_sub_      = num_sub;
     num_sum_      = num_sum;

     // create team of threads
     ok &= thread_alloc::in_parallel() == false;
     if( num_threads > 0 )
     {     team_create(num_threads);
          ok &= num_threads == thread_alloc::num_threads();
     }
     else
     {     ok &= 1 == thread_alloc::num_threads();
     }

     // run the test case and set time return value
     time_out = CppAD::time_test(test_repeat, test_time);

     // destroy team of threads
     if( num_threads > 0 )
          team_destroy();
     ok &= thread_alloc::in_parallel() == false;
     //
     // correctness check
     double pi      = 4. * std::atan(1.);
     double xup     = double(num_zero_ - 1) * pi;
     double eps     = xup * 100. * CppAD::numeric_limits<double>::epsilon();
     ok        &= (xout_.size() == num_zero);
     size_t i   = 0;
     for(i = 0; i < xout_.size(); i++)
          ok &= std::fabs( xout_[i] - pi * double(i)) <= 2 * eps;

     // xout_ is a static variable, so clear it to free its memory
     xout_.clear();

     // return correctness check result
     return  ok;
}

Input File: example/multi_thread/multi_newton.cpp
7.2.11: Specifications for A Team of AD Threads

7.2.11.a: Syntax
include "team_thread.hpp"
ok   = team_create(num_threads)
ok   = team_work(worker)
ok   = team_destroy()
name = team_name()

7.2.11.b: Purpose
These routines start, use, and stop a team of threads that can be used with the CppAD type AD<double>. For example, these could be OpenMP threads, pthreads, or Boost threads to name a few.

7.2.11.c: Restrictions
Calls to the routines team_create, team_work, and team_destroy, must all be done by the master thread; i.e., 8.23.5: thread_num must be zero. In addition, they must all be done in sequential execution mode; i.e., when the master thread is the only thread that is running (8.23.4: in_parallel must be false).

7.2.11.d: team_create
The argument num_threads has type size_t, must be greater than zero, and specifies the number of threads in this team. This initializes both AD<double> and team_work to be used with num_threads . If num_threads > 1 , num_threads - 1 new threads are created and put in a waiting state until team_work is called.

7.2.11.e: team_work
This routine may be called one or more times between the call to team_create and team_destroy. The argument worker has type void worker(void) . Each call to team_work runs num_threads versions of worker with the corresponding value of 8.23.5: thread_num between zero and num_threads - 1 and different for each thread. A hypothetical worker is sketched below.
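For example, a hypothetical worker compatible with this interface (not one of the workers used by the tests above) might be

     void my_worker(void)
     {     size_t thread_num = CppAD::thread_alloc::thread_num();
          // ... do the portion of the job assigned to thread_num ...
     }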

7.2.11.f: team_destroy
This routine terminates all the other threads except for thread number zero; i.e., it terminates the threads corresponding to
     thread_num = 1 , ... , num_threads-1

7.2.11.g: team_name
This routines returns a name that identifies this thread_team. The return value has prototype
     const char* name
and is a statically allocated '\0' terminated C string.

7.2.11.h: ok
The return value ok has type bool. It is false if an error is detected during the corresponding call. Otherwise it is true.
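In summary, the intended calling sequence from thread zero, in sequential execution mode, is (using the my_worker sketch above):

     ok  = team_create(num_threads);
     ok &= team_work(my_worker); // may be called one or more times
     ok &= team_destroy();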

7.2.11.i: Example Use
Example use of these specifications can be found in the file 7.2.7: team_example.cpp .

7.2.11.j: Example Implementation
Example implementations of these specifications can be found in the files:
7.2.11.1: team_openmp.cpp OpenMP Implementation of a Team of AD Threads
7.2.11.2: team_bthread.cpp Boost Thread Implementation of a Team of AD Threads
7.2.11.3: team_pthread.cpp Pthread Implementation of a Team of AD Threads

7.2.11.k: Speed Test of Implementation
Speed tests of using CppAD with the team implementations above can be found in:
7.2.8: harmonic.cpp Multi-Threading Harmonic Summation Example / Test
7.2.10: multi_newton.cpp Multi-Threaded Newton Method Example / Test

7.2.11.l: Source
# include <cstddef> // for size_t

extern bool team_create(size_t num_threads);
extern bool team_work(void worker(void));
extern bool team_destroy(void);
extern const char* team_name(void);

Input File: example/multi_thread/team_thread.hpp
7.2.11.1: OpenMP Implementation of a Team of AD Threads
See 7.2.11: team_thread.hpp for these routines' specifications.
# include <omp.h>
# include <cppad/cppad.hpp>
# include "../team_thread.hpp"

namespace {
     using CppAD::thread_alloc;

     // number of threads in this team
     size_t num_threads_;

     // used to inform CppAD when we are in parallel execution mode
     bool in_parallel(void)
     {     return omp_in_parallel() != 0; }

     // used to inform CppAD of the current thread number
     size_t thread_num(void)
     {     return static_cast<size_t>( omp_get_thread_num() ); }
}

bool team_create(size_t num_threads)
{
     bool ok = ! in_parallel();
     ok     &= thread_num() == 0;
     ok     &= num_threads > 0;

     // Turn off dynamic thread adjustment
     omp_set_dynamic(0);

     // Set the number of OpenMP threads
     omp_set_num_threads( int(num_threads) );

     // setup for using CppAD::AD<double> in parallel
     thread_alloc::parallel_setup(num_threads, in_parallel, thread_num);
     thread_alloc::hold_memory(true);
     CppAD::parallel_ad<double>();

     // inform team_work of number of threads
     num_threads_ = num_threads;

     return ok;
}

bool team_work(void worker(void))
{     bool ok = ! in_parallel();
     ok     &= thread_num() == 0;
     ok     &= num_threads_ > 0;

     int number_threads = int(num_threads_);
     int thread_num;
# pragma omp parallel for
     for(thread_num = 0; thread_num < number_threads; thread_num++)
          worker();
// end omp parallel for

     return ok;
}

bool team_destroy(void)
{     bool ok = ! in_parallel();
     ok     &= thread_num() == 0;
     ok     &= num_threads_ > 0;

     // inform team_work of number of threads
     num_threads_ = 1;

     // Set the number of OpenMP threads to one
     omp_set_num_threads( int(num_threads_) );

     // inform CppAD no longer in multi-threading mode
     thread_alloc::parallel_setup(num_threads_, CPPAD_NULL, CPPAD_NULL);
     thread_alloc::hold_memory(false);
     CppAD::parallel_ad<double>();

     return ok;
}

const char* team_name(void)
{     return "openmp"; }

Input File: example/multi_thread/openmp/team_openmp.cpp
7.2.11.2: Boost Thread Implementation of a Team of AD Threads
See 7.2.11: team_thread.hpp for these routines' specifications.
# include <boost/thread.hpp>
# include <cppad/cppad.hpp>
# include "../team_thread.hpp"
# define MAX_NUMBER_THREADS 48

namespace {
     using CppAD::thread_alloc;

     // number of threads in the team
     size_t num_threads_ = 1;

     // no need to clean up thread specific data
     void cleanup(size_t*)
     {     return; }

     // thread specific pointer to the thread number (initialize as null)
     boost::thread_specific_ptr<size_t> thread_num_ptr_(cleanup);

     // type of the job currently being done by each thread
     enum thread_job_t { init_enum, work_enum, join_enum } thread_job_;

     // barrier used to wait for other threads to finish work
     boost::barrier* wait_for_work_ = CPPAD_NULL;

     // barrier used to wait for master thread to set next job
     boost::barrier* wait_for_job_ = CPPAD_NULL;

     // Are we in sequential mode; i.e., other threads are waiting for
     // master thread to set up next job ?
     bool sequential_execution_ = true;

     // structure with information for one thread
     typedef struct {
          // The thread
          boost::thread*       bthread;
          // CppAD thread number as global (pointed to by thread_num_ptr_)
          size_t               thread_num;
          // true if no error for this thread, false otherwise.
          bool                 ok;
     } thread_one_t;

     // vector with information for all threads
     thread_one_t thread_all_[MAX_NUMBER_THREADS];

     // pointer to function that does the work for one thread
     void (* worker_)(void) = CPPAD_NULL;

     // ---------------------------------------------------------------------
     // in_parallel()
     bool in_parallel(void)
     {     return ! sequential_execution_; }

     // ---------------------------------------------------------------------
     // thread_number()
     size_t thread_number(void)
     {     // return thread_all_[thread_num].thread_num
          return *thread_num_ptr_.get();
     }
     // --------------------------------------------------------------------
     // function that gets called by boost thread constructor
     void thread_work(size_t thread_num)
     {     bool ok = wait_for_work_ != CPPAD_NULL;
          ok     &= wait_for_job_  != CPPAD_NULL;
          ok     &= thread_num     != 0;

          // thread specific storage of thread number for this thread
          thread_num_ptr_.reset(& thread_all_[thread_num].thread_num );

          while( true )
          {
               // Use wait_for_job_ to give master time in sequential mode
               // (so it can change global information like thread_job_)
               wait_for_job_->wait();

               // case where we are terminating this thread (no more work)
               if( thread_job_ == join_enum)
                    break;

               // only other case once wait_for_job_ has been completed (so far)
               ok &= thread_job_ == work_enum;
               worker_();

               // Use wait_for_work_ to inform master that our work is done and
               // that this thread will not use global information until
               // passing its barrier wait_for_job_ above.
               wait_for_work_->wait();

          }
          thread_all_[thread_num].ok &= ok;
          return;
     }
}

bool team_create(size_t num_threads)
{     bool ok = true;

     if( num_threads > MAX_NUMBER_THREADS )
     {     std::cerr << "team_create: num_threads greater than ";
          std::cerr << MAX_NUMBER_THREADS << std::endl;
          exit(1);
     }
     // check that we currently do not have multiple threads running
     ok  = num_threads_ == 1;
     ok &= wait_for_work_ == CPPAD_NULL;
     ok &= wait_for_job_  == CPPAD_NULL;
     ok &= sequential_execution_;

     size_t thread_num;
     for(thread_num = 0; thread_num < num_threads; thread_num++)
     {     // Each thread gets a pointer to its version of this thread_num
          // so it knows which section of thread_all_ it is working with
          thread_all_[thread_num].thread_num = thread_num;

          // initialize
          thread_all_[thread_num].ok = true;
          thread_all_[0].bthread     = CPPAD_NULL;
     }
     // Finish setup of thread_all_ for this thread
     thread_num_ptr_.reset(& thread_all_[0].thread_num);

     // Now that thread_number() has necessary information for the case
     // num_threads_ == 1, and while still in sequential mode,
     // call setup for using CppAD::AD<double> in parallel mode.
     thread_alloc::parallel_setup(num_threads, in_parallel, thread_number);
     thread_alloc::hold_memory(true);
     CppAD::parallel_ad<double>();

     // now change num_threads_ to its final value.
     num_threads_ = num_threads;

     // initialize two barriers, one for work done, one for new job ready
     wait_for_work_ = new boost::barrier( (unsigned int) num_threads );
     wait_for_job_  = new boost::barrier( (unsigned int) num_threads );

     // initial job for the threads
     thread_job_           = init_enum;
     if( num_threads > 1 )
          sequential_execution_ = false;

     // This master thread is already running, we need to create
     // num_threads - 1 more threads
     for(thread_num = 1; thread_num < num_threads; thread_num++)
     {     // Create the thread with thread number equal to thread_num
          thread_all_[thread_num].bthread =
               new boost::thread(thread_work, thread_num);
     }

     // Current state is other threads are at wait_for_job_.
     // This master thread (thread zero) has not completed wait_for_job_
     sequential_execution_ = true;
     return ok;
}

bool team_work(void worker(void))
{
     // Current state is other threads are at wait_for_job_.
     // This master thread (thread zero) has not completed wait_for_job_
     bool ok = sequential_execution_;
     ok     &= thread_number() == 0;
     ok     &= wait_for_work_  != CPPAD_NULL;
     ok     &= wait_for_job_   != CPPAD_NULL;

     // set global version of this work routine
     worker_ = worker;

     // set the new job that other threads are waiting for
     thread_job_ = work_enum;

     // Enter parallel execution when master thread calls wait_for_job_
     if( num_threads_ > 1 )
          sequential_execution_ = false;
     wait_for_job_->wait();

     // Now do the work in this thread and then wait
     // until all threads have completed wait_for_work_
     worker();
     wait_for_work_->wait();

     // Current state is other threads are at wait_for_job_.
     // This master thread (thread zero) has not completed wait_for_job_
     sequential_execution_ = true;

     size_t thread_num;
     for(thread_num = 0; thread_num < num_threads_; thread_num++)
          ok &= thread_all_[thread_num].ok;
     return ok;
}

bool team_destroy(void)
{     // Current state is other threads are at wait_for_job_.
     // This master thread (thread zero) has not completed wait_for_job_
     bool ok = sequential_execution_;
     ok     &= thread_number() == 0;
     ok     &= wait_for_work_ != CPPAD_NULL;
     ok     &= wait_for_job_  != CPPAD_NULL;

     // set the new job that other threads are waiting for
     thread_job_ = join_enum;

     // enter parallel execution as soon as master thread completes wait_for_job_
     if( num_threads_ > 1 )
               sequential_execution_ = false;
     wait_for_job_->wait();

     // now wait for the other threads to be destroyed
     size_t thread_num;
     ok &= thread_all_[0].bthread == CPPAD_NULL;
     for(thread_num = 1; thread_num < num_threads_; thread_num++)
     {     thread_all_[thread_num].bthread->join();
          delete thread_all_[thread_num].bthread;
          thread_all_[thread_num].bthread = CPPAD_NULL;
     }
     // now we are down to just the master thread (thread zero)
     sequential_execution_ = true;

     // destroy wait_for_work_
     delete wait_for_work_;
     wait_for_work_ = CPPAD_NULL;

     // destroy wait_for_job_
     delete wait_for_job_;
     wait_for_job_ = CPPAD_NULL;

     // check ok before changing num_threads_
     for(thread_num = 0; thread_num < num_threads_; thread_num++)
          ok &= thread_all_[thread_num].ok;

     // now inform CppAD that there is only one thread
     num_threads_ = 1;
     thread_alloc::parallel_setup(num_threads_, CPPAD_NULL, CPPAD_NULL);
     thread_alloc::hold_memory(false);
     CppAD::parallel_ad<double>();

     return ok;
}

const char* team_name(void)
{     return "bthread"; }

Input File: example/multi_thread/bthread/team_bthread.cpp
7.2.11.3: Pthread Implementation of a Team of AD Threads
See 7.2.11: team_thread.hpp for these routines' specifications.

7.2.11.3.a: Bug in Cygwin
There is a bug in pthread_exit, using cygwin 5.1 and g++ version 4.3.4, whereby calling pthread_exit is not the same as returning from the corresponding routine. To be specific, destructors for the vectors are not called and a memory leak results. Set the following preprocessor symbol to 1 to demonstrate this bug:

# define DEMONSTRATE_BUG_IN_CYGWIN 0
# include <pthread.h>
# include <cppad/cppad.hpp>
# include "../team_thread.hpp"
# define MAX_NUMBER_THREADS 48

// It seems that when a barrier is passed, its counter is automatically reset
// to its original value and it can be used again, but where is this
// stated in the pthreads specifications ?
namespace {
     using CppAD::thread_alloc;

     // number of threads in the team
     size_t num_threads_ = 1;

     // key for accessing thread specific information
     pthread_key_t thread_specific_key_;

     // no need to destroy thread specific information
     void thread_specific_destructor(void* thread_num_vptr)
     {     return; }

     // type of the job currently being done by each thread
     enum thread_job_t { init_enum, work_enum, join_enum } thread_job_;

     // barrier used to wait for other threads to finish work
     pthread_barrier_t wait_for_work_;

     // barrier used to wait for master thread to set next job
     pthread_barrier_t wait_for_job_;

     // Are we in sequential mode; i.e., other threads are waiting for
     // master thread to set up next job ?
     bool sequential_execution_ = true;

     // structure with information for one thread
     typedef struct {
          // cppad unique identifier for thread that uses this struct
          size_t          thread_num;
          // pthread unique identifier for thread that uses this struct
          pthread_t       pthread_id;
          // true if no error for this thread, false otherwise.
          bool            ok;
     } thread_one_t;

     // vector with information for all threads
     thread_one_t thread_all_[MAX_NUMBER_THREADS];

     // pointer to function that does the work for one thread
     void (* worker_)(void) = CPPAD_NULL;

     // ---------------------------------------------------------------------
     // in_parallel()
     bool in_parallel(void)
     {     return ! sequential_execution_; }

     // ---------------------------------------------------------------------
     // thread_number()
     size_t thread_number(void)
     {     // get thread specific information
          void*   thread_num_vptr = pthread_getspecific(thread_specific_key_);
          size_t* thread_num_ptr  = static_cast<size_t*>(thread_num_vptr);
          size_t  thread_num      = *thread_num_ptr;
          if( thread_num >= num_threads_ )
          {     std::cerr << "thread_number: program error" << std::endl;
               exit(1);
          }
          return thread_num;
     }
     // --------------------------------------------------------------------
     // function that gets called by pthread_create
     void* thread_work(void* thread_num_vptr)
     {     int rc;
          bool ok = true;

          // Set thread specific data where other routines can access it
          rc = pthread_setspecific(thread_specific_key_, thread_num_vptr);
          ok &= rc == 0;

          // extract this thread's number from the thread specific information
          size_t thread_num = *static_cast<size_t*>(thread_num_vptr);

          // master thread does not use this routine
          ok &= thread_num > 0;

          while( true )
          {
               // Use wait_for_job_ to give master time in sequential mode
               // (so it can change global information like thread_job_)
               rc = pthread_barrier_wait(&wait_for_job_);
               ok &= (rc == 0 || rc == PTHREAD_BARRIER_SERIAL_THREAD);

               // case where we are terminating this thread (no more work)
               if( thread_job_ == join_enum )
                    break;

               // only other case once wait_for_job_ barrier is passed (so far)
               ok &= thread_job_ == work_enum;
               worker_();

               // Use wait_for_work_ to inform master that our work is done and
               // that this thread will not use global information until
               // passing its barrier wait_for_job_ above.
               rc = pthread_barrier_wait(&wait_for_work_);
               ok &= (rc == 0 || rc == PTHREAD_BARRIER_SERIAL_THREAD);
          }
          thread_all_[thread_num].ok = ok;
# if DEMONSTRATE_BUG_IN_CYGWIN
          // Terminate this thread
          void* no_status = CPPAD_NULL;
          pthread_exit(no_status);
# endif
          return CPPAD_NULL;
     }
}

bool team_create(size_t num_threads)
{     bool ok = true;
     int rc;

     if( num_threads > MAX_NUMBER_THREADS )
     {     std::cerr << "team_create: num_threads greater than ";
          std::cerr << MAX_NUMBER_THREADS << std::endl;
          exit(1);
     }
     // check that we currently do not have multiple threads running
     ok  = num_threads_ == 1;
     ok &= sequential_execution_;

     size_t thread_num;
     for(thread_num = 0; thread_num < num_threads; thread_num++)
     {     // Each thread gets a pointer to its version of this thread_num
          // so it knows which section of thread_all_ it is working with
          thread_all_[thread_num].thread_num = thread_num;

          // initialize
          thread_all_[thread_num].ok         = true;
     }
     // Finish setup of thread_all_ for this thread
     thread_all_[0].pthread_id = pthread_self();

     // create a key for thread specific information
     rc = pthread_key_create(&thread_specific_key_,thread_specific_destructor);
     ok &= (rc == 0);

     // set thread specific information for this (master thread)
     void* thread_num_vptr = static_cast<void*>(&(thread_all_[0].thread_num));
     rc = pthread_setspecific(thread_specific_key_, thread_num_vptr);
     ok &= (rc == 0);

     // Now that thread_number() has necessary information for this thread
     // (number zero), and while still in sequential mode,
     // call setup for using CppAD::AD<double> in parallel mode.
     thread_alloc::parallel_setup(num_threads, in_parallel, thread_number);
     thread_alloc::hold_memory(true);
     CppAD::parallel_ad<double>();

     // Now change num_threads_ to its final value. Waiting till now allows
     // calls to thread_number during parallel_setup to check thread_num == 0.
     num_threads_ = num_threads;

     // initialize two barriers, one for work done, one for new job ready
     pthread_barrierattr_t* no_barrierattr = CPPAD_NULL;
     rc = pthread_barrier_init(
          &wait_for_work_, no_barrierattr, (unsigned int) num_threads
     );
     ok &= (rc == 0);
     rc  = pthread_barrier_init(
          &wait_for_job_, no_barrierattr, (unsigned int) num_threads
     );
     ok &= (rc == 0);

     // structure used to create the threads
     pthread_t       pthread_id;
     // default for pthread_attr_setdetachstate is PTHREAD_CREATE_JOINABLE
     pthread_attr_t* no_attr = CPPAD_NULL;

     // initial job for the threads
     thread_job_           = init_enum;
     if( num_threads > 1 )
          sequential_execution_ = false;

     // This master thread is already running, we need to create
     // num_threads - 1 more threads
     for(thread_num = 1; thread_num < num_threads; thread_num++)
     {
          // Create the thread with thread number equal to thread_num
          thread_num_vptr = static_cast<void*> (
               &(thread_all_[thread_num].thread_num)
          );
          rc = pthread_create(
                    &pthread_id ,
                    no_attr     ,
                    thread_work ,
                    thread_num_vptr
          );
          thread_all_[thread_num].pthread_id = pthread_id;
          ok &= (rc == 0);
     }

     // Current state is other threads are at wait_for_job_.
     // This master thread (thread zero) has not completed wait_for_job_
     sequential_execution_ = true;
     return ok;
}

bool team_work(void worker(void))
{     int rc;

     // Current state is other threads are at wait_for_job_.
     // This master thread (thread zero) has not completed wait_for_job_
     bool ok = sequential_execution_;
     ok     &= thread_number() == 0;

     // set global version of this work routine
     worker_ = worker;


     // set the new job that other threads are waiting for
     thread_job_ = work_enum;

     // enter parallel execution as soon as master thread completes wait_for_job_
     if( num_threads_ > 1 )
          sequential_execution_ = false;

     // wait until all threads have completed wait_for_job_
     rc  = pthread_barrier_wait(&wait_for_job_);
     ok &= (rc == 0 || rc == PTHREAD_BARRIER_SERIAL_THREAD);

     // Now do the work in this thread and then wait
     // until all threads have completed wait_for_work_
     worker();
     rc = pthread_barrier_wait(&wait_for_work_);
     ok &= (rc == 0 || rc == PTHREAD_BARRIER_SERIAL_THREAD);

     // Current state is other threads are at wait_for_job_.
     // This master thread (thread zero) has not completed wait_for_job_
     sequential_execution_ = true;

     size_t thread_num;
     for(thread_num = 0; thread_num < num_threads_; thread_num++)
          ok &= thread_all_[thread_num].ok;
     return ok;
}

bool team_destroy(void)
{     int rc;

     // Current state is other threads are at wait_for_job_.
     // This master thread (thread zero) has not completed wait_for_job_
     bool ok = sequential_execution_;
     ok     &= thread_number() == 0;

     // set the new job that other threads are waiting for
     thread_job_ = join_enum;

     // Enter parallel execution as soon as master thread completes wait_for_job_
     if( num_threads_ > 1 )
          sequential_execution_ = false;
     rc  = pthread_barrier_wait(&wait_for_job_);
     ok &= (rc == 0 || rc == PTHREAD_BARRIER_SERIAL_THREAD);

     // now wait for the other threads to exit
     size_t thread_num;
     for(thread_num = 1; thread_num < num_threads_; thread_num++)
     {     void* no_status = CPPAD_NULL;
          rc      = pthread_join(
               thread_all_[thread_num].pthread_id, &no_status
          );
          ok &= (rc == 0);
     }

     // now we are down to just the master thread (thread zero)
     sequential_execution_ = true;

     // destroy the key for thread specific data
     pthread_key_delete(thread_specific_key_);

     // destroy wait_for_work_
     rc  = pthread_barrier_destroy(&wait_for_work_);
     ok &= (rc == 0);

     // destroy wait_for_job_
     rc  = pthread_barrier_destroy(&wait_for_job_);
     ok &= (rc == 0);

     // check ok before changing num_threads_
     for(thread_num = 0; thread_num < num_threads_; thread_num++)
          ok &= thread_all_[thread_num].ok;

     // now inform CppAD that there is only one thread
     num_threads_ = 1;
     thread_alloc::parallel_setup(num_threads_, CPPAD_NULL, CPPAD_NULL);
     thread_alloc::hold_memory(false);
     CppAD::parallel_ad<double>();

     return ok;
}

const char* team_name(void)
{     return "pthread"; }
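
For reference, the following is a minimal usage sketch (not part of team_pthread.cpp); the worker routine and the choice of four threads are hypothetical:

     // each thread in the team, including the master, executes this routine
     void worker(void)
     { }

     bool run_team(void)
     {     bool ok = true;
          ok &= team_create(4);    // this master thread plus three new threads
          ok &= team_work(worker); // all four threads run worker once
          ok &= team_destroy();    // terminate and join the other threads
          return ok;
     }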

Input File: example/multi_thread/pthread/team_pthread.cpp
8: Some General Purpose Utilities
These routines can be included individually; for example,
 
     # include <cppad/utility/vector.hpp>
only includes the definitions necessary for the CppAD::vector class. They can also be included as a group, separate from the rest of CppAD, using
 
     # include <cppad/utility.hpp>
They will also be included, along with the rest of CppAD, using
 
     # include <cppad/cppad.hpp>

8.a: Testing
The routines listed below support numerical correctness and speed testing:
8.2: NearEqual Determine if Two Values Are Nearly Equal
8.5: time_test Determine Amount of Time to Execute a Test
8.3: speed_test Run One Speed Test and Return Results
8.4: SpeedTest Run One Speed Test and Print Results
8.6: test_boolofvoid Object that Runs a Group of Tests

8.b: C++ Concepts
We refer to a set of classes that satisfies certain conditions as a C++ concept. The following concepts are used by the CppAD template library:
8.7: NumericType Definition of a Numeric Type
8.8: CheckNumericType Check NumericType Class Concept
8.9: SimpleVector Definition of a Simple Vector
8.10: CheckSimpleVector Check Simple Vector Concept

8.c: General Numerical Routines
The routines listed below are general purpose numerical routines written with the floating point type as a C++ template parameter. This enables them to be used with algorithmic differentiation types, as well as for other purposes.
8.11: nan Obtain Nan or Determine if a Value is Nan
8.12: pow_int The Integer Power Function
8.13: Poly Evaluate a Polynomial or its Derivative
8.14: LuDetAndSolve Compute Determinants and Solve Equations by LU Factorization
8.15: RombergOne One Dimensional Romberg Integration
8.16: RombergMul Multi-dimensional Romberg Integration
8.17: Runge45 An Embedded 4th and 5th Order Runge-Kutta ODE Solver
8.18: Rosen34 A 3rd and 4th Order Rosenbrock ODE Solver
8.19: OdeErrControl An Error Controller for ODE Solvers
8.20: OdeGear An Arbitrary Order Gear Method
8.21: OdeGearControl An Error Controller for Gear's Ode Solvers

8.d: Miscellaneous

8.d.a: Error Handler
All of the routines in the CppAD namespace use the following general purpose error handler:
8.1: ErrorHandler Replacing the CppAD Error Handler

8.d.b: The CppAD Vector Template Class
This is a simple implementation of a template vector class (that is easy to view in a C++ debugger):
8.22: CppAD_vector The CppAD::vector Template Class

8.d.c: Multi-Threading Memory Allocation
8.23: thread_alloc A Fast Multi-Threading Memory Allocator

8.d.d: Sorting Indices
8.24: index_sort Returns Indices that Sort a Vector

8.d.e: to_string
8.25: to_string Convert Certain Types to a String

8.d.f: set_union
8.26: set_union Union of Standard Sets

8.d.g: Sparse Matrices
8.27: sparse_rc Row and Column Index Sparsity Patterns
8.28: sparse_rcv Sparse Matrix Row, Column, Value Representation

Input File: omh/utility.omh
8.1: Replacing the CppAD Error Handler

8.1.a: Syntax
# include <cppad/utility/error_handler.hpp>
ErrorHandler info(handler)
ErrorHandler::Call(known, line, file, exp, msg)

8.1.b: Constructor
When you construct an ErrorHandler object, the current CppAD error handler is replaced by handler . When the object is destructed, the previous CppAD error handler is restored.

8.1.b.a: Parallel Mode
The ErrorHandler constructor and destructor cannot be called in 8.23.4: parallel execution mode. Furthermore, if this rule is not abided by, a raw C++ assert, instead of one that uses this error handler, will be generated.

8.1.c: Call
When ErrorHandler::Call is called, the current CppAD error handler is used to report an error. This starts out as a default error handler and can be replaced using the ErrorHandler constructor.

8.1.d: info
The object info is used to store information that is necessary to restore the previous CppAD error handler. This is done when the destructor for info is called.

8.1.e: handler
The argument handler has prototype
     void (*handler)(bool, int, const char *, const char *, const char *);
When an error is detected, it is called with the syntax
     handler(known, line, file, exp, msg)
This routine should not return; i.e., upon detection of the error, the routine calling handler does not know how to proceed.

8.1.f: known
The handler argument known has prototype
     bool known
If it is true, the error being reported is from a known problem.

8.1.g: line
The handler argument line has prototype
     int line
It reports the source code line number where the error is detected.

8.1.h: file
The handler argument file has prototype
     const char *file
and is a '\0' terminated character vector. It reports the source code file where the error is detected.

8.1.i: exp
The handler argument exp has prototype
     const char *exp
and is a '\0' terminated character vector. It is a source code boolean expression that should have been true, but is false, and thereby causes this call to handler .

8.1.j: msg
The handler argument msg has prototype
     const char *msg
and is a '\0' terminated character vector. It reports the meaning of the error from the C++ programmer's point of view.

8.1.k: Example
The file 8.1.1: error_handler.cpp contains an example and test of using this routine. It returns true if it succeeds and false otherwise.
Input File: cppad/utility/error_handler.hpp
8.1.1: Replacing The CppAD Error Handler: Example and Test

# include <cppad/utility/error_handler.hpp>
# include <cstring>

namespace {
     void myhandler(
          bool known       ,
          int  line        ,
          const char *file ,
          const char *exp  ,
          const char *msg  )
     {     // error handler must not return, so throw an exception
          throw line;
     }
}


bool ErrorHandler(void)
{     using CppAD::ErrorHandler;

     int lineMinusFive = 0;

     // replace the default CppAD error handler
     ErrorHandler info(myhandler);

     // set ok to false unless catch block is executed
     bool ok = false;

     // use try / catch because handler throws an exception
     try {
          // record this line number; the call to ErrorHandler::Call is five lines below
          lineMinusFive = __LINE__;

          // can call myhandler anywhere that ErrorHandler is defined
          ErrorHandler::Call(
               true     , // reason for the error is known
               __LINE__ , // current source code line number
               __FILE__ , // current source code file name
               "1 > 0"  , // an intentional error condition
               "Testing ErrorHandler"     // reason for error
          );
     }
     catch ( int line )
     {     // check value of the line number that was passed to handler
          ok = (line == lineMinusFive + 5);
     }

     // info drops out of scope and the default CppAD error handler
     // is restored when this routine returns.
     return ok;
}

Input File: example/utility/error_handler.cpp
8.1.2: CppAD Assertions During Execution

8.1.2.a: Syntax
CPPAD_ASSERT_KNOWN(exp, msg)
CPPAD_ASSERT_UNKNOWN(exp)

8.1.2.b: Purpose
These CppAD macros are used to detect and report errors. They are documented here because they correspond to the C++ source code locations at which errors are reported.

8.1.2.c: NDEBUG
If the preprocessor symbol 12.1.j.a: NDEBUG is defined, these macros do nothing; i.e., they are optimized out.

8.1.2.d: Restriction
The CppAD user should not use these macros. You can however write your own macros that do not begin with CPPAD and that call the 8.1: CppAD error handler ; a sketch of one such macro follows.
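
For example, the following minimal sketch (the macro name MY_ASSERT_KNOWN is hypothetical) defines such a macro in terms of ErrorHandler::Call:

     # include <cppad/utility/error_handler.hpp>
     // a hypothetical user assertion macro that reports errors
     // through the current CppAD error handler
     # define MY_ASSERT_KNOWN(exp, msg)              \
          if( ! ( exp ) ) CppAD::ErrorHandler::Call( \
               true, __LINE__, __FILE__, #exp, msg   \
          );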

8.1.2.e: Known
The CPPAD_ASSERT_KNOWN macro is used to check for an error with a known cause. For example, many CppAD routines use these macros to make sure their arguments conform to their specifications.

8.1.2.f: Unknown
The CPPAD_ASSERT_UNKNOWN macro is used to check that the CppAD internal data structures conform as expected. If this is not the case, CppAD does not know why the error has occurred; for example, the user may have written past the end of an allocated array.

8.1.2.g: Exp
The argument exp is a C++ source code expression that results in a bool value that should be true. If it is false, an error has occurred. This expression may be executed any number of times (including zero times), so it must not have side effects.

8.1.2.h: Msg
The argument msg has prototype
     const char *msg
and contains a '\0' terminated character string. This string is a description of the error corresponding to exp being false.

8.1.2.i: Error Handler
These macros use the 8.1: CppAD error handler to report errors. This error handler can be replaced by the user.
Input File: cppad/core/cppad_assert.hpp
8.2: Determine if Two Values Are Nearly Equal

8.2.a: Syntax
# include <cppad/utility/near_equal.hpp>
b = NearEqual(x, y, r, a)

8.2.b: Purpose
Returns true if x and y are nearly equal, and false otherwise.

8.2.c: x
The argument x has one of the following possible prototypes
     const Type               &x,
     const std::complex<Type> &x,

8.2.d: y
The argument y has one of the following possible prototypes
     const Type               &y,
     const std::complex<Type> &y,

8.2.e: r
The relative error criteria r has prototype
     const Type &r
It must be greater than or equal to zero. The relative error condition is defined as: @[@ | x - y | \leq r ( |x| + |y| ) @]@

8.2.f: a
The absolute error criteria a has prototype
     const Type &a
It must be greater than or equal to zero. The absolute error condition is defined as: @[@ | x - y | \leq a @]@

8.2.g: b
The return value b has prototype
     bool b
If either x or y is infinite or not a number, the return value is false. Otherwise, if either the relative or absolute error condition (defined above) is satisfied, the return value is true. Otherwise, the return value is false.
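
As a reference, the following sketch (assuming Type is double; this is not the library's exact implementation) combines the checks described above:

     # include <cmath>
     // a reference version of the NearEqual check for the double case
     bool near_equal_double(double x, double y, double r, double a)
     {     if( std::isnan(x) || std::isnan(y) || std::isinf(x) || std::isinf(y) )
               return false;
          double diff = std::fabs(x - y);
          // relative condition: |x - y| <= r * ( |x| + |y| )
          // absolute condition: |x - y| <= a
          return diff <= r * (std::fabs(x) + std::fabs(y)) || diff <= a;
     }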

8.2.h: Type
The type Type must be a 8.7: NumericType . The routine 8.8: CheckNumericType will generate an error message if this is not the case. In addition, the following operations must be defined for objects a and b of type Type :
Operation Description
a <= b less than or equal operator (returns a bool object)

8.2.i: Include Files
The file cppad/utility/near_equal.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

8.2.j: Example
The file 8.2.1: near_equal.cpp contains an example and test of NearEqual. It returns true if it succeeds and false otherwise.

8.2.k: Exercise
Create and run a program that contains the following code:
 
     using std::complex;
     using std::cout;
     using std::endl;

     complex<double> one(1., 0), i(0., 1);
     complex<double> x = one / i;
     complex<double> y = - i;
     double          r = 1e-12;
     double          a = 0;
     bool           ok = CppAD::NearEqual(x, y, r, a);
     if( ok )
          cout << "Ok"    << endl;
     else cout << "Error" << endl;

Input File: cppad/utility/near_equal.hpp
8.2.1: NearEqual Function: Example and Test

8.2.1.a: File Name
This file is called near_equal.cpp instead of NearEqual.cpp to avoid a name conflict with ../lib/NearEqual.cpp in the corresponding Microsoft project file.

# include <cppad/utility/near_equal.hpp>

# include <complex>

bool Near_Equal(void)
{     bool ok = true;
     typedef std::complex<double> Complex;
     using CppAD::NearEqual;

     // double
     double x    = 1.00000;
     double y    = 1.00001;
     double a    =  .00003;
     double r    =  .00003;
     double zero = 0.;
     double inf  = 1. / zero;
     double nan  = 0. / zero;

     ok &= NearEqual(x, y, zero, a);
     ok &= NearEqual(x, y, r, zero);
     ok &= NearEqual(x, y, r, a);

     ok &= ! NearEqual(x, y, r / 10., a / 10.);
     ok &= ! NearEqual(inf, inf, r, a);
     ok &= ! NearEqual(-inf, -inf, r, a);
     ok &= ! NearEqual(nan, nan, r, a);

     // complex
     Complex X(x, x / 2.);
     Complex Y(y, y / 2.);
     Complex Inf(inf, zero);
     Complex Nan(zero, nan);

     ok &= NearEqual(X, Y, zero, a);
     ok &= NearEqual(X, Y, r, zero);
     ok &= NearEqual(X, Y, r, a);

     ok &= ! NearEqual(X, Y, r / 10., a / 10.);
     ok &= ! NearEqual(Inf, Inf, r, a);
     ok &= ! NearEqual(-Inf, -inf, r, a);
     ok &= ! NearEqual(Nan, Nan, r, a);

     return ok;
}

Input File: example/utility/near_equal.cpp
8.3: Run One Speed Test and Return Results

8.3.a: Syntax
# include <cppad/utility/speed_test.hpp>
rate_vec = speed_test(test, size_vec, time_min)

8.3.b: Purpose
The speed_test function executes a speed test for various sized problems and reports the rate of execution.

8.3.c: Motivation
It is important to separate small calculation units and test them individually. This way individual changes can be tested in the context of the routine that they are in. On many machines, accurate timing of very short execution sequences is not possible. In addition, there may be set up and tear down time for a test that we do not really want included in the timing. For this reason speed_test automatically determines how many times to repeat the section of the test that we wish to time.

8.3.d: Include
The file cppad/utility/speed_test.hpp defines the speed_test function. This file is included by cppad/cppad.hpp and it can also be included separately without the rest of the CppAD routines.

8.3.e: Vector
We use Vector to denote a 8.9: simple vector class with elements of type size_t.

8.3.f: test
The speed_test argument test is a function with the syntax
     test(size, repeat)
and its return value is void.

8.3.f.a: size
The test argument size has prototype
     size_t size
It specifies the size for this test.

8.3.f.b: repeat
The test argument repeat has prototype
     size_t repeat
It specifies the number of times to repeat the test.

8.3.g: size_vec
The speed_test argument size_vec has prototype
     const Vector &size_vec
This vector determines the size for each of the test problems.

8.3.h: time_min
The argument time_min has prototype
     double time_min
It specifies the minimum amount of time in seconds that the test routine should take. The repeat argument to test is increased until this amount of execution time is reached.

8.3.i: rate_vec
The return value rate_vec has prototype
     Vector rate_vec
We use @(@ n @)@ to denote its size, which is the same as the size of the vector size_vec . For @(@ i = 0 , \ldots , n-1 @)@,
     rate_vec[i]
is the value of repeat divided by the execution time in seconds for the problem with size size_vec[i] .

8.3.j: Timing
If your system supports the unix gettimeofday function, it will be used to measure time. Otherwise, time is measured by the difference in
 
     (double) clock() / (double) CLOCKS_PER_SEC
in the context of the standard <ctime> definitions.
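
For instance, the clock based measurement amounts to the following (a minimal sketch, not the library's exact code):

     # include <ctime>
     // a sketch of elapsed time measurement using the standard clock
     double elapsed(void (*f)(void))
     {     double t0 = (double) std::clock() / (double) CLOCKS_PER_SEC;
          f(); // the section being timed
          double t1 = (double) std::clock() / (double) CLOCKS_PER_SEC;
          return t1 - t0;
     }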

8.3.k: Example
The routine 8.3.1: speed_test.cpp is an example and test of speed_test.
Input File: cppad/utility/speed_test.hpp
8.3.1: speed_test: Example and test
# include <cppad/utility/speed_test.hpp>
# include <cppad/utility/vector.hpp>

namespace { // empty namespace
     using CppAD::vector;
     vector<double> a, b, c;
     void test(size_t size, size_t repeat)
     {     // setup
          a.resize(size);
          b.resize(size);
          c.resize(size);
          size_t i  = size;
          while(i)
          {     --i;
               a[i] = double(i);
               b[i] = double(2 * i);
               c[i] = 0.0;
          }
          // operations we are timing
          while(repeat--)
          {     i = size;
               while(i)
               {     --i;
                    c[i] += std::sqrt(a[i] * a[i] + b[i] * b[i]);
               }
          }
     }
}
bool speed_test(void)
{     bool ok = true;

     // size of the test cases
     vector<size_t> size_vec(2);
     size_vec[0] = 40;
     size_vec[1] = 80;

     // minimum amount of time to run test
     double time_min = 0.5;

     // run the test cases
     vector<size_t> rate_vec(2);
     rate_vec = CppAD::speed_test(test, size_vec, time_min);

     // time per repeat loop (not counting setup or teardown)
     double time_0 = 1. / double(rate_vec[0]);
     double time_1 = 1. / double(rate_vec[1]);

     // for this case, time should be linear w.r.t size
     double check    = double(size_vec[1]) * time_0 / double(size_vec[0]);
     double rel_diff = (check - time_1) / time_1;
     ok             &= (std::fabs(rel_diff) <= .1);
     if( ! ok )
          std::cout << std::endl << "rel_diff = " << rel_diff << std::endl;

     a.clear();
     b.clear();
     c.clear();
     return ok;
}

Input File: speed/example/speed_test.cpp
8.4: Run One Speed Test and Print Results

8.4.a: Syntax
# include <cppad/utility/speed_test.hpp>
SpeedTest(Test, first, inc, last)

8.4.b: Purpose
The SpeedTest function executes a speed test for various sized problems and reports the results on standard output; i.e. std::cout. The size of each test problem is included in its report (unless first is equal to last ).

8.4.c: Motivation
It is important to separate small calculation units and test them individually. This way individual changes can be tested in the context of the routine that they are in. On many machines, accurate timing of very short execution sequences is not possible. In addition, there may be set up time for a test that we do not really want included in the timing. For this reason SpeedTest automatically determines how many times to repeat the section of the test that we wish to time.

8.4.d: Include
The file cppad/utility/speed_test.hpp contains the SpeedTest function. This file is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

8.4.e: Test
The SpeedTest argument Test is a function with the syntax
     name = Test(size, repeat)

8.4.e.a: size
The Test argument size has prototype
     size_t size
It specifies the size for this test.

8.4.e.b: repeat
The Test argument repeat has prototype
     size_t repeat
It specifies the number of times to repeat the test.

8.4.e.c: name
The Test result name has prototype
     std::string name
The results for this test are reported on std::cout with name as an identifier for the test. It is assumed that, for the duration of this call to SpeedTest, Test will always return the same value for name . If name is the empty string, no test name is reported by SpeedTest.

8.4.f: first
The SpeedTest argument first has prototype
     size_t first
It specifies the size of the first test problem reported by this call to SpeedTest.

8.4.g: last
The SpeedTest argument last has prototype
     size_t last
It specifies the size of the last test problem reported by this call to SpeedTest.

8.4.h: inc
The SpeedTest argument inc has prototype
     int inc
It specifies the increment between problem sizes; i.e., all values of size in calls to Test are given by
     size = first + j * inc
where j is a non-negative integer. The increment can be positive or negative but it cannot be zero. The values first , last and inc must satisfy the relation @[@ inc * ( last - first ) \geq 0 @]@
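
The following sketch (not the library's code) lists the sizes generated by a choice of first , inc and last that satisfies the relation above:

     // the sequence of sizes used by SpeedTest, as a sketch
     void size_sequence(int first, int inc, int last)
     {     int size = first;
          while( inc > 0 ? size <= last : size >= last )
          {     // Test is called with this value of size
               size += inc;
          }
     }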

8.4.i: rate
The value displayed in the rate column on std::cout is defined as the value of repeat divided by the corresponding elapsed execution time in seconds. The elapsed execution time is measured by the difference in
 
     (double) clock() / (double) CLOCKS_PER_SEC
in the context of the standard <ctime> definitions.

8.4.j: Errors
If one of the restrictions above is violated, the CppAD error handler is used to report the error. You can redefine this action using the instructions in 8.1: ErrorHandler .

8.4.k: Example
The program 8.4.1: speed_program.cpp is an example usage of SpeedTest.
Input File: cppad/utility/speed_test.hpp
8.4.1: Example Use of SpeedTest

8.4.1.a: Running This Program
On a Unix system that includes the g++ compiler, you can compile and run this program by changing into the speed/example directory and executing the following commands
 
     g++ -I../.. speed_program.cpp -o speed_program.exe
     ./speed_program.exe

8.4.1.b: Program
# include <cppad/utility/speed_test.hpp>

std::string Test(size_t size, size_t repeat)
{     // setup
     double *a = new double[size];
     double *b = new double[size];
     double *c = new double[size];
     size_t i  = size;
     while(i)
     {     --i;
          a[i] = i;
          b[i] = 2 * i;
     }
     // operations we are timing
     while(repeat--)
     {     i = size;
          while(i)
          {     --i;
               c[i] = a[i] + b[i];
          }
     }
     // teardown
     delete [] a;
     delete [] b;
     delete [] c;

     // return a test name that is valid for all sizes and repeats
     return "double: c[*] = a[*] + b[*]";
}
int main(void)
{
     CppAD::SpeedTest(Test, 10, 10, 100);
     return 0;
}
8.4.1.c: Output
Executing the program above generated the following output (the rates will be different for each particular system):
 
     double: c[*] = a[*] + b[*]
     size = 10  rate = 14,122,236
     size = 20  rate = 7,157,515
     size = 30  rate = 4,972,500
     size = 40  rate = 3,887,214
     size = 50  rate = 3,123,086
     size = 60  rate = 2,685,214
     size = 70  rate = 2,314,737
     size = 80  rate = 2,032,124
     size = 90  rate = 1,814,145
     size = 100 rate = 1,657,828

Input File: speed/example/speed_program.cpp
8.5: Determine Amount of Time to Execute a Test

8.5.a: Syntax
# include <cppad/utility/time_test.hpp>
time = time_test(test, time_min)
time = time_test(test, time_min, test_size)

8.5.b: Purpose
The time_test function executes a timing test and reports the amount of wall clock time for execution.

8.5.c: Motivation
It is important to separate small calculation units and test them individually. This way individual changes can be tested in the context of the routine that they are in. On many machines, accurate timing of very short execution sequences is not possible. In addition, there may be set up and tear down time for a test that we do not really want included in the timing. For this reason time_test automatically determines how many times to repeat the section of the test that we wish to time.

8.5.d: Include
The file cppad/utility/time_test.hpp defines the time_test function. This file is included by cppad/cppad.hpp and it can also be included separately without the rest of the CppAD routines.

8.5.e: test
The time_test argument test is a function, or function object. In the case where test_size is not present, test supports the syntax
     test(repeat)
In the case where test_size is present, test supports the syntax
     test(size, repeat)
In either case, the return value for test is void.
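
For instance (a hypothetical usage; test_fun and test_fun_size are user supplied routines, and the values 1.0 and 50 are arbitrary):

     # include <cppad/utility/time_test.hpp>
     // hypothetical routines being timed
     void test_fun(size_t repeat)
     {     while( repeat-- ) { /* work being timed */ } }
     void test_fun_size(size_t size, size_t repeat)
     {     while( repeat-- ) { /* work that depends on size */ } }

     double run_both(void)
     {     // first form: test is called as test(repeat)
          double time1 = CppAD::time_test(test_fun, 1.0);
          // second form: test is called as test(test_size, repeat)
          double time2 = CppAD::time_test(test_fun_size, 1.0, 50);
          return time1 + time2;
     }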

8.5.e.a: size
If the argument size is present, it has prototype
     size_t size
and is equal to the test_size argument to time_test.

8.5.e.b: repeat
The test argument repeat has prototype
     size_t repeat
It specifies the number of times to repeat the test.

8.5.f: time_min
The argument time_min has prototype
     double time_min
It specifies the minimum amount of time in seconds that the test routine should take. The repeat argument to test is increased until this amount of execution time (or more) is reached.

8.5.g: test_size
This argument has prototype
     size_t test_size
It specifies the size argument to test .

8.5.h: time
The return value time has prototype
     double time
and is the number of wall clock seconds that it took to execute test divided by the value used for repeat .

8.5.i: Timing
The routine 8.5.1: elapsed_seconds will be used to determine the amount of time it took to execute the test.

8.5.j: Example
The routine 8.5.2: time_test.cpp is an example and test of time_test.
Input File: cppad/utility/time_test.hpp
8.5.1: Returns Elapsed Number of Seconds

8.5.1.a: Syntax
# include <cppad/utility/elapsed_seconds.hpp>
s = elapsed_seconds()

8.5.1.b: Purpose
This routine is accurate to within .02 seconds (see 8.5.1.1: elapsed_seconds.cpp ). It does not necessarily work for time intervals that are greater than a day.
  1. If the C++11 std::chrono::steady_clock is available, it will be used for timing.
  2. Otherwise, if running under the Microsoft compiler, ::GetSystemTime will be used for timing.
  3. Otherwise, if gettimeofday is available, it is used for timing.
  4. Otherwise, std::clock() will be used for timing.


8.5.1.c: s
is a double equal to the number of seconds since the first call to elapsed_seconds.

8.5.1.d: Microsoft Systems
If you are using ::GetSystemTime, you will need to link in the external routine called 11.1.8: microsoft_timer .

8.5.1.e: Example
The routine 8.5.1.1: elapsed_seconds.cpp is an example and test of this routine.
Input File: cppad/utility/elapsed_seconds.hpp
8.5.1.1: Elapsed Seconds: Example and Test
# include <cppad/utility/elapsed_seconds.hpp>

# include <iostream>
# include <algorithm>
# include <cmath>

# define CPPAD_DEBUG_ELAPSED_SECONDS 0

bool elapsed_seconds(void)
{     bool ok = true;

     double max_diff = 0.;
     double s0 = CppAD::elapsed_seconds();
     double s1 = CppAD::elapsed_seconds();
     double s2 = CppAD::elapsed_seconds();
     while(s2 - s0 < 1.)
     {     max_diff = std::max(s2 - s1, max_diff);
          s1 = s2;
          s2 = CppAD::elapsed_seconds();

     }
# if CPPAD_DEBUG_ELAPSED_SECONDS
     std::cout << "max_diff = " << max_diff << std::endl;
# endif
     ok &= 0. < max_diff && max_diff < .04;
     return ok;
}

Input File: speed/example/elapsed_seconds.cpp
8.5.2: time_test: Example and test
# include <cppad/utility/time_test.hpp>
# include <cppad/utility/vector.hpp>

namespace { // empty namespace
     using CppAD::vector;

     // size for the test
     size_t size_;

     vector<double> a, b, c;
     void test(size_t repeat)
     {     // setup
          a.resize(size_);
          b.resize(size_);
          c.resize(size_);
          size_t i  = size_;
          while(i)
          {     --i;
               a[i] = float(i);
               b[i] = float(2 * i);
               c[i] = 0.0;
          }
          // operations we are timing
          while(repeat--)
          {     i = size_;
               while(i)
               {     --i;
                    c[i] += std::sqrt(a[i] * a[i] + b[i] * b[i]);
               }
          }
     }

}
bool time_test(void)
{     bool ok = true;

     // minimum amount of time to run test
     double time_min = 0.5;

     // size of first test case
     size_ = 20;

     // run the first test case
     double time_first = CppAD::time_test(test, time_min);

     // size of second test case is twice as large
     size_ = 2 * size_;

     // run the second test case
     double time_second = CppAD::time_test(test, time_min);

     // for this case, time should be linear w.r.t size
     double rel_diff = 1. - 2. * time_first / time_second;
     ok             &= (std::fabs(rel_diff) <= .1);
     if( ! ok )
          std::cout << std::endl << "rel_diff = " << rel_diff  << std::endl;

     a.clear();
     b.clear();
     c.clear();
     return ok;
}

Input File: speed/example/time_test.cpp
8.6: Object that Runs a Group of Tests

8.6.a: Syntax
test_boolofvoid Run(group, width)
Run(test, name)
ok = Run.summary(memory_ok)

8.6.b: Purpose
The object Run is used to run a group of test functions and report the results on standard output.

8.6.c: group
The argument has prototype
     const std::string& group
It is the name for this group of tests.

8.6.d: width
The argument has prototype
     size_t width
It is the number of columns used to display the name of each test. It must be greater than the maximum number of characters in a test name.

8.6.e: test
The argument has prototype
     bool test(void)
It is a function that returns true (when the test passes) and false otherwise.

8.6.f: name
The argument has prototype
     const std::string& name
It is the name for the corresponding test .

8.6.g: memory_ok
The argument has prototype
     bool memory_ok
It is false if a memory leak is detected (and true otherwise).

8.6.h: ok
This is true if all of the tests pass (including the memory leak test), otherwise it is false.

8.6.i: Example
See any of the main programs in the example directory; e.g., example/ipopt_solve.cpp.
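
A minimal sketch of such a main program follows (the group name, width, and test routine below are hypothetical):

     # include <cppad/utility/test_boolofvoid.hpp>
     # include <cppad/utility/thread_alloc.hpp>

     // a hypothetical test that always passes
     bool one_test(void)
     {     return true; }

     int main(void)
     {     // run the group of tests, using 20 columns for each test name
          CppAD::test_boolofvoid Run("my_group", 20);
          Run(one_test, "one_test");

          // check for memory leaks and print the summary
          bool memory_ok = CppAD::thread_alloc::free_all();
          bool ok        = Run.summary(memory_ok);
          return ok ? 0 : 1;
     }
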
Input File: cppad/utility/test_boolofvoid.hpp
8.7: Definition of a Numeric Type

8.7.a: Type Requirements
A NumericType is any type that satisfies the requirements below. The following is a list of some numeric types: int, float, double, AD<double>, AD< AD<double> >. The routine 8.8: CheckNumericType can be used to check that a type satisfies these conditions.

8.7.b: Default Constructor
The syntax
     NumericType x;
creates a NumericType object with an unspecified value.

8.7.c: Constructor From Integer
If i is an int, the syntax
     NumericType x(i);
creates a NumericType object with a value equal to i where i can be const.

8.7.d: Copy Constructor
If x is a NumericType object the syntax
     NumericType y(x);
creates a NumericType object y with the same value as x where x can be const.

8.7.e: Assignment
If x and y are NumericType objects, the syntax
     x = y
sets the value of x equal to the value of y where y can be const. The expression corresponding to this operation is unspecified; i.e., it could be void and hence
     x = y = z
may not be legal.

8.7.f: Operators
Suppose x , y and z are NumericType objects where x and y may be const. In the result type column, NumericType can be replaced by any type that can be used just like a NumericType object.
Operation Description Result Type
+x unary plus NumericType
-x unary minus NumericType
x +  y binary addition NumericType
x -  y binary subtraction NumericType
x *  y binary multiplication NumericType
x /  y binary division NumericType
z += y compound assignment addition unspecified
z -= y compound assignment subtraction unspecified
z *= y compound assignment multiplication unspecified
z /= y compound assignment division unspecified

8.7.g: Example
The file 8.7.1: numeric_type.cpp contains an example and test of using numeric types. It returns true if it succeeds and false otherwise. (It is easy to modify to test additional numeric types.)

8.7.h: Exercise
  1. List three operators that are not supported by every numeric type but that are supported by the numeric types int, float, double.
  2. Which of the following are numeric types: std::complex<double>, std::valarray<double>, std::vector<double> ?

Input File: omh/numeric_type.omh
8.7.1: The NumericType: Example and Test

# include <cppad/cppad.hpp>

namespace { // Empty namespace

     // -------------------------------------------------------------------
     class MyType {
     private:
          double d;
     public:
          // constructor from void
          MyType(void) : d(0.)
          { }
          // constructor from an int
          MyType(int d_) : d(d_)
          { }
          // copy constructor
          MyType(const MyType &x)
          {     d = x.d; }
          // assignment operator
          void operator = (const MyType &x)
          {     d = x.d; }
          // member function that converts to double
          double Double(void) const
          {     return d; }
          // unary plus
          MyType operator + (void) const
          {     MyType x;
               x.d =  d;
               return x;
          }
          // unary minus
          MyType operator - (void) const
          {     MyType x;
               x.d = - d;
               return x;
          }
          // binary addition
          MyType operator + (const MyType &x) const
          {     MyType y;
               y.d = d + x.d ;
               return y;
          }
          // binary subtraction
          MyType operator - (const MyType &x) const
          {     MyType y;
               y.d = d - x.d ;
               return y;
          }
          // binary multiplication
          MyType operator * (const MyType &x) const
          {     MyType y;
               y.d = d * x.d ;
               return y;
          }
          // binary division
          MyType operator / (const MyType &x) const
          {     MyType y;
               y.d = d / x.d ;
               return y;
          }
          // compound assignment addition
          void operator += (const MyType &x)
          {     d += x.d; }
          // compound assignment subtraction
          void operator -= (const MyType &x)
          {     d -= x.d; }
          // compound assignment multiplication
          void operator *= (const MyType &x)
          {     d *= x.d; }
          // compound assignment division
          void operator /= (const MyType &x)
          {     d /= x.d; }
     };
}
bool NumericType(void)
{     bool ok  = true;
     using CppAD::AD;
     using CppAD::CheckNumericType;

     CheckNumericType<MyType>            ();

     CheckNumericType<int>               ();
     CheckNumericType<double>            ();
     CheckNumericType< AD<double> >      ();
     CheckNumericType< AD< AD<double> > >();

     return ok;
}

Input File: example/general/numeric_type.cpp
8.8: Check NumericType Class Concept

8.8.a: Syntax
# include <cppad/utility/check_numeric_type.hpp>
CheckNumericType<NumericType>()

8.8.b: Purpose
The syntax
     CheckNumericType<NumericType>()
performs compile and run time checks that the type specified by NumericType satisfies all the requirements for a 8.7: NumericType class. If a requirement is not satisfied, an error message makes it clear what condition is not satisfied.

8.8.c: Include
The file cppad/utility/check_numeric_type.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD include files.

8.8.d: Parallel Mode
The routine 8.23.2: thread_alloc::parallel_setup must be called before it can be used in 8.23.4: parallel mode.

8.8.e: Example
The file 8.8.1: check_numeric_type.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise. The comments in this example suggest a way to change the example so an error message occurs.
Input File: cppad/utility/check_numeric_type.hpp
8.8.1: The CheckNumericType Function: Example and Test

# include <cppad/utility/check_numeric_type.hpp>
# include <cppad/utility/near_equal.hpp>


// Choosing a value between 1 and 10 selects a numeric class property to be
// omitted and results in an error message being generated
# define CppADMyTypeOmit 0

namespace { // Empty namespace

     // -------------------------------------------------------------------
     class MyType {
     private:
          double d;
     public:
          // constructor from void
          MyType(void) : d(0.)
          { }
          // constructor from an int
          MyType(int d_) : d(d_)
          { }
          // copy constructor
          MyType(const MyType &x)
          {     d = x.d; }
          // assignment operator
          void operator = (const MyType &x)
          {     d = x.d; }
          // member function that converts to double
          double Double(void) const
          {     return d; }
# if CppADMyTypeOmit != 1
          // unary plus
          MyType operator + (void) const
          {     MyType x;
               x.d =  d;
               return x;
          }
# endif
# if CppADMyTypeOmit != 2
          // unary minus
          MyType operator - (void) const
          {     MyType x;
               x.d = - d;
               return x;
          }
# endif
# if CppADMyTypeOmit != 3
          // binary addition
          MyType operator + (const MyType &x) const
          {     MyType y;
               y.d = d + x.d ;
               return y;
          }
# endif
# if CppADMyTypeOmit != 4
          // binary subtraction
          MyType operator - (const MyType &x) const
          {     MyType y;
               y.d = d - x.d ;
               return y;
          }
# endif
# if CppADMyTypeOmit != 5
          // binary multiplication
          MyType operator * (const MyType &x) const
          {     MyType y;
               y.d = d * x.d ;
               return y;
          }
# endif
# if CppADMyTypeOmit != 6
          // binary division
          MyType operator / (const MyType &x) const
          {     MyType y;
               y.d = d / x.d ;
               return y;
          }
# endif
# if CppADMyTypeOmit != 7
          // compound assignment addition
          void operator += (const MyType &x)
          {     d += x.d; }
# endif
# if CppADMyTypeOmit != 8
          // compound assignment subtraction
          void operator -= (const MyType &x)
          {     d -= x.d; }
# endif
# if CppADMyTypeOmit != 9
          // compound assignment multiplication
          void operator *= (const MyType &x)
          {     d *= x.d; }
# endif
# if CppADMyTypeOmit != 10
          // compound assignment division
          void operator /= (const MyType &x)
          {     d /= x.d; }
# endif
     };
     // -------------------------------------------------------------------
     /*
     Solve: A[0] * x[0] + A[1] * x[1] = b[0]
            A[2] * x[0] + A[3] * x[1] = b[1]
     */
     template <class NumericType>
     void Solve(NumericType *A, NumericType *x, NumericType *b)
     {
          // make sure NumericType satisfies its conditions
          CppAD::CheckNumericType<NumericType>();

          // copy b to x
          x[0] = b[0];
          x[1] = b[1];

          // copy A to work space
          NumericType W[4];
          W[0] = A[0];
          W[1] = A[1];
          W[2] = A[2];
          W[3] = A[3];

          // divide first row by W(1,1)
          W[1] /= W[0];
          x[0] /= W[0];
          W[0] = NumericType(1);

          // subtract W(2,1) times first row from second row
          W[3] -= W[2] * W[1];
          x[1] -= W[2] * x[0];
          W[2] = NumericType(0);

          // divide second row by W(2, 2)
          x[1] /= W[3];
          W[3]  = NumericType(1);

          // use first row to solve for x[0]
          x[0] -= W[1] * x[1];
     }
} // End Empty namespace

bool CheckNumericType(void)
{     bool ok  = true;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     MyType A[4];
     A[0] = MyType(1); A[1] = MyType(2);
     A[2] = MyType(3); A[3] = MyType(4);

     MyType b[2];
     b[0] = MyType(1);
     b[1] = MyType(2);

     MyType x[2];
     Solve(A, x, b);

     MyType sum;
     sum = A[0] * x[0] + A[1] * x[1];
     ok &= NearEqual(sum.Double(), b[0].Double(), eps99, eps99);

     sum = A[2] * x[0] + A[3] * x[1];
     ok &= NearEqual(sum.Double(), b[1].Double(), eps99, eps99);

     return ok;
}

Input File: example/utility/check_numeric_type.cpp
8.9: Definition of a Simple Vector

8.9.a: Template Class Requirements
A simple vector template class SimpleVector is any template class that satisfies the requirements below. The following is a list of some simple vector template classes:
Name Documentation
std::vector Section 16.3 of 12.5.b: The C++ Programming Language
std::valarray Section 22.4 of 12.5.b: The C++ Programming Language
CppAD::vector 8.22: The CppAD::vector Template Class

8.9.b: Elements of Specified Type
A simple vector class with elements of type Scalar is any class that satisfies the requirements for a class of the form
     SimpleVector<Scalar>
The routine 8.10: CheckSimpleVector can be used to check that a class is a simple vector class with a specified element type.

8.9.c: Default Constructor
The syntax
     SimpleVector<Scalar> x;
creates an empty vector x ( x.size() is zero) that can later contain elements of the specified type (see 8.9.i: resize below).

8.9.d: Sizing Constructor
If n has type size_t,
     SimpleVector<Scalar> x(n)
creates a vector x with n elements each of the specified type.

8.9.e: Copy Constructor
If x is a SimpleVector<Scalar> object,
     SimpleVector<Scalar> y(x)
creates a vector with the same type and number of elements as x . The Scalar assignment operator ( = ) is used to set each element of y equal to the corresponding element of x . This is a `deep copy' in that the values of the elements of x and y can be set independently after the copy. The argument x is passed by reference and may be const.

8.9.f: Element Constructor and Destructor
The default constructor for type Scalar is called for every element in a vector when the vector element is created. The Scalar destructor is called when it is removed from the vector (this includes when the vector is destroyed).

8.9.g: Assignment
If x and y are SimpleVector<Scalar> objects,
     y = x
uses the Scalar assignment operator ( = ) to set each element of y equal to the corresponding element of x . This is a `deep assignment' in that the values of the elements of x and y can be set independently after the assignment. The vectors x and y must have the same number of elements. The argument x is passed by reference and may be const.

The type returned by this assignment is unspecified; for example, it might be void in which case the syntax
     z = y = x
would not be valid.

8.9.h: Size
If x is a SimpleVector<Scalar> object and n has type size_t,
     n = size_t( x.size() )
sets n to the number of elements in the vector x . The object x may be const.

8.9.i: Resize
If x is a SimpleVector<Scalar> object and n has type size_t,
     x.resize(n)
changes the number of elements contained in the vector x to be n . The values of the elements of x are not specified after this operation; i.e., any values previously stored in x are lost. (The object x can not be const.)

8.9.j: Value Type
If Vector is any simple vector class, the syntax
     Vector::value_type
is the type of the elements corresponding to the vector class; i.e.,
     SimpleVector<Scalar>::value_type
is equal to Scalar .
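
For instance (a one line illustration using the CppAD vector class):

     # include <cppad/utility/vector.hpp>
     // Scalar below is double, the element type of CppAD::vector<double>
     typedef CppAD::vector<double>::value_type Scalar;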

8.9.k: Element Access
If x is a SimpleVector<Scalar> object and i has type size_t,
     x[i]
returns an object of an unspecified type, referred to here as elementType .

8.9.k.a: Using Value
If elementType is not the same as Scalar , the conversion operator
     static_cast<Scalar>(x[i])
is used implicitly when x[i] is used in an expression with values of type Scalar . For this type of usage, the object x may be const.

8.9.k.b: Assignment
If y is an object of type Scalar ,
     x[i] = y
assigns the i-th element of x to have value y . For this type of usage, the object x can not be const. The type returned by this assignment is unspecified; for example, it might be void in which case the syntax
     z = x[i] = y
would not be valid.

8.9.l: Example
The file 8.9.1: simple_vector.cpp contains an example and test of a simple vector template class. It returns true if it succeeds and false otherwise. (It is easy to modify to test additional simple vector template classes.)

8.9.m: Exercise
  1. If Vector is a simple vector template class, the following code may not be valid:
          Vector<double> x(2);
          x[2] = 1.;
    Create and run a program that executes the code segment above where Vector is each of the following cases: std::vector, CppAD::vector. Do this both where the compiler option -DNDEBUG is and is not present on the compilation command line.
  2. If Vector is a simple vector template class, the following code may not be valid:
          Vector<int> x(2);
          Vector<int> y(1);
          x[0] = 0;
          x[1] = 1;
          y    = x;
    Create and run a program that executes the code segment above where Vector is each of the following cases: std::valarray, CppAD::vector. Do this both where the compiler option -DNDEBUG is and is not present on the compilation command line.

Input File: omh/simple_vector.omh
8.9.1: Simple Vector Template Class: Example and Test
# include <iostream>                   // std::cout and std::endl

# include <vector>                     // std::vector
# include <valarray>                   // std::valarray
# include <cppad/utility/vector.hpp>       // CppAD::vector
# include <cppad/utility/check_simple_vector.hpp>  // CppAD::CheckSimpleVector
namespace {
     template <typename Vector>
     bool Ok(void)
     {     // type corresponding to elements of Vector
          typedef typename Vector::value_type Scalar;

          bool ok = true;             // initialize testing flag

          Vector x;                   // use the default constructor
          ok &= (x.size() == 0);      // test size for an empty vector
          Vector y(2);                // use the sizing constructor
          ok &= (y.size() == 2);      // size for a vector with elements

          // non-const access to the elements of y
          size_t i;
          for(i = 0; i < 2; i++)
               y[i] = Scalar(i);

          const Vector z(y);          // copy constructor
          x.resize(2);                // resize
          x = z;                      // vector assignment

          // use the const access to the elements of x
          // and test the values of elements of x, y, z
          for(i = 0; i < 2; i++)
          {     ok &= (x[i] == Scalar(i));
               ok &= (y[i] == Scalar(i));
               ok &= (z[i] == Scalar(i));
          }
          return ok;
     }
}
bool SimpleVector (void)
{     bool ok = true;

     // use routine above to check these cases
     ok &= Ok< std::vector<double> >();
     ok &= Ok< std::valarray<float> >();
     ok &= Ok< CppAD::vector<int> >();
# ifndef _MSC_VER
     // Avoid the following Microsoft compiler warning:  'size_t' :
     // forcing value to bool 'true' or 'false' (performance warning)
     ok &= Ok< std::vector<bool> >();
     ok &= Ok< CppAD::vector<bool> >();
# endif
     // use CheckSimpleVector for more extensive testing
     CppAD::CheckSimpleVector<double, std::vector<double>  >();
     CppAD::CheckSimpleVector<float,  std::valarray<float> >();
     CppAD::CheckSimpleVector<int,    CppAD::vector<int>   >();
     CppAD::CheckSimpleVector<bool,   std::vector<bool>    >();
     CppAD::CheckSimpleVector<bool,   CppAD::vector<bool>  >();

     return ok;
}

Input File: example/utility/simple_vector.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.10: Check Simple Vector Concept

8.10.a: Syntax
# include <cppad/utility/check_simple_vector.hpp>
CheckSimpleVector<Scalar, Vector>()
CheckSimpleVector<Scalar, Vector>(x, y)

8.10.b: Purpose
Performs compile and run time checks that the type specified by Vector satisfies all the requirements for a 8.9: SimpleVector class with 8.9.b: elements of type Scalar . If a requirement is not satisfied, an error message makes it clear what condition is not satisfied.

8.10.c: x, y
If the arguments x and y are present, they have prototype
     const Scalar &x
     const Scalar &y
In addition, the check
     
x == x
will return the boolean value true, and
     
x == y
will return false.

8.10.d: Restrictions
If the arguments x and y are not present, the following extra assumption is made by CheckSimpleVector: if x and y are Scalar objects,
     
x = 0
     
y = 1
assign values to the objects x and y . In addition, x == x would return the boolean value true and x == y would return false.
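
As an illustration, here is a minimal hedged sketch of both call styles; the two argument form is useful when assigning 0 and 1 to a Scalar is not meaningful (the complex values below are only an assumed example):
     # include <cppad/utility/check_simple_vector.hpp>
     # include <cppad/utility/vector.hpp>
     # include <complex>

     void check_sketch(void)
     {    typedef std::complex<double> Complex;
          // relies on the x = 0, y = 1 assumption above
          CppAD::CheckSimpleVector<double, CppAD::vector<double> >();
          // supplies x and y explicitly
          Complex x(1., 2.), y(3., 4.);
          CppAD::CheckSimpleVector<Complex, CppAD::vector<Complex> >(x, y);
     }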

8.10.e: Include
The file cppad/utility/check_simple_vector.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD include files.

8.10.f: Parallel Mode
The routine 8.23.2: thread_alloc::parallel_setup must be called before it can be used in 8.23.4: parallel mode.

8.10.g: Example
The file 8.10.1: check_simple_vector.cpp contains an example and test of this function where S is the same as T . It returns true if it succeeds and false otherwise. The comments in this example suggest a way to change the example so S is not the same as T .
Input File: cppad/utility/check_simple_vector.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.10.1: The CheckSimpleVector Function: Example and Test

# include <cppad/utility/vector.hpp>
# include <cppad/utility/check_simple_vector.hpp>
# include <iostream>


// Choosing a value between 1 and 9 selects a simple vector property to be
// omitted and results in an error message being generated
# define CppADMyVectorOmit 0

// -------------------------------------------------------------------------

// example class used for non-constant elements (different from Scalar)
template <class Scalar>
class MyElement {
private:
     Scalar *element;
public:
     // element constructor
     MyElement(Scalar *e)
     {     element = e; }
     // an example element assignment that returns void
     void operator = (const Scalar &s)
     {     *element = s; }
     // conversion to Scalar
     operator Scalar() const
     {     return *element; }
};


// example simple vector class
template <class Scalar>
class MyVector {
private:
     size_t length;
     Scalar * data;
public:

# if CppADMyVectorOmit != 1
     // type of the elements in the vector
     typedef Scalar value_type;
# endif
# if CppADMyVectorOmit != 2
     // default constructor
     inline MyVector(void) : length(0) , data(0)
     { }
# endif
# if CppADMyVectorOmit != 3
     // constructor with a specified size
     inline MyVector(size_t n) : length(n)
     {     if( length == 0 )
               data = 0;
          else     data = new Scalar[length];
     }
# endif
# if CppADMyVectorOmit != 4
     // copy constructor
     inline MyVector(const MyVector &x) : length(x.length)
     {     size_t i;
          if( length == 0 )
               data = 0;
          else     data = new Scalar[length];

          for(i = 0; i < length; i++)
               data[i] = x.data[i];
     }
# endif
# if CppADMyVectorOmit != 4
# if CppADMyVectorOmit != 7
     // destructor (it is not safe to delete the pointer in cases 4 and 7)
     ~MyVector(void)
     {     delete [] data; }
# endif
# endif
# if CppADMyVectorOmit != 5
     // size function
     inline size_t size(void) const
     {     return length; }
# endif
# if CppADMyVectorOmit != 6
     // resize function
     inline void resize(size_t n)
     {     if( length > 0 )
               delete [] data;
          length = n;
          if( length > 0 )
               data = new Scalar[length];
          else     data = 0;
     }
# endif
# if CppADMyVectorOmit != 7
     // assignment operator
     inline MyVector & operator=(const MyVector &x)
     {     size_t i;
          for(i = 0; i < length; i++)
               data[i] = x.data[i];
          return *this;
     }
# endif
# if CppADMyVectorOmit != 8
     // non-constant element access
     MyElement<Scalar> operator[](size_t i)
     {     return data + i; }
# endif
# if CppADMyVectorOmit != 9
     // constant element access
     const Scalar & operator[](size_t i) const
     {     return data[i]; }
# endif
};
// -------------------------------------------------------------------------

/*
Compute r = a * v, where a is a scalar with same type as the elements of
the Simple Vector v. This routine uses the CheckSimpleVector function to ensure that
the types agree.
*/
namespace { // Empty namespace
     template <class Scalar, class Vector>
     Vector Sscal(const Scalar &a, const Vector &v)
     {
          // invoke CheckSimpleVector function
          CppAD::CheckSimpleVector<Scalar, Vector>();

          size_t n = v.size();
          Vector r(n);

          size_t i;
          for(i = 0; i < n; i++)
               r[i] = a * v[i];

          return r;
     }
}

bool CheckSimpleVector(void)
{     bool ok  = true;
     using CppAD::vector;

     // --------------------------------------------------------
     // If you change double to float in the next statement,
     // CheckSimpleVector will generate an error message at compile time.
     double a = 3.;
     // --------------------------------------------------------

     size_t n = 2;
     MyVector<double> v(n);
     v[0]     = 1.;
     v[1]     = 2.;
     MyVector<double> r = Sscal(a, v);
     ok      &= (r[0] == 3.);
     ok      &= (r[1] == 6.);

     return ok;
}

Input File: example/utility/check_simple_vector.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.11: Obtain Nan or Determine if a Value is Nan

8.11.a: Syntax
# include <cppad/utility/nan.hpp>
b = isnan(s)
b = hasnan(v)

8.11.b: Purpose
Obtains and checks for the value not a number (nan). The IEEE standard specifies that a floating point value a is nan if and only if the following returns true
     
a != a

8.11.c: Include
The file cppad/utility/nan.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

8.11.c.a: Macros
Some C++ compilers use preprocessor symbols called nan and isnan. These preprocessor symbols will no longer be defined after this file is included.

8.11.d: isnan
This routine determines if a scalar value is nan.

8.11.d.a: s
The argument s has prototype
     const Scalar s

8.11.d.b: b
The return value b has prototype
     bool b
It is true if the value s is nan.

8.11.e: hasnan
This routine determines if a 8.9: SimpleVector has an element that is nan.

8.11.e.a: v
The argument v has prototype
     const Vector &v
(see 8.11.h: Vector for the definition of Vector ).

8.11.e.b: b
The return value b has prototype
     bool b
It is true if the vector v has a nan.

8.11.f: nan(zero)

8.11.f.a: Deprecated 2015-10-04
This routine has been deprecated, use CppAD numeric limits 4.4.6.h: quiet_NaN in its place.

8.11.f.b: Syntax
s = nan(z)

8.11.f.c: z
The argument z has prototype
     const Scalar &z
and its value is zero (see 8.11.g: Scalar for the definition of Scalar ).

8.11.f.d: s
The return value s has prototype
     Scalar s
It is the value nan for this floating point type.
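
For example (a hedged sketch of the recommended replacement, not part of this routine), the deprecated call nan(0.) can be replaced by
     double s = CppAD::numeric_limits<double>::quiet_NaN();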

8.11.g: Scalar
The type Scalar must support the following operations:
Operation Description
a / b division operator (returns a Scalar object)
a == b equality operator (returns a bool object)
a != b inequality operator (returns a bool object)
Note that the division operator will be used with a and b equal to zero. For some types (e.g. int) this may generate an exception. No attempt is made to catch any such exception.
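
As a hedged sketch (one possible way to meet this specification, not necessarily the library source), these operations suffice to obtain and test for nan:
     template <class Scalar>
     Scalar nan_sketch(const Scalar &z)   // z is assumed to be zero
     {    return z / z; }                 // zero / zero is nan for IEEE types

     template <class Scalar>
     bool isnan_sketch(const Scalar &s)
     {    return s != s; }                // nan is the only value not equal to itself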

8.11.h: Vector
The type Vector must be a 8.9: SimpleVector class with elements of type Scalar .

8.11.i: Example
The file 8.11.1: nan.cpp contains an example and test of this routine. It returns true if it succeeds and false otherwise.
Input File: cppad/utility/nan.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.11.1: nan: Example and Test
# include <cppad/utility/nan.hpp>
# include <vector>
# include <limits>

bool nan(void)
{     bool ok = true;

     // get a nan
     double double_zero = 0.;
     double double_nan = std::numeric_limits<double>::quiet_NaN();

     // create a simple vector with no nans
     std::vector<double> v(2);
     v[0] = double_zero;
     v[1] = double_zero;

     // check that zero is not nan
     ok &= ! CppAD::isnan(double_zero);
     ok &= ! CppAD::hasnan(v);

     // check that nan is a nan
     v[1] = double_nan;
     ok &= CppAD::isnan(double_nan);
     ok &= CppAD::hasnan(v);

     // check that nan is not equal to itself
     ok &= (double_nan != double_nan);

     return ok;
}

Input File: example/utility/nan.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.12: The Integer Power Function

8.12.a: Syntax
# include <cppad/utility/pow_int.hpp>
z = pow(x, y)

8.12.b: See Also
4.4.3.2: pow

8.12.c: Purpose
Determines the value of the power function @[@ {\rm pow} (x, y) = x^y @]@ for the integer exponent y using multiplication and possibly division to compute the value. The other CppAD 4.4.3.2: pow function may use logarithms and exponentiation to compute derivatives of the same value (which will not work if x is less than or equal to zero).

8.12.d: Include
The file cppad/utility/pow_int.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines. Including this file defines this version of pow within the CppAD namespace.

8.12.e: x
The argument x has prototype
     const Type &x

8.12.f: y
The argument y has prototype
     const int &y

8.12.g: z
The result z has prototype
     Type z

8.12.h: Type
The type Type must support the following operations where a and b are Type objects and i is an int:
Operation    Description Result Type
Type a(i) construction of a Type object from an int Type
a * b binary multiplication of Type objects Type
a / b binary division of Type objects Type

8.12.i: Operation Sequence
The Type operation sequence used to calculate z is 12.4.g.d: independent of x .
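
The following hedged sketch (an assumed illustration, not the library source) shows how such a pow can be computed using only the operations above, with a Type operation sequence that depends only on y :
     template <class Type>
     Type pow_sketch(const Type &x, const int &y)
     {    Type p(1);                    // Type constructed from an int
          int n = y >= 0 ? y : -y;
          for(int i = 0; i < n; i++)
               p = p * x;               // repeated multiplication
          if( y < 0 )
               p = Type(1) / p;         // division handles negative exponents
          return p;
     }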

8.12.j: Example
The file 8.12.1: pow_int.cpp is an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/utility/pow_int.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.12.1: The Pow Integer Exponent: Example and Test

# include <cppad/cppad.hpp>
# include <cmath>

bool pow_int(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // declare independent variables and start tape recording
     size_t n  = 1;
     double x0 = -0.5;
     CPPAD_TESTVECTOR(AD<double>) x(n);
     x[0]      = x0;
     CppAD::Independent(x);

     // dependent variable vector
     size_t m = 7;
     CPPAD_TESTVECTOR(AD<double>) y(m);
     int i;
     for(i = 0; i < int(m); i++)
          y[i] = CppAD::pow(x[0], i - 3);

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(x, y);

     // check value
     double check;
     for(i = 0; i < int(m); i++)
     {     check = std::pow(x0, double(i - 3));
          ok &= NearEqual(y[i] , check,  eps99 , eps99);
     }

     // forward computation of first partial w.r.t. x[0]
     CPPAD_TESTVECTOR(double) dx(n);
     CPPAD_TESTVECTOR(double) dy(m);
     dx[0] = 1.;
     dy    = f.Forward(1, dx);
     for(i = 0; i < int(m); i++)
     {     check = double(i-3) * std::pow(x0, double(i - 4));
          ok &= NearEqual(dy[i] , check,  eps99 , eps99);
     }

     // reverse computation of derivative of y[i]
     CPPAD_TESTVECTOR(double)  w(m);
     CPPAD_TESTVECTOR(double) dw(n);
     for(i = 0; i < int(m); i++)
          w[i] = 0.;
     for(i = 0; i < int(m); i++)
     {     w[i] = 1.;
          dw    = f.Reverse(1, w);
          check = double(i-3) * std::pow(x0, double(i - 4));
          ok &= NearEqual(dw[0] , check,  eps99 , eps99);
          w[i] = 0.;
     }

     return ok;
}

Input File: example/general/pow_int.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.13: Evaluate a Polynomial or its Derivative

8.13.a: Syntax
# include <cppad/utility/poly.hpp>
p = Poly(k, a, z)

8.13.b: Description
Computes the k-th derivative of the polynomial @[@ P(z) = a_0 + a_1 z^1 + \cdots + a_d z^d @]@ If k is equal to zero, the return value is @(@ P(z) @)@.

8.13.c: Include
The file cppad/poly.hpp is included by cppad/cppad.hpp but it can also be included separately with out the rest of the CppAD routines. Including this file defines Poly within the CppAD namespace.

8.13.d: k
The argument k has prototype
     size_t k
It specifies the order of the derivative to calculate.

8.13.e: a
The argument a has prototype
     const Vector &a
(see 8.13.i: Vector below). It specifies the vector corresponding to the polynomial @(@ P(z) @)@.

8.13.f: z
The argument z has prototype
     const Type &z
(see 8.13.h: Type below). It specifies the point at which to evaluate the polynomial.

8.13.g: p
The result p has prototype
     Type p
(see 8.13.h: Type below) and it is equal to the k-th derivative of @(@ P(z) @)@; i.e., @[@ p = \frac{k !}{0 !} a_k + \frac{(k+1) !}{1 !} a_{k+1} z^1 + \ldots + \frac{d !}{(d - k) !} a_d z^{d - k} @]@ If @(@ k > d @)@, p = Type(0) .

8.13.h: Type
The type Type is determined by the argument z . It is assumed that multiplication and addition of Type objects are commutative.

8.13.h.a: Operations
The following operations must be supported where x and y are objects of type Type and i is an int:
x  = i assignment
x  = y assignment
x *= y multiplication compound assignment
x += y addition compound assignment

8.13.i: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Type . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

8.13.j: Operation Sequence
The Type operation sequence used to calculate p is 12.4.g.d: independent of z and the elements of a (it does depend on the size of the vector a ).

8.13.k: Example
The file 8.13.1: poly.cpp contains an example and test of this routine. It returns true if it succeeds and false otherwise.

8.13.l: Source
The file 8.13.2: poly.hpp contains the current source code that implements these specifications.
Input File: cppad/utility/poly.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.13.1: Polynomial Evaluation: Example and Test

# include <cppad/cppad.hpp>
# include <cmath>

bool Poly(void)
{     bool ok = true;

     // degree of the polynomial
     size_t deg = 3;

     // set the polynomial coefficients
     CPPAD_TESTVECTOR(double)   a(deg + 1);
     size_t i;
     for(i = 0; i <= deg; i++)
          a[i] = 1.;

     // evaluate this polynomial
     size_t k = 0;
     double z = 2.;
     double p = CppAD::Poly(k, a, z);
     ok      &= (p == 1. + z + z*z + z*z*z);

     // evaluate derivative
     k = 1;
     p = CppAD::Poly(k, a, z);
     ok &= (p == 1 + 2.*z + 3.*z*z);

     return ok;
}

Input File: example/general/poly.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.13.2: Source: Poly
# ifndef CPPAD_POLY_HPP
# define CPPAD_POLY_HPP
# include <cstddef>  // used to define size_t
# include <cppad/utility/check_simple_vector.hpp>

namespace CppAD {    // BEGIN CppAD namespace

template <class Type, class Vector>
Type Poly(size_t k, const Vector &a, const Type &z)
{     size_t i;
     size_t d = a.size() - 1;

     Type tmp;

     // check Vector is Simple Vector class with Type elements
     CheckSimpleVector<Type, Vector>();

     // case where derivative order greater than degree of polynomial
     if( k > d )
     {     tmp = 0;
          return tmp;
     }
     // case where we are evaluating a derivative
     if( k > 0 )
     {     // initialize factor as (k-1) !
          size_t factor = 1;
          for(i = 2; i < k; i++)
               factor *= i;

          // set b to coefficient vector corresponding to derivative
          Vector b(d - k + 1);
          for(i = k; i <= d; i++)
          {     factor   *= i;
               tmp       = double( factor );
               b[i - k]  = a[i] * tmp;
               factor   /= (i - k + 1);
          }
          // value of derivative polynomial
          return Poly(0, b, z);
     }
     // case where we are evaluating the original polynomial
     Type sum = a[d];
     i        = d;
     while(i > 0)
     {     sum *= z;
          sum += a[--i];
     }
     return sum;
}
} // END CppAD namespace
# endif

Input File: omh/poly_hpp.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.14: Compute Determinants and Solve Equations by LU Factorization

8.14.a: Contents
LuSolve: 8.14.1: Compute Determinant and Solve Linear Equations
LuFactor: 8.14.2: LU Factorization of A Square Matrix
LuInvert: 8.14.3: Invert an LU Factored Equation

Input File: omh/lu_det_and_solve.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.14.1: Compute Determinant and Solve Linear Equations

8.14.1.a: Syntax
# include <cppad/utility/lu_solve.hpp>
signdet = LuSolve(n, m, A, B, X, logdet)

8.14.1.b: Description
Use an LU factorization of the matrix A to compute its determinant and solve for X in the linear equation @[@ A * X = B @]@ where A is an n by n matrix, X is an n by m matrix, and B is an n by m matrix.

8.14.1.c: Include
The file cppad/utility/lu_solve.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

8.14.1.d: Factor and Invert
This routine is an easy to use interface to 8.14.2: LuFactor and 8.14.3: LuInvert for computing determinants and solutions of linear equations. These separate routines should be used if one right hand side B depends on the solution corresponding to another right hand side (with the same value of A ). In this case only one call to LuFactor is required but there will be multiple calls to LuInvert, as in the sketch below.
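
In this hedged sketch, A , B1 , B2 , and the dimension n are assumed to be declared already; LuInvert overwrites its last argument with the solution:
     CppAD::vector<double> LU(A);               // copy A; LuFactor overwrites its argument
     CppAD::vector<size_t> ip(n), jp(n);        // pivot row and column orders
     int signdet = CppAD::LuFactor(ip, jp, LU); // factor once
     CppAD::LuInvert(ip, jp, LU, B1);           // solve A * X1 = B1 (B1 overwritten by X1)
     CppAD::LuInvert(ip, jp, LU, B2);           // B2 may depend on X1; solve A * X2 = B2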

8.14.1.e: Matrix Storage
All matrices are stored in row major order. To be specific, if @(@ Y @)@ is a vector that contains a @(@ p @)@ by @(@ q @)@ matrix, the size of @(@ Y @)@ must be equal to @(@ p * q @)@ and for @(@ i = 0 , \ldots , p-1 @)@, @(@ j = 0 , \ldots , q-1 @)@, @[@ Y_{i,j} = Y[ i * q + j ] @]@

8.14.1.f: signdet
The return value signdet is an int value that specifies the sign factor for the determinant of A . The determinant of A is zero if and only if signdet is zero.

8.14.1.g: n
The argument n has type size_t and specifies the number of rows in the matrices A , X , and B . The number of columns in A is also equal to n .

8.14.1.h: m
The argument m has type size_t and specifies the number of columns in the matrices X and B . If m is zero, only the determinant of A is computed and the matrices X and B are not used.

8.14.1.i: A
The argument A has the prototype
     const FloatVector &A
and the size of A must equal @(@ n * n @)@ (see description of 8.14.1.n: FloatVector below). This is the n by n matrix that we are computing the determinant of and that defines the linear equation.

8.14.1.j: B
The argument B has the prototype
     const FloatVector &B
and the size of B must equal @(@ n * m @)@ (see description of 8.14.1.n: FloatVector below). This is the n by m matrix that defines the right hand side of the linear equations. If m is zero, B is not used.

8.14.1.k: X
The argument X has the prototype
     FloatVector &X
and the size of X must equal @(@ n * m @)@ (see description of 8.14.1.n: FloatVector below). The input value of X does not matter. On output, the elements of X contain the solution of the equation we wish to solve (unless signdet is equal to zero). If m is zero, X is not used.

8.14.1.l: logdet
The argument logdet has prototype
     Float &logdet
On input, the value of logdet does not matter. On output, it has been set to the log of the determinant of A (but not quite). To be more specific, the determinant of A is given by the formula
     
det = signdet * exp( logdet )
This enables LuSolve to use logs of absolute values in the case where Float corresponds to a real number.

8.14.1.m: Float
The type Float must satisfy the conditions for a 8.7: NumericType type. The routine 8.8: CheckNumericType will generate an error message if this is not the case. In addition, the following operations must be defined for any pair of Float objects x and y :
Operation Description
log(x) returns the logarithm of x as a Float object

8.14.1.n: FloatVector
The type FloatVector must be a 8.9: SimpleVector class with 8.9.b: elements of type Float . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

8.14.1.o: LeqZero
Including the file lu_solve.hpp defines the template function
     template <typename Float>
     bool LeqZero<Float>(const Float &x)
in the CppAD namespace. This function returns true if x is less than or equal to zero and false otherwise. It is used by LuSolve to avoid taking the log of zero (or a negative number if Float corresponds to real numbers). This template function definition assumes that the operator <= is defined for Float objects. If this operator is not defined for your use of Float , you will need to specialize this template so that it works for your use of LuSolve.

Complex numbers do not have the operation <= defined. In addition, in the complex case, one can take the log of a negative number. The specializations
     bool LeqZero< std::complex<float> > (const std::complex<float> &x)
     bool LeqZero< std::complex<double> >(const std::complex<double> &x)
are defined by including lu_solve.hpp. These return true if x is zero and false otherwise.
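
As a hedged sketch, a specialization for a hypothetical user defined type MyFloat (assumed here to provide a value() member returning double) could look like:
     namespace CppAD {
          template <> inline bool LeqZero<MyFloat>(const MyFloat &x)
          {    return x.value() <= 0.; }
     }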

8.14.1.p: AbsGeq
Including the file lu_solve.hpp defines the template function
     template <typename Float>
     bool AbsGeq<Float>(const Float &x, const Float &y)
If the type Float does not support the <= operation and it is not std::complex<float> or std::complex<double>, see the documentation for AbsGeq in 8.14.2.l: LuFactor .

8.14.1.q: Example
The file 8.14.1.1: lu_solve.cpp contains an example and test of using this routine. It returns true if it succeeds and false otherwise.

8.14.1.r: Source
The file 8.14.1.2: lu_solve.hpp contains the current source code that implements these specifications.
Input File: cppad/utility/lu_solve.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.14.1.1: LuSolve With Complex Arguments: Example and Test

# include <cppad/utility/lu_solve.hpp>       // for CppAD::LuSolve
# include <cppad/utility/near_equal.hpp>     // for CppAD::NearEqual
# include <cppad/utility/vector.hpp>  // for CppAD::vector
# include <complex>               // for std::complex

typedef std::complex<double> Complex;    // define the Complex type
bool LuSolve(void)
{     bool  ok = true;
     using namespace CppAD;

     size_t   n = 3;           // number rows in A and B
     size_t   m = 2;           // number columns in B, X and S

     // A is an n by n matrix, B, X, and S are n by m matrices
     CppAD::vector<Complex> A(n * n), B(n * m), X(n * m) , S(n * m);

     Complex  logdet;          // log of determinant of A
     int      signdet;         // zero if A is singular
     Complex  det;             // determinant of A
     size_t   i, j, k;         // some temporary indices

     // set A equal to the n by n Hilbert Matrix
     for(i = 0; i < n; i++)
          for(j = 0; j < n; j++)
               A[i * n + j] = 1. / (double) (i + j + 1);

     // set S to the solution of the equation we will solve
     for(j = 0; j < n; j++)
          for(k = 0; k < m; k++)
               S[ j * m + k ] = Complex(double(j), double(j + k));

     // set B = A * S
     size_t ik;
     Complex sum;
     for(k = 0; k < m; k++)
     {     for(i = 0; i < n; i++)
          {     sum = 0.;
               for(j = 0; j < n; j++)
                    sum += A[i * n + j] * S[j * m + k];
               B[i * m + k] = sum;
          }
     }

     // solve the equation A * X = B and compute determinant of A
     signdet = CppAD::LuSolve(n, m, A, B, X, logdet);
     det     = Complex( signdet ) * exp( logdet );

     double cond  = 4.62963e-4;       // condition number of A when n = 3
     double determinant = 1. / 2160.; // determinant of A when n = 3
     double delta = 1e-14 / cond;     // accuracy expected in X

     // check determinant
     ok &= CppAD::NearEqual(det, determinant, delta, delta);

     // check solution
     for(ik = 0; ik < n * m; ik++)
          ok &= CppAD::NearEqual(X[ik], S[ik], delta, delta);

     return ok;
}

Input File: example/utility/lu_solve.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.14.1.2: Source: LuSolve
# ifndef CPPAD_LU_SOLVE_HPP
# define CPPAD_LU_SOLVE_HPP
# include <complex>
# include <vector>

// link exp for float and double cases
# include <cppad/base_require.hpp>

# include <cppad/core/cppad_assert.hpp>
# include <cppad/utility/check_simple_vector.hpp>
# include <cppad/utility/check_numeric_type.hpp>
# include <cppad/utility/lu_factor.hpp>
# include <cppad/utility/lu_invert.hpp>

namespace CppAD { // BEGIN CppAD namespace

// LeqZero
template <typename Float>
inline bool LeqZero(const Float &x)
{     return x <= Float(0); }
inline bool LeqZero( const std::complex<double> &x )
{     return x == std::complex<double>(0); }
inline bool LeqZero( const std::complex<float> &x )
{     return x == std::complex<float>(0); }

// LuSolve
template <typename Float, typename FloatVector>
int LuSolve(
     size_t             n      ,
     size_t             m      ,
     const FloatVector &A      ,
     const FloatVector &B      ,
     FloatVector       &X      ,
     Float        &logdet      )
{
     // check numeric type specifications
     CheckNumericType<Float>();

     // check simple vector class specifications
     CheckSimpleVector<Float, FloatVector>();

     size_t        p;       // index of pivot element (diagonal of L)
     int     signdet;       // sign of the determinant
     Float     pivot;       // pivot element

     // the value zero
     const Float zero(0);

     // pivot row and column order in the matrix
     std::vector<size_t> ip(n);
     std::vector<size_t> jp(n);

     // -------------------------------------------------------
     CPPAD_ASSERT_KNOWN(
          size_t(A.size()) == n * n,
          "Error in LuSolve: A must have size equal to n * n"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(B.size()) == n * m,
          "Error in LuSolve: B must have size equal to n * m"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(X.size()) == n * m,
          "Error in LuSolve: X must have size equal to n * m"
     );
     // -------------------------------------------------------

     // copy A so that it does not change
     FloatVector Lu(A);

     // copy B so that it does not change
     X = B;

     // Lu factor the matrix A
     signdet = LuFactor(ip, jp, Lu);

     // compute the log of the determinant
     logdet  = Float(0);
     for(p = 0; p < n; p++)
     {     // pivot using the max absolute element
          pivot   = Lu[ ip[p] * n + jp[p] ];

          // check for determinant equal to zero
          if( pivot == zero )
          {     // abort the mission
               logdet = Float(0);
               return   0;
          }

          // update the determinant
          if( LeqZero ( pivot ) )
          {     logdet += log( - pivot );
               signdet = - signdet;
          }
          else     logdet += log( pivot );

     }

     // solve the linear equations
     LuInvert(ip, jp, Lu, X);

     // return the sign factor for the determinant
     return signdet;
}
} // END CppAD namespace
# endif

Input File: omh/lu_solve_hpp.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.14.2: LU Factorization of A Square Matrix

8.14.2.a: Syntax
# include <cppad/utility/lu_factor.hpp>
sign = LuFactor(ip, jp, LU)

8.14.2.b: Description
Computes an LU factorization of the matrix A where A is a square matrix.

8.14.2.c: Include
The file cppad/utility/lu_factor.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

8.14.2.d: Matrix Storage
All matrices are stored in row major order. To be specific, if @(@ Y @)@ is a vector that contains a @(@ p @)@ by @(@ q @)@ matrix, the size of @(@ Y @)@ must be equal to @(@ p * q @)@ and for @(@ i = 0 , \ldots , p-1 @)@, @(@ j = 0 , \ldots , q-1 @)@, @[@ Y_{i,j} = Y[ i * q + j ] @]@

8.14.2.e: sign
The return value sign has prototype
     int 
sign
If A is invertible, sign is plus or minus one and is the sign of the permutation corresponding to the row ordering ip and column ordering jp . If A is not invertible, sign is zero.

8.14.2.f: ip
The argument ip has prototype
     SizeVector &ip
(see description of 8.14.2.i: SizeVector below). The size of ip is referred to as n in the specifications below. The input value of the elements of ip does not matter. The output value of the elements of ip determine the order of the rows in the permuted matrix.

8.14.2.g: jp
The argument jp has prototype
     SizeVector &jp
(see description of 8.14.2.i: SizeVector below). The size of jp must be equal to n . The input value of the elements of jp does not matter. The output value of the elements of jp determine the order of the columns in the permuted matrix.

8.14.2.h: LU
The argument LU has the prototype
     FloatVector &LU
and the size of LU must equal @(@ n * n @)@ (see description of 8.14.2.j: FloatVector below).

8.14.2.h.a: A
We define A as the matrix corresponding to the input value of LU .

8.14.2.h.b: P
We define the permuted matrix P in terms of A by
     
P(i, j) = A[ ip[i] * n + jp[j] ]

8.14.2.h.c: L
We define the lower triangular matrix L in terms of the output value of LU . The matrix L is zero above the diagonal and the rest of the elements are defined by
     
L(i, j) = LU[ ip[i] * n + jp[j] ]
for @(@ i = 0 , \ldots , n-1 @)@ and @(@ j = 0 , \ldots , i @)@.

8.14.2.h.d: U
We define the upper triangular matrix U in terms of the output value of LU . The matrix U is zero below the diagonal, one on the diagonal, and the rest of the elements are defined by
     
U(i, j) = LU[ ip[i] * n + jp[j] ]
for @(@ i = 0 , \ldots , n-2 @)@ and @(@ j = i+1 , \ldots , n-1 @)@.

8.14.2.h.e: Factor
If the return value sign is non-zero,
     
L * U = P
If the return value of sign is zero, the contents of L and U are not defined.

8.14.2.h.f: Determinant
If the return value sign is zero, the determinant of A is zero. If sign is non-zero, using the output value of LU the determinant of the matrix A is equal to
sign * LU[ ip[0] * n + jp[0] ] * ... * LU[ ip[n-1] * n + jp[n-1] ]
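
A hedged sketch of this computation (assuming Float is double and that sign , ip , jp , and LU are the outputs of LuFactor) follows:
     double det = double(sign);
     for(size_t p = 0; p < n; p++)
          det *= LU[ ip[p] * n + jp[p] ];   // product of the pivot elements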

8.14.2.i: SizeVector
The type SizeVector must be a 8.9: SimpleVector class with 8.9.b: elements of type size_t . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

8.14.2.j: FloatVector
The type FloatVector must be a 8.9: simple vector class . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

8.14.2.k: Float
This notation is used to denote the type corresponding to the elements of a FloatVector . The type Float must satisfy the conditions for a 8.7: NumericType type. The routine 8.8: CheckNumericType will generate an error message if this is not the case. In addition, the following operations must be defined for any pair of Float objects x and y :
Operation Description
log(x) returns the logarithm of x as a Float object

8.14.2.l: AbsGeq
Including the file lu_factor.hpp defines the template function
     template <typename Float>
     bool AbsGeq<Float>(const Float &x, const Float &y)
in the CppAD namespace. This function returns true if the absolute value of x is greater than or equal to the absolute value of y . It is used by LuFactor to choose the pivot elements. This template function definition uses the operator <= to obtain the absolute value for Float objects. If this operator is not defined for your use of Float , you will need to specialize this template so that it works for your use of LuFactor.

Complex numbers do not have the operation <= defined. The specializations
     bool AbsGeq< std::complex<float> > (const std::complex<float> &x, const std::complex<float> &y)
     bool AbsGeq< std::complex<double> >(const std::complex<double> &x, const std::complex<double> &y)
are defined by including lu_factor.hpp. These return true if the sum of the squares of the real and imaginary parts of x is greater than or equal to the sum of the squares of the real and imaginary parts of y .
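
As a hedged sketch, a specialization for a hypothetical user defined type MyFloat (assumed here to provide an abs_value() member returning double) could look like:
     namespace CppAD {
          template <> inline bool AbsGeq<MyFloat>(const MyFloat &x, const MyFloat &y)
          {    return x.abs_value() >= y.abs_value(); }
     }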

8.14.2.m: Example
The file 8.14.2.1: lu_factor.cpp contains an example and test of using LuFactor by itself. It returns true if it succeeds and false otherwise.

The file 8.14.1.2: lu_solve.hpp provides a useful example usage of LuFactor with LuInvert.

8.14.2.n: Source
The file 8.14.2.2: lu_factor.hpp contains the current source code that implements these specifications.
Input File: cppad/utility/lu_factor.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.14.2.1: LuFactor: Example and Test
# include <cstdlib>               // for rand function
# include <cppad/utility/lu_factor.hpp>      // for CppAD::LuFactor
# include <cppad/utility/near_equal.hpp>     // for CppAD::NearEqual
# include <cppad/utility/vector.hpp>  // for CppAD::vector

bool LuFactor(void)
{     bool  ok = true;

# ifndef _MSC_VER
     using std::rand;
     using std::srand;
# endif
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     size_t  n = 5;                        // number rows in A
     double  rand_max = double(RAND_MAX);  // maximum rand value
     double  sum;                          // element of L * U
     double  pij;                          // element of permuted A
     size_t  i, j, k;                      // temporary indices

     // A is an n by n matrix
     CppAD::vector<double> A(n*n), LU(n*n), L(n*n), U(n*n);

     // set A equal to an n by n random matrix
     for(i = 0; i < n; i++)
          for(j = 0; j < n; j++)
               A[i * n + j] = rand() / rand_max;

     // pivot vectors
     CppAD::vector<size_t> ip(n);
     CppAD::vector<size_t> jp(n);

     // factor the matrix A
     LU       = A;
     CppAD::LuFactor(ip, jp, LU);

     // check that ip and jp are permutations of the indices 0, ... , n-1
     for(i = 0; i < n; i++)
     {     ok &= (ip[i] < n);
          ok &= (jp[i] < n);
          for(j = 0; j < n; j++)
          {     if( i != j )
               {     ok &= (ip[i] != ip[j]);
                    ok &= (jp[i] != jp[j]);
               }
          }
     }

     // Extract L from LU
     for(i = 0; i < n; i++)
     {     // elements along and below the diagonal
          for(j = 0; j <= i; j++)
               L[i * n + j] = LU[ ip[i] * n + jp[j] ];
          // elements above the diagonal
          for(j = i+1; j < n; j++)
               L[i * n + j] = 0.;
     }

     // Extract U from LU
     for(i = 0; i < n; i++)
     {     // elements below the diagonal
          for(j = 0; j < i; j++)
               U[i * n + j] = 0.;
          // elements along the diagonal
          U[i * n + i] = 1.;
          // elements above the diagonal
          for(j = i+1; j < n; j++)
               U[i * n + j] = LU[ ip[i] * n + jp[j] ];
     }

     // Compute L * U
     for(i = 0; i < n; i++)
     {     for(j = 0; j < n; j++)
          {     // compute element (i,j) entry in L * U
               sum = 0.;
               for(k = 0; k < n; k++)
                    sum += L[i * n + k] * U[k * n + j];
               // element (i,j) in permuted version of A
               pij  = A[ ip[i] * n + jp[j] ];
               // compare
               ok  &= NearEqual(pij, sum, eps99, eps99);
          }
     }

     return ok;
}

Input File: example/utility/lu_factor.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.14.2.2: Source: LuFactor
# ifndef CPPAD_LU_FACTOR_HPP
# define CPPAD_LU_FACTOR_HPP

# include <complex>
# include <vector>

# include <cppad/core/cppad_assert.hpp>
# include <cppad/utility/check_simple_vector.hpp>
# include <cppad/utility/check_numeric_type.hpp>

namespace CppAD { // BEGIN CppAD namespace

// AbsGeq
template <typename Float>
inline bool AbsGeq(const Float &x, const Float &y)
{     Float xabs = x;
     if( xabs <= Float(0) )
          xabs = - xabs;
     Float yabs = y;
     if( yabs <= Float(0) )
          yabs = - yabs;
     return xabs >= yabs;
}
inline bool AbsGeq(
     const std::complex<double> &x,
     const std::complex<double> &y)
{     double xsq = x.real() * x.real() + x.imag() * x.imag();
     double ysq = y.real() * y.real() + y.imag() * y.imag();

     return xsq >= ysq;
}
inline bool AbsGeq(
     const std::complex<float> &x,
     const std::complex<float> &y)
{     float xsq = x.real() * x.real() + x.imag() * x.imag();
     float ysq = y.real() * y.real() + y.imag() * y.imag();

     return xsq >= ysq;
}

// Lines that are different from code in cppad/core/lu_ratio.hpp end with //
template <class SizeVector, class FloatVector>                          //
int LuFactor(SizeVector &ip, SizeVector &jp, FloatVector &LU)           //
{
     // type of the elements of LU                                   //
     typedef typename FloatVector::value_type Float;                 //

     // check numeric type specifications
     CheckNumericType<Float>();

     // check simple vector class specifications
     CheckSimpleVector<Float, FloatVector>();
     CheckSimpleVector<size_t, SizeVector>();

     size_t  i, j;          // some temporary indices
     const Float zero( 0 ); // the value zero as a Float object
     size_t  imax;          // row index of maximum element
     size_t  jmax;          // column index of maximum element
     Float    emax;         // maximum absolute value
     size_t  p;             // count pivots
     int     sign;          // sign of the permutation
     Float   etmp;          // temporary element
     Float   pivot;         // pivot element

     // -------------------------------------------------------
     size_t n = ip.size();
     CPPAD_ASSERT_KNOWN(
          size_t(jp.size()) == n,
          "Error in LuFactor: jp must have size equal to n"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(LU.size()) == n * n,
          "Error in LuFactor: LU must have size equal to n * m"
     );
     // -------------------------------------------------------

     // initialize row and column order in matrix not yet pivoted
     for(i = 0; i < n; i++)
     {     ip[i] = i;
          jp[i] = i;
     }
     // initialize the sign of the permutation
     sign = 1;
     // ---------------------------------------------------------

     // Reduce the matrix P to L * U using n pivots
     for(p = 0; p < n; p++)
     {     // determine row and column corresponding to element of
          // maximum absolute value in remaining part of P
          imax = jmax = n;
          emax = zero;
          for(i = p; i < n; i++)
          {     for(j = p; j < n; j++)
               {     CPPAD_ASSERT_UNKNOWN(
                         (ip[i] < n) & (jp[j] < n)
                    );
                    etmp = LU[ ip[i] * n + jp[j] ];

                    // check if maximum absolute value so far
                    if( AbsGeq (etmp, emax) )
                    {     imax = i;
                         jmax = j;
                         emax = etmp;
                    }
               }
          }
          CPPAD_ASSERT_KNOWN(
          (imax < n) & (jmax < n) ,
          "LuFactor can't determine an element with "
          "maximum absolute value.\n"
          "Perhaps original matrix contains not a number or infinity.\n"
          "Perhaps your specialization of AbsGeq is not correct."
          );
          if( imax != p )
          {     // switch rows so max absolute element is in row p
               i        = ip[p];
               ip[p]    = ip[imax];
               ip[imax] = i;
               sign     = -sign;
          }
          if( jmax != p )
          {     // switch columns so max absolute element is in column p
               j        = jp[p];
               jp[p]    = jp[jmax];
               jp[jmax] = j;
               sign     = -sign;
          }
          // pivot using the max absolute element
          pivot   = LU[ ip[p] * n + jp[p] ];

          // check for determinant equal to zero
          if( pivot == zero )
          {     // abort the mission
               return   0;
          }

          // Reduce U by the elementary transformations that maps
          // LU( ip[p], jp[p] ) to one.  Only need transform elements
          // above the diagonal in U and LU( ip[p] , jp[p] ) is
          // corresponding value below diagonal in L.
          for(j = p+1; j < n; j++)
               LU[ ip[p] * n + jp[j] ] /= pivot;

          // Reduce U by the elementary transformations that maps
          // LU( ip[i], jp[p] ) to zero. Only need transform elements
          // above the diagonal in U and LU( ip[i], jp[p] ) is
          // corresponding value below diagonal in L.
          for(i = p+1; i < n; i++ )
          {     etmp = LU[ ip[i] * n + jp[p] ];
               for(j = p+1; j < n; j++)
               {     LU[ ip[i] * n + jp[j] ] -=
                         etmp * LU[ ip[p] * n + jp[j] ];
               }
          }
     }
     return sign;
}
} // END CppAD namespace
# endif

Input File: omh/lu_factor_hpp.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.14.3: Invert an LU Factored Equation

8.14.3.a: Syntax
# include <cppad/utility/lu_invert.hpp>
LuInvert(ip, jp, LU, X)

8.14.3.b: Description
Solves the matrix equation A * X = B using an LU factorization computed by 8.14.2: LuFactor .

8.14.3.c: Include
The file cppad/utility/lu_invert.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

8.14.3.d: Matrix Storage
All matrices are stored in row major order. To be specific, if @(@ Y @)@ is a vector that contains a @(@ p @)@ by @(@ q @)@ matrix, the size of @(@ Y @)@ must be equal to @(@ p * q @)@ and for @(@ i = 0 , \ldots , p-1 @)@, @(@ j = 0 , \ldots , q-1 @)@, @[@ Y_{i,j} = Y[ i * q + j ] @]@

8.14.3.e: ip
The argument ip has prototype
     const SizeVector &ip
(see description for SizeVector in 8.14.2.i: LuFactor specifications). The size of ip is referred to as n in the specifications below. The elements of ip determine the order of the rows in the permuted matrix.

8.14.3.f: jp
The argument jp has prototype
     const SizeVector &jp
(see description for SizeVector in 8.14.2.i: LuFactor specifications). The size of jp must be equal to n . The elements of jp determine the order of the columns in the permuted matrix.

8.14.3.g: LU
The argument LU has the prototype
     const FloatVector &LU
and the size of LU must equal @(@ n * n @)@ (see description for FloatVector in 8.14.2.j: LuFactor specifications).

8.14.3.g.a: L
We define the lower triangular matrix L in terms of LU . The matrix L is zero above the diagonal and the rest of the elements are defined by
     
L(i, j) = LU[ ip[i] * n + jp[j] ]
for @(@ i = 0 , \ldots , n-1 @)@ and @(@ j = 0 , \ldots , i @)@.

8.14.3.g.b: U
We define the upper triangular matrix U in terms of LU . The matrix U is zero below the diagonal, one on the diagonal, and the rest of the elements are defined by
     
U(i, j) = LU[ ip[i] * n + jp[j] ]
for @(@ i = 0 , \ldots , n-2 @)@ and @(@ j = i+1 , \ldots , n-1 @)@.

8.14.3.g.c: P
We define the permuted matrix P in terms of the matrix L and the matrix U by P = L * U .

8.14.3.g.d: A
The matrix A , which defines the linear equations that we are solving, is given by
     
P(i, j) = A[ ip[i] * n + jp[j] ]
(Hence LU contains a permuted factorization of the matrix A .)

8.14.3.h: X
The argument X has prototype
     FloatVector &X
(see description for FloatVector in 8.14.2.j: LuFactor specifications). The matrix X must have the same number of rows as the matrix A . The input value of X is the matrix B and the output value solves the matrix equation A * X = B .

8.14.3.i: Example
The file 8.14.1.2: lu_solve.hpp is a good example usage of LuFactor with LuInvert. The file 8.14.3.1: lu_invert.cpp contains an example and test of using LuInvert by itself. It returns true if it succeeds and false otherwise.

8.14.3.j: Source
The file 8.14.3.2: lu_invert.hpp contains the current source code that implements these specifications.
Input File: cppad/utility/lu_invert.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.14.3.1: LuInvert: Example and Test
# include <cstdlib>               // for rand function
# include <cppad/utility/lu_invert.hpp>      // for CppAD::LuInvert
# include <cppad/utility/near_equal.hpp>     // for CppAD::NearEqual
# include <cppad/utility/vector.hpp>  // for CppAD::vector

bool LuInvert(void)
{     bool  ok = true;

# ifndef _MSC_VER
     using std::rand;
     using std::srand;
# endif
     double eps200 = 200.0 * std::numeric_limits<double>::epsilon();

     size_t  n = 7;                        // number rows in A
     size_t  m = 3;                        // number columns in B
     double  rand_max = double(RAND_MAX);  // maximum rand value
     double  sum;                          // element of L * U
     size_t  i, j, k;                      // temporary indices

     // dimension matrices
     CppAD::vector<double>
          A(n*n), X(n*m), B(n*m), LU(n*n), L(n*n), U(n*n);

     // seed the random number generator
     srand(123);

     // pivot vectors
     CppAD::vector<size_t> ip(n);
     CppAD::vector<size_t> jp(n);

     // set pivot vectors
     for(i = 0; i < n; i++)
     {     ip[i] = (i + 2) % n;      // ip = 2, 3, ... , n-1, 0, 1
          jp[i] = (n + 2 - i) % n;  // jp = 2, 1, 0, n-1, n-2, ... , 3
     }

     // choose L, a random lower triangular matrix
     for(i = 0; i < n; i++)
     {     for(j = 0; j <= i; j++)
               L [i * n + j]  = rand() / rand_max;
          for(j = i+1; j < n; j++)
               L [i * n + j]  = 0.;
     }
     // choose U, a random upper triangular matrix with ones on diagonal
     for(i = 0; i < n; i++)
     {     for(j = 0; j < i; j++)
               U [i * n + j]  = 0.;
          U[ i * n + i ] = 1.;
          for(j = i+1; j < n; j++)
               U [i * n + j]  = rand() / rand_max;
     }
     // choose X, a random matrix
     for(i = 0; i < n; i++)
     {     for(k = 0; k < m; k++)
               X[i * m + k] = rand() / rand_max;
     }
     // set LU to a permuted combination of both L and U
     for(i = 0; i < n; i++)
     {     for(j = 0; j <= i; j++)
               LU [ ip[i] * n + jp[j] ]  = L[i * n + j];
          for(j = i+1; j < n; j++)
               LU [ ip[i] * n + jp[j] ]  = U[i * n + j];
     }
     // set A to a permuted version of L * U
     for(i = 0; i < n; i++)
     {     for(j = 0; j < n; j++)
          {     // compute (i,j) entry in permuted matrix
               sum = 0.;
               for(k = 0; k < n; k++)
                    sum += L[i * n + k] * U[k * n + j];
               A[ ip[i] * n + jp[j] ] = sum;
          }
     }
     // set B to A * X
     for(i = 0; i < n; i++)
     {     for(k = 0; k < m; k++)
          {     // compute (i,k) entry of B
               sum = 0.;
               for(j = 0; j < n; j++)
                    sum += A[i * n + j] * X[j * m + k];
               B[i * m + k] = sum;
          }
     }
     // solve for X
     CppAD::LuInvert(ip, jp, LU, B);

     // check result
     for(i = 0; i < n; i++)
     {     for(k = 0; k < m; k++)
          {     ok &= CppAD::NearEqual(
                    X[i * m + k], B[i * m + k], eps200, eps200
               );
          }
     }
     return ok;
}

Input File: example/utility/lu_invert.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.14.3.2: Source: LuInvert
# ifndef CPPAD_LU_INVERT_HPP
# define CPPAD_LU_INVERT_HPP
# include <cppad/core/cppad_assert.hpp>
# include <cppad/utility/check_simple_vector.hpp>
# include <cppad/utility/check_numeric_type.hpp>

namespace CppAD { // BEGIN CppAD namespace

// LuInvert
template <typename SizeVector, typename FloatVector>
void LuInvert(
     const SizeVector  &ip,
     const SizeVector  &jp,
     const FloatVector &LU,
     FloatVector       &B )
{     size_t k; // column index in X
     size_t p; // index along diagonal in LU
     size_t i; // row index in LU and X

     typedef typename FloatVector::value_type Float;

     // check numeric type specifications
     CheckNumericType<Float>();

     // check simple vector class specifications
     CheckSimpleVector<Float, FloatVector>();
     CheckSimpleVector<size_t, SizeVector>();

     Float etmp;

     size_t n = ip.size();
     CPPAD_ASSERT_KNOWN(
          size_t(jp.size()) == n,
          "Error in LuInvert: jp must have size equal to n * n"
     );
     CPPAD_ASSERT_KNOWN(
          size_t(LU.size()) == n * n,
          "Error in LuInvert: Lu must have size equal to n * m"
     );
     size_t m = size_t(B.size()) / n;
     CPPAD_ASSERT_KNOWN(
          size_t(B.size()) == n * m,
          "Error in LuSolve: B must have size equal to a multiple of n"
     );

     // temporary storage for reordered solution
     FloatVector x(n);

     // loop over equations
     for(k = 0; k < m; k++)
     {     // invert the equation c = L * b
          for(p = 0; p < n; p++)
          {     // solve for c[p]
               etmp = B[ ip[p] * m + k ] / LU[ ip[p] * n + jp[p] ];
               B[ ip[p] * m + k ] = etmp;
               // subtract off effect on other variables
               for(i = p+1; i < n; i++)
                    B[ ip[i] * m + k ] -=
                         etmp * LU[ ip[i] * n + jp[p] ];
          }

          // invert the equation x = U * c
          p = n;
          while( p > 0 )
          {     --p;
               etmp       = B[ ip[p] * m + k ];
               x[ jp[p] ] = etmp;
               for(i = 0; i < p; i++ )
                    B[ ip[i] * m + k ] -=
                         etmp * LU[ ip[i] * n + jp[p] ];
          }

          // copy reordered solution into B
          for(i = 0; i < n; i++)
               B[i * m + k] = x[i];
     }
     return;
}
} // END CppAD namespace
# endif

Input File: omh/lu_invert_hpp.omh
8.15: One Dimensional Romberg Integration

8.15.a: Syntax
# include <cppad/utility/romberg_one.hpp>
r = RombergOne(F, a, b, n, p, e)

8.15.b: Description
Returns the Romberg integration estimate @(@ r @)@ for a one dimensional integral @[@ r = \int_a^b F(x) {\bf d} x + O \left[ (b - a) / 2^{n-1} \right]^{2(p+1)} @]@
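For example, the following minimal sketch (the functor name Square and the free function approx_integral are illustrative, not part of CppAD) estimates @(@ \int_0^1 x^2 {\bf d} x = 1/3 @)@:

# include <cppad/utility/romberg_one.hpp>

namespace { // integrand F(x) = x * x
     class Square {
     public:
          double operator () (const double &x)
          {     return x * x; }
     };
}

double approx_integral(void)
{     Square F;
     double a = 0.;  // lower limit
     double b = 1.;  // upper limit
     size_t n = 4;   // 2^(n-1) + 1 = 9 evaluations of F
     size_t p = 3;   // accuracy order, must satisfy p <= n
     double e;       // output error estimate
     // r is the Romberg estimate, with |r - 1/3| approximately bounded by e
     double r = CppAD::RombergOne(F, a, b, n, p, e);
     return r;
}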

8.15.c: Include
The file cppad/romberg_one.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

8.15.d: r
The return value r has prototype
     Float r
It is the estimate computed by RombergOne for the integral above.

8.15.e: F
The object F can be of any type, but it must support the operation
     F(x)
The argument x to F has prototype
     const Float &x
The return value of F is a Float object (see description of 8.15.k: Float below).

8.15.f: a
The argument a has prototype
     const Float &a
It specifies the lower limit for the integration.

8.15.g: b
The argument b has prototype
     const Float &b
It specifies the upper limit for the integration.

8.15.h: n
The argument n has prototype
     size_t n
A total of @(@ 2^{n-1} + 1 @)@ evaluations of F(x) are used to estimate the integral; for example, n = 4 uses 9 evaluations.

8.15.i: p
The argument p has prototype
     size_t p
It must be less than or equal to @(@ n @)@ and determines the accuracy order in the approximation for the integral that is returned by RombergOne. To be specific, @[@ r = \int_a^b F(x) {\bf d} x + O \left[ (b - a) / 2^{n-1} \right]^{2(p+1)} @]@

8.15.j: e
The argument e has prototype
     Float &e
The input value of e does not matter and its output value is an approximation for the error in the integral estimate; i.e., @[@ e \approx \left| r - \int_a^b F(x) {\bf d} x \right| @]@

8.15.k: Float
The type Float must satisfy the conditions for a 8.7: NumericType type. The routine 8.8: CheckNumericType will generate an error message if this is not the case. In addition, if x and y are Float objects,
     x < y
returns the bool value true if x is less than y and false otherwise.

8.15.l: Example
The file 8.15.1: romberg_one.cpp contains an example and test of using this routine. It returns true if it succeeds and false otherwise.

8.15.m: Source Code
The source code for this routine is in the file cppad/romberg_one.hpp.
Input File: cppad/utility/romberg_one.hpp
8.15.1: One Dimensional Romberg Integration: Example and Test

# include <cppad/utility/romberg_one.hpp>
# include <cppad/utility/vector.hpp>
# include <cppad/utility/near_equal.hpp>

namespace {
     class Fun {
     private:
          const size_t degree;
     public:
          // constructor
          Fun(size_t degree_) : degree(degree_)
          { }

          // function F(x) = x^degree
          template <class Type>
          Type operator () (const Type &x)
          {     size_t i;
               Type   f = 1;
               for(i = 0; i < degree; i++)
                    f *= x;
               return f;
          }
     };
}

bool RombergOne(void)
{     bool ok = true;
     size_t i;

     size_t degree = 4;
     Fun F(degree);

     // arguments to RombergOne
     double a = 0.;
     double b = 1.;
     size_t n = 4;
     size_t p;
     double r, e;

     // int_a^b F(x) dx = [ b^(degree+1) - a^(degree+1) ] / (degree+1)
     double bpow = 1.;
     double apow = 1.;
     for(i = 0; i <= degree; i++)
     {     bpow *= b;
          apow *= a;
     }
     double check = (bpow - apow) / double(degree+1);

     // step size corresponding to r
     double step = (b - a) / exp(log(2.)*double(n-1));
     // step size corresponding to error estimate
     step *= 2.;
     // step size raised to a power
     double spow = 1;

     for(p = 0; p < n; p++)
     {     spow = spow * step * step;

          r = CppAD::RombergOne(F, a, b, n, p, e);

          ok  &= e < double(degree+1) * spow;
          ok  &= CppAD::NearEqual(check, r, 0., e);
     }

     return ok;
}

Input File: example/utility/romberg_one.cpp
8.16: Multi-dimensional Romberg Integration

8.16.a: Syntax
# include <cppad/utility/romberg_mul.hpp>
RombergMul<Fun, SizeVector, FloatVector, m> R
r = R(F, a, b, n, p, e)

8.16.b: Description
Returns the Romberg integration estimate @(@ r @)@ for the multi-dimensional integral @[@ r = \int_{a[0]}^{b[0]} \cdots \int_{a[m-1]}^{b[m-1]} \; F(x) \; {\bf d} x_0 \cdots {\bf d} x_{m-1} \; + \; \sum_{i=0}^{m-1} O \left[ ( b[i] - a[i] ) / 2^{n[i]-1} \right]^{2(p[i]+1)} @]@

8.16.c: Include
The file cppad/romberg_mul.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

8.16.d: m
The template parameter m must be convertible to a size_t object with a value that can be determined at compile time; for example 2. It determines the dimension of the domain space for the integration.
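For example, a two-dimensional integrator object can be declared as in the sketch below (the integrand type MyFun and its definition are illustrative placeholders; the vector types mirror the example further down):

# include <cppad/utility/romberg_mul.hpp>
# include <cppad/utility/vector.hpp>

namespace { // hypothetical integrand F(x) = x[0] * x[1]
     class MyFun {
     public:
          double operator () (const CppAD::vector<double> &x)
          {     return x[0] * x[1]; }
     };
}

// declare R, a Romberg integrator for a 2-dimensional domain
CppAD::RombergMul<
     MyFun                ,   // Fun:         type of the integrand object
     CppAD::vector<size_t>,   // SizeVector:  type of the vectors n and p
     CppAD::vector<double>,   // FloatVector: type of the vectors a and b
     2                        // m:           domain dimension
> R;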

8.16.e: r
The return value r has prototype
     Float r
It is the estimate computed by RombergMul for the integral above (see description of 8.16.l: Float below).

8.16.f: F
The object F has the prototype
     Fun &F
It must support the operation
     F(x)
The argument x to F has prototype
     const FloatVector &x
The return value of F is a Float object.

8.16.g: a
The argument a has prototype
     const FloatVector &a
It specifies the lower limit for the integration (see description of 8.16.m: FloatVector below).

8.16.h: b
The argument b has prototype
     const FloatVector &b
It specifies the upper limit for the integration.

8.16.i: n
The argument n has prototype
     const SizeVector &n
A total of @(@ 2^{n[i]-1} + 1 @)@ evaluations of F(x) are used to estimate the integral with respect to @(@ {\bf d} x_i @)@.

8.16.j: p
The argument p has prototype
     const SizeVector &p
For @(@ i = 0 , \ldots , m-1 @)@, @(@ p[i] @)@ determines the accuracy order in the approximation for the integral that is returned by RombergMul. The values in p must be less than or equal to those in n ; i.e., p[i] <= n[i] .

8.16.k: e
The argument e has prototype
     Float &e
The input value of e does not matter and its output value is an approximation for the absolute error in the integral estimate.

8.16.l: Float
The type Float is defined as the type of the elements of 8.16.m: FloatVector . The type Float must satisfy the conditions for a 8.7: NumericType type. The routine 8.8: CheckNumericType will generate an error message if this is not the case. In addition, if x and y are Float objects,
     x < y
returns the bool value true if x is less than y and false otherwise.

8.16.m: FloatVector
The type FloatVector must be a 8.9: SimpleVector class. The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

8.16.n: Example
The file 8.16.1: Rombergmul.cpp contains an example and test of using this routine. It returns true if it succeeds and false otherwise.

8.16.o: Source Code
The source code for this routine is in the file cppad/romberg_mul.hpp.
Input File: cppad/utility/romberg_mul.hpp
8.16.1: Multi-dimensional Romberg Integration: Example and Test

# include <cppad/utility/romberg_mul.hpp>
# include <cppad/utility/vector.hpp>
# include <cppad/utility/near_equal.hpp>


namespace {

     class TestFun {
     private:
          const CppAD::vector<size_t> deg;
     public:
          // constructor
          TestFun(const CppAD::vector<size_t> deg_)
          : deg(deg_)
          { }

          // function F(x) = x[0]^deg[0] * x[1]^deg[1]
          double operator () (const CppAD::vector<double> &x)
          {     size_t i;
               double   f = 1;
               for(i = 0; i < deg[0]; i++)
                    f *= x[0];
               for(i = 0; i < deg[1]; i++)
                    f *= x[1];
               return f;
          }
     };

}

bool RombergMul(void)
{     bool ok = true;
     size_t i;
     size_t k;

     CppAD::vector<size_t> deg(2);
     deg[0] = 5;
     deg[1] = 3;
     TestFun F(deg);

     CppAD::RombergMul<
          TestFun              ,
          CppAD::vector<size_t>,
          CppAD::vector<double>,
          2                    > RombergMulTest;

     // arguments to RombergMul
     CppAD::vector<double> a(2);
     CppAD::vector<double> b(2);
     CppAD::vector<size_t> n(2);
     CppAD::vector<size_t> p(2);
     for(i = 0; i < 2; i++)
     {     a[i] = 0.;
          b[i] = 1.;
     }
     n[0] = 4;
     n[1] = 3;
     double r, e;

     // int_a1^b1 dx1 int_a0^b0 F(x0,x1) dx0
     //     = [ b0^(deg[0]+1) - a0^(deg[0]+1) ] / (deg[0]+1)
     //     * [ b1^(deg[1]+1) - a1^(deg[1]+1) ] / (deg[1]+1)
     double bpow = 1.;
     double apow = 1.;
     for(i = 0; i <= deg[0]; i++)
     {     bpow *= b[0];
          apow *= a[0];
     }
     double check = (bpow - apow) / double(deg[0]+1);
     bpow = 1.;
     apow = 1.;
     for(i = 0; i <= deg[1]; i++)
     {     bpow *= b[1];
          apow *= a[1];
     }
     check *= (bpow - apow) / double(deg[1]+1);

     double step = (b[1] - a[1]) / exp(log(2.)*double(n[1]-1));
     double spow = 1;
     for(k = 0; k <= n[1]; k++)
     {     spow = spow * step * step;
          double bnd = 3 * double(deg[1] + 1) * spow;

          for(i = 0; i < 2; i++)
               p[i] = k;
          r    = RombergMulTest(F, a, b, n, p, e);

          ok  &= e < bnd;
          ok  &= CppAD::NearEqual(check, r, 0., e);

     }

     return ok;
}

Input File: example/utility/romberg_mul.cpp
8.17: An Embedded 4th and 5th Order Runge-Kutta ODE Solver

8.17.a: Syntax
# include <cppad/utility/runge_45.hpp>
xf = Runge45(F, M, ti, tf, xi)
xf = Runge45(F, M, ti, tf, xi, e)

8.17.b: Purpose
This is an implementation of the Cash-Karp embedded 4th and 5th order Runge-Kutta ODE solver described in Section 16.2 of 12.5.d: Numerical Recipes . We use @(@ n @)@ for the size of the vector xi . Let @(@ \B{R} @)@ denote the real numbers and let @(@ F : \B{R} \times \B{R}^n \rightarrow \B{R}^n @)@ be a smooth function. The return value xf contains a 5th order approximation for the value @(@ X(tf) @)@ where @(@ X : [ti , tf] \rightarrow \B{R}^n @)@ is defined by the following initial value problem: @[@ \begin{array}{rcl} X(ti) & = & xi \\ X'(t) & = & F[t , X(t)] \end{array} @]@ If your set of ordinary differential equations is stiff, an implicit method may be better (perhaps 8.18: Rosen34 ).
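As a minimal sketch (the class name Exp and the function solve_exp are illustrative only), the scalar equation @(@ X'(t) = X(t) @)@ with @(@ X(0) = 1 @)@ can be solved as follows:

# include <cppad/utility/runge_45.hpp>
# include <cppad/utility/vector.hpp>
# include <math.h>  // fabs for double without std:: (used by Runge45)

namespace { // F(t, x) = x, so X(tf) = xi * exp(tf - ti)
     class Exp {
     public:
          void Ode(
               const double                &t,
               const CppAD::vector<double> &x,
               CppAD::vector<double>       &f)
          {     f[0] = x[0]; }
     };
}

double solve_exp(void)
{     Exp F;
     size_t M  = 10;  // number of Runge45 steps
     double ti = 0.;  // initial time
     double tf = 1.;  // final time
     CppAD::vector<double> xi(1);
     xi[0] = 1.;      // X(ti)
     // xf[0] is approximately exp(1)
     CppAD::vector<double> xf = CppAD::Runge45(F, M, ti, tf, xi);
     return xf[0];
}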

8.17.c: Operation Sequence
The 12.4.g.b: operation sequence for Runge45 does not depend on any of its Scalar input values provided that the operation sequence for
     F.Ode(t, x, f)
does not depend on any of its Scalar inputs (see below).

8.17.d: Include
The file cppad/runge_45.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

8.17.e: xf
The return value xf has the prototype
     Vector xf
and the size of xf is equal to n (see description of 8.17.m: Vector below). @[@ X(tf) = xf + O( h^6 ) @]@ where @(@ h = (tf - ti) / M @)@ is the step size. If xf contains not a number 8.11: nan , see the discussion for 8.17.f.c: f .

8.17.f: Fun
The class Fun and the object F satisfy the prototype
     Fun &F
The object F (and the class Fun ) must have a member function named Ode that supports the syntax
     F.Ode(t, x, f)

8.17.f.a: t
The argument t to F.Ode has prototype
     const Scalar &t
(see description of 8.17.l: Scalar below).

8.17.f.b: x
The argument x to F.Ode has prototype
     const Vector &x
and has size n (see description of 8.17.m: Vector below).

8.17.f.c: f
The argument f to F.Ode has prototype
     Vector &f
On input and output, f is a vector of size n and the input values of the elements of f do not matter. On output, f is set equal to @(@ F(t, x) @)@ in the differential equation. If any of the elements of f have the value not a number nan, the routine Runge45 returns with all the elements of xf and e equal to nan.

8.17.f.d: Warning
The argument f to F.Ode must have a call by reference in its prototype; i.e., do not forget the & in the prototype for f .

8.17.g: M
The argument M has prototype
     size_t M
It specifies the number of steps to use when solving the differential equation. This must be greater than or equal to one. The step size is given by @(@ h = (tf - ti) / M @)@; thus the larger M is, the more accurate the return value xf is as an approximation for @(@ X(tf) @)@.

8.17.h: ti
The argument ti has prototype
     const Scalar &ti
(see description of 8.17.l: Scalar below). It specifies the initial time for t in the differential equation; i.e., the time corresponding to the value xi .

8.17.i: tf
The argument tf has prototype
     const Scalar &tf
It specifies the final time for t in the differential equation; i.e., the time corresponding to the value xf .

8.17.j: xi
The argument xi has the prototype
     const Vector &xi
and the size of xi is equal to n . It specifies the value of @(@ X(ti) @)@.

8.17.k: e
The argument e is optional and has the prototype
     Vector &e
If e is present, the size of e must be equal to n . The input value of the elements of e does not matter. On output it contains an element by element estimated bound for the absolute value of the error in xf @[@ e = O( h^5 ) @]@ where @(@ h = (tf - ti) / M @)@ is the step size. If on output e contains not a number nan, see the discussion for 8.17.f.c: f .

8.17.l: Scalar
The type Scalar must satisfy the conditions for a 8.7: NumericType type. The routine 8.8: CheckNumericType will generate an error message if this is not the case.

8.17.l.a: fabs
In addition, the following function must be defined for Scalar objects a and b
     a = fabs(b)
Note that this operation is only used for computing e ; hence the operation sequence for xf can still be independent of the arguments to Runge45 even if
     fabs(b) = std::max(-b, b) .
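For example, if Scalar were a user-defined type MyFloat (a hypothetical name; only the members needed by this identity are shown), a matching definition would be:

# include <algorithm>  // std::max

// hypothetical user-defined scalar type
struct MyFloat {
     double value;
     MyFloat(double v) : value(v) { }
     bool    operator < (const MyFloat &other) const
     {     return value < other.value; }
     MyFloat operator - (void) const
     {     return MyFloat(-value); }
};

// fabs as required by Runge45, consistent with the identity above
inline MyFloat fabs(const MyFloat &b)
{     return std::max(-b, b); }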

8.17.m: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Scalar . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

8.17.n: Parallel Mode
For each set of types 8.17.l: Scalar , 8.17.m: Vector , and 8.17.f: Fun , the first call to Runge45 must not be in 8.23.4: parallel execution mode.

8.17.o: Example
The file 8.17.1: runge45_1.cpp contains a simple example and test of Runge45. It returns true if it succeeds and false otherwise.

The file 8.17.2: runge45_2.cpp contains an example using Runge45 in the context of algorithmic differentiation. It also returns true if it succeeds and false otherwise.

8.17.p: Source Code
The source code for this routine is in the file cppad/runge_45.hpp.
Input File: cppad/utility/runge_45.hpp
8.17.1: Runge45: Example and Test
Define @(@ X : \B{R} \rightarrow \B{R}^n @)@ by @[@ X_i (t) = t^{i+1} @]@ for @(@ i = 0 , \ldots , n-1 @)@. It follows that @[@ \begin{array}{rclr} X_i(0) & = & 0 & {\rm for \; all \;} i \\ X_i ' (t) & = & 1 & {\rm if \;} i = 0 \\ X_i '(t) & = & (i+1) t^i = (i+1) X_{i-1} (t) & {\rm if \;} i > 0 \end{array} @]@ The example tests Runge45 using the relations above:

# include <cstddef>                 // for size_t
# include <cppad/utility/near_equal.hpp>    // for CppAD::NearEqual
# include <cppad/utility/vector.hpp>        // for CppAD::vector
# include <cppad/utility/runge_45.hpp>      // for CppAD::Runge45

// Runge45 requires fabs to be defined (not std::fabs)
// <cppad/cppad.hpp> defines this for doubles, but runge_45.hpp does not.
# include <math.h>      // for fabs without std in front

namespace {
     class Fun {
     public:
          // constructor
          Fun(bool use_x_) : use_x(use_x_)
          { }

          // set f = x'(t)
          void Ode(
               const double                &t,
               const CppAD::vector<double> &x,
               CppAD::vector<double>       &f)
          {     size_t n  = x.size();
               double ti = 1.;
               f[0]      = 1.;
               size_t i;
               for(i = 1; i < n; i++)
               {     ti *= t;
                    if( use_x )
                         f[i] = double(i+1) * x[i-1];
                    else     f[i] = double(i+1) * ti;
               }
          }
     private:
          const bool use_x;

     };
}

bool runge_45_1(void)
{     bool ok = true;     // initial return value
     size_t i;           // temporary indices

     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     size_t  n = 5;      // number components in X(t) and order of method
     size_t  M = 2;      // number of Runge45 steps in [ti, tf]
     double ti = 0.;     // initial time
     double tf = 2.;     // final time

     // xi = X(0)
     CppAD::vector<double> xi(n);
     for(i = 0; i < n; i++)
          xi[i] = 0.;

     size_t use_x;
     for( use_x = 0; use_x < 2; use_x++)
     {     // function object depends on value of use_x
          Fun F(use_x > 0);

          // compute Runge45 approximation for X(tf)
          CppAD::vector<double> xf(n), e(n);
          xf = CppAD::Runge45(F, M, ti, tf, xi, e);

          double check = tf;
          for(i = 0; i < n; i++)
          {     // check that error is always positive
               ok    &= (e[i] >= 0.);
               // 5th order method is exact for i < 5
               if( i < 5 ) ok &=
                    NearEqual(xf[i], check, eps99, eps99);
               // 4th order method is exact for i < 4
               if( i < 4 )
                    ok &= (e[i] <= eps99);

               // check value for next i
               check *= tf;
          }
     }
     return ok;
}

Input File: example/utility/runge45_1.cpp
8.17.2: Runge45: Example and Test
Define @(@ X : \B{R} \times \B{R} \rightarrow \B{R}^n @)@ by @[@ X_j (b, t) = b \left( \sum_{k=0}^j t^k / k ! \right) @]@ for @(@ j = 0 , \ldots , n-1 @)@. It follows that @[@ \begin{array}{rcl} X_j (b, 0) & = & b \\ \partial_t X_j (b, t) & = & b \left( \sum_{k=0}^{j-1} t^k / k ! \right) \\ \partial_t X_j (b, t) & = & \left\{ \begin{array}{ll} 0 & {\rm if} \; j = 0 \\ X_{j-1} (b, t) & {\rm otherwise} \end{array} \right. \end{array} @]@ For a fixed @(@ t_f @)@, we can use 8.17: Runge45 to define @(@ f : \B{R} \rightarrow \B{R}^n @)@ as an approximation for @(@ f(b) = X(b, t_f ) @)@. We can then compute @(@ f^{(1)} (b) @)@ which is an approximation for @[@ \partial_b X_j (b, t_f ) = \sum_{k=0}^j t_f^k / k ! @]@

# include <cstddef>              // for size_t
# include <limits>               // for machine epsilon
# include <cppad/cppad.hpp>      // for all of CppAD

namespace {

     template <class Scalar>
     class Fun {
     public:
          // constructor
          Fun(void)
          { }

          // set return value to X'(t)
          void Ode(
               const Scalar                    &t,
               const CPPAD_TESTVECTOR(Scalar) &x,
               CPPAD_TESTVECTOR(Scalar)       &f)
          {     size_t n  = x.size();
               f[0]      = 0.;
               for(size_t k = 1; k < n; k++)
                    f[k] = x[k-1];
          }
     };
}

bool runge_45_2(void)
{     typedef CppAD::AD<double> Scalar;
     using CppAD::NearEqual;

     bool ok = true;     // initial return value
     size_t j;           // temporary indices

     size_t     n = 5;   // number components in X(t) and order of method
     size_t     M = 2;   // number of Runge45 steps in [ti, tf]
     Scalar ad_ti = 0.;  // initial time
     Scalar ad_tf = 2.;  // final time

     // value of independent variable at which to record operations
     CPPAD_TESTVECTOR(Scalar) ad_b(1);
     ad_b[0] = 1.;

     // declare b to be the independent variable
     Independent(ad_b);

     // object to evaluate ODE
     Fun<Scalar> ad_F;

     // xi = X(0)
     CPPAD_TESTVECTOR(Scalar) ad_xi(n);
     for(j = 0; j < n; j++)
          ad_xi[j] = ad_b[0];

     // compute Runge45 approximation for X(tf)
     CPPAD_TESTVECTOR(Scalar) ad_xf(n), ad_e(n);
     ad_xf = CppAD::Runge45(ad_F, M, ad_ti, ad_tf, ad_xi, ad_e);

     // stop recording and use it to create f : b -> xf
     CppAD::ADFun<double> f(ad_b, ad_xf);

     // evaluate f(b)
     CPPAD_TESTVECTOR(double)  b(1);
     CPPAD_TESTVECTOR(double) xf(n);
     b[0] = 1.;
     xf   = f.Forward(0, b);

     // check that f(b) = X(b, tf)
     double tf    = Value(ad_tf);
     double term  = 1;
     double sum   = 0;
     double eps   = 10. * CppAD::numeric_limits<double>::epsilon();
     for(j = 0; j < n; j++)
     {     sum += term;
          ok &= NearEqual(xf[j], b[0] * sum, eps, eps);
          term *= tf;
          term /= double(j+1);
     }

     // evaluate f'(b)
     CPPAD_TESTVECTOR(double) d_xf(n);
     d_xf = f.Jacobian(b);

     // check that f'(b) = partial of X(b, tf) w.r.t b
     term  = 1;
     sum   = 0;
     for(j = 0; j < n; j++)
     {     sum += term;
          ok &= NearEqual(d_xf[j], sum, eps, eps);
          term *= tf;
          term /= double(j+1);
     }

     return ok;
}

Input File: example/general/runge45_2.cpp
8.18: A 3rd and 4th Order Rosenbrock ODE Solver

8.18.a: Syntax
# include <cppad/utility/rosen_34.hpp>
xf = Rosen34(F, M, ti, tf, xi)
xf = Rosen34(F, M, ti, tf, xi, e)

8.18.b: Description
This is an embedded 3rd and 4th order Rosenbrock ODE solver (see Section 16.6 of 12.5.d: Numerical Recipes for a description of Rosenbrock ODE solvers). In particular, we use the formulas taken from page 100 of 12.5.e: Shampine, L.F. (except that the fraction 98/108 has been corrected to 97/108).

We use @(@ n @)@ for the size of the vector xi . Let @(@ \B{R} @)@ denote the real numbers and let @(@ F : \B{R} \times \B{R}^n \rightarrow \B{R}^n @)@ be a smooth function. The return value xf contains a 4th order approximation for the value @(@ X(tf) @)@ where @(@ X : [ti , tf] \rightarrow \B{R}^n @)@ is defined by the following initial value problem: @[@ \begin{array}{rcl} X(ti) & = & xi \\ X'(t) & = & F[t , X(t)] \end{array} @]@ If your set of ordinary differential equations is not stiff, an explicit method may be better (perhaps 8.17: Runge45 ).

8.18.c: Include
The file cppad/rosen_34.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

8.18.d: xf
The return value xf has the prototype
     Vector xf
and the size of xf is equal to n (see description of 8.18.l: Vector below). @[@ X(tf) = xf + O( h^5 ) @]@ where @(@ h = (tf - ti) / M @)@ is the step size. If xf contains not a number 8.11: nan , see the discussion of 8.18.e.f: Nan .

8.18.e: Fun
The class Fun and the object F satisfy the prototype
     Fun &F
This must support the following set of calls
     F.Ode(t, x, f)
     F.Ode_ind(t, x, f_t)
     F.Ode_dep(t, x, f_x)

8.18.e.a: t
In all three cases, the argument t has prototype
     const Scalar &t
(see description of 8.18.k: Scalar below).

8.18.e.b: x
In all three cases, the argument x has prototype
     const Vector &x
and has size n (see description of 8.18.l: Vector below).

8.18.e.c: f
The argument f to F.Ode has prototype
     Vector &f
On input and output, f is a vector of size n and the input values of the elements of f do not matter. On output, f is set equal to @(@ F(t, x) @)@ (see F(t, x) in 8.18.b: Description ).

8.18.e.d: f_t
The argument f_t to F.Ode_ind has prototype
     Vector &f_t
On input and output, f_t is a vector of size n and the input values of the elements of f_t do not matter. On output, the i-th element of f_t is set equal to @(@ \partial_t F_i (t, x) @)@ (see F(t, x) in 8.18.b: Description ).

8.18.e.e: f_x
The argument f_x to F.Ode_dep has prototype
     Vector &f_x
On input and output, f_x is a vector of size n*n and the input values of the elements of f_x do not matter. On output, the [ i*n+j ] element of f_x is set equal to @(@ \partial_{x(j)} F_i (t, x) @)@ (see F(t, x) in 8.18.b: Description ).

8.18.e.f: Nan
If any of the elements of f , f_t , or f_x have the value not a number nan, the routine Rosen34 returns with all the elements of xf and e equal to nan.

8.18.e.g: Warning
The arguments f , f_t , and f_x must have a call by reference in their prototypes; i.e., do not forget the & in the prototype for f , f_t and f_x .

8.18.e.h: Optimization
Every call of the form
     F.Ode_ind(t, x, f_t)
is directly followed by a call of the form
     F.Ode_dep(t, x, f_x)
where the arguments t and x have not changed between calls. In many cases it is faster to compute the values of f_t and f_x together in the call to Ode_ind and then pass back the stored f_x during the call to Ode_dep, as in the sketch below.
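The following sketch (for the scalar equation @(@ f(t, x) = t \, x @)@ with @(@ n = 1 @)@; the member name f_x_cache_ is illustrative) computes both partials during Ode_ind and returns the stored Jacobian from Ode_dep:

# include <cppad/utility/vector.hpp>

namespace {
     // f(t, x) = t * x, so f_t = x and f_x = t
     class Fun {
     private:
          CppAD::vector<double> f_x_cache_; // Jacobian saved by Ode_ind
     public:
          Fun(void) : f_x_cache_(1)
          { }
          void Ode(
               const double                &t,
               const CppAD::vector<double> &x,
               CppAD::vector<double>       &f)
          {     f[0] = t * x[0]; }
          void Ode_ind(
               const double                &t,
               const CppAD::vector<double> &x,
               CppAD::vector<double>       &f_t)
          {     f_t[0]        = x[0]; // partial w.r.t. t
               f_x_cache_[0] = t;     // partial w.r.t. x, saved for Ode_dep
          }
          void Ode_dep(
               const double                &t,
               const CppAD::vector<double> &x,
               CppAD::vector<double>       &f_x)
          {     f_x[0] = f_x_cache_[0]; // reuse value computed by Ode_ind
          }
     };
}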

8.18.f: M
The argument M has prototype
     size_t M
It specifies the number of steps to use when solving the differential equation. This must be greater than or equal to one. The step size is given by @(@ h = (tf - ti) / M @)@; thus the larger M is, the more accurate the return value xf is as an approximation for @(@ X(tf) @)@.

8.18.g: ti
The argument ti has prototype
     const Scalar &ti
(see description of 8.18.k: Scalar below). It specifies the initial time for t in the differential equation; i.e., the time corresponding to the value xi .

8.18.h: tf
The argument tf has prototype
     const Scalar &tf
It specifies the final time for t in the differential equation; i.e., the time corresponding to the value xf .

8.18.i: xi
The argument xi has the prototype
     const Vector &xi
and the size of xi is equal to n . It specifies the value of @(@ X(ti) @)@.

8.18.j: e
The argument e is optional and has the prototype
     Vector &e
If e is present, the size of e must be equal to n . The input value of the elements of e does not matter. On output it contains an element by element estimated bound for the absolute value of the error in xf @[@ e = O( h^4 ) @]@ where @(@ h = (tf - ti) / M @)@ is the step size.

8.18.k: Scalar
The type Scalar must satisfy the conditions for a 8.7: NumericType type. The routine 8.8: CheckNumericType will generate an error message if this is not the case. In addition, the following operations must be defined for Scalar objects a and b :
Operation Description
a < b less than operator (returns a bool object)

8.18.l: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Scalar . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

8.18.m: Parallel Mode
For each set of types 8.18.k: Scalar , 8.18.l: Vector , and 8.18.e: Fun , the first call to Rosen34 must not be in 8.23.4: parallel execution mode.

8.18.n: Example
The file 8.18.1: rosen_34.cpp contains an example and test of using this routine. It returns true if it succeeds and false otherwise.

8.18.o: Source Code
The source code for this routine is in the file cppad/rosen_34.hpp.
Input File: cppad/utility/rosen_34.hpp
8.18.1: Rosen34: Example and Test
Define @(@ X : \B{R} \rightarrow \B{R}^n @)@ by @[@ X_i (t) = t^{i+1} @]@ for @(@ i = 0 , \ldots , n-1 @)@. It follows that @[@ \begin{array}{rclr} X_i(0) & = & 0 & {\rm for \; all \;} i \\ X_i ' (t) & = & 1 & {\rm if \;} i = 0 \\ X_i '(t) & = & (i+1) t^i = (i+1) X_{i-1} (t) & {\rm if \;} i > 0 \end{array} @]@ The example tests Rosen34 using the relations above:

# include <cppad/cppad.hpp>        // For automatic differentiation

namespace {
     class Fun {
     public:
          // constructor
          Fun(bool use_x_) : use_x(use_x_)
          { }

          // compute f(t, x) both for double and AD<double>
          template <typename Scalar>
          void Ode(
               const Scalar                    &t,
               const CPPAD_TESTVECTOR(Scalar) &x,
               CPPAD_TESTVECTOR(Scalar)       &f)
          {     size_t n  = x.size();
               Scalar ti(1);
               f[0]   = Scalar(1);
               size_t i;
               for(i = 1; i < n; i++)
               {     ti *= t;
                    // convert int(size_t) to avoid warning
                    // on _MSC_VER systems
                    if( use_x )
                         f[i] = int(i+1) * x[i-1];
                    else     f[i] = int(i+1) * ti;
               }
          }

          // compute partial of f(t, x) w.r.t. t using AD
          void Ode_ind(
               const double                    &t,
               const CPPAD_TESTVECTOR(double) &x,
               CPPAD_TESTVECTOR(double)       &f_t)
          {     using namespace CppAD;

               size_t n  = x.size();
               CPPAD_TESTVECTOR(AD<double>) T(1);
               CPPAD_TESTVECTOR(AD<double>) X(n);
               CPPAD_TESTVECTOR(AD<double>) F(n);

               // set argument values
               T[0] = t;
               size_t i;
               for(i = 0; i < n; i++)
                    X[i] = x[i];

               // declare independent variables
               Independent(T);

               // compute f(t, x)
               this->Ode(T[0], X, F);

               // define AD function object
               ADFun<double> fun(T, F);

               // compute partial of f w.r.t t
               CPPAD_TESTVECTOR(double) dt(1);
               dt[0] = 1.;
               f_t = fun.Forward(1, dt);
          }

          // compute partial of f(t, x) w.r.t. x using AD
          void Ode_dep(
               const double                    &t,
               const CPPAD_TESTVECTOR(double) &x,
               CPPAD_TESTVECTOR(double)       &f_x)
          {     using namespace CppAD;

               size_t n  = x.size();
               CPPAD_TESTVECTOR(AD<double>) T(1);
               CPPAD_TESTVECTOR(AD<double>) X(n);
               CPPAD_TESTVECTOR(AD<double>) F(n);

               // set argument values
               T[0] = t;
               size_t i, j;
               for(i = 0; i < n; i++)
                    X[i] = x[i];

               // declare independent variables
               Independent(X);

               // compute f(t, x)
               this->Ode(T[0], X, F);

               // define AD function object
               ADFun<double> fun(X, F);

               // compute partial of f w.r.t x
               CPPAD_TESTVECTOR(double) dx(n);
               CPPAD_TESTVECTOR(double) df(n);
               for(j = 0; j < n; j++)
                    dx[j] = 0.;
               for(j = 0; j < n; j++)
               {     dx[j] = 1.;
                    df = fun.Forward(1, dx);
                    for(i = 0; i < n; i++)
                         f_x [i * n + j] = df[i];
                    dx[j] = 0.;
               }
          }

     private:
          const bool use_x;

     };
}

bool Rosen34(void)
{     bool ok = true;     // initial return value
     size_t i;           // temporary indices

     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     size_t  n = 4;      // number components in X(t) and order of method
     size_t  M = 2;      // number of Rosen34 steps in [ti, tf]
     double ti = 0.;     // initial time
     double tf = 2.;     // final time

     // xi = X(0)
     CPPAD_TESTVECTOR(double) xi(n);
     for(i = 0; i < n; i++)
          xi[i] = 0.;

     size_t use_x;
     for( use_x = 0; use_x < 2; use_x++)
     {     // function object depends on value of use_x
          Fun F(use_x > 0);

          // compute Rosen34 approximation for X(tf)
          CPPAD_TESTVECTOR(double) xf(n), e(n);
          xf = CppAD::Rosen34(F, M, ti, tf, xi, e);

          double check = tf;
          for(i = 0; i < n; i++)
          {     // check that error is always positive
               ok    &= (e[i] >= 0.);
               // 4th order method is exact for i < 4
               if( i < 4 ) ok &=
                    NearEqual(xf[i], check, eps99, eps99);
               // 3rd order method is exact for i < 3
               if( i < 3 )
                    ok &= (e[i] <= eps99);

               // check value for next i
               check *= tf;
          }
     }
     return ok;
}

Input File: example/general/rosen_34.cpp
8.19: An Error Controller for ODE Solvers

8.19.a: Syntax
# include <cppad/utility/ode_err_control.hpp>
xf = OdeErrControl(method, ti, tf, xi,
     smin, smax, scur, eabs, erel, ef, maxabs, nstep)


8.19.b: Description
Let @(@ \B{R} @)@ denote the real numbers and let @(@ F : \B{R} \times \B{R}^n \rightarrow \B{R}^n @)@ be a smooth function. We define @(@ X : [ti , tf] \rightarrow \B{R}^n @)@ by the following initial value problem: @[@ \begin{array}{rcl} X(ti) & = & xi \\ X'(t) & = & F[t , X(t)] \end{array} @]@ The routine OdeErrControl can be used to adjust the step size used by an arbitrary integration method in order to be as fast as possible while still staying within a requested error bound.

8.19.c: Include
The file cppad/ode_err_control.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

8.19.d: Notation
The template parameter types 8.19.s: Scalar and 8.19.t: Vector are documented below.

8.19.e: xf
The return value xf has the prototype
     Vector xf
(see description of 8.19.t: Vector below) and the size of xf is equal to n . If xf contains not a number 8.11: nan , see the discussion of 8.19.f.b: Nan .

8.19.f: Method
The class Method and the object method satisfy the following syntax
     Method &method
The object method must support step and order member functions defined below:

8.19.f.a: step
The syntax
     method.step(ta, tb, xa, xb, eb)
executes one step of the integration method.

ta
The argument ta has prototype
     const Scalar &ta
It specifies the initial time for this step in the ODE integration (see description of 8.19.s: Scalar below).

tb
The argument tb has prototype
     const Scalar &tb
It specifies the final time for this step in the ODE integration.

xa
The argument xa has prototype
     const Vector &xa
and size n . It specifies the value of @(@ X(ta) @)@ (see description of 8.19.t: Vector below).

xb
The argument value xb has prototype
     Vector &xb
and size n . The input value of its elements does not matter. On output, it contains the approximation for @(@ X(tb) @)@ that the method obtains.

eb
The argument value eb has prototype
     Vector &eb
and size n . The input value of its elements does not matter. On output, it contains an estimate for the error in the approximation xb . It is assumed (locally) that the error bound in this approximation is nearly equal to @(@ K (tb - ta)^m @)@ where K is a fixed constant and m is the order of the method (see 8.19.f.c: order below).

8.19.f.b: Nan
If any element of the vector eb or xb is not a number nan, the current step is considered too large. If this happens with the current step size equal to smin , OdeErrControl returns with xf and ef as vectors of nan.

8.19.f.c: order
If m is a size_t object, the object method must also support the following syntax
     m = method.order()
The return value m is the order of the error estimate; i.e., there is a constant K such that if @(@ ti \leq ta \leq tb \leq tf @)@, @[@ | eb(tb) | \leq K | tb - ta |^m @]@ where ta , tb , and eb are as in method.step(ta, tb, xa, xb, eb) .

8.19.g: ti
The argument ti has prototype
     const Scalar &ti
It specifies the initial time for the integration of the differential equation.

8.19.h: tf
The argument tf has prototype
     const Scalar &tf
It specifies the final time for the integration of the differential equation.

8.19.i: xi
The argument xi has prototype
     const Vector &xi
and size n . It specifies the value of @(@ X(ti) @)@.

8.19.j: smin
The argument smin has prototype
     const Scalar &smin
The step size during a call to method is defined as the corresponding value of @(@ tb - ta @)@. If @(@ tf - ti \leq smin @)@, the integration will be done in one step of size tf - ti . Otherwise, the minimum value of tb - ta will be @(@ smin @)@, except for the last two calls to method where it may be as small as @(@ smin / 2 @)@.

8.19.k: smax
The argument smax has prototype
     const Scalar &smax
It specifies the maximum step size to use during the integration; i.e., the maximum value for @(@ tb - ta @)@ in a call to method . The value of smax must be greater than or equal to smin .

8.19.l: scur
The argument scur has prototype
     Scalar &scur
The value of scur is the suggested next step size, based on error criteria, to try in the next call to method . On input it corresponds to the first call to method in this call to OdeErrControl (where @(@ ta = ti @)@). On output it corresponds to the next call to method in a subsequent call to OdeErrControl (where ta = tf ).

8.19.m: eabs
The argument eabs has prototype
     const Vector &eabs
and size n . Each of the elements of eabs must be greater than or equal to zero. It specifies a bound for the absolute error in the return value xf as an approximation for @(@ X(tf) @)@ (see the 8.19.r: error criteria discussion below).

8.19.n: erel
The argument erel has prototype
     const Scalar &erel
and is greater than or equal to zero. It specifies a bound for the relative error in the return value xf as an approximation for @(@ X(tf) @)@ (see the 8.19.r: error criteria discussion below).

8.19.o: ef
The argument value ef has prototype
     Vector &ef
and size n . The input value of its elements does not matter. On output, it contains an estimated bound for the absolute error in the approximation xf ; i.e., @[@ ef_i > | X( tf )_i - xf_i | @]@ If on output ef contains not a number nan, see the discussion of 8.19.f.b: Nan .

8.19.p: maxabs
The argument maxabs is optional in the call to OdeErrControl. If it is present, it has the prototype
     Vector &maxabs
and size n . The input value of its elements does not matter. On output, it contains an estimate for the maximum absolute value of @(@ X(t) @)@; i.e., @[@ maxabs[i] \approx \max \left\{ | X( t )_i | \; : \; t \in [ti, tf] \right\} @]@

8.19.q: nstep
The argument nstep is optional in the call to OdeErrControl. If it is present, it has the prototype
     size_t &nstep
Its input value does not matter and its output value is the number of calls to method.step used by OdeErrControl.

8.19.r: Error Criteria Discussion
The relative error criteria erel and absolute error criteria eabs are enforced during each step of the integration of the ordinary differential equations. In addition, they are inversely scaled by the step size so that the total error bound is less than the sum of the error bounds. To be specific, if @(@ \tilde{X} (t) @)@ is the approximate solution at time @(@ t @)@, ta is the initial step time, and tb is the final step time, @[@ \left| \tilde{X} (tb)_j - X (tb)_j \right| \leq \frac{tf - ti}{tb - ta} \left[ eabs[j] + erel \; | \tilde{X} (tb)_j | \right] @]@ If @(@ X(tb)_j @)@ is near zero for some @(@ tb \in [ti , tf] @)@, and one uses an absolute error criteria @(@ eabs[j] @)@ of zero, the error criteria above will force OdeErrControl to use step sizes equal to 8.19.j: smin for steps ending near @(@ tb @)@. In this case, the error relative to maxabs can be judged after OdeErrControl returns. If ef is too large relative to maxabs , OdeErrControl can be called again with a smaller value of smin .

8.19.s: Scalar
The type Scalar must satisfy the conditions for a 8.7: NumericType type. The routine 8.8: CheckNumericType will generate an error message if this is not the case. In addition, the following operations must be defined for Scalar objects a and b :
Operation Description
a <= b returns true (false) if a is less than or equal (greater than) b .
a == b returns true (false) if a is equal to b .
log(a) returns a Scalar equal to the logarithm of a
exp(a) returns a Scalar equal to the exponential of a

8.19.t: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Scalar . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

8.19.u: Example
The files 8.19.1: ode_err_control.cpp and 8.19.2: ode_err_maxabs.cpp contain examples and tests of using this routine. They return true if they succeed and false otherwise.

8.19.v: Theory
Let @(@ e(s) @)@ be the error as a function of the step size @(@ s @)@ and suppose that there is a constant @(@ K @)@ such that @(@ e(s) = K s^m @)@. Let @(@ a @)@ be our error bound. Given the value of @(@ e(s) @)@, a step of size @(@ \lambda s @)@ would be ok provided that @[@ \begin{array}{rcl} a & \geq & e( \lambda s ) (tf - ti) / ( \lambda s ) \\ a & \geq & K \lambda^m s^m (tf - ti) / ( \lambda s ) \\ a & \geq & \lambda^{m-1} s^{m-1} (tf - ti) e(s) / s^m \\ a & \geq & \lambda^{m-1} (tf - ti) e(s) / s \\ \lambda^{m-1} & \leq & \frac{a}{e(s)} \frac{s}{tf - ti} \end{array} @]@ Thus if the right hand side of the last inequality is greater than or equal to one, the step of size @(@ s @)@ is ok.
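Solving the last inequality for the largest acceptable @(@ \lambda @)@ gives @(@ \lambda = [ (a / e(s)) \; s / (tf - ti) ]^{1/(m-1)} @)@. A sketch in code (the function name step_multiplier is illustrative; it assumes @(@ m \geq 2 @)@):

# include <cmath>    // std::pow
# include <cstddef>  // size_t

// largest step-size multiplier lambda allowed by the theory above;
// a = error bound, e = e(s), s = current step size, m = order of the method
double step_multiplier(
     double a, double e, double s, double ti, double tf, size_t m)
{     return std::pow( (a / e) * s / (tf - ti), 1. / double(m - 1) );
}
// a return value greater than or equal to one means a step of size s is ok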

8.19.w: Source Code
The source code for this routine is in the file cppad/ode_err_control.hpp.
Input File: cppad/utility/ode_err_control.hpp
8.19.1: OdeErrControl: Example and Test
Define @(@ X : \B{R} \rightarrow \B{R}^2 @)@ by @[@ \begin{array}{rcl} X_0 (0) & = & 1 \\ X_1 (0) & = & 0 \\ X_0^{(1)} (t) & = & - \alpha X_0 (t) \\ X_1^{(1)} (t) & = & 1 / X_0 (t) \end{array} @]@ It follows that @[@ \begin{array}{rcl} X_0 (t) & = & \exp ( - \alpha t ) \\ X_1 (t) & = & [ \exp( \alpha t ) - 1 ] / \alpha \end{array} @]@ This example tests OdeErrControl using the relations above.

8.19.1.a: Nan
Note that @(@ X_0 (t) > 0 @)@ for all @(@ t @)@ and that the ODE goes through a singularity between @(@ X_0 (t) > 0 @)@ and @(@ X_0 (t) < 0 @)@. If @(@ X_0 (t) < 0 @)@, we return nan in order to inform OdeErrControl that it is taking too large a step.

# include <limits>                      // for quiet_NaN
# include <cstddef>                     // for size_t
# include <cmath>                       // for exp
# include <cppad/utility/ode_err_control.hpp>   // CppAD::OdeErrControl
# include <cppad/utility/near_equal.hpp>        // CppAD::NearEqual
# include <cppad/utility/vector.hpp>            // CppAD::vector
# include <cppad/utility/runge_45.hpp>          // CppAD::Runge45

namespace {
     // --------------------------------------------------------------
     class Fun {
     private:
          const double alpha_;
     public:
          // constructor
          Fun(double alpha) : alpha_(alpha)
          { }

          // set f = x'(t)
          void Ode(
               const double                &t,
               const CppAD::vector<double> &x,
               CppAD::vector<double>       &f)
          {     f[0] = - alpha_ * x[0];
               f[1] = 1. / x[0];
               // case where ODE does not make sense
               if( x[0] < 0. )
                    f[1] = std::numeric_limits<double>::quiet_NaN();
          }

     };

     // --------------------------------------------------------------
     class Method {
     private:
          Fun F;
     public:
          // constructor
          Method(double alpha) : F(alpha)
          { }
          void step(
               double ta,
               double tb,
               CppAD::vector<double> &xa ,
               CppAD::vector<double> &xb ,
               CppAD::vector<double> &eb )
          {     xb = CppAD::Runge45(F, 1, ta, tb, xa, eb);
          }
          size_t order(void)
          {     return 4; }
     };
}

bool OdeErrControl(void)
{     bool ok = true;     // initial return value

     double alpha = 10.;
     Method method(alpha);

     CppAD::vector<double> xi(2);
     xi[0] = 1.;
     xi[1] = 0.;

     CppAD::vector<double> eabs(2);
     eabs[0] = 1e-4;
     eabs[1] = 1e-4;

     // inputs
     double ti   = 0.;
     double tf   = 1.;
     double smin = 1e-4;
     double smax = 1.;
     double scur = 1.;
     double erel = 0.;

     // outputs
     CppAD::vector<double> ef(2);
     CppAD::vector<double> xf(2);
     CppAD::vector<double> maxabs(2);
     size_t nstep;


     xf = OdeErrControl(method,
          ti, tf, xi, smin, smax, scur, eabs, erel, ef, maxabs, nstep);

     double x0 = exp(-alpha*tf);
     ok &= CppAD::NearEqual(x0, xf[0], 1e-4, 1e-4);
     ok &= CppAD::NearEqual(0., ef[0], 1e-4, 1e-4);

     double x1 = (exp(alpha*tf) - 1) / alpha;
     ok &= CppAD::NearEqual(x1, xf[1], 1e-4, 1e-4);
     ok &= CppAD::NearEqual(0., ef[1], 1e-4, 1e-4);

     return ok;
}

Input File: example/utility/ode_err_control.cpp
8.19.2: OdeErrControl: Example and Test Using Maxabs Argument
Define @(@ X : \B{R} \rightarrow \B{R}^2 @)@ by @[@ \begin{array}{rcl} X_0 (t) & = & \exp ( - w_0 t ) \\ X_1 (t) & = & \frac{w_0}{w_1 - w_0} [ \exp ( - w_0 t ) - \exp( - w_1 t )] \end{array} @]@ It follows that @(@ X_0 (0) = 1 @)@, @(@ X_1 (0) = 0 @)@ and @[@ \begin{array}{rcl} X_0^{(1)} (t) & = & - w_0 X_0 (t) \\ X_1^{(1)} (t) & = & + w_0 X_0 (t) - w_1 X_1 (t) \end{array} @]@ Note that @(@ X_1 (0) @)@ is zero and, if @(@ w_0 t @)@ is large, @(@ X_0 (t) @)@ is near zero. This example tests OdeErrControl using the maxabs argument.

# include <cstddef>              // for size_t
# include <cmath>                // for exp
# include <cppad/utility/ode_err_control.hpp>   // CppAD::OdeErrControl
# include <cppad/utility/near_equal.hpp>    // CppAD::NearEqual
# include <cppad/utility/vector.hpp> // CppAD::vector
# include <cppad/utility/runge_45.hpp>      // CppAD::Runge45

namespace {
     // --------------------------------------------------------------
     class Fun {
     private:
           CppAD::vector<double> w;
     public:
          // constructor
          Fun(const CppAD::vector<double> &w_) : w(w_)
          { }

          // set f = x'(t)
          void Ode(
               const double                &t,
               const CppAD::vector<double> &x,
               CppAD::vector<double>       &f)
          {     f[0] = - w[0] * x[0];
               f[1] = + w[0] * x[0] - w[1] * x[1];
          }
     };

     // --------------------------------------------------------------
     class Method {
     private:
          Fun F;
     public:
          // constructor
          Method(const CppAD::vector<double> &w_) : F(w_)
          { }
          void step(
               double ta,
               double tb,
               CppAD::vector<double> &xa ,
               CppAD::vector<double> &xb ,
               CppAD::vector<double> &eb )
          {     xb = CppAD::Runge45(F, 1, ta, tb, xa, eb);
          }
          size_t order(void)
          {     return 4; }
     };
}

bool OdeErrMaxabs(void)
{     bool ok = true;     // initial return value

     CppAD::vector<double> w(2);
     w[0] = 10.;
     w[1] = 1.;
     Method method(w);

     CppAD::vector<double> xi(2);
     xi[0] = 1.;
     xi[1] = 0.;

     CppAD::vector<double> eabs(2);
     eabs[0] = 0.;
     eabs[1] = 0.;

     CppAD::vector<double> ef(2);
     CppAD::vector<double> xf(2);
     CppAD::vector<double> maxabs(2);

     double ti   = 0.;
     double tf   = 1.;
     double smin = .5;
     double smax = 1.;
     double scur = .5;
     double erel = 1e-4;

     bool accurate = false;
     while( ! accurate )
     {     xf = OdeErrControl(method,
               ti, tf, xi, smin, smax, scur, eabs, erel, ef, maxabs);
          accurate = true;
          size_t i;
          for(i = 0; i < 2; i++)
               accurate &= ef[i] <= erel * maxabs[i];
          if( ! accurate )
               smin = smin / 2;
     }

     double x0 = exp(-w[0]*tf);
     ok &= CppAD::NearEqual(x0, xf[0], erel, 0.);
     ok &= CppAD::NearEqual(0., ef[0], erel, erel);

     double x1 = w[0] * (exp(-w[0]*tf) - exp(-w[1]*tf))/(w[1] - w[0]);
     ok &= CppAD::NearEqual(x1, xf[1], erel, 0.);
     ok &= CppAD::NearEqual(0., ef[1], erel, erel);

     return ok;
}

Input File: example/utility/ode_err_maxabs.cpp
8.20: An Arbitrary Order Gear Method

8.20.a: Syntax
# include <cppad/utility/ode_gear.hpp>
OdeGear(F, m, n, T, X, e)

8.20.b: Purpose
This routine applies 8.20.o: Gear's Method to solve an explicit set of ordinary differential equations. We are given a smooth function @(@ f : \B{R} \times \B{R}^n \rightarrow \B{R}^n @)@. This routine solves the following initial value problem @[@ \begin{array}{rcl} x( t_{m-1} ) & = & x^0 \\ x^\prime (t) & = & f[t , x(t)] \end{array} @]@ for the value of @(@ x( t_m ) @)@. If your set of ordinary differential equations is not stiff, an explicit method may be better (perhaps 8.17: Runge45 ).

8.20.c: Include
The file cppad/ode_gear.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

8.20.d: Fun
The class Fun and the object F satisfy the prototype
     Fun &F
This must support the following set of calls
     F.Ode(t, x, f)
     F.Ode_dep(t, x, f_x)

8.20.d.a: t
The argument t has prototype
     const Scalar &t
(see description of 8.20.j: Scalar below).

8.20.d.b: x
The argument x has prototype
     const Vector &x
and has size n (see description of 8.20.k: Vector below).

8.20.d.c: f
The argument f to F.Ode has prototype
     Vector &f
On input and output, f is a vector of size n and the input values of the elements of f do not matter. On output, f is set equal to @(@ f(t, x) @)@ (see f(t, x) in 8.20.b: Purpose ).

8.20.d.d: f_x
The argument f_x has prototype
     Vector &f_x
On input and output, f_x is a vector of size @(@ n * n @)@ and the input values of the elements of f_x do not matter. On output, @[@ f\_x [i * n + j] = \partial_{x(j)} f_i ( t , x ) @]@

8.20.d.e: Warning
The arguments f , and f_x must have a call by reference in their prototypes; i.e., do not forget the & in the prototype for f and f_x .

8.20.e: m
The argument m has prototype
     size_t m
It specifies the order (highest power of @(@ t @)@) used to represent the function @(@ x(t) @)@ in the multi-step method. Upon return from OdeGear, the i-th component of the polynomial is defined by @[@ p_i ( t_j ) = X[ j * n + i ] @]@ for @(@ j = 0 , \ldots , m @)@ (where @(@ 0 \leq i < n @)@). The value of @(@ m @)@ must be greater than or equal to one.

8.20.f: n
The argument n has prototype
     size_t n
It specifies the range space dimension of the vector valued function @(@ x(t) @)@.

8.20.g: T
The argument T has prototype
     const Vector &T
and size greater than or equal to @(@ m+1 @)@. For @(@ j = 0 , \ldots , m-1 @)@, @(@ T[j] @)@ is the time corresponding to a previous point in the multi-step method. The value @(@ T[m] @)@ is the time of the next point in the multi-step method. The array @(@ T @)@ must be monotone increasing; i.e., @(@ T[j] < T[j+1] @)@. Above and below we often use the shorthand @(@ t_j @)@ for @(@ T[j] @)@.

8.20.h: X
The argument X has the prototype
     Vector &X
and size greater than or equal to @(@ (m+1) * n @)@. On input to OdeGear, for @(@ j = 0 , \ldots , m-1 @)@, and @(@ i = 0 , \ldots , n-1 @)@ @[@ X[ j * n + i ] = x_i ( t_j ) @]@ Upon return from OdeGear, for @(@ i = 0 , \ldots , n-1 @)@ @[@ X[ m * n + i ] \approx x_i ( t_m ) @]@

8.20.i: e
The vector e is an approximate error bound for the result; i.e., @[@ e[i] \geq | X[ m * n + i ] - x_i ( t_m ) | @]@ The order of this approximation is one less than the order of the solution; i.e., @[@ e = O ( h^m ) @]@ where @(@ h @)@ is the maximum of @(@ t_{j+1} - t_j @)@.

8.20.j: Scalar
The type Scalar must satisfy the conditions for a 8.7: NumericType type. The routine 8.8: CheckNumericType will generate an error message if this is not the case. In addition, the following operations must be defined for Scalar objects a and b :
Operation Description
a < b less than operator (returns a bool object)

8.20.k: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Scalar . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

8.20.l: Example
The file 8.20.1: ode_gear.cpp contains an example and test of using this routine. It returns true if it succeeds and false otherwise.

8.20.m: Source Code
The source code for this routine is in the file cppad/ode_gear.hpp.

8.20.n: Theory
For this discussion we use the shorthand @(@ x_j @)@ for the value @(@ x ( t_j ) \in \B{R}^n @)@ which is not to be confused with @(@ x_i (t) \in \B{R} @)@ in the notation above. The interpolating polynomial @(@ p(t) @)@ is given by @[@ p(t) = \sum_{j=0}^m x_j \frac{ \prod_{i \neq j} ( t - t_i ) }{ \prod_{i \neq j} ( t_j - t_i ) } @]@ The derivative @(@ p^\prime (t) @)@ is given by @[@ p^\prime (t) = \sum_{j=0}^m x_j \frac{ \sum_{i \neq j} \prod_{k \neq i,j} ( t - t_k ) }{ \prod_{k \neq j} ( t_j - t_k ) } @]@ Evaluating the derivative at the point @(@ t_\ell @)@ we have @[@ \begin{array}{rcl} p^\prime ( t_\ell ) & = & x_\ell \frac{ \sum_{i \neq \ell} \prod_{k \neq i,\ell} ( t_\ell - t_k ) }{ \prod_{k \neq \ell} ( t_\ell - t_k ) } + \sum_{j \neq \ell} x_j \frac{ \sum_{i \neq j} \prod_{k \neq i,j} ( t_\ell - t_k ) }{ \prod_{k \neq j} ( t_j - t_k ) } \\ & = & x_\ell \sum_{i \neq \ell} \frac{ 1 }{ t_\ell - t_i } + \sum_{j \neq \ell} x_j \frac{ \prod_{k \neq \ell,j} ( t_\ell - t_k ) }{ \prod_{k \neq j} ( t_j - t_k ) } \\ & = & x_\ell \sum_{k \neq \ell} ( t_\ell - t_k )^{-1} + \sum_{j \neq \ell} x_j ( t_j - t_\ell )^{-1} \prod_{k \neq \ell ,j} ( t_\ell - t_k ) / ( t_j - t_k ) \end{array} @]@ We define the vector @(@ \alpha \in \B{R}^{m+1} @)@ by @[@ \alpha_j = \left\{ \begin{array}{ll} \sum_{k \neq m} ( t_m - t_k )^{-1} & {\rm if} \; j = m \\ ( t_j - t_m )^{-1} \prod_{k \neq m,j} ( t_m - t_k ) / ( t_j - t_k ) & {\rm otherwise} \end{array} \right. @]@ It follows that @[@ p^\prime ( t_m ) = \alpha_0 x_0 + \cdots + \alpha_m x_m @]@ Gear's method determines @(@ x_m @)@ by solving the following nonlinear equation @[@ f( t_m , x_m ) = \alpha_0 x_0 + \cdots + \alpha_m x_m @]@ Newton's method for solving this equation determines iterates, which we denote by @(@ x_m^k @)@, by solving the following affine approximation of the equation above @[@ \begin{array}{rcl} f( t_m , x_m^{k-1} ) + \partial_x f( t_m , x_m^{k-1} ) ( x_m^k - x_m^{k-1} ) & = & \alpha_0 x_0^k + \alpha_1 x_1 + \cdots + \alpha_m x_m \\ \left[ \alpha_m I - \partial_x f( t_m , x_m^{k-1} ) \right] x_m & = & \left[ f( t_m , x_m^{k-1} ) - \partial_x f( t_m , x_m^{k-1} ) x_m^{k-1} - \alpha_0 x_0 - \cdots - \alpha_{m-1} x_{m-1} \right] \end{array} @]@ In order to initialize Newton's method; i.e. choose @(@ x_m^0 @)@ we define the vector @(@ \beta \in \B{R}^{m+1} @)@ by @[@ \beta_j = \left\{ \begin{array}{ll} \sum_{k \neq m-1} ( t_{m-1} - t_k )^{-1} & {\rm if} \; j = m-1 \\ ( t_j - t_{m-1} )^{-1} \prod_{k \neq m-1,j} ( t_{m-1} - t_k ) / ( t_j - t_k ) & {\rm otherwise} \end{array} \right. @]@ It follows that @[@ p^\prime ( t_{m-1} ) = \beta_0 x_0 + \cdots + \beta_m x_m @]@ We solve the following approximation of the equation above to determine @(@ x_m^0 @)@: @[@ f( t_{m-1} , x_{m-1} ) = \beta_0 x_0 + \cdots + \beta_{m-1} x_{m-1} + \beta_m x_m^0 @]@
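For example, when @(@ m = 1 @)@ (one previous point) the definition of @(@ \alpha @)@ gives @(@ \alpha_1 = ( t_1 - t_0 )^{-1} @)@ and @(@ \alpha_0 = ( t_0 - t_1 )^{-1} = - ( t_1 - t_0 )^{-1} @)@, so Gear's equation @(@ f( t_1 , x_1 ) = \alpha_0 x_0 + \alpha_1 x_1 @)@ reduces to the implicit (backward) Euler method @[@ f( t_1 , x_1 ) = \frac{ x_1 - x_0 }{ t_1 - t_0 } @]@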

8.20.o: Gear's Method
C. W. Gear, ``Simultaneous Numerical Solution of Differential-Algebraic Equations,'' IEEE Transactions on Circuit Theory, vol. 18, no. 1, pp. 89-95, Jan. 1971.
Input File: cppad/utility/ode_gear.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.20.1: OdeGear: Example and Test
Define @(@ x : \B{R} \rightarrow \B{R}^n @)@ by @[@ x_i (t) = t^{i+1} @]@ for @(@ i = 0 , \ldots , n-1 @)@. It follows that @[@ \begin{array}{rclr} x_i(0) & = & 0 & {\rm for \; all \;} i \\ x_i ' (t) & = & 1 & {\rm if \;} i = 0 \\ x_i '(t) & = & (i+1) t^i = (i+1) x_{i-1} (t) & {\rm if \;} i > 0 \end{array} @]@ The example tests OdeGear using the relations above:

# include <cppad/utility/ode_gear.hpp>
# include <cppad/cppad.hpp>        // For automatic differentiation

namespace {
     class Fun {
     public:
          // constructor
          Fun(bool use_x_) : use_x(use_x_)
          { }

          // compute f(t, x) both for double and AD<double>
          template <typename Scalar>
          void Ode(
               const Scalar                    &t,
               const CPPAD_TESTVECTOR(Scalar) &x,
               CPPAD_TESTVECTOR(Scalar)       &f)
          {     size_t n  = x.size();
               Scalar ti(1);
               f[0]   = Scalar(1);
               size_t i;
               for(i = 1; i < n; i++)
               {     ti *= t;
                     // convert from size_t to int to avoid a warning
                     // on _MSC_VER systems
                    if( use_x )
                         f[i] = int(i+1) * x[i-1];
                    else     f[i] = int(i+1) * ti;
               }
          }

          void Ode_dep(
               const double                    &t,
               const CPPAD_TESTVECTOR(double) &x,
               CPPAD_TESTVECTOR(double)       &f_x)
          {     using namespace CppAD;

               size_t n  = x.size();
               CPPAD_TESTVECTOR(AD<double>) T(1);
               CPPAD_TESTVECTOR(AD<double>) X(n);
               CPPAD_TESTVECTOR(AD<double>) F(n);

               // set argument values
               T[0] = t;
               size_t i, j;
               for(i = 0; i < n; i++)
                    X[i] = x[i];

               // declare independent variables
               Independent(X);

               // compute f(t, x)
               this->Ode(T[0], X, F);

               // define AD function object
               ADFun<double> fun(X, F);

               // compute partial of f w.r.t x
               CPPAD_TESTVECTOR(double) dx(n);
               CPPAD_TESTVECTOR(double) df(n);
               for(j = 0; j < n; j++)
                    dx[j] = 0.;
               for(j = 0; j < n; j++)
               {     dx[j] = 1.;
                    df = fun.Forward(1, dx);
                    for(i = 0; i < n; i++)
                         f_x [i * n + j] = df[i];
                    dx[j] = 0.;
               }
          }

     private:
          const bool use_x;

     };
}

bool OdeGear(void)
{     bool ok = true; // initial return value
     size_t i, j;    // temporary indices
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     size_t  m = 4;  // index of next value in X
     size_t  n = m;  // number of components in x(t)

     // vector of times
     CPPAD_TESTVECTOR(double) T(m+1);
     double step = .1;
     T[0]        = 0.;
     for(j = 1; j <= m; j++)
     {     T[j] = T[j-1] + step;
          step = 2. * step;
     }

     // initial values for x( T[m-j] )
     CPPAD_TESTVECTOR(double) X((m+1) * n);
     for(j = 0; j < m; j++)
     {     double ti = T[j];
          for(i = 0; i < n; i++)
          {     X[ j * n + i ] = ti;
               ti *= T[j];
          }
     }

     // error bound
     CPPAD_TESTVECTOR(double) e(n);

     size_t use_x;
     for( use_x = 0; use_x < 2; use_x++)
     {     // function object depends on value of use_x
          Fun F(use_x > 0);

          // compute OdeGear approximation for x( T[m] )
          CppAD::OdeGear(F, m, n, T, X, e);

          double check = T[m];
          for(i = 0; i < n; i++)
          {     // method is exact up to order m and x[i] = t^{i+1}
               if( i + 1 <= m ) ok &= CppAD::NearEqual(
                    X[m * n + i], check, eps99, eps99
               );
               // error bound should be zero up to order m-1
               if( i + 1 < m ) ok &= CppAD::NearEqual(
                    e[i], 0., eps99, eps99
               );
               // check value for next i
               check *= T[m];
          }
     }
     return ok;
}

Input File: example/utility/ode_gear.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.21: An Error Controller for Gear's Ode Solvers

8.21.a: Syntax
# include <cppad/utility/ode_gear_control.hpp>
xf = OdeGearControl(F, M,
     ti, tf, xi, smin, smax, sini, eabs, erel, ef, maxabs, nstep)


8.21.b: Purpose
Let @(@ \B{R} @)@ denote the real numbers and let @(@ f : \B{R} \times \B{R}^n \rightarrow \B{R}^n @)@ be a smooth function. We define @(@ X : [ti , tf] \rightarrow \B{R}^n @)@ by the following initial value problem: @[@ \begin{array}{rcl} X(ti) & = & xi \\ X'(t) & = & f[t , X(t)] \end{array} @]@ The routine 8.20: OdeGear is a stiff multi-step method that can be used to approximate the solution to this equation. The routine OdeGearControl sets up this multi-step method and controls the error during such an approximation.

8.21.c: Include
The file cppad/ode_gear_control.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

8.21.d: Notation
The template parameter types 8.21.t: Scalar and 8.21.u: Vector are documented below.

8.21.e: xf
The return value xf has the prototype
     Vector xf
and the size of xf is equal to n (see description of 8.21.u: Vector below). It is the approximation for @(@ X(tf) @)@.

8.21.f: Fun
The class Fun and the object F satisfy the prototype
     Fun &F
This must support the following set of calls
     F.Ode(t, x, f)
     F.Ode_dep(t, x, f_x)

8.21.f.a: t
The argument t has prototype
     const Scalar &t
(see description of 8.21.t: Scalar below).

8.21.f.b: x
The argument x has prototype
     const Vector &x
and has size n (see description of 8.21.u: Vector below).

8.21.f.c: f
The argument f to F.Ode has prototype
     Vector &f
On input and output, f is a vector of size n and the input values of the elements of f do not matter. On output, f is set equal to @(@ f(t, x) @)@ (see f(t, x) in 8.21.b: Purpose ).

8.21.f.d: f_x
The argument f_x has prototype
     Vector &f_x
On input and output, f_x is a vector of size @(@ n * n @)@ and the input values of the elements of f_x do not matter. On output, @[@ f\_x [i * n + j] = \partial_{x(j)} f_i ( t , x ) @]@

8.21.f.e: Warning
The arguments f and f_x must be passed by reference in the prototypes above; i.e., do not forget the & in the prototypes for f and f_x .

8.21.g: M
The argument M has prototype
     size_t M
It specifies the order of the multi-step method; i.e., the order of the approximating polynomial (after the initialization process). The argument M must be greater than or equal to one.

8.21.h: ti
The argument ti has prototype
     const Scalar &ti
It specifies the initial time for the integration of the differential equation.

8.21.i: tf
The argument tf has prototype
     const Scalar &tf
It specifies the final time for the integration of the differential equation.

8.21.j: xi
The argument xi has prototype
     const Vector &xi
and size n . It specifies the value of @(@ X(ti) @)@.

8.21.k: smin
The argument smin has prototype
     const Scalar &smin
The minimum value of @(@ T[M] - T[M-1] @)@ in a call to OdeGear will be @(@ smin @)@ except for the last two calls where it may be as small as @(@ smin / 2 @)@. The value of smin must be less than or equal to smax .

8.21.l: smax
The argument smax has prototype
     const Scalar &smax
It specifies the maximum step size to use during the integration; i.e., the maximum value for @(@ T[M] - T[M-1] @)@ in a call to OdeGear.

8.21.m: sini
The argument sini has prototype
     Scalar &sini
The value of sini is the minimum step size to use during initialization of the multi-step method; i.e., for calls to OdeGear where @(@ m < M @)@. The value of sini must be less than or equal to smax (and can also be less than smin ).

8.21.n: eabs
The argument eabs has prototype
     const Vector &eabs
and size n . Each of the elements of eabs must be greater than or equal to zero. It specifies a bound for the absolute error in the return value xf as an approximation for @(@ X(tf) @)@ (see the 8.21.s: error criteria discussion below).

8.21.o: erel
The argument erel has prototype
     const Scalar &erel
and is greater than or equal to zero. It specifies a bound for the relative error in the return value xf as an approximation for @(@ X(tf) @)@ (see the 8.21.s: error criteria discussion below).

8.21.p: ef
The argument ef has prototype
     Vector &ef
and size n . The input value of its elements does not matter. On output, it contains an estimated bound for the absolute error in the approximation xf ; i.e., @[@ ef_i > | X( tf )_i - xf_i | @]@

8.21.q: maxabs
The argument maxabs is optional in the call to OdeGearControl. If it is present, it has the prototype
     Vector &maxabs
and size n . The input value of its elements does not matter. On output, it contains an estimate for the maximum absolute value of @(@ X(t) @)@; i.e., @[@ maxabs[i] \approx \max \left\{ | X( t )_i | \; : \; t \in [ti, tf] \right\} @]@

8.21.r: nstep
The argument nstep has the prototype
     size_t &nstep
Its input value does not matter and its output value is the number of calls to 8.20: OdeGear used by OdeGearControl.

8.21.s: Error Criteria Discussion
The relative error criteria erel and absolute error criteria eabs are enforced during each step of the integration of the ordinary differential equations. In addition, they are inversely scaled by the step size so that the total error bound is less than the sum of the error bounds. To be specific, if @(@ \tilde{X} (t) @)@ is the approximate solution at time @(@ t @)@, ta is the initial step time, and tb is the final step time, @[@ \left| \tilde{X} (tb)_j - X (tb)_j \right| \leq \frac{tf - ti}{tb - ta} \left[ eabs[j] + erel \; | \tilde{X} (tb)_j | \right] @]@ If @(@ X(tb)_j @)@ is near zero for some @(@ tb \in [ti , tf] @)@, and one uses an absolute error criteria @(@ eabs[j] @)@ of zero, the error criteria above will force OdeGearControl to use step sizes equal to 8.21.k: smin for steps ending near @(@ tb @)@. In this case, the error relative to maxabs can be judged after OdeGearControl returns. If ef is too large relative to maxabs , OdeGearControl can be called again with a smaller value of smin .

8.21.t: Scalar
The type Scalar must satisfy the conditions for a 8.7: NumericType type. The routine 8.8: CheckNumericType will generate an error message if this is not the case. In addition, the following operations must be defined for Scalar objects a and b :
Operation Description
a <= b returns true (false) if a is less than or equal (greater than) b .
a == b returns true (false) if a is equal to b .
log(a) returns a Scalar equal to the logarithm of a
exp(a) returns a Scalar equal to the exponential of a

8.21.u: Vector
The type Vector must be a 8.9: SimpleVector class with 8.9.b: elements of type Scalar . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

8.21.v: Example
The file 8.21.1: ode_gear_control.cpp contains an example and test of using this routine. It returns true if it succeeds and false otherwise.

8.21.w: Theory
Let @(@ e(s) @)@ be the error as a function of the step size @(@ s @)@ and suppose that there is a constant @(@ K @)@ such that @(@ e(s) = K s^m @)@. Let @(@ a @)@ be our error bound. Given the value of @(@ e(s) @)@, a step of size @(@ \lambda s @)@ would be ok provided that @[@ \begin{array}{rcl} a & \geq & e( \lambda s ) (tf - ti) / ( \lambda s ) \\ a & \geq & K \lambda^m s^m (tf - ti) / ( \lambda s ) \\ a & \geq & \lambda^{m-1} s^{m-1} (tf - ti) e(s) / s^m \\ a & \geq & \lambda^{m-1} (tf - ti) e(s) / s \\ \lambda^{m-1} & \leq & \frac{a}{e(s)} \frac{s}{tf - ti} \end{array} @]@ Thus if the right hand side of the last inequality is greater than or equal to one, the step of size @(@ s @)@ is ok.
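For orientation, the step size factor implied by the last inequality can be written as a small routine (a minimal sketch, not CppAD source; the name next_lambda and its argument list are chosen here for illustration):

# include <cmath>
# include <cstddef>

// largest lambda such that a step of size lambda * s still satisfies
// the error bound a, given the estimated error e_s for a step of size s
// (this sketch assumes the order m is at least two)
double next_lambda(
     double a, double e_s, double s, double ti, double tf, size_t m)
{     return std::pow( a * s / ( e_s * (tf - ti) ), 1. / double(m - 1) );
}

If next_lambda returns a value greater than or equal to one, the current step size satisfies the error bound; otherwise the step size must be reduced.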

8.21.x: Source Code
The source code for this routine is in the file cppad/ode_gear_control.hpp.
Input File: cppad/utility/ode_gear_control.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.21.1: OdeGearControl: Example and Test
Define @(@ X : \B{R} \rightarrow \B{R}^2 @)@ by @[@ \begin{array}{rcl} X_0 (t) & = & \exp ( - w_0 t ) \\ X_1 (t) & = & \frac{w_0}{w_1 - w_0} [ \exp ( - w_0 t ) - \exp( - w_1 t )] \end{array} @]@ It follows that @(@ X_0 (0) = 1 @)@, @(@ X_1 (0) = 0 @)@ and @[@ \begin{array}{rcl} X_0^{(1)} (t) & = & - w_0 X_0 (t) \\ X_1^{(1)} (t) & = & + w_0 X_0 (t) - w_1 X_1 (t) \end{array} @]@ The example tests OdeGearControl using the relations above:

# include <cppad/cppad.hpp>
# include <cppad/utility/ode_gear_control.hpp>   // CppAD::OdeGearControl

namespace {
     // --------------------------------------------------------------
     class Fun {
     private:
           CPPAD_TESTVECTOR(double) w;
     public:
          // constructor
          Fun(const CPPAD_TESTVECTOR(double) &w_) : w(w_)
          { }

          // set f = x'(t)
          template <typename Scalar>
          void Ode(
               const Scalar                    &t,
               const CPPAD_TESTVECTOR(Scalar) &x,
               CPPAD_TESTVECTOR(Scalar)       &f)
          {     f[0] = - w[0] * x[0];
               f[1] = + w[0] * x[0] - w[1] * x[1];
          }

          void Ode_dep(
               const double                    &t,
               const CPPAD_TESTVECTOR(double) &x,
               CPPAD_TESTVECTOR(double)       &f_x)
          {     using namespace CppAD;

               size_t n  = x.size();
               CPPAD_TESTVECTOR(AD<double>) T(1);
               CPPAD_TESTVECTOR(AD<double>) X(n);
               CPPAD_TESTVECTOR(AD<double>) F(n);

               // set argument values
               T[0] = t;
               size_t i, j;
               for(i = 0; i < n; i++)
                    X[i] = x[i];

               // declare independent variables
               Independent(X);

               // compute f(t, x)
               this->Ode(T[0], X, F);

               // define AD function object
               ADFun<double> fun(X, F);

               // compute partial of f w.r.t x
               CPPAD_TESTVECTOR(double) dx(n);
               CPPAD_TESTVECTOR(double) df(n);
               for(j = 0; j < n; j++)
                    dx[j] = 0.;
               for(j = 0; j < n; j++)
               {     dx[j] = 1.;
                    df = fun.Forward(1, dx);
                    for(i = 0; i < n; i++)
                         f_x [i * n + j] = df[i];
                    dx[j] = 0.;
               }
          }
     };
}

bool OdeGearControl(void)
{     bool ok = true;     // initial return value
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     CPPAD_TESTVECTOR(double) w(2);
     w[0] = 10.;
     w[1] = 1.;
     Fun F(w);

     CPPAD_TESTVECTOR(double) xi(2);
     xi[0] = 1.;
     xi[1] = 0.;

     CPPAD_TESTVECTOR(double) eabs(2);
     eabs[0] = 1e-4;
     eabs[1] = 1e-4;

     // return values
     CPPAD_TESTVECTOR(double) ef(2);
     CPPAD_TESTVECTOR(double) maxabs(2);
     CPPAD_TESTVECTOR(double) xf(2);
     size_t                nstep;

     // input values
     size_t  M   = 5;
     double ti   = 0.;
     double tf   = 1.;
     double smin = 1e-8;
     double smax = 1.;
     double sini = eps99;
     double erel = 0.;

     xf = CppAD::OdeGearControl(F, M,
          ti, tf, xi, smin, smax, sini, eabs, erel, ef, maxabs, nstep);

     double x0 = exp(-w[0]*tf);
     ok &= NearEqual(x0, xf[0], 1e-4, 1e-4);
     ok &= NearEqual(0., ef[0], 1e-4, 1e-4);

     double x1 = w[0] * (exp(-w[0]*tf) - exp(-w[1]*tf))/(w[1] - w[0]);
     ok &= NearEqual(x1, xf[1], 1e-4, 1e-4);
     ok &= NearEqual(0., ef[1], 1e-4, 1e-4);

     return ok;
}

Input File: example/utility/ode_gear_control.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.22: The CppAD::vector Template Class

8.22.a: Syntax
# include <cppad/utility/vector.hpp>

8.22.b: Description
The include file cppad/vector.hpp defines the vector template class CppAD::vector. This is a 8.9: SimpleVector template class and in addition it has the features listed below:

8.22.c: Include
The file cppad/vector.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD include files.

8.22.d: capacity
If x is a CppAD::vector<Scalar> , and cap is a size_t object,
     cap = x.capacity()
sets cap to the number of Scalar objects that could fit in the memory currently allocated for x . Note that
     x.size() <= x.capacity()

8.22.e: Assignment
If x and y are CppAD::vector<Scalar> objects,
     y = x
has all the properties listed for a 8.9.g: simple vector assignment plus the following:

8.22.e.a: Check Size
The CppAD::vector template class will check that the size of y is either zero or equal to the size of x before doing the assignment. If this is not the case, CppAD::vector will use 8.1: ErrorHandler to generate an appropriate error report. Allowing for assignment to a vector with size zero makes the following code work:
     CppAD::vector<Scalar> y;
     y = x;

8.22.e.b: Return Reference
A reference to the vector y is returned. An example use of this reference is in multiple assignments of the form
     z = y = x

8.22.e.c: Move Semantics
If the C++ compiler supports move semantic rvalues using the && syntax, then move semantics are used during the vector assignment statement. This means that return values and other temporaries are not copied, but rather pointers are transferred.

8.22.f: Element Access
If x is a CppAD::vector<Scalar> object and i has type size_t,
     x[i]
has all the properties listed for a 8.9.k: simple vector element access plus the following:

The object x[i] has type Scalar (it is not a possibly different type that can be converted to Scalar ).

If i is not less than the size of x , CppAD::vector will use 8.1: ErrorHandler to generate an appropriate error report.

8.22.g: push_back
If x is a CppAD::vector<Scalar> object with size equal to n and s has type Scalar ,
     x.push_back(s)
extends the vector x so that its new size is n plus one and x[n] is equal to s (equal in the sense of the Scalar assignment operator).

8.22.h: push_vector
If x is a CppAD::vector<Scalar> object with size equal to n and v is a 8.9: simple vector with elements of type Scalar and size m ,
     x.push_vector(v)
extends the vector x so that its new size is n+m and x[n + i] is equal to v[i] for i = 0 , ... , m-1 (equal in the sense of the Scalar assignment operator).

8.22.i: Output
If x is a CppAD::vector<Scalar> object and os is an std::ostream, the operation
     os << x
outputs the vector x to the standard output stream os . The elements of x are enclosed at the beginning by a { character, separated by , characters, and enclosed at the end by a } character. It is assumed by this operation that if e is an object with type Scalar ,
     os << e
will output the value e to the standard output stream os .

8.22.j: resize
The call x.resize(n) sets the size of x equal to n . If n <= x.capacity() , no memory is freed or allocated, the capacity of x does not change, and the data in x is preserved. If n > x.capacity() , new memory is allocated and the data in x is lost (not copied to the new memory location).
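For example, the following minimal sketch (not one of the distributed examples) illustrates these resize semantics:

# include <cppad/utility/vector.hpp>

bool resize_sketch(void)
{     bool ok = true;
     CppAD::vector<double> x(3);
     x[0] = 1.; x[1] = 2.; x[2] = 3.;
     size_t cap = x.capacity();
     x.resize(2);             // 2 <= capacity: data and capacity preserved
     ok &= x.capacity() == cap;
     ok &= x[0] == 1. && x[1] == 2.;
     x.resize(cap + 1);       // > capacity: new memory, old data lost
     ok &= x.size() == cap + 1;
     return ok;
}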

8.22.k: clear
All memory allocated for the vector is freed and both its size and capacity are set to zero. This can be useful when using very large vectors and when checking for memory leaks (when there are global vectors); see the 8.22.n: memory discussion.

8.22.l: data
If x is a CppAD::vector<Scalar> object,
     x.data()
returns a pointer to a Scalar object such that for 0 <= i < x.size() , x[i] and x.data()[i] are the same Scalar object. If x is const, the pointer is const. If x.capacity() is zero, the value of the pointer is not defined. The pointer may no longer be valid after the following operations on x : its destructor, clear, resize, push_back, push_vector, and assignment to another vector when the original size of x is zero.

8.22.m: vectorBool
The file <cppad/vector.hpp> also defines the class CppAD::vectorBool. This has the same specifications as CppAD::vector<bool> with the following exceptions:

8.22.m.a: Memory
The class vectorBool conserves on memory (on the other hand, CppAD::vector<bool> is expected to be faster than vectorBool).

8.22.m.b: bit_per_unit
The static function call
     s = vectorBool::bit_per_unit()
returns the size_t value s which is equal to the number of boolean values (bits) that are packed into one operational unit. For example, a logical or acts on this many boolean values with one operation.

8.22.m.c: data
The 8.22.l: data function is not supported by vectorBool.

8.22.m.d: Output
The CppAD::vectorBool output operator prints each boolean value as a 0 for false, a 1 for true, and does not print any other output; i.e., the vector is written as a long sequence of zeros and ones with no surrounding {, } and with no separating commas or spaces.

8.22.m.e: Element Type
If x has type vectorBool and i has type size_t, the element access value x[i] has an unspecified type, referred to here as elementType , that supports the following operations (illustrated by the sketch after this list):
  1. elementType can be converted to bool; e.g. the following syntax is supported:
          static_cast<bool>( x[i] )
  2. elementType supports the assignment operator = where the right hand side is a bool or an elementType object; e.g., if y has type bool, the following syntax is supported:
          x[i] = y
  3. The result of an assignment to an elementType also has type elementType . Thus, if z has type bool, the following syntax is supported:
          z = x[i] = y
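The following minimal sketch (not one of the distributed examples) exercises the three operations above:

# include <cppad/utility/vector.hpp>

bool element_type_sketch(void)
{     bool ok = true;
     CppAD::vectorBool x(2);
     x[0] = true;                        // assignment from a bool
     x[1] = x[0];                        // assignment from an elementType
     bool z;
     z  = x[1] = false;                  // result of assignment is an elementType
     ok &= static_cast<bool>( x[0] );    // conversion to bool
     ok &= ! static_cast<bool>( x[1] );
     ok &= (z == false);
     return ok;
}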

8.22.n: Memory and Parallel Mode
These vectors use the multi-threaded fast memory allocator 8.23: thread_alloc :
  1. The routine 8.23.2: parallel_setup must be called before these vectors can be used 8.23.4: in parallel .
  2. Using these vectors affects the amount of memory 8.23.10: in_use and 8.23.11: available .
  3. Calling 8.22.k: clear makes the corresponding memory available (through thread_alloc) to the current thread; see the sketch after this list.
  4. Available memory can then be completely freed using 8.23.8: free_available .
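The following minimal sketch (not one of the distributed examples; it assumes a single thread and that no other CppAD allocations are outstanding) illustrates items 3 and 4:

# include <cppad/utility/vector.hpp>
# include <cppad/utility/thread_alloc.hpp>

bool clear_sketch(void)
{     bool ok = true;
     using CppAD::thread_alloc;
     size_t thread = thread_alloc::thread_num();
     thread_alloc::hold_memory(true);          // retain memory for re-use
     {     CppAD::vector<double> x(1000);      // memory inuse by this thread
          x.clear();                           // now available, not inuse
          ok &= thread_alloc::available(thread) > 0;
     }
     thread_alloc::free_available(thread);     // return it to the system
     ok &= thread_alloc::available(thread) == 0;
     thread_alloc::hold_memory(false);         // back to the default
     return ok;
}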


8.22.o: Example
The files 8.22.1: cppad_vector.cpp and 8.22.2: vector_bool.cpp each contain an example and test of this template class. They return true if they succeed and false otherwise.

8.22.p: Exercise
Create and run a program that contains the following code:
 
     CppAD::vector<double> x(3);
     size_t i;
     for(i = 0; i < 3; i++)
          x[i] = 4. - i;
     std::cout << "x = " << x << std::endl;

Input File: cppad/utility/vector.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.22.1: CppAD::vector Template Class: Example and Test

# include <cppad/utility/vector.hpp>
# include <cppad/utility/check_simple_vector.hpp>
# include <sstream> // sstream and string are used to test output operation
# include <string>

bool CppAD_vector(void)
{     bool ok = true;
     using CppAD::vector;     // so can use vector instead of CppAD::vector
     typedef double Type;     // change double to test other types

     // check Simple Vector specifications
     CppAD::CheckSimpleVector< Type, vector<Type> >();

     vector<Type> x;          // default constructor
     ok &= (x.size() == 0);

     x.resize(2);             // resize and set element assignment
     ok &= (x.size() == 2);
     x[0] = Type(1);
     x[1] = Type(2);

     vector<Type> y(2);       // sizing constructor
     ok &= (y.size() == 2);

     const vector<Type> z(x); // copy constructor and const element access
     ok &= (z.size() == 2);
     ok &= ( (z[0] == Type(1)) && (z[1] == Type(2)) );

     x[0] = Type(2);          // modify, assignment changes x
     ok &= (x[0] == Type(2));

     x = y = z;               // vector assignment
     ok &= ( (x[0] == Type(1)) && (x[1] == Type(2)) );
     ok &= ( (y[0] == Type(1)) && (y[1] == Type(2)) );
     ok &= ( (z[0] == Type(1)) && (z[1] == Type(2)) );

     // test of output
     std::string        correct= "{ 1, 2 }";
     std::string        str;
     std::ostringstream buf;
     buf << z;
     str = buf.str();
     ok &= (str == correct);

     // test resize(1), resize(0), capacity, and clear
     size_t i = x.capacity();
     ok      &= i >= 2;
     x.resize(1);
     ok      &= x[0] == Type(1);
     ok      &= i == x.capacity();
     x.resize(0);
     ok      &= i == x.capacity();
     x.clear();
     ok      &= 0 == x.capacity();

     // test of push_back scalar and capacity
     size_t N = 100;
     for(i = 0; i < N; i++)
     {     size_t old_capacity = x.capacity();
          x.push_back( Type(i) );
          ok &= (i+1) == x.size();
          ok &= i < x.capacity();
          ok &= (i == old_capacity) || old_capacity == x.capacity();
     }
     for(i = 0; i < N; i++)
          ok &= ( x[i] == Type(i) );

     // test of data
     Type* data = x.data();
     for(i = 0; i < N; i++)
     {     ok &= data[i] == Type(i);
          data[i] = Type(N - i);
          ok &= x[i] == Type(N - i);
     }

     // test of push_vector
     x.push_vector(x);
     ok &= (x.size() == 2 * N);
     for(i = 0; i < N; i++)
     {     ok &= x[i] == Type(N - i);
          ok &= x[i+N] == Type(N - i);
     }


     return ok;
}

Input File: example/utility/cppad_vector.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.22.2: CppAD::vectorBool Class: Example and Test

# include <cppad/utility/vector.hpp>
# include <cppad/utility/check_simple_vector.hpp>
# include <sstream> // sstream and string are used to test output operation
# include <string>

bool vectorBool(void)
{     bool ok = true;
     using CppAD::vectorBool;

     vectorBool x;          // default constructor
     ok &= (x.size() == 0);

     x.resize(2);             // resize and set element assignment to bool
     ok &= (x.size() == 2);
     x[0] = false;
     x[1] = true;

     vectorBool y(2);       // sizing constructor
     ok &= (y.size() == 2);

     const vectorBool z(x); // copy constructor and const element access
     ok &= (z.size() == 2);
     ok &= ( (z[0] == false) && (z[1] == true) );

     x[0] = true;           // modify, assignment changes x
     ok &= (x[0] == true);

     // resize x to zero and check that vector assignment works for both
     // size zero and matching sizes
     x.resize(0);
     ok &= (x.size() == 0);
     ok &= (y.size() == z.size());
     //
     x = y = z;
     ok &= ( (x[0] == false) && (x[1] == true) );
     ok &= ( (y[0] == false) && (y[1] == true) );
     ok &= ( (z[0] == false) && (z[1] == true) );

     // test of push_vector
     y.push_vector(z);
     ok &= y.size() == 4;
     ok &= ( (y[0] == false) && (y[1] == true) );
     ok &= ( (y[2] == false) && (y[3] == true) );

     y[1] = false;           // element assignment to another element
     x[0] = y[1];
     ok &= (x[0] == false);

     // test of output
     std::string        correct= "01";
     std::string        str;
     std::ostringstream buf;
     buf << z;
     str = buf.str();
     ok &= (str == correct);

     // test resize(0), capacity, and clear
     size_t i = x.capacity();
     ok      &= i > 0;
     x.resize(0);
     ok      &= i == x.capacity();
     x.clear();
     ok      &= 0 == x.capacity();

     // test of push_back element
     for(i = 0; i < 100; i++)
          x.push_back( (i % 3) != 0 );
     ok &= (x.size() == 100);
     for(i = 0; i < 100; i++)
          ok &= ( x[i] == ((i % 3) != 0) );

     // check that vectorBool is
     // a simple vector class with elements of type bool
     CppAD::CheckSimpleVector< bool, vectorBool >();

     return ok;
}

Input File: example/utility/vector_bool.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.23: A Fast Multi-Threading Memory Allocator

8.23.a: Syntax
# include <cppad/thread_alloc.hpp>


8.23.b: Purpose
The C++ new and delete operators are thread safe, but this means that a thread may have to wait for a lock on these operations. Once memory is obtained for a thread, the thread_alloc memory allocator keeps that memory 8.23.11: available for the thread so that it can be re-used without waiting for a lock. All the CppAD memory allocations use this utility. The 8.23.8: free_available function should be used to return memory to the system (once it is no longer required by a thread).

8.23.c: Include
The routines in the sections below are defined by cppad/thread_alloc.hpp. This file is included by cppad/cppad.hpp, but it can also be included separately without the rest of the CppAD include files.

8.23.d: Contents
thread_alloc.cpp: 8.23.1: Fast Multi-Threading Memory Allocator: Example and Test
ta_parallel_setup: 8.23.2: Setup thread_alloc For Use in Multi-Threading Environment
ta_num_threads: 8.23.3: Get Number of Threads
ta_in_parallel: 8.23.4: Is The Current Execution in Parallel Mode
ta_thread_num: 8.23.5: Get the Current Thread Number
ta_get_memory: 8.23.6: Get At Least A Specified Amount of Memory
ta_return_memory: 8.23.7: Return Memory to thread_alloc
ta_free_available: 8.23.8: Free Memory Currently Available for Quick Use by a Thread
ta_hold_memory: 8.23.9: Control When Thread Alloc Retains Memory For Future Use
ta_inuse: 8.23.10: Amount of Memory a Thread is Currently Using
ta_available: 8.23.11: Amount of Memory Available for Quick Use by a Thread
ta_create_array: 8.23.12: Allocate An Array and Call Default Constructor for its Elements
ta_delete_array: 8.23.13: Deallocate An Array and Call Destructor for its Elements
ta_free_all: 8.23.14: Free All Memory That Was Allocated for Use by thread_alloc

Input File: omh/thread_alloc.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.23.1: Fast Multi-Threading Memory Allocator: Example and Test
# include <cppad/utility/thread_alloc.hpp>
# include <vector>
# include <limits>


namespace { // Begin empty namespace



bool raw_allocate(void)
{     bool ok = true;
     using CppAD::thread_alloc;
     size_t thread;

     // check that no memory is initially inuse
     ok &= thread_alloc::free_all();

     // amount of static memory used by thread zero
     size_t static_inuse = 0;

     // repeatedly allocate enough memory for at least two size_t values.
     size_t min_size_t = 2;
     size_t min_bytes  = min_size_t * sizeof(size_t);
     size_t n_outer    = 10;
     size_t n_inner    = 5;
     for(size_t i = 0; i < n_outer; i++)
     {     // Do not use CppAD::vector here because its use of thread_alloc
          // complicates the inuse and available results.
          std::vector<void*> v_ptr(n_inner);
          // cap_bytes will be set by get_memory
          size_t cap_bytes = 0; // set here to avoid MSC warning
          for(size_t j = 0; j < n_inner; j++)
          {     // allocate enough memory for min_size_t size_t objects
               v_ptr[j]    = thread_alloc::get_memory(min_bytes, cap_bytes);
               size_t* ptr = reinterpret_cast<size_t*>(v_ptr[j]);
               // determine the number of size_t values we have obtained
               size_t  cap_size_t = cap_bytes / sizeof(size_t);
               ok                &= min_size_t <= cap_size_t;
               // use placement new to call the size_t copy constructor
               for(size_t k = 0; k < cap_size_t; k++)
                    new(ptr + k) size_t(i + j + k);
               // check that the constructor worked
               for(size_t k = 0; k < cap_size_t; k++)
                    ok &= ptr[k] == (i + j + k);
          }
          // check that n_inner * cap_bytes are inuse and none are available
          thread = thread_alloc::thread_num();
          ok &= thread_alloc::inuse(thread) == n_inner*cap_bytes + static_inuse;
          ok &= thread_alloc::available(thread) == 0;
          // return the memory to thread_alloc
          for(size_t j = 0; j < n_inner; j++)
               thread_alloc::return_memory(v_ptr[j]);
          // check that n_inner * cap_bytes are now available
          // and none are in use
          ok &= thread_alloc::inuse(thread) == static_inuse;
          ok &= thread_alloc::available(thread) == n_inner * cap_bytes;
     }
     thread_alloc::free_available(thread);

     // check that the tests have not held onto memory
     ok &= thread_alloc::free_all();

     return ok;
}

class my_char {
public:
     char ch_ ;
     my_char(void) : ch_(' ')
     { }
     my_char(const my_char& my_ch) : ch_(my_ch.ch_)
     { }
};

bool type_allocate(void)
{     bool ok = true;
     using CppAD::thread_alloc;
     size_t i;

     // check initial memory values
     size_t thread = thread_alloc::thread_num();
     ok &= thread == 0;
     ok &= thread_alloc::free_all();
     size_t static_inuse = 0;

     // initial allocation of an array
     size_t  size_min  = 3;
     size_t  size_one;
     my_char *array_one  =
          thread_alloc::create_array<my_char>(size_min, size_one);

     // check the default values and change them to 'x'
     for(i = 0; i < size_one; i++)
     {     ok &= array_one[i].ch_ == ' ';
          array_one[i].ch_ = 'x';
     }

     // now create a longer array
     size_t size_two;
     my_char *array_two =
          thread_alloc::create_array<my_char>(2 * size_min, size_two);

     // check the values in array one
     for(i = 0; i < size_one; i++)
          ok &= array_one[i].ch_ == 'x';

     // check the values in array two
     for(i = 0; i < size_two; i++)
          ok &= array_two[i].ch_ == ' ';

     // check the amount of inuse and available memory
     // (an extra size_t value is used for each memory block).
     size_t check = static_inuse + sizeof(my_char)*(size_one + size_two);
     ok   &= thread_alloc::inuse(thread) - check < sizeof(my_char);
     ok   &= thread_alloc::available(thread) == 0;

     // delete the arrays
     thread_alloc::delete_array(array_one);
     thread_alloc::delete_array(array_two);
     ok   &= thread_alloc::inuse(thread) == static_inuse;
     check = sizeof(my_char)*(size_one + size_two);
     ok   &= thread_alloc::available(thread) - check < sizeof(my_char);

     // free the memory for use by this thread
     thread_alloc::free_available(thread);

     // check that the tests have not held onto memory
     ok &= thread_alloc::free_all();

     return ok;
}

} // End empty namespace

bool check_alignment(void)
{     bool ok = true;
     using CppAD::thread_alloc;

     // number of binary digits in a size_t value
     size_t n_digit = std::numeric_limits<size_t>::digits;

     // must be a multiple of 8
     ok &= (n_digit % 8) == 0;

     // number of bytes in a size_t value
     size_t n_byte  = n_digit / 8;

     // check raw allocation -------------------------------------------------
     size_t min_bytes = 1;
     size_t cap_bytes;
     void* v_ptr = thread_alloc::get_memory(min_bytes, cap_bytes);

     // convert to a size_t value
     size_t v_size_t = reinterpret_cast<size_t>(v_ptr);

     // check that it is aligned
     ok &= (v_size_t % n_byte) == 0;

     // return memory to available pool
     thread_alloc::return_memory(v_ptr);

     // check array allocation ----------------------------------------------
     size_t size_min = 1;
     size_t size_out;
     my_char *array_ptr =
          thread_alloc::create_array<my_char>(size_min, size_out);

     // convert to a size_t value
     size_t array_size_t = reinterpret_cast<size_t>(array_ptr);

     // check that it is aligned
     ok &= (array_size_t % n_byte) == 0;

     // return memory to available pool
     thread_alloc::delete_array(array_ptr);

     return ok;
}


bool thread_alloc(void)
{     bool ok  = true;
     using CppAD::thread_alloc;

     // check that there is only one thread
     ok  &= thread_alloc::num_threads() == 1;
     // so thread number must be zero
     ok  &= thread_alloc::thread_num() == 0;
     // and we are in sequential execution mode
     ok  &= thread_alloc::in_parallel() == false;

     // Instruct thread_alloc to hold onto memory.  This makes memory
     // allocation faster (especially when there are multiple threads).
     thread_alloc::hold_memory(true);

     // run raw allocation tests
     ok &= raw_allocate();

     // run typed allocation tests
     ok &= type_allocate();

     // check alignment
     ok &= check_alignment();

     // return allocator to its default mode
     thread_alloc::hold_memory(false);
     return ok;
}


Input File: example/utility/thread_alloc.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.23.2: Setup thread_alloc For Use in Multi-Threading Environment

8.23.2.a: Syntax
thread_alloc::parallel_setup(num_threads, in_parallel, thread_num)

8.23.2.b: Purpose
By default there is only one thread and all execution is in sequential mode; i.e., multiple threads are not sharing the same memory (not in parallel mode). This routine informs thread_alloc about the multi-threading environment.

8.23.2.c: Speed
It should be faster, even when num_threads is equal to one, for thread_alloc to hold onto memory. This can be accomplished using the function call
     thread_alloc::hold_memory(true)
see 8.23.9: hold_memory .

8.23.2.d: num_threads
This argument has prototype
     size_t num_threads
and must be greater than zero. It specifies the number of threads that are sharing memory. The case num_threads == 1 is a special case that is used to terminate a multi-threading environment.

8.23.2.e: in_parallel
This function has prototype
     bool in_parallel(void)
It must return true if there is more than one thread currently executing. Otherwise it can return false.

In the special case where num_threads == 1 , the routine in_parallel is not used.

8.23.2.f: thread_num
This function has prototype
     size_t thread_num(void)
It must return a thread number that uniquely identifies the currently executing thread. Furthermore
     0 <= thread_num() < num_threads
In the special case where num_threads == 1 , the routine thread_num is not used.

Note that this function is called by other routines so, as soon as a new thread is executing, one must be certain that thread_num() will work for that thread.

8.23.2.g: Restrictions
The function parallel_setup must be called before the program enters 8.23.4: parallel execution mode. In addition, this function cannot be called while in parallel mode.

8.23.2.h: Example
The files 7.2.4: simple_ad_openmp.cpp , 7.2.5: simple_ad_bthread.cpp , and 7.2.6: simple_ad_pthread.cpp contain examples and tests that use this function.
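For orientation only, here is a minimal sketch (assuming OpenMP; the helper names in_parallel and thread_num are chosen for this example and error handling is omitted) of the calling pattern:

# include <cppad/utility/thread_alloc.hpp>
# include <omp.h>

namespace {
     // true if the execution is currently in parallel mode
     bool in_parallel(void)
     {     return omp_in_parallel() != 0; }
     // unique identifier for the currently executing thread
     size_t thread_num(void)
     {     return static_cast<size_t>( omp_get_thread_num() ); }
}

void setup_sketch(size_t num_threads)
{     omp_set_num_threads( int(num_threads) );
     // must be called before entering parallel execution mode
     CppAD::thread_alloc::parallel_setup(
          num_threads, in_parallel, thread_num
     );
     CppAD::thread_alloc::hold_memory(true);
}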
Input File: cppad/utility/thread_alloc.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.23.3: Get Number of Threads

8.23.3.a: Syntax
number = thread_alloc::num_threads()

8.23.3.b: Purpose
Determine the number of threads as set during 8.23.2: parallel_setup .

8.23.3.c: number
The return value number has prototype
     size_t number
and is equal to the value of 8.23.2.d: num_threads in the previous call to parallel_setup . If there was no such previous call, the value one is returned.

8.23.3.d: Example
The example and test 8.23.1: thread_alloc.cpp uses this routine.
Input File: cppad/utility/thread_alloc.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.23.4: Is The Current Execution in Parallel Mode

8.23.4.a: Syntax
flag = thread_alloc::in_parallel()

8.23.4.b: Purpose
Some of the 8.23: thread_alloc allocation routines have different specifications for parallel (not sequential) execution mode. This routine enables you to determine if the current execution mode is sequential or parallel.

8.23.4.c: flag
The return value has prototype
     bool flag
It is true if the current execution is in parallel mode (possibly multi-threaded) and false otherwise (sequential mode).

8.23.4.d: Example
8.23.1: thread_alloc.cpp
Input File: cppad/utility/thread_alloc.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.23.5: Get the Current Thread Number

8.23.5.a: Syntax
thread = thread_alloc::thread_num()

8.23.5.b: Purpose
Some of the 8.23: thread_alloc allocation routines have a thread number. This routine enables you to determine the current thread.

8.23.5.c: thread
The return value thread has prototype
     size_t thread
and is the currently executing thread number.

8.23.5.d: Example
8.23.1: thread_alloc.cpp
Input File: cppad/utility/thread_alloc.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.23.6: Get At Least A Specified Amount of Memory

8.23.6.a: Syntax
v_ptr = thread_alloc::get_memory(min_bytes, cap_bytes)

8.23.6.b: Purpose
Use 8.23: thread_alloc to obtain a minimum number of bytes of memory (for use by the 8.23.5: current thread ).

8.23.6.c: min_bytes
This argument has prototype
     size_t min_bytes
It specifies the minimum number of bytes to allocate. This value must be less than
     std::numeric_limits<size_t>::max() / 2

8.23.6.d: cap_bytes
This argument has prototype
     size_t& cap_bytes
Its input value does not matter. Upon return, it is the actual number of bytes (capacity) that have been allocated for use,
     min_bytes <= cap_bytes

8.23.6.e: v_ptr
The return value v_ptr has prototype
     void* v_ptr
It points to the beginning of the cap_bytes of memory allocated for use.
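For example, a minimal sketch (not one of the distributed examples) of the allocation pattern:

# include <cppad/utility/thread_alloc.hpp>

bool get_memory_sketch(void)
{     bool ok = true;
     size_t cap_bytes;
     // request space for at least ten double values
     void* v_ptr = CppAD::thread_alloc::get_memory(
          10 * sizeof(double), cap_bytes
     );
     ok &= 10 * sizeof(double) <= cap_bytes;
     double* d_ptr = reinterpret_cast<double*>(v_ptr);
     d_ptr[0] = 1.;   // cap_bytes / sizeof(double) values fit here
     CppAD::thread_alloc::return_memory(v_ptr);
     return ok;
}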

8.23.6.f: Allocation Speed
This allocation should be faster if the following conditions hold:
  1. The memory allocated by a previous call to get_memory is currently available for use.
  2. The current min_bytes is between the previous min_bytes and previous cap_bytes .


8.23.6.g: Alignment
We call a memory allocation aligned if the address is a multiple of the number of bytes in a size_t value. If the system new allocator is aligned, then the v_ptr pointer is also aligned.

8.23.6.h: Example
8.23.1: thread_alloc.cpp
Input File: cppad/utility/thread_alloc.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.23.7: Return Memory to thread_alloc

8.23.7.a: Syntax
thread_alloc::return_memory(v_ptr)

8.23.7.b: Purpose
If 8.23.9: hold_memory is false, the memory is returned to the system. Otherwise, the memory is retained by 8.23: thread_alloc for quick future use by the thread that allocated the memory.

8.23.7.c: v_ptr
This argument has prototype
     void* v_ptr
It must be a pointer to memory that is currently in use; i.e., obtained by a previous call to 8.23.6: get_memory and not yet returned.

8.23.7.d: Thread
Either the 8.23.5: current thread must be the same as during the corresponding call to 8.23.6: get_memory , or the current execution mode must be sequential (not 8.23.4: parallel ).

8.23.7.e: NDEBUG
If NDEBUG is defined, v_ptr is not checked (this is faster). Otherwise, a list of in use pointers is searched to make sure that v_ptr is in the list.

8.23.7.f: Example
8.23.1: thread_alloc.cpp
Input File: cppad/utility/thread_alloc.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.23.8: Free Memory Currently Available for Quick Use by a Thread

8.23.8.a: Syntax
thread_alloc::free_available(thread)

8.23.8.b: Purpose
Return to the system all the memory that is currently being 8.23.9: held for quick use by the specified thread.

8.23.8.b.a: Extra Memory
In the case where thread > 0 , some extra memory is used to track allocations by the specified thread. If
     thread_alloc::inuse(thread) == 0
the extra memory is also returned to the system.

8.23.8.c: thread
This argument has prototype
     size_t thread
Either 8.23.5: thread_num must be the same as thread , or the current execution mode must be sequential (not 8.23.4: parallel ).

8.23.8.d: Example
8.23.1: thread_alloc.cpp
Input File: cppad/utility/thread_alloc.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.23.9: Control When Thread Alloc Retains Memory For Future Use

8.23.9.a: Syntax
thread_alloc::hold_memory(value)

8.23.9.b: Purpose
It should be faster, even when num_threads is equal to one, for thread_alloc to hold onto memory. Calling hold_memory with value equal to true instructs thread_alloc to hold onto memory, and put it in the 8.23.11: available pool, after each call to 8.23.7: return_memory .

8.23.9.c: value
If value is true, thread_alloc will hold onto memory for future quick use. If it is false, future calls to 8.23.7: return_memory will return the corresponding memory to the system. By default (when hold_memory has not been called) thread_alloc does not hold onto memory.

8.23.9.d: free_available
Memory that is being held by thread_alloc can be returned to the system using 8.23.8: free_available .
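A minimal sketch (not one of the distributed examples) of the intended usage pattern:

# include <cppad/utility/thread_alloc.hpp>

void hold_memory_sketch(void)
{     using CppAD::thread_alloc;
     thread_alloc::hold_memory(true);      // retain memory for quick re-use
     size_t cap_bytes;
     void* v_ptr = thread_alloc::get_memory(100, cap_bytes);
     thread_alloc::return_memory(v_ptr);   // held in the available pool
     thread_alloc::hold_memory(false);     // restore the default behavior
     thread_alloc::free_available( thread_alloc::thread_num() );
}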
Input File: cppad/utility/thread_alloc.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.23.10: Amount of Memory a Thread is Currently Using

8.23.10.a: Syntax
num_bytes = thread_alloc::inuse(thread)

8.23.10.b: Purpose
Memory being managed by 8.23: thread_alloc has two states, currently in use by the specified thread, and quickly available for future use by the specified thread. This function informs the program how much memory is in use.

8.23.10.c: thread
This argument has prototype
     size_t thread
Either 8.23.5: thread_num must be the same as thread , or the current execution mode must be sequential (not 8.23.4: parallel ).

8.23.10.d: num_bytes
The return value has prototype
     size_t num_bytes
It is the number of bytes currently in use by the specified thread.

8.23.10.e: Example
8.23.1: thread_alloc.cpp
Input File: cppad/utility/thread_alloc.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.23.11: Amount of Memory Available for Quick Use by a Thread

8.23.11.a: Syntax
num_bytes = thread_alloc::available(thread)

8.23.11.b: Purpose
Memory being managed by 8.23: thread_alloc has two states, currently in use by the specified thread, and quickly available for future use by the specified thread. This function informs the program how much memory is available.

8.23.11.c: thread
This argument has prototype
     size_t thread
Either 8.23.5: thread_num must be the same as thread , or the current execution mode must be sequential (not 8.23.4: parallel ).

8.23.11.d: num_bytes
The return value has prototype
     size_t num_bytes
It is the number of bytes currently available for use by the specified thread.

8.23.11.e: Example
8.23.1: thread_alloc.cpp
Input File: cppad/utility/thread_alloc.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.23.12: Allocate An Array and Call Default Constructor for its Elements

8.23.12.a: Syntax
array = thread_alloc::create_array<Type>(size_min, size_out)

8.23.12.b: Purpose
Create a new raw array using the 8.23: thread_alloc memory allocator (which works well in a multi-threading environment) and call the default constructor for each element.

8.23.12.c: Type
The type of the elements of the array.

8.23.12.d: size_min
This argument has prototype
     size_t size_min
This is the minimum number of elements in the resulting array .

8.23.12.e: size_out
This argument has prototype
     size_t& size_out
The input value of this argument does not matter. Upon return, it is the actual number of elements in array ( size_min <= size_out ).

8.23.12.f: array
The return value array has prototype
     Type* array
It is an array with size_out elements. The default constructor for Type is used to initialize the elements of array . Note that 8.23.13: delete_array should be used to destroy the array when it is no longer needed.

8.23.12.g: Delta
The amount of memory 8.23.10: inuse by the current thread will increase by delta where
     sizeof(Type) * (size_out + 1) > delta >= sizeof(Type) * size_out
The 8.23.11: available memory will decrease by delta (and the allocation will be faster) if a previous allocation with size_min between its current value and size_out is available.

8.23.12.h: Alignment
We call a memory allocation aligned if the address is a multiple of the number of bytes in a size_t value. If the system new allocator is aligned, then the array pointer is also aligned.
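A minimal sketch (not one of the distributed examples) of the create_array / delete_array pattern:

# include <cppad/utility/thread_alloc.hpp>

bool create_array_sketch(void)
{     bool ok = true;
     size_t size_out;
     // at least 10 elements, each initialized by the default constructor
     double* array = CppAD::thread_alloc::create_array<double>(10, size_out);
     ok &= size_out >= 10;
     for(size_t i = 0; i < size_out; i++)
          array[i] = double(i);
     CppAD::thread_alloc::delete_array(array);
     return ok;
}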

8.23.12.i: Example
8.23.1: thread_alloc.cpp
Input File: cppad/utility/thread_alloc.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.23.13: Deallocate An Array and Call Destructor for its Elements

8.23.13.a: Syntax
thread_alloc::delete_array(array)

8.23.13.b: Purpose
Returns the memory corresponding to an array (created by 8.23.12: create_array ) to the 8.23.11: available memory pool for the current thread.

8.23.13.c: Type
The type of the elements of the array.

8.23.13.d: array
The argument array has prototype
     Type* array
It is a value returned by 8.23.12: create_array and not yet deleted. The Type destructor is called for each element in the array.

8.23.13.e: Thread
The 8.23.5: current thread must be the same as when 8.23.12: create_array returned the value array . There is an exception to this rule: when the current execution mode is sequential (not 8.23.4: parallel ) the current thread number does not matter.

8.23.13.f: Delta
The amount of memory 8.23.10: inuse will decrease by delta , and the 8.23.11: available memory will increase by delta , where 8.23.12.g: delta is the same as for the corresponding call to create_array.

8.23.13.g: Example
8.23.1: thread_alloc.cpp
Input File: cppad/utility/thread_alloc.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.23.14: Free All Memory That Was Allocated for Use by thread_alloc

8.23.14.a: Syntax
ok = thread_alloc::free_all()

8.23.14.b: Purpose
Returns to the system all of the memory that was used by thread_alloc.

8.23.14.c: ok
The return value ok has prototype
     bool ok
Its value will be true if all the memory can be freed. This requires that for all thread indices, there is no memory 8.23.10: inuse ; i.e.,
     0 == thread_alloc::inuse(thread)
Otherwise, the return value will be false.

8.23.14.d: Restrictions
This function cannot be called while in parallel mode.

8.23.14.e: Example
8.23.1: thread_alloc.cpp
Input File: cppad/utility/thread_alloc.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.24: Returns Indices that Sort a Vector

8.24.a: Syntax
# include <cppad/utility/index_sort.hpp>
index_sort(keys, ind)

8.24.b: keys
The argument keys has prototype
     const VectorKey& keys
where VectorKey is a 8.9: SimpleVector class with elements that support the < operation.

8.24.c: ind
The argument ind has prototype
     VectorSize& ind
where VectorSize is a 8.9: SimpleVector class with elements of type size_t. The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

8.24.c.a: Input
The size of ind must be the same as the size of keys and the values of its input elements do not matter.

8.24.c.b: Return
Upon return, ind is a permutation of the set of indices that yields increasing order for keys . In other words, for all i != j ,
     ind[i] != ind[j]
and for i = 0 , ... , size-2 ,
     ( keys[ ind[i+1] ] < keys[ ind[i] ] ) == false
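
For instance, here is a minimal sketch (ours, not the distributed example below) of the specification above:

# include <cppad/utility/index_sort.hpp>
# include <vector>

bool index_sort_sketch(void)
{     std::vector<size_t> keys(3), ind(3);
     keys[0] = 3; keys[1] = 1; keys[2] = 2;
     // the input values of ind do not matter
     CppAD::index_sort(keys, ind);
     // increasing order for keys is keys[1], keys[2], keys[0]
     return ind[0] == 1 && ind[1] == 2 && ind[2] == 0;
}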

8.24.d: Example
The file 8.24.1: index_sort.cpp contains an example and test of this routine. It returns true if it succeeds and false otherwise.
Input File: cppad/utility/index_sort.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.24.1: Index Sort: Example and Test
# include <cppad/utility/index_sort.hpp>
# include <cppad/utility/vector.hpp>
# include <valarray>
# include <vector>


namespace{
     // class that uses < to compare a pair of size_t values
     class Key {
     public:
          size_t first_;
          size_t second_;
          //
          Key(void)
          { }
          //
          Key(size_t first, size_t second)
          : first_(first), second_(second)
          { }
          //
          bool operator<(const Key& other) const
          {     if( first_ == other.first_ )
                    return second_ < other.second_;
               return first_ < other.first_;
          }
     };

     template <class VectorKey, class VectorSize>
     bool vector_case(void)
     {     bool ok = true;
          size_t i, j;
          size_t first[]  =  { 4, 4, 3, 3, 2, 2, 1, 1};
          size_t second[] = { 0, 1, 0, 1, 0, 1, 0, 1};
          size_t size     = sizeof(first) / sizeof(first[0]);

          VectorKey keys(size);
          for(i = 0; i < size; i++)
               keys[i] = Key(first[i], second[i]);

          VectorSize ind(size);
          CppAD::index_sort(keys, ind);

          // check that all the indices are different
          for(i = 0; i < size; i++)
          {     for(j = 0; j < size; j++)
                    ok &= (i == j) | (ind[i] != ind[j]);
          }

          // check for increasing order
          for(i = 0; i < size-1; i++)
          {     if( first[ ind[i] ] == first[ ind[i+1] ] )
                    ok &= second[ ind[i] ] <= second[ ind[i+1] ];
               else     ok &= first[ ind[i] ] < first[ ind[i+1] ];
          }

          return ok;
     }
}

bool index_sort(void)
{     bool ok = true;

     // some example simple vector template classes
     ok &= vector_case<  std::vector<Key>,  std::valarray<size_t> >();
     ok &= vector_case< std::valarray<Key>, CppAD::vector<size_t> >();
     ok &= vector_case< CppAD::vector<Key>,   std::vector<size_t> >();

     return ok;
}

Input File: example/utility/index_sort.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.25: Convert Certain Types to a String

8.25.a: Syntax
# include <cppad/utility/to_string.hpp>
s = to_string(value) .

8.25.b: See Also
4.7.7: base_to_string , 4.3.3: ad_to_string

8.25.c: Purpose
This routine is similar to the C++11 routine std::to_string with the following differences:
  1. It works with C++98.
  2. It has been extended to the fundamental floating point types.
  3. It has specifications for extending to an arbitrary type; see 4.7.7: base_to_string .
  4. If <cppad/cppad.hpp> is included, and it has been extended to a Base type, it automatically extends to the 12.4.c: AD types above Base .
  5. For integer types, conversion to a string is exact. For floating point types, conversion to a string yields a value that has relative error within machine epsilon.
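
As a minimal usage sketch (ours; the distributed example appears below):

# include <cppad/utility/to_string.hpp>

bool to_string_sketch(void)
{     // conversion of an integer type is exact
     std::string s_int = CppAD::to_string(42);
     // conversion of a floating point type is accurate to within round off
     std::string s_flt = CppAD::to_string(0.25);
     return s_int == "42";
}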


8.25.d: value

8.25.d.a: Integer
The argument value can have the following prototype
     const Integer& value
where Integer is any of the fundamental integer types; e.g., short int and unsigned long. Note that if C++11 is supported by this compilation, unsigned long long is also a fundamental integer type.

8.25.d.b: Float
The argument value can have the following prototype
     const Float& value
where Float is any of the fundamental floating point types; i.e., float, double, and long double.

8.25.e: s
The return value has prototype
     std::string s
and contains a representation of the specified value .

8.25.e.a: Integer
If value is an Integer , the representation is equivalent to os << value where os is an std::ostringstream.

8.25.e.b: Float
If value is a Float , enough digits are used in the representation so that the result is accurate to within round off error.

8.25.f: Example
The file 8.25.1: to_string.cpp contains an example and test of this routine. It returns true if it succeeds and false otherwise.
Input File: cppad/utility/to_string.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.25.1: to_string: Example and Test

// Examples with fundamental types
# include <cppad/utility/to_string.hpp>
namespace {
     template <class Integer>
     Integer string2signed(const std::string& s)
     {     Integer result = 0;
          size_t index   = 0;
          if( s[0] == '-' )
               ++index;
          while( index < s.size() )
               result = Integer(10 * result + s[index++] - '0');
          if( s[0] == '-' )
               return - result;
          return result;
     }
     template <class Integer>
     Integer string2unsigned(const std::string& s)
     {     Integer result = 0;
          size_t index   = 0;
          while( index < s.size() )
               result = Integer(10 * result + s[index++] - '0');
          return result;
     }
     template <class Integer>
     bool signed_integer(void)
     {     bool ok = true;
          //
          Integer max    = std::numeric_limits<Integer>::max();
          std::string s  = CppAD::to_string(max);
          Integer check  = string2signed<Integer>(s);
          ok            &= max == check;
          //
          Integer min    = std::numeric_limits<Integer>::min();
          s              = CppAD::to_string(min);
          check          = string2signed<Integer>(s);
          ok            &= min == check;
          //
          return ok;
     }
     template <class Integer>
     bool unsigned_integer(void)
     {     bool ok = true;
          //
          Integer max    = std::numeric_limits<Integer>::max();
          std::string s  = CppAD::to_string(max);
          Integer check  = string2unsigned<Integer>(s);
          ok            &= max == check;
          ok            &= std::numeric_limits<Integer>::min() == 0;
          //
          return ok;
     }
     template <class Float>
     bool floating(void)
     {     bool  ok  = true;
          Float eps = std::numeric_limits<Float>::epsilon();
          Float pi  = Float( 4.0 * std::atan(1.0) );
          //
          std::string s = CppAD::to_string( pi );
          Float check   = Float( std::atof( s.c_str() ) );
          ok           &= std::fabs( check / pi - 1.0 ) <= 2.0 * eps;
          //
          return ok;
     }
}

// Examples with AD types
# include <cppad/cppad.hpp>
namespace {
     template <class Base>
     bool ad_floating(void)
     {     bool  ok  = true;
          Base eps  = std::numeric_limits<Base>::epsilon();
          Base pi   = Base( 4.0 * std::atan(1.0) );
          //
          std::string s = CppAD::to_string( CppAD::AD<Base>( pi ) );
          Base check    = Base( std::atof( s.c_str() ) );
          ok           &= fabs( check / pi - Base(1.0) ) <= Base( 2.0 ) * eps;
          //
          return ok;
     }
}

// Test driver
bool to_string(void)
{     bool ok = true;

     ok &= unsigned_integer<unsigned short>();
     ok &= signed_integer<signed int>();
     //
     ok &= unsigned_integer<unsigned long>();
     ok &= signed_integer<signed long>();
# if CPPAD_USE_CPLUSPLUS_2011
     ok &= unsigned_integer<unsigned long long>();
     ok &= signed_integer<signed long long>();
# endif
     //
     ok &= floating<float>();
     ok &= floating<double>();
     ok &= floating<long double>();
     //
     ok &= ad_floating<float>();
     ok &= ad_floating<double>();
     //
     return ok;
}

Input File: example/utility/to_string.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.26: Union of Standard Sets

8.26.a: Syntax
result = set_union(left, right)

8.26.b: Purpose
This is a simplified (and restricted) interface to the std::set_union operation.

8.26.c: Element
This is the type of the elements of the sets.

8.26.d: left
This argument has prototype
     const std::set<Element>& left

8.26.e: right
This argument has prototype
     const std::set<Element>& right

8.26.f: result
The return value has prototype
     std::set<Element>& result
It contains the union of left and right . Note that C++11 detects that the return value is a temporary and uses it for the result instead of making a separate copy.

8.26.g: Example
The file 8.26.1: set_union.cpp contains an example and test of this operation. It returns true if the test passes and false otherwise.
Input File: cppad/utility/set_union.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.26.1: Set Union: Example and Test
# include <cppad/utility/set_union.hpp>

bool set_union(void)
{     bool ok = true;

     // create empty sets
     std::set<size_t> left, right, result;

     // set left = {1, 2}
     left.insert(1);
     left.insert(2);

     // set right = {2, 3}
     right.insert(2);
     right.insert(3);

     // set result = {1, 2} U {2, 3}
     result = CppAD::set_union(left, right);

     // expected result
     size_t check_vec[] = {1, 2, 3};
     std::set<size_t> check_set(check_vec, check_vec + 3);

     // check result
     ok &= result == check_set;

     return ok;
}

Input File: example/utility/set_union.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.27: Row and Column Index Sparsity Patterns

8.27.a: Syntax
# include <cppad/utility/sparse_rc.hpp>
sparse_rc<SizeVector>  empty
sparse_rc<SizeVector>  pattern(nr, nc, nnz)
target = pattern
pattern.resize(nr, nc, nnz)
pattern.set(k, r, c)
pattern.nr()
pattern.nc()
pattern.nnz()
const SizeVector& row( pattern.row() )
const SizeVector& col( pattern.col() )
row_major = pattern.row_major()
col_major = pattern.col_major()

8.27.b: SizeVector
We use SizeVector to denote a 8.9: SimpleVector class with 8.9.b: elements of type size_t.

8.27.c: empty
This is an empty sparsity pattern. To be specific, the corresponding number of rows nr , number of columns nc , and number of possibly non-zero values nnz , are all zero.

8.27.d: pattern
This object is used to hold a sparsity pattern for a matrix. The sparsity pattern is const except during its constructor, resize, and set.

8.27.e: target
The target of the assignment statement must have prototype
     sparse_rc<SizeVector>  target
After this assignment statement, target is an independent copy of pattern ; i.e. it has all the same values as pattern and changes to target do not affect pattern .

8.27.f: nr
This argument has prototype
     size_t nr
It specifies the number of rows in the sparsity pattern. The function call nr() returns the value of nr .

8.27.g: nc
This argument has prototype
     size_t nc
It specifies the number of columns in the sparsity pattern. The function call nc() returns the value of nc .

8.27.h: nnz
This argument has prototype
     size_t nnz
It specifies the number of possibly non-zero index pairs in the sparsity pattern. The function call nnz() returns the value of nnz .

8.27.i: resize
The current sparsity pattern is lost and a new one is started with the specified parameters. The elements in the row and col vectors should be assigned using set.

8.27.j: set
This function sets the values
     row[k] = r
     col[k] = c

8.27.j.a: k
This argument has type
     size_t k
and must be less than nnz .

8.27.j.b: r
This argument has type
     size_t r
It specifies the value assigned to row[k] and must be less than nr .

8.27.j.c: c
This argument has type
     size_t c
It specifies the value assigned to col[k] and must be less than nc .

8.27.k: row
This vector has size nnz and row[k] is the row index of the k-th possibly non-zero index pair in the sparsity pattern.

8.27.l: col
This vector has size nnz and col[k] is the column index of the k-th possibly non-zero index pair in the sparsity pattern.

8.27.m: row_major
This vector has prototype
     SizeVector row_major
and its size is nnz . It sorts the sparsity pattern in row-major order. To be specific,
     row[ row_major[k] ] <= row[ row_major[k+1] ]
and if row[ row_major[k] ] == row[ row_major[k+1] ] ,
     col[ row_major[k] ] < col[ row_major[k+1] ]
This routine generates an assert if there are two entries with the same row and column values (if NDEBUG is not defined).

8.27.n: col_major
This vector has prototype
     SizeVector col_major
and its size is nnz . It sorts the sparsity pattern in column-major order. To be specific,
     col[ col_major[k] ] <= col[ col_major[k+1] ]
and if col[ col_major[k] ] == col[ col_major[k+1] ] ,
     row[ col_major[k] ] < row[ col_major[k+1] ]
This routine generates an assert if there are two entries with the same row and column values (if NDEBUG is not defined).
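
As a small illustration (a sketch, not the distributed example below) of the row-major ordering just described:

# include <cppad/utility/sparse_rc.hpp>
# include <vector>

bool row_major_sketch(void)
{     typedef std::vector<size_t> SizeVector;
     // 2 by 2 pattern with entries set in the order (1,0), (0,1), (0,0)
     CppAD::sparse_rc<SizeVector> pattern(2, 2, 3);
     pattern.set(0, 1, 0);
     pattern.set(1, 0, 1);
     pattern.set(2, 0, 0);
     // row-major order is (0,0), (0,1), (1,0); i.e., entries 2, 1, 0
     SizeVector order = pattern.row_major();
     return order[0] == 2 && order[1] == 1 && order[2] == 0;
}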

8.27.o: Example
The file 8.27.1: sparse_rc.cpp contains an example and test of this class. It returns true if it succeeds and false otherwise.
Input File: cppad/utility/sparse_rc.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.27.1: sparse_rc: Example and Test
# include <cppad/utility/sparse_rc.hpp>
# include <vector>

bool sparse_rc(void)
{     bool ok = true;
     typedef std::vector<size_t> SizeVector;

     // 3 by 3 identity matrix
     size_t nr  = 3;
     size_t nc  = 3;
     size_t nnz = 3;
     CppAD::sparse_rc<SizeVector> pattern(nr, nc, nnz);
     for(size_t k = 0; k < nnz; k++)
          pattern.set(k, k, k);

     // row and column vectors corresponding to pattern
     const SizeVector& row( pattern.row() );
     const SizeVector& col( pattern.col() );

     // check pattern
     ok &= pattern.nnz() == nnz;
     ok &= pattern.nr()  == nr;
     ok &= pattern.nc()  == nc;
     for(size_t k = 0; k < nnz; k++)
     {     ok &= row[k] == k;
          ok &= col[k] == k;
     }

     // change to sparsity pattern for a 5 by 5 diagonal matrix
     nr  = 5;
     nc  = 5;
     nnz = 5;
     pattern.resize(nr, nc, nnz);
     for(size_t k = 0; k < nnz; k++)
     {     size_t r = nnz - k - 1; // reverse of row-major order
          size_t c = nnz - k - 1;
          pattern.set(k, r, c);
     }
     SizeVector row_major = pattern.row_major();

     // check row and column
     for(size_t k = 0; k < nnz; k++)
     {     ok &= row[ row_major[k] ] == k;
          ok &= col[ row_major[k] ] == k;
     }

     // create an empty pattern
     CppAD::sparse_rc<SizeVector> target;
     ok &= target.nnz() == 0;
     ok &= target.nr()  == 0;
     ok &= target.nc()  == 0;

     // now use it as the target for an assignment statement
     target = pattern;
     ok    &= target.nr()  == pattern.nr();
     ok    &= target.nc()  == pattern.nc();
     ok    &= target.nnz() == pattern.nnz();
     for(size_t k = 0; k < nnz; k++)
     {     ok &= target.row()[k] == row[k];
          ok &= target.col()[k] == col[k];
     }
     return ok;
}

Input File: example/utility/sparse_rc.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.28: Sparse Matrix Row, Column, Value Representation

8.28.a: Syntax
# include <cppad/utility/sparse_rcv.hpp>
sparse_rcv<SizeVector, ValueVector>  empty
sparse_rcv<SizeVector, ValueVector>  matrix(pattern)
target = matrix
matrix.set(k, v)
nr = matrix.nr()
nc = matrix.nc()
nnz = matrix.nnz()
const SizeVector& row( matrix.row() )
const SizeVector& col( matrix.col() )
const ValueVector& val( matrix.val() )
row_major = matrix.row_major()
col_major = matrix.col_major()

8.28.b: SizeVector
We use 8.27.b: SizeVector to denote the 8.9: SimpleVector class corresponding to pattern .

8.28.c: ValueVector
We use ValueVector to denote the 8.9: SimpleVector class corresponding to val .

8.28.d: empty
This is an empty sparse matrix object. To be specific, the corresponding number of rows nr , number of columns nc , and number of possibly non-zero values nnz , are all zero.

8.28.e: pattern
This argument has prototype
     const sparse_rc<SizeVector>& pattern
It specifies the number of rows, number of columns and the possibly non-zero entries in the matrix .

8.28.f: matrix
This is a sparse matrix object with the sparsity specified by pattern . Only the val vector can be changed. All other values returned by matrix are fixed during the constructor and constant thereafter. The val vector is changed only by the constructor and the set function. There is one exception to this rule: the case where matrix is the target of an assignment statement.

8.28.g: target
The target of the assignment statement must have prototype
     sparse_rcv<SizeVector, ValueVector>  target
After this assignment statement, target is an independent copy of matrix ; i.e. it has all the same values as matrix and changes to target do not affect matrix .

8.28.h: nr
This return value has prototype
     size_t nr
and is the number of rows in matrix .

8.28.i: nc
This return value has prototype
     size_t nc
and is the number of columns in matrix .

8.28.j: nnz
We use the notation nnz to denote the number of possibly non-zero entries in matrix .

8.28.k: set
This function sets the value
     val[k] = v

8.28.k.a: k
This argument has type
     size_t k
and must be less than nnz .

8.28.k.b: v
This argument has type
     const ValueVector::value_type& v
It specifies the value assigned to val[k] .

8.28.l: row
This vector has size nnz and row[k] is the row index of the k-th possibly non-zero element in matrix .

8.28.m: col
This vector has size nnz and col[k] is the column index of the k-th possibly non-zero element in matrix .

8.28.n: val
This vector has size nnz and val[k] is the value of the k-th possibly non-zero entry in the sparse matrix (the value may be zero).

8.28.o: row_major
This vector has prototype
     SizeVector row_major
and its size is nnz . It sorts the sparsity pattern in row-major order. To be specific,
     row[ row_major[k] ] <= row[ row_major[k+1] ]
and if row[ row_major[k] ] == row[ row_major[k+1] ] ,
     col[ row_major[k] ] < col[ row_major[k+1] ]
This routine generates an assert if there are two entries with the same row and column values (if NDEBUG is not defined).

8.28.p: col_major
This vector has prototype
     SizeVector col_major
and its size is nnz . It sorts the sparsity pattern in column-major order. To be specific,
     col[ col_major[k] ] <= col[ col_major[k+1] ]
and if col[ col_major[k] ] == col[ col_major[k+1] ] ,
     row[ col_major[k] ] < row[ col_major[k+1] ]
This routine generates an assert if there are two entries with the same row and column values (if NDEBUG is not defined).

8.28.q: Example
The file 8.28.1: sparse_rcv.cpp contains an example and test of this class. It returns true if it succeeds and false otherwise.
Input File: cppad/utility/sparse_rcv.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
8.28.1: sparse_rcv: Example and Test
# include <cppad/utility/sparse_rcv.hpp>
# include <vector>

bool sparse_rcv(void)
{     bool ok = true;
     typedef std::vector<size_t> SizeVector;
     typedef std::vector<double> ValueVector;

     // sparsity pattern for a 5 by 5 diagonal matrix
     size_t nr  = 5;
     size_t nc  = 5;
     size_t nnz = 5;
     CppAD::sparse_rc<SizeVector> pattern(nr, nc, nnz);
     for(size_t k = 0; k < nnz; k++)
     {     size_t r = nnz - k - 1; // reverse of column-major order
          size_t c = nnz - k - 1;
          pattern.set(k, r, c);
     }

     // sparse matrix
     CppAD::sparse_rcv<SizeVector, ValueVector> matrix(pattern);
     for(size_t k = 0; k < nnz; k++)
     {     double v = double(k);
          matrix.set(nnz - k - 1, v);
     }

     // row, column, and value vectors
     const SizeVector&  row( matrix.row() );
     const SizeVector&  col( matrix.col() );
     const ValueVector& val( matrix.val() );
     SizeVector col_major = matrix.col_major();

     // check row,  column, and value
     for(size_t k = 0; k < nnz; k++)
     {     ok &= row[ col_major[k] ] == k;
          ok &= col[ col_major[k] ] == k;
          ok &= val[ col_major[k] ] == double(k);
     }

     // create an empty matrix
     CppAD::sparse_rcv<SizeVector, ValueVector> target;
     ok &= target.nnz() == 0;
     ok &= target.nr()  == 0;
     ok &= target.nc()  == 0;

     // now use it as the target for an assignment statement
     target = matrix;
     ok    &= target.nr()  == matrix.nr();
     ok    &= target.nc()  == matrix.nc();
     ok    &= target.nnz() == matrix.nnz();
     for(size_t k = 0; k < nnz; k++)
     {     ok &= target.row()[k] == row[k];
          ok &= target.col()[k] == col[k];
          ok &= target.val()[k] == val[k];
     }
     return ok;
}

Input File: example/utility/sparse_rcv.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
9: Use Ipopt to Solve a Nonlinear Programming Problem

9.a: Syntax
# include <cppad/ipopt/solve.hpp>
ipopt::solve(
     options, xi, xl, xu, gl, gu, fg_eval, solution
)


9.b: Purpose
The function ipopt::solve solves nonlinear programming problems of the form @[@ \begin{array}{rll} {\rm minimize} & f (x) \\ {\rm subject \; to} & gl \leq g(x) \leq gu \\ & xl \leq x \leq xu \end{array} @]@ This is done using the Ipopt (http://www.coin-or.org/projects/Ipopt.xml) optimizer and CppAD for the derivative and sparsity calculations.

9.c: Include File
Currently, this routine 9: ipopt::solve is not included by the command
     # include <cppad/cppad.hpp>
(Doing so would require linking the ipopt library into the corresponding program, even if ipopt::solve was not used.) For this reason, if you are using ipopt::solve you should use
     # include <cppad/ipopt/solve.hpp>
which in turn will also include <cppad/cppad.hpp>.

9.d: Bvector
The type Bvector must be a 8.9: SimpleVector class with 8.9.b: elements of type bool.

9.e: Dvector
The type Dvector must be a 8.9: SimpleVector class with 8.9.b: elements of type double.

9.f: options
The argument options has prototype
     const std::string options
It contains a list of options. Each option, including the last option, is terminated by the '\n' character. Each line consists of two or three tokens separated by one or more spaces.

9.f.a: Retape
You can set the retape flag with the following syntax:
     Retape value
If the value is true, ipopt::solve will retape the 12.4.g.b: operation sequence for each new value of x . If the value is false, ipopt::solve will tape the operation sequence at the value of xi and use that sequence for the entire optimization process. The default value is false.

9.f.b: Sparse
You can set the sparse Jacobian and Hessian flag with the following syntax:
     Sparse value direction
If the value is true, ipopt::solve will use a sparse matrix representation for the computation of Jacobians and Hessians. Otherwise, it will use a full matrix representation for these calculations. The default for value is false. If sparse is true, retape must be false.

It is unclear whether 5.6.2: sparse_jacobian would be faster using forward or reverse mode, so you are able to choose the direction. If
     value == true && direction == forward
the Jacobians will be calculated using SparseJacobianForward. If
     value == true && direction == reverse
the Jacobians will be calculated using SparseJacobianReverse.

9.f.c: String
You can set any Ipopt string option using a line with the following syntax:
     String name value
Here name is any valid Ipopt string option and value is its setting.

9.f.d: Numeric
You can set any Ipopt numeric option using a line with the following syntax:
     Numeric name value
Here name is any valid Ipopt numeric option and value is its setting.

9.f.e: Integer
You can set any Ipopt integer option using a line with the following syntax:
     Integer name value
Here name is any valid Ipopt integer option and value is its setting.
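
For instance, here is a sketch (the particular values are illustrations, not recommendations) that combines the option types above into one options string:

     std::string options;
     options += "Retape  false\n";
     options += "Sparse  true         reverse\n";
     options += "String  sb           yes\n";
     options += "Numeric tol          1e-8\n";
     options += "Integer print_level  0\n";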

9.g: xi
The argument xi has prototype
     const Dvector& xi
and its size is equal to nx . It specifies the initial point where Ipopt starts the optimization process.

9.h: xl
The argument xl has prototype
     const Dvector& xl
and its size is equal to nx . It specifies the lower limits for the argument in the optimization problem.

9.i: xu
The argument xu has prototype
     const Dvector& xu
and its size is equal to nx . It specifies the upper limits for the argument in the optimization problem.

9.j: gl
The argument gl has prototype
     const Dvector& gl
and its size is equal to ng . It specifies the lower limits for the constraints in the optimization problem.

9.k: gu
The argument gu has prototype
     const Dvector& gu
and its size is equal to ng . It specifies the upper limits for the constraints in the optimization problem.

9.l: fg_eval
The argument fg_eval has prototype
     FG_eval fg_eval
where the class FG_eval is unspecified except for the fact that it supports the syntax
     FG_eval::ADvector
     fg_eval(fg, x)
The type ADvector and the arguments fg and x have the following meaning:

9.l.a: ADvector
The type FG_eval::ADvector must be a 8.9: SimpleVector class with 8.9.b: elements of type AD<double>.

9.l.b: x
The fg_eval argument x has prototype
     const ADvector& x
where nx = x.size() .

9.l.c: fg
The fg_eval argument fg has prototype
     ADvector& fg
where 1 + ng = fg.size() . The input value of the elements of fg does not matter. Upon return from fg_eval ,
     fg[0] = @(@ f (x) @)@
and for @(@ i = 0, \ldots , ng-1 @)@,
     fg[1 + i] = @(@ g_i (x) @)@
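
A skeletal class satisfying this syntax might look as follows (a sketch only; Section 9.1 contains the complete distributed example):

# include <cppad/cppad.hpp>

class FG_eval {
public:
     typedef CPPAD_TESTVECTOR( CppAD::AD<double> ) ADvector;
     void operator()(ADvector& fg, const ADvector& x)
     {     // here nx = 2 and ng = 1
          fg[0] = x[0] * x[0];     // f(x)
          fg[1] = x[0] + x[1];     // g_0(x)
     }
};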

9.m: solution
The argument solution has prototype
     ipopt::solve_result<Dvector>& solution
After the optimization process is completed, solution contains the following information:

9.m.a: status
The status field of solution has prototype
     ipopt::solve_result<Dvector>::status_type solution.status
It is the final Ipopt status for the optimizer. Here is a list of the possible values for the status:
status Meaning
not_defined The optimizer did not return a final status for this problem.
unknown The status returned by the optimizer is not defined in the Ipopt documentation for finalize_solution.
success Algorithm terminated successfully at a point satisfying the convergence tolerances (see Ipopt options).
maxiter_exceeded The maximum number of iterations was exceeded (see Ipopt options).
stop_at_tiny_step Algorithm terminated because progress was very slow.
stop_at_acceptable_point Algorithm stopped at a point that was converged, not to the 'desired' tolerances, but to 'acceptable' tolerances (see Ipopt options).
local_infeasibility Algorithm converged to a non-feasible point (problem may have no solution).
user_requested_stop This return value should not happen.
diverging_iterates It seems that the iterates are diverging.
restoration_failure Restoration phase failed, algorithm doesn't know how to proceed.
error_in_step_computation An unrecoverable error occurred while Ipopt tried to compute the search direction.
invalid_number_detected Algorithm received an invalid number (such as nan or inf) from the user's function fg_eval or from the CppAD evaluations of its derivatives (see the Ipopt option check_derivatives_for_naninf).
internal_error An unknown Ipopt internal error occurred. Contact the Ipopt authors through the mailing list.

9.m.b: x
The x field of solution has prototype
     Dvector solution.x
and its size is equal to nx . It is the final @(@ x @)@ value for the optimizer.

9.m.c: zl
The zl field of solution has prototype
     Dvector solution.zl
and its size is equal to nx . It is the final Lagrange multipliers for the lower bounds on @(@ x @)@.

9.m.d: zu
The zu field of solution has prototype
     Dvector solution.zu
and its size is equal to nx . It is the final Lagrange multipliers for the upper bounds on @(@ x @)@.

9.m.e: g
The g field of solution has prototype
     Dvector solution.g
and its size is equal to ng . It is the final value for the constraint function @(@ g(x) @)@.

9.m.f: lambda
The lambda field of solution has prototype
     Dvector solution.lambda
and its size is equal to ng . It is the final value for the Lagrange multipliers corresponding to the constraint function.

9.m.g: obj_value
The obj_value field of solution has prototype
     double solution.obj_value
It is the final value of the objective function @(@ f(x) @)@.

9.n: Example
All the examples return true if they succeed and false otherwise.

9.n.a: get_started
The file 9.1: example/ipopt_solve/get_started.cpp is an example and test of ipopt::solve taken from the Ipopt manual.

9.n.b: retape
The file 9.2: example/ipopt_solve/retape.cpp demonstrates when it is necessary to specify 9.f.a: retape as true.

9.n.c: ode_inverse
The file 9.3: example/ipopt_solve/ode_inverse.cpp demonstrates using Ipopt to solve for parameters in an ODE model.
Input File: cppad/ipopt/solve.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
9.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test

9.1.a: Purpose
This example program demonstrates how to use 9: ipopt_solve to solve the example problem in the Ipopt documentation; i.e., the problem @[@ \begin{array}{lc} {\rm minimize \; } & x_1 * x_4 * (x_1 + x_2 + x_3) + x_3 \\ {\rm subject \; to \; } & x_1 * x_2 * x_3 * x_4 \geq 25 \\ & x_1^2 + x_2^2 + x_3^2 + x_4^2 = 40 \\ & 1 \leq x_1, x_2, x_3, x_4 \leq 5 \end{array} @]@

9.1.b: Configuration Requirement
This example will be compiled and tested provided that 2.2.5: ipopt_prefix is specified on the 2.2: cmake command line.
# include <cppad/ipopt/solve.hpp>

namespace {
     using CppAD::AD;

     class FG_eval {
     public:
          typedef CPPAD_TESTVECTOR( AD<double> ) ADvector;
          void operator()(ADvector& fg, const ADvector& x)
          {     assert( fg.size() == 3 );
               assert( x.size()  == 4 );

               // Fortran style indexing
               AD<double> x1 = x[0];
               AD<double> x2 = x[1];
               AD<double> x3 = x[2];
               AD<double> x4 = x[3];
               // f(x)
               fg[0] = x1 * x4 * (x1 + x2 + x3) + x3;
               // g_1 (x)
               fg[1] = x1 * x2 * x3 * x4;
               // g_2 (x)
               fg[2] = x1 * x1 + x2 * x2 + x3 * x3 + x4 * x4;
               //
               return;
          }
     };
}

bool get_started(void)
{     bool ok = true;
     size_t i;
     typedef CPPAD_TESTVECTOR( double ) Dvector;

     // number of independent variables (domain dimension for f and g)
     size_t nx = 4;
     // number of constraints (range dimension for g)
     size_t ng = 2;
     // initial value of the independent variables
     Dvector xi(nx);
     xi[0] = 1.0;
     xi[1] = 5.0;
     xi[2] = 5.0;
     xi[3] = 1.0;
     // lower and upper limits for x
     Dvector xl(nx), xu(nx);
     for(i = 0; i < nx; i++)
     {     xl[i] = 1.0;
          xu[i] = 5.0;
     }
     // lower and upper limits for g
     Dvector gl(ng), gu(ng);
     gl[0] = 25.0;     gu[0] = 1.0e19;
     gl[1] = 40.0;     gu[1] = 40.0;

     // object that computes objective and constraints
     FG_eval fg_eval;

     // options
     std::string options;
     // turn off any printing
     options += "Integer print_level  0\n";
     options += "String  sb           yes\n";
     // maximum number of iterations
     options += "Integer max_iter     10\n";
     // approximate accuracy in first order necessary conditions;
     // see Mathematical Programming, Volume 106, Number 1,
     // Pages 25-57, Equation (6)
     options += "Numeric tol          1e-6\n";
     // derivative testing
     options += "String  derivative_test            second-order\n";
     // maximum amount of random perturbation; e.g.,
     // when evaluating finite differences
     options += "Numeric point_perturbation_radius  0.\n";

     // place to return solution
     CppAD::ipopt::solve_result<Dvector> solution;

     // solve the problem
     CppAD::ipopt::solve<Dvector, FG_eval>(
          options, xi, xl, xu, gl, gu, fg_eval, solution
     );
     //
     // Check some of the solution values
     //
     ok &= solution.status == CppAD::ipopt::solve_result<Dvector>::success;
     //
     double check_x[]  = { 1.000000, 4.743000, 3.82115, 1.379408 };
     double check_zl[] = { 1.087871, 0.,       0.,      0.       };
     double check_zu[] = { 0.,       0.,       0.,      0.       };
     double rel_tol    = 1e-6;  // relative tolerance
     double abs_tol    = 1e-6;  // absolute tolerance
     for(i = 0; i < nx; i++)
     {     ok &= CppAD::NearEqual(
               check_x[i],  solution.x[i],   rel_tol, abs_tol
          );
          ok &= CppAD::NearEqual(
               check_zl[i], solution.zl[i], rel_tol, abs_tol
          );
          ok &= CppAD::NearEqual(
               check_zu[i], solution.zu[i], rel_tol, abs_tol
          );
     }

     return ok;
}

Input File: example/ipopt_solve/get_started.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
9.2: Nonlinear Programming Retaping: Example and Test

9.2.a: Purpose
This example program demonstrates a case where the ipopt::solve argument 9.f.a: retape should be true.
# include <cppad/ipopt/solve.hpp>

namespace {
     using CppAD::AD;

     class FG_eval {
     public:
          typedef CPPAD_TESTVECTOR( AD<double> ) ADvector;
          void operator()(ADvector& fg, const ADvector& x)
          {     assert( fg.size() == 1 );
               assert( x.size()  == 1 );

               // compute the Huber function using a conditional
               // statement that depends on the value of x.
               double eps = 0.1;
               if( fabs(x[0]) <= eps )
                    fg[0] = x[0] * x[0] / (2.0 * eps);
               else
                    fg[0] = fabs(x[0]) - eps / 2.0;

               return;
          }
     };
}

bool retape(void)
{     bool ok = true;
     typedef CPPAD_TESTVECTOR( double ) Dvector;

     // number of independent variables (domain dimension for f and g)
     size_t nx = 1;
     // number of constraints (range dimension for g)
     size_t ng = 0;
     // initial value, lower and upper limits, for the independent variables
     Dvector xi(nx), xl(nx), xu(nx);
     xi[0] = 2.0;
     xl[0] = -1e+19;
     xu[0] = +1e+19;
     // lower and upper limits for g
     Dvector gl(ng), gu(ng);

     // object that computes objective and constraints
     FG_eval fg_eval;

     // options
     std::string options;
     // retape operation sequence for each new x
     options += "Retape  true\n";
     // turn off any printing
     options += "Integer print_level   0\n";
     options += "String  sb          yes\n";
     // maximum number of iterations
     options += "Integer max_iter      10\n";
     // approximate accuracy in first order necessary conditions;
     // see Mathematical Programming, Volume 106, Number 1,
     // Pages 25-57, Equation (6)
     options += "Numeric tol           1e-9\n";
     // derivative testing
     options += "String  derivative_test            second-order\n";
     // maximum amount of random perturbation; e.g.,
     // when evaluating finite differences
     options += "Numeric point_perturbation_radius  0.\n";

     // place to return solution
     CppAD::ipopt::solve_result<Dvector> solution;

     // solve the problem
     CppAD::ipopt::solve<Dvector, FG_eval>(
          options, xi, xl, xu, gl, gu, fg_eval, solution
     );
     //
     // Check some of the solution values
     //
     ok &= solution.status == CppAD::ipopt::solve_result<Dvector>::success;
     double rel_tol    = 1e-6;  // relative tolerance
     double abs_tol    = 1e-6;  // absolute tolerance
     ok &= CppAD::NearEqual( solution.x[0], 0.0,  rel_tol, abs_tol);

     return ok;
}

Input File: example/ipopt_solve/retape.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
9.3: ODE Inverse Problem Definitions: Source Code

9.3.a: Purpose
This example demonstrates how to invert for parameters in an ODE where the solution of the ODE is numerically approximated.

9.3.b: Forward Problem
We consider the following ordinary differential equation: @[@ \begin{array}{rcl} \partial_t y_0 ( t , a ) & = & - a_1 * y_0 (t, a ) \\ \partial_t y_1 (t , a ) & = & + a_1 * y_0 (t, a ) - a_2 * y_1 (t, a ) \end{array} @]@ with the initial condition @[@ y (0 , a) = ( a_0 , 0 )^\R{T} @]@ Our forward problem is stated as follows: Given @(@ a \in \B{R}^3 @)@ determine the value of @(@ y ( t , a ) @)@, for @(@ t \in \B{R} @)@, that solves the initial value problem above.

9.3.c: Measurements
Suppose we are also given measurement times @(@ s \in \B{R}^5 @)@ and a measurement vector @(@ z \in \B{R}^4 @)@ and for @(@ i = 0, \ldots, 3 @)@, we model @(@ z_i @)@ by @[@ z_i = y_1 ( s_{i+1} , a) + e_i @]@ where @(@ e_i \sim {\bf N} (0 , \sigma^2 ) @)@ is the measurement noise, and @(@ \sigma > 0 @)@ is the standard deviation of the noise.

9.3.c.a: Simulation Analytic Solution
The following analytic solution to the forward problem is used to simulate a data set: @[@ \begin{array}{rcl} y_0 (t , a) & = & a_0 * \exp( - a_1 * t ) \\ y_1 (t , a) & = & a_0 * a_1 * \frac{\exp( - a_2 * t ) - \exp( -a_1 * t )}{ a_1 - a_2 } \end{array} @]@

9.3.c.b: Simulation Parameter Values
@(@ \bar{a}_0 = 1 @)@   initial value of @(@ y_0 (t, a) @)@
@(@ \bar{a}_1 = 2 @)@   transfer rate from compartment zero to compartment one
@(@ \bar{a}_2 = 1 @)@   transfer rate from compartment one to outside world
@(@ \sigma = 0 @)@   standard deviation of measurement noise
@(@ e_i = 0 @)@   simulated measurement noise, @(@ i = 0 , \ldots , 3 @)@
@(@ s_i = i * .5 @)@   time corresponding to the i-th measurement, @(@ i = 0 , \ldots , 3 @)@

9.3.c.c: Simulated Measurement Values
The simulated measurement values are given by the equation @[@ \begin{array}{rcl} z_i & = & e_i + y_1 ( s_{i+1} , \bar{a} ) \\ & = & \bar{a}_0 * \bar{a}_1 * \frac{\exp( - \bar{a}_2 * s_{i+1} ) - \exp( -\bar{a}_1 * s_{i+1} )} { \bar{a}_1 - \bar{a}_2 } \end{array} @]@ for @(@ i = 0, \ldots , 3 @)@.

9.3.d: Inverse Problem
The maximum likelihood estimate for @(@ a @)@ given @(@ z @)@ solves the following optimization problem @[@ \begin{array}{rcl} {\rm minimize} \; & \sum_{i=0}^3 ( z_i - y_1 ( s_{i+1} , a ) )^2 & \;{\rm w.r.t} \; a \in \B{R}^3 \end{array} @]@

9.3.e: Trapezoidal Approximation
We are given a number of approximation points per measurement interval @(@ np @)@ and define the time grid @(@ t \in \B{R}^{4 \cdot np + 1} @)@ as follows: @(@ t_0 = s_0 @)@ and for @(@ i = 0 , 1 , 2, 3 @)@, @(@ j = 1 , \ldots , np @)@ @[@ t_{i \cdot np + j} = s_i + (s_{i+1} - s_i) \frac{j}{np} @]@ We note that for @(@ i = 1 , \ldots , 4 @)@, @(@ t_{i \cdot np} = s_i @)@. This example uses a trapezoidal approximation to solve the ODE. Given @(@ a \in \B{R}^3 @)@ and @(@ y^{k-1} \approx y( t_{k-1} , a ) @)@, the trapezoidal method approximates @(@ y ( t_k , a ) @)@ by the value @(@ y^k \in \B{R}^2 @)@ that solves the equation @[@ y^k = y^{k-1} + \frac{G( y^k , a ) + G( y^{k-1} , a ) }{2} * (t_k - t_{k-1}) @]@ where @(@ G : \B{R}^2 \times \B{R}^3 \rightarrow \B{R}^2 @)@ is defined by @[@ \begin{array}{rcl} G_0 ( y , a ) & = & - a_1 * y_0 \\ G_1 ( y , a ) & = & + a_1 * y_0 - a_2 * y_1 \end{array} @]@

9.3.f: Solution Method
We use constraints to embed the forward problem in the inverse problem. To be specific, we solve the optimization problem @[@ \begin{array}{rcl} {\rm minimize} & \sum_{i=0}^3 ( z_i - y_1^{(i+1) \cdot np} )^2 & \; {\rm w.r.t} \; a \in \B{R}^3 , \; y^0 \in \B{R}^2 , \ldots , y^{4 \cdot np} \in \B{R}^2 \\ {\rm subject \; to} & 0 = y^0 - ( a_0 , 0 )^\R{T} \\ & 0 = y^k - y^{k-1} - \frac{G( y^k , a ) + G( y^{k-1} , a ) }{2} (t_k - t_{k-1}) & \; {\rm for} \; k = 1 , \ldots , 4 \cdot np \end{array} @]@ The code below uses the notation @(@ x \in \B{R}^{3 + (4 \cdot np + 1) \cdot 2} @)@ defined by @[@ x = \left( a_0, a_1, a_2 , y_0^0, y_1^0, \ldots , y_0^{4 \cdot np}, y_1^{4 \cdot np} \right) @]@

9.3.g: Source
The following source code implements the ODE inversion method proposed above:
# include <cppad/ipopt/solve.hpp>

namespace {
     using CppAD::AD;

     // value of a during simulation a[0], a[1], a[2]
     double a_[] =                   {2.0,  1.0, 0.5};
     // number of components in a
     size_t na_ = sizeof(a_) / sizeof(a_[0]);

     // function used to simulate data
     double yone(double t)
     {     return
               a_[0]*a_[1] * (exp(-a_[2]*t) - exp(-a_[1]*t)) / (a_[1] - a_[2]);
     }

     // time points where we have data (no data at first point)
     double s_[] = {0.0,   0.5,        1.0,          1.5,         2.0 };

     // Simulated data for case with no noise (first point is not used)
     double z_[] = {yone(s_[1]), yone(s_[2]), yone(s_[3]), yone(s_[4])};
     size_t nz_  = sizeof(z_) / sizeof(z_[0]);

     // number of trapezoidal approximation points per measurement interval
     size_t np_  = 40;


     class FG_eval
     {
     private:
     public:
          // derived class part of constructor
          typedef CPPAD_TESTVECTOR( AD<double> ) ADvector;

          // Evaluation of the objective f(x), and constraints g(x)
          void operator()(ADvector& fg, const ADvector& x)
          {     CPPAD_TESTVECTOR( AD<double> ) a(na_);
               size_t i, j, k;

               // extract the vector a
               for(i = 0; i < na_; i++)
                    a[i] = x[i];

               // compute the objective f(x)
               fg[0] = 0.0;
               for(i = 0; i < nz_; i++)
               {     k = (i + 1) * np_;
                    AD<double> y_1 = x[na_ + 2 * k + 1];
                    AD<double> dif = z_[i] - y_1;
                    fg[0]         += dif * dif;
               }

               // constraint corresponding to initial value y(0, a)
               // Note that this constraint is invariant with size of dt
               fg[1] = x[na_+0] - a[0];
               fg[2] = x[na_+1] - 0.0;

               // constraints corresponding to trapezoidal approximation
               for(i = 0; i < nz_; i++)
               {     // spacing between grid point
                    double dt = (s_[i+1] - s_[i]) / static_cast<double>(np_);
                    for(j = 1; j <= np_; j++)
                    {     k = i * np_ + j;
                         // compute derivative at y^k
                         AD<double> y_0  = x[na_ + 2 * k + 0];
                         AD<double> y_1  = x[na_ + 2 * k + 1];
                         AD<double> G_0  = - a[1] * y_0;
                         AD<double> G_1  = + a[1] * y_0 - a[2] * y_1;

                         // compute derivative at y^{k-1}
                         AD<double> ym_0  = x[na_ + 2 * (k-1) + 0];
                         AD<double> ym_1  = x[na_ + 2 * (k-1) + 1];
                         AD<double> Gm_0  = - a[1] * ym_0;
                         AD<double> Gm_1  = + a[1] * ym_0 - a[2] * ym_1;

                         // constraint should be zero
                         fg[1 + 2*k ] = y_0  - ym_0 - dt*(G_0 + Gm_0)/2.;
                         fg[2 + 2*k ] = y_1  - ym_1 - dt*(G_1 + Gm_1)/2.;

                         // scale g(x) so it has similar size as f(x)
                         fg[1 + 2*k ] /= dt;
                         fg[2 + 2*k ] /= dt;
                    }
               }
          }
     };
}
bool ode_inverse(void)
{     bool ok = true;
     size_t i;
     typedef CPPAD_TESTVECTOR( double ) Dvector;

     // number of components in the function g
     size_t ng = (np_ * nz_ + 1) * 2;
     // number of independent variables
     size_t nx = na_ + ng;
     // initial value for the variables we are optimizing w.r.t.
     Dvector xi(nx), xl(nx), xu(nx);
     for(i = 0; i < nx; i++)
     {     xi[i] =   0.0; // initial value
          xl[i] = -1e19; // no lower limit
          xu[i] = +1e19; // no upper limit
     }
     for(i = 0; i < na_; i++)
          xi[i] = 1.5;   // initial value for a

     // all the difference equations are constrained to be zero
     Dvector gl(ng), gu(ng);
     for(i = 0; i < ng; i++)
     {     gl[i] = 0.0;
          gu[i] = 0.0;
     }
     // object defining both f(x) and g(x)
     FG_eval fg_eval;

     // options
     std::string options;
     // Use sparse matrices for calculation of Jacobians and Hessians
     // with forward mode for Jacobian (seems to be faster for this case).
     options += "Sparse  true        forward\n";
     // turn off any printing
     options += "Integer print_level 0\n";
     options += "String  sb        yes\n";
     // maximum number of iterations
     options += "Integer max_iter    30\n";
     // approximate accuracy in first order necessary conditions;
     // see Mathematical Programming, Volume 106, Number 1,
     // Pages 25-57, Equation (6)
     options += "Numeric tol         1e-6\n";

     // place to return solution
     CppAD::ipopt::solve_result<Dvector> solution;

     // solve the problem
     CppAD::ipopt::solve<Dvector, FG_eval>(
          options, xi, xl, xu, gl, gu, fg_eval, solution
     );
     //
     // Check some of the solution values
     //
     ok &= solution.status == CppAD::ipopt::solve_result<Dvector>::success;
     //
     double rel_tol    = 1e-4;  // relative tolerance
     double abs_tol    = 1e-4;  // absolute tolerance
     for(i = 0; i < na_; i++)
          ok &= CppAD::NearEqual( a_[i],  solution.x[i],   rel_tol, abs_tol);

     return ok;
}

Input File: example/ipopt_solve/ode_inverse.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10: Examples

10.a: Introduction
This section organizes the information related to the CppAD examples. Each CppAD operation has its own specific example; for example, 4.4.1.3.1: add.cpp is an example for 4.4.1.3: addition . Some of the examples are of a more general nature (not connected to a specific feature of CppAD). In addition, there are some utility functions that are used by the examples.

10.b: get_started
The 10.1: get_started.cpp example is a good place to start using CppAD.

10.c: Running Examples
The 2: installation instructions show how the examples can be run on your system.

10.d: The CppAD Test Vector Template Class
Many of the examples use the 10.5: CPPAD_TESTVECTOR preprocessor symbol to determine which 8.9: SimpleVector template class is used with the examples.

10.e: Contents
10.1: Getting Started Using CppAD to Compute Derivatives
10.2: General Examples
10.3: Utility Routines used by CppAD Examples
10.4: List All (Except Deprecated) CppAD Examples
10.5: Using The CppAD Test Vector Template Class
10.6: Suppress Suspect Implicit Conversion Warnings

Input File: omh/example.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.1: Getting Started Using CppAD to Compute Derivatives

10.1.a: Purpose
Demonstrate the use of CppAD by computing the derivative of a simple example function.

10.1.b: Function
The example function @(@ f : \B{R} \rightarrow \B{R} @)@ is defined by @[@ f(x) = a_0 + a_1 * x^1 + \cdots + a_{k-1} * x^{k-1} @]@ where a is a fixed vector of length k .

10.1.c: Derivative
The derivative of @(@ f(x) @)@ is given by @[@ f' (x) = a_1 + 2 * a_2 * x + \cdots + (k-1) * a_{k-1} * x^{k-2} @]@

10.1.d: Value
For the particular case in this example, @(@ k @)@ is equal to 5, @(@ a = (1, 1, 1, 1, 1) @)@, and @(@ x = 3 @)@. It follows that @[@ f' ( 3 ) = 1 + 2 * 3 + 3 * 3^2 + 4 * 3^3 = 142 @]@

10.1.e: Poly
The routine Poly is defined below for this particular application. A general purpose polynomial evaluation routine is documented and distributed with CppAD (see 8.13: Poly ).

10.1.f: Exercises
Modify the program below to accomplish the following tasks using CppAD:
  1. Compute and print the derivative of @(@ f(x) = 1 + x + x^2 + x^3 + x^4 @)@ at the point @(@ x = 2 @)@.
  2. Compute and print the derivative of @(@ f(x) = 1 + x + x^2 / 2 @)@ at the point @(@ x = .5 @)@.
  3. Compute and print the derivative of @(@ f(x) = \exp (x) - 1 - x - x^2 / 2 @)@ at the point @(@ x = .5 @)@.


10.1.g: Program
#include <iostream>      // standard input/output
#include <vector>        // standard vector
#include <cppad/cppad.hpp> // the CppAD package http://www.coin-or.org/CppAD/

namespace {
      // define y(x) = Poly(a, x) in the empty namespace
      template <class Type>
      Type Poly(const std::vector<double> &a, const Type &x)
      {     size_t k  = a.size();
            Type y   = 0.;  // initialize summation
            Type x_i = 1.;  // initialize x^i
            size_t i;
            for(i = 0; i < k; i++)
            {     y   += a[i] * x_i;  // y   = y + a_i * x^i
                  x_i *= x;           // x_i = x_i * x
            }
            return y;
      }
}
// main program
int main(void)
{     using CppAD::AD;           // use AD as abbreviation for CppAD::AD
      using std::vector;         // use vector as abbreviation for std::vector
      size_t i;                  // a temporary index

      // vector of polynomial coefficients
      size_t k = 5;              // number of polynomial coefficients
      vector<double> a(k);       // vector of polynomial coefficients
      for(i = 0; i < k; i++)
            a[i] = 1.;           // value of polynomial coefficients

      // domain space vector
      size_t n = 1;              // number of domain space variables
      vector< AD<double> > X(n); // vector of domain space variables
      X[0] = 3.;                 // value corresponding to operation sequence

      // declare independent variables and start recording operation sequence
      CppAD::Independent(X);

      // range space vector
      size_t m = 1;              // number of range space variables
      vector< AD<double> > Y(m); // vector of range space variables
      Y[0] = Poly(a, X[0]);      // value during recording of operations

      // store operation sequence in f: X -> Y and stop recording
      CppAD::ADFun<double> f(X, Y);

      // compute derivative using operation sequence stored in f
      vector<double> jac(m * n); // Jacobian of f (m by n matrix)
      vector<double> x(n);       // domain space vector
      x[0] = 3.;                 // argument value for derivative
      jac  = f.Jacobian(x);      // Jacobian for operation sequence

      // print the results
      std::cout << "f'(3) computed by CppAD = " << jac[0] << std::endl;

      // check if the derivative is correct
      int error_code;
      if( jac[0] == 142. )
            error_code = 0;      // return code for correct case
      else  error_code = 1;      // return code for incorrect case

      return error_code;
}

10.1.h: Output
Executing the program above will generate the following output:
 
     f'(3) computed by CppAD = 142

10.1.i: Running
To build and run this program see 2.3: cmake_check .
Input File: example/get_started/get_started.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2: General Examples

10.2.a: Description
Most of the examples in CppAD are part of the documentation for a specific feature; for example, 4.4.1.3.1: add.cpp is an example using the 4.4.1.3: addition operator . The examples listed in this section are of a more general nature.

10.2.b: Contents
ad_fun.cpp: 10.2.1 Creating Your Own Interface to an ADFun Object
ad_in_c.cpp: 10.2.2 Example and Test Linking CppAD to Languages Other than C++
conj_grad.cpp: 10.2.3 Differentiate Conjugate Gradient Algorithm: Example and Test
cppad_eigen.hpp: 10.2.4 Enable Use of Eigen Linear Algebra Package with CppAD
hes_minor_det.cpp: 10.2.5 Gradient of Determinant Using Expansion by Minors: Example and Test
hes_lu_det.cpp: 10.2.6 Gradient of Determinant Using LU Factorization: Example and Test
interface2c.cpp: 10.2.7 Interfacing to C: Example and Test
jac_minor_det.cpp: 10.2.8 Gradient of Determinant Using Expansion by Minors: Example and Test
jac_lu_det.cpp: 10.2.9 Gradient of Determinant Using Lu Factorization: Example and Test
mul_level: 10.2.10 Using Multiple Levels of AD
ode_stiff.cpp: 10.2.11 A Stiff Ode: Example and Test
mul_level_ode.cpp: 10.2.12 Taylor's Ode Solver: A Multi-Level AD Example and Test
mul_level_adolc_ode.cpp: 10.2.13 Taylor's Ode Solver: A Multi-Level Adolc Example and Test
ode_taylor.cpp: 10.2.14 Taylor's Ode Solver: An Example and Test
stack_machine.cpp: 10.2.15 Example Differentiating a Stack Machine Interpreter

Input File: omh/example_list.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.1: Creating Your Own Interface to an ADFun Object

# include <cppad/cppad.hpp>

namespace {

     // This class is an example of a different interface to an AD function object
     template <class Base>
     class my_ad_fun {

     private:
          CppAD::ADFun<Base> f;

     public:
          // default constructor
          my_ad_fun(void)
          { }

          // destructor
          ~my_ad_fun(void)
          { }

          // Construct a my_ad_fun object with an operation sequence.
          // This is the same as for ADFun<Base> except that no zero
          // order forward sweep is done. Note Hessian and Jacobian do
          // their own zero order forward mode sweep.
          template <class ADvector>
          my_ad_fun(const ADvector& x, const ADvector& y)
          {     f.Dependent(x, y); }

          // same as ADFun<Base>::Jacobian
          template <class VectorBase>
          VectorBase jacobian(const VectorBase& x)
          {     return f.Jacobian(x); }

          // same as ADFun<Base>::Hessian
          template <class VectorBase>
          VectorBase hessian(const VectorBase &x, const VectorBase &w)
          {     return f.Hessian(x, w); }
     };

} // End empty namespace

bool ad_fun(void)
{     // This example is similar to example/jacobian.cpp, except that it
     // uses my_ad_fun instead of ADFun.

     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
     using CppAD::exp;
     using CppAD::sin;
     using CppAD::cos;

     // domain space vector
     size_t n = 2;
     CPPAD_TESTVECTOR(AD<double>)  X(n);
     X[0] = 1.;
     X[1] = 2.;

     // declare independent variables and start recording
     CppAD::Independent(X);

     // a calculation between the domain and range values
     AD<double> Square = X[0] * X[0];

     // range space vector
     size_t m = 3;
     CPPAD_TESTVECTOR(AD<double>)  Y(m);
     Y[0] = Square * exp( X[1] );
     Y[1] = Square * sin( X[1] );
     Y[2] = Square * cos( X[1] );

     // create f: X -> Y and stop tape recording
     my_ad_fun<double> f(X, Y);

     // new value for the independent variable vector
     CPPAD_TESTVECTOR(double) x(n);
     x[0] = 2.;
     x[1] = 1.;

     // compute the derivative at this x
     CPPAD_TESTVECTOR(double) jac( m * n );
     jac = f.jacobian(x);

     /*
     F'(x) = [ 2 * x[0] * exp(x[1]) ,  x[0] * x[0] * exp(x[1]) ]
             [ 2 * x[0] * sin(x[1]) ,  x[0] * x[0] * cos(x[1]) ]
             [ 2 * x[0] * cos(x[1]) , -x[0] * x[0] * sin(x[1]) ]
     */
     ok &=  NearEqual( 2.*x[0]*exp(x[1]), jac[0*n+0], eps99, eps99);
     ok &=  NearEqual( 2.*x[0]*sin(x[1]), jac[1*n+0], eps99, eps99);
     ok &=  NearEqual( 2.*x[0]*cos(x[1]), jac[2*n+0], eps99, eps99);

     ok &=  NearEqual( x[0] * x[0] *exp(x[1]), jac[0*n+1], eps99, eps99);
     ok &=  NearEqual( x[0] * x[0] *cos(x[1]), jac[1*n+1], eps99, eps99);
     ok &=  NearEqual(-x[0] * x[0] *sin(x[1]), jac[2*n+1], eps99, eps99);

     return ok;
}


Input File: example/general/ad_fun.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.2: Example and Test Linking CppAD to Languages Other than C++
# include <cstdio>
# include <cppad/cppad.hpp>
# include <list>

namespace { // Begin empty namespace *****************************************

/*
void debug_print(const char *label, double d)
{     using std::printf;

     unsigned char *byte = reinterpret_cast<unsigned char *>(&d);
     size_t n_byte = sizeof(d);
     printf("%s", label);
     for(size_t i = 0; i < n_byte; i++)
          printf("%x", byte[i]);
     printf("\n");
}
*/

// type in C corresponding to an AD<double> object
typedef struct { void*  p_void; } cad;

// type in C corresponding to an ADFun<double>
typedef struct { void* p_void; } cad_fun;

// type in C corresponding to a C AD binary operator
typedef enum { op_add, op_sub, op_mul, op_div } cad_binary_op;

// type in C corresponding to a C AD unary operator
typedef enum {
     op_abs, op_acos, op_asin, op_atan, op_cos, op_cosh,
     op_exp, op_log,  op_sin,  op_sinh, op_sqrt
} cad_unary_op;

// --------------------------------------------------------------------------
// helper code not intended for use by C code  ------------------------------
using CppAD::AD;
using CppAD::ADFun;
using CppAD::vector;
using CppAD::NearEqual;

void cad2vector(size_t n, cad* p_cad, vector< AD<double> >& v)
{     assert( n == v.size() );
     for(size_t j = 0; j < n; j++)
     {     AD<double>* p_ad =
               reinterpret_cast< AD<double>* > (p_cad[j].p_void);
          v[j] = *p_ad;
     }
}

void vector2cad(size_t n, vector< AD<double> >& v, cad* p_cad)
{     assert( n == v.size() );
     for(size_t j = 0; j < n; j++)
     {     AD<double>* p_ad =
               reinterpret_cast< AD<double>* > (p_cad[j].p_void);
          *p_ad = v[j];
     }
}

void double2vector(size_t n, double* p_dbl, vector<double>& v)
{     assert( n == v.size() );
     for(size_t j = 0; j < n; j++)
          v[j] = p_dbl[j];
}

void vector2double(size_t n, vector<double>& v, double *p_dbl)
{     assert( n == v.size() );
     for(size_t j = 0; j < n; j++)
          p_dbl[j] = v[j];
}

std::list<void*> allocated;
# ifdef NDEBUG
inline void push_allocated(void *p)
{ }
inline void pop_allocated(void *p)
{ }
# else
inline void push_allocated(void *p)
{     assert( p != 0 );
     allocated.push_front(p);
}
inline void pop_allocated(void *p)
{     std::list<void*>::iterator i;
     for(i = allocated.begin(); i != allocated.end(); ++i)
     {     if( *i == p )
          {     allocated.erase(i);
               return;
          }
     }
     assert( 0 );
}

# endif
// --------------------------------------------------------------------------
// Here is the code that links C to CppAD. You will have to add more
// functions and operators to make a complete language link.
//
extern "C"
bool cad_near_equal(double x, double y)
{     double eps = 10. * std::numeric_limits<double>::epsilon();
     return NearEqual(x, y, eps, 0.);
}

// create a C++ AD object
// value is the value that the C++ AD object will have
// p_cad->p_void: on input is 0, on output points to C++ AD object
extern "C"
void cad_new_ad(cad *p_cad, double value)
{     // make sure pointer is not currently allocated
     assert( p_cad->p_void == 0 );

     AD<double>* p_ad   = new AD<double>(value);
     p_cad->p_void      = reinterpret_cast<void*>(p_ad);

     // put in list of allocated pointers
     push_allocated( p_cad->p_void );
}

// delete a C++ AD object
// p_cad->p_void: on input points to C++ AD object, on output is 0
extern "C"
void cad_del_ad(cad* p_cad)
{     // make sure that p_cad has been allocated
     pop_allocated( p_cad->p_void );

     AD<double>* p_ad   = reinterpret_cast< AD<double>* >( p_cad->p_void );
     delete p_ad;

     // special value for pointers that are not allocated
     p_cad->p_void = 0;
}

// extract the value from a C++ AD object
extern "C"
double cad_value(cad* p_cad)
{     AD<double>* p_ad = reinterpret_cast< AD<double>* > (p_cad->p_void);
     return Value( Var2Par(*p_ad) );
}

// perform a C AD unary operation
extern "C"
void cad_unary(cad_unary_op op, cad* p_operand, cad* p_result)
{     AD<double> *operand, *result;
     result  = reinterpret_cast< AD<double>* > (p_result->p_void);
     operand = reinterpret_cast< AD<double>* > (p_operand->p_void);
     switch(op)
     {
          case op_abs:
          *result = fabs( *operand );
          break;

          case op_acos:
          *result = acos( *operand );
          break;

          case op_asin:
          *result = asin( *operand );
          break;

          case op_atan:
          *result = atan( *operand );
          break;

          case op_cos:
          *result = cos( *operand );
          break;

          case op_cosh:
          *result = cosh( *operand );
          break;

          case op_exp:
          *result = exp( *operand );
          break;

          case op_log:
          *result = log( *operand );
          break;

          case op_sin:
          *result = sin( *operand );
          break;

          case op_sinh:
          *result = sinh( *operand );
          break;

          case op_sqrt:
          *result = sqrt( *operand );
          break;

          default:
          // not a unary operator
          assert(0);
          break;

     }
     return;
}

// perform a C AD binary operation
extern "C"
void cad_binary(cad_binary_op op, cad* p_left, cad* p_right, cad* p_result)
{     AD<double> *result, *left, *right;
     result = reinterpret_cast< AD<double>* > (p_result->p_void);
     left   = reinterpret_cast< AD<double>* > (p_left->p_void);
     right  = reinterpret_cast< AD<double>* > (p_right->p_void);
     assert( result != 0 );
     assert( left != 0 );
     assert( right != 0 );

     switch(op)
     {     case op_add:
          *result         = *left + (*right);
          break;

          case op_sub:
          *result         = *left - (*right);
          break;

          case op_mul:
          *result         = *left * (*right);
          break;

          case op_div:
          *result         = *left / (*right);
          break;

          default:
          // not a binary operator
          assert(0);
     }
     return;
}

// declare the independent variables in C++
extern "C"
void cad_independent(size_t n, cad* px_cad)
{     vector< AD<double> > x(n);
     cad2vector(n, px_cad, x);
     CppAD::Independent(x);
     vector2cad(n, x, px_cad);
}

// create an ADFun object in C++
extern "C"
cad_fun cad_new_fun(size_t n, size_t m, cad* px_cad, cad* py_cad)
{     cad_fun fun;

     ADFun<double>* p_adfun = new ADFun<double>;
     vector< AD<double> > x(n);
     vector< AD<double> > y(m);
     cad2vector(n, px_cad, x);
     cad2vector(m, py_cad, y);
     p_adfun->Dependent(x, y);

     fun.p_void = reinterpret_cast<void*>( p_adfun );

     // put in list of allocated pointers
     push_allocated( fun.p_void );

     return fun;
}

// delete an AD function object in C
extern "C"
void cad_del_fun(cad_fun *fun)
{     // make sure this pointer has been allocated
     pop_allocated( fun->p_void );

     ADFun<double>* p_adfun
          = reinterpret_cast< ADFun<double>* > (fun->p_void);
     delete p_adfun;

     // special value for pointers that are not allocated
     fun->p_void = 0;
}

// evaluate the Jacobian corresponding to a function object
extern "C"
void cad_jacobian(cad_fun fun,
     size_t n, size_t m, double* px, double* pjac )
{     assert( fun.p_void != 0 );

     ADFun<double>* p_adfun =
          reinterpret_cast< ADFun<double>* >(fun.p_void);
     vector<double> x(n), jac(n * m);

     double2vector(n, px, x);
     jac = p_adfun->Jacobian(x);
     vector2double(n * m, jac, pjac);
}

// forward mode
extern "C"
void cad_forward(cad_fun fun,
     size_t order, size_t n, size_t m, double* px, double* py )
{     assert( fun.p_void != 0 );

     ADFun<double>* p_adfun =
          reinterpret_cast< ADFun<double>* >(fun.p_void);
     vector<double> x(n), y(m);

     double2vector(n, px, x);
     y = p_adfun->Forward(order, x);
     vector2double(m, y, py);
}

// check that allocated list has been completely freed
extern "C"
bool cad_allocated_empty(void)
{     return allocated.empty();
}

} // End empty namespace ****************************************************

# include <math.h> // used to check results in C code below

# define N 2       // number of independent variables in example
# define M 5       // number of dependent variables in example

// -------------------------------------------------------------------------
// Here is the C code that uses the CppAD link above
bool ad_in_c(void)
{     // This routine is intentionally coded as if it were written in C
     // as an example of how you can link C and other languages to CppAD
     bool ok = true;

     // x vector of AD objects in C
     double value;
     size_t j, n = N;
     cad X[N];
     for(j = 0; j < n; j++)
     {     value       = (double) (j+1) / (double) n;
          X[j].p_void = 0;
          cad_new_ad(X + j, value);
     }

     // y vector of AD objects in C
     size_t i, m = M;
     cad Y[M];
     for(i = 0; i < m; i++)
     {     value       = 0.; // required, but not used
          Y[i].p_void = 0;
          cad_new_ad(Y + i, value);
     }

     // declare X as the independent variable vector
     cad_independent(n, X);

     // y[0] = x[0] + x[1]
     cad_binary(op_add, X+0, X+1, Y+0);
     ok &= cad_near_equal( cad_value(Y+0), cad_value(X+0)+cad_value(X+1) );

     // y[1] = x[0] - x[1]
     cad_binary(op_sub, X+0, X+1, Y+1);
     ok &= cad_near_equal( cad_value(Y+1), cad_value(X+0)-cad_value(X+1) );

     // y[2] = x[0] * x[1]
     cad_binary(op_mul, X+0, X+1, Y+2);
     ok &= cad_near_equal( cad_value(Y+2), cad_value(X+0)*cad_value(X+1) );

     // y[3] = x[0] / x[1]
     cad_binary(op_div, X+0, X+1, Y+3);
     ok &= cad_near_equal( cad_value(Y+3), cad_value(X+0)/cad_value(X+1) );

     // y[4] = sin(x[0]) + asin(sin(x[0]))
     cad sin_x0 = { 0 };       // initialize p_void as zero
     cad_new_ad( &sin_x0, 0.);
     cad_unary(op_sin, X+0, &sin_x0);
     ok &= cad_near_equal(cad_value(&sin_x0), sin(cad_value(X+0)) );

     cad asin_sin_x0 = { 0 };  // initialize p_void as zero
     cad_new_ad( &asin_sin_x0, 0.);
     cad_unary(op_asin, &sin_x0, &asin_sin_x0);
     ok &= cad_near_equal(
          cad_value(&asin_sin_x0),
          asin( cad_value(&sin_x0) )
     );

     cad_binary(op_add, &sin_x0, &asin_sin_x0, Y+4);
     ok &= cad_near_equal(
          cad_value(Y+4),
          cad_value(&sin_x0) + cad_value(&asin_sin_x0)
     );

     // declare y as the dependent variable vector and stop recording
     // and store function object in f
     cad_fun f = cad_new_fun(n, m, X, Y);

     // now use the function object
     double x[N], jac[N * M];
     x[0] = 1.;
     x[1] = .5;

     // compute the Jacobian
     cad_jacobian(f, n, m, x, jac);

     // check the Jacobian values
     size_t k = 0;
     // partial y[0] w.r.t. x[0]
     ok &= cad_near_equal(jac[k++], 1.);
     // partial y[0] w.r.t. x[1]
     ok &= cad_near_equal(jac[k++], 1.);
     // partial y[1] w.r.t. x[0]
     ok &= cad_near_equal(jac[k++], 1.);
     // partial y[1] w.r.t. x[1]
     ok &= cad_near_equal(jac[k++], -1.);
     // partial y[2] w.r.t. x[0]
     ok &= cad_near_equal(jac[k++], x[1]);
     // partial y[2] w.r.t. x[1]
     ok &= cad_near_equal(jac[k++], x[0]);
     // partial y[3] w.r.t. x[0]
     ok &= cad_near_equal(jac[k++], 1./x[1]);
     // partial y[3] w.r.t. x[1]
     ok &= cad_near_equal(jac[k++], -x[0]/(x[1]*x[1]));
     // partial y[4] w.r.t x[0]
     ok &= cad_near_equal(jac[k++],  cos(x[0]) + 1.);
     // partial y[4] w.r.t x[1]
     ok &= cad_near_equal(jac[k++],  0.);

     // evaluate the function f at a different x
     size_t order = 0;
     double y[M];
     x[0] = .5;
     x[1] = 1.;
     cad_forward(f, order, n, m, x, y);

     // check the function values
     ok &= cad_near_equal(y[0] , x[0] + x[1] );
     ok &= cad_near_equal(y[1] , x[0] - x[1] );
     ok &= cad_near_equal(y[2] , x[0] * x[1] );
     ok &= cad_near_equal(y[3] , x[0] / x[1] );
     ok &= cad_near_equal(y[4] , sin(x[0]) + asin(sin(x[0])) );

     // delete all C++ copies of the AD objects
     cad_del_fun( &f );
     cad_del_ad( &sin_x0 );
     cad_del_ad( &asin_sin_x0 );
     for(j = 0; j < n; j++)
          cad_del_ad(X + j);
     for(i = 0; i < m; i++)
          cad_del_ad(Y + i);

     ok     &= cad_allocated_empty();
     return ok;
}

Input File: example/general/ad_in_c.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.3: Differentiate Conjugate Gradient Algorithm: Example and Test

10.2.3.a: Purpose
The conjugate gradient algorithm is a sparse linear solver and a good example where checkpointing can be applied (once for each iteration). This example is a preliminary version of a new library routine for the conjugate gradient algorithm.

10.2.3.b: Algorithm
Given a positive definite matrix @(@ A \in \B{R}^{n \times n} @)@, a vector @(@ b \in \B{R}^n @)@, and tolerance @(@ \varepsilon @)@, the conjugate gradient algorithm finds an @(@ x \in \B{R}^n @)@ such that @(@ \| A x - b \|^2 / n \leq \varepsilon^2 @)@ (or it terminates at a specified maximum number of iterations).
  1. Input:
    The matrix @(@ A \in \B{R}^{n \times n} @)@, the vector @(@ b \in \B{R}^n @)@, a tolerance @(@ \varepsilon \geq 0 @)@, a maximum number of iterations @(@ m @)@, and the initial approximate solution @(@ x^0 \in \B{R}^n @)@ (can use zero for @(@ x^0 @)@).
  2. Initialize:
    @(@ g^0 = A * x^0 - b @)@, @(@ d^0 = - g^0 @)@, @(@ s_0 = ( g^0 )^\R{T} g^0 @)@, @(@ k = 0 @)@.
  3. Convergence Check:
    if @(@ k = m @)@ or @(@ \sqrt{ s_k / n } < \varepsilon @)@, return @(@ k @)@ as the number of iterations and @(@ x^k @)@ as the approximate solution.
  4. Next @(@ x @)@:
    @(@ \mu_{k+1} = s_k / [ ( d^k )^\R{T} A d^k ] @)@, @(@ x^{k+1} = x^k + \mu_{k+1} d^k @)@.
  5. Next @(@ g @)@:
    @(@ g^{k+1} = g^k + \mu_{k+1} A d^k @)@, @(@ s_{k+1} = ( g^{k+1} )^\R{T} g^{k+1} @)@.
  6. Next @(@ d @)@:
    @(@ d^{k+1} = - g^k + ( s_{k+1} / s_k ) d^k @)@.
  7. Iterate:
    @(@ k = k + 1 @)@, goto Convergence Check.
# include <cppad/cppad.hpp>
# include <cstdlib>
# include <cmath>

namespace { // Begin empty namespace
     using CppAD::AD;

     // A simple matrix multiply c = a * b , where a has n columns
     // and b has n rows. This should be changed so that it can
     // efficiently handle the case where a is large and sparse.
     template <class Vector> // a simple vector class
     void mat_mul(size_t n, const Vector& a, const Vector& b, Vector& c)
     {     typedef typename Vector::value_type scalar;

          size_t m, p;
          m = a.size() / n;
          p = b.size() / n;

          assert( m * n == a.size() );
          assert( n * p == b.size() );
          assert( m * p == c.size() );

          size_t i, j, k, ij;
          for(i = 0; i < m; i++)
          {     for(j = 0; j < p; j++)
               {     ij    = i * p + j;
                    c[ij] = scalar(0);
                    // a is m x n and stored row-major, so its row stride is n
                    for(k = 0; k < n; k++)
                         c[ij] = c[ij] + a[i * n + k] * b[k * p + j];
               }
          }
          return;
     }

     // Solve A * x == b to tolerance epsilon or terminate at m iterations.
     template <class Vector> // a simple vector class
     size_t conjugate_gradient(
          size_t         m       , // input
          double         epsilon , // input
          const Vector&  A       , // input
          const Vector&  b       , // input
          Vector&        x       ) // input / output
     {     typedef typename Vector::value_type scalar;
          scalar mu, s_previous;
          size_t i, k;

          size_t n = x.size();
          assert( A.size() == n * n );
          assert( b.size() == n );

          Vector g(n), d(n), s(1), Ad(n), dAd(1);

          // g = A * x
          mat_mul(n, A, x, g);
          for(i = 0; i < n; i++)
          {     // g = A * x - b
               g[i] = g[i] - b[i];

               // d = - g
               d[i] = -g[i];
          }
          // s = g^T * g
          mat_mul(n, g, g, s);

          for(k = 0; k < m; k++)
          {     s_previous = s[0];
               if( s_previous < epsilon )
                    return k;

               // Ad = A * d
               mat_mul(n, A, d, Ad);

               // dAd = d^T * A * d
               mat_mul(n, d, Ad, dAd);

               // mu = s / ( d^T * A * d )
               mu = s_previous / dAd[0];

               // x = x + mu * d and g = g + mu * A * d
               for(i = 0; i < n; i++)
               {     x[i] = x[i] + mu * d[i];
                    g[i] = g[i] + mu * Ad[i];
               }

               // s = g^T * g
               mat_mul(n, g, g, s);

               // d = - g + (s / s_previous) * d
               for(i = 0; i < n; i++)
                    d[i] = - g[i] + ( s[0] / s_previous) * d[i];
          }
          return m;
     }

} // End empty namespace

bool conj_grad(void)
{     bool ok = true;

     // ----------------------------------------------------------------------
     // Setup
     // ----------------------------------------------------------------------
     using CppAD::AD;
     using CppAD::NearEqual;
     using CppAD::vector;
     using std::cout;
     using std::endl;
     size_t i, j;


     // size of the vectors
     size_t n  = 40;
     vector<double> D(n * n), Dt(n * n), A(n * n), x(n), b(n), c(n);
     vector< AD<double> > a_A(n * n), a_x(n), a_b(n);

     // D = diagonally dominant matrix
     // c = vector of ones
     for(i = 0; i < n; i++)
     {     c[i] = 1.;
          double sum = 0;
          for(j = 0; j < n; j++) if( i != j )
          {     D[ i * n + j ] = std::rand() / double(RAND_MAX);
               Dt[j * n + i ] = D[i * n + j ];
               sum           += D[i * n + j ];
          }
          Dt[ i * n + i ] = D[ i * n + i ] = sum * 1.1;
     }

     // A = D^T * D
     mat_mul(n, Dt, D, A);

     // b = D^T * c
     mat_mul(n, Dt, c, b);

     // copy from double to AD<double>
     for(i = 0; i < n; i++)
     {     a_b[i] = b[i];
          for(j = 0; j < n; j++)
               a_A[ i * n + j ] = A[ i * n + j ];
     }

     // ---------------------------------------------------------------------
     // Record the function f : b -> x
     // ---------------------------------------------------------------------
     // Make b the independent variable vector
     Independent(a_b);

     // Solve A * x = b using conjugate gradient method
     double epsilon = 1e-7;
     for(i = 0; i < n; i++)
          a_x[i] = AD<double>(0);
     size_t m = n + 1;
     size_t k = conjugate_gradient(m, epsilon, a_A, a_b, a_x);

     // create f : b -> x and stop tape recording
     CppAD::ADFun<double> f(a_b, a_x);

     // ---------------------------------------------------------------------
     // Check for correctness
     // ---------------------------------------------------------------------

     // conjugate gradient should converge within n iterations
     ok &= (k <= n);

     // accuracy to which we expect values to agree
     double delta = 10. * epsilon * std::sqrt( double(n) );

     // copy x from AD<double> to double
     for(i = 0; i < n; i++)
          x[i] = Value( a_x[i] );

     // check c = A * x
     mat_mul(n, A, x, c);
     for(i = 0; i < n; i++)
          ok &= NearEqual(c[i] , b[i],  delta , delta);

     // forward computation of partials w.r.t. b[0]
     vector<double> db(n), dx(n);
     for(j = 0; j < n; j++)
          db[j] = 0.;
     db[0] = 1.;

     // check db = A * dx
     delta = 5. * delta;
     dx = f.Forward(1, db);
     mat_mul(n, A, dx, c);
     for(i = 0; i < n; i++)
          ok   &= NearEqual(c[i], db[i], delta, delta);

     return ok;
}

Input File: example/sparse/conj_grad.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.4: Enable Use of Eigen Linear Algebra Package with CppAD

10.2.4.a: Syntax
# include <cppad/example/cppad_eigen.hpp>

10.2.4.b: Purpose
Enables the use of the 2.2.3: eigen linear algebra package with the type AD<Base> ; see custom scalar types (https://eigen.tuxfamily.org/dox/TopicCustomizing_CustomScalar.html) .

10.2.4.c: Example
The files 10.2.4.2: eigen_array.cpp and 10.2.4.3: eigen_det.cpp contain an example and test of this include file. They return true if they succeed and false otherwise.

10.2.4.d: Include Files
The file cppad_eigen.hpp includes both <cppad/cppad.hpp> and <Eigen/Core>. The file 10.2.4.1: eigen_plugin.hpp defines value_type in the Eigen matrix class so its vectors are 8.9: simple vectors (not necessary for eigen-3.3.3 and later).

# define EIGEN_MATRIXBASE_PLUGIN <cppad/example/eigen_plugin.hpp>
# include <Eigen/Core>
# include <cppad/cppad.hpp>

10.2.4.e: Eigen NumTraits
Eigen needs the following definitions to work properly with AD<Base> scalars:
namespace Eigen {
     template <class Base> struct NumTraits< CppAD::AD<Base> >
     {     // type that corresponds to the real part of an AD<Base> value
          typedef CppAD::AD<Base>   Real;
          // type for AD<Base> operations that result in non-integer values
          typedef CppAD::AD<Base>   NonInteger;
          //  type to use for numeric literals such as "2" or "0.5".
          typedef CppAD::AD<Base>   Literal;
          // type for nested value inside an AD<Base> expression tree
          typedef CppAD::AD<Base>   Nested;

          enum {
               // does not support complex Base types
               IsComplex             = 0 ,
               // does not support integer Base types
               IsInteger             = 0 ,
               // only support signed Base types
               IsSigned              = 1 ,
               // must initialize an AD<Base> object
               RequireInitialization = 1 ,
               // computational cost of the corresponding operations
               ReadCost              = 1 ,
               AddCost               = 2 ,
               MulCost               = 2
          };

          // machine epsilon with type of real part of x
          // (use assumption that Base is not complex)
          static CppAD::AD<Base> epsilon(void)
          {     return CppAD::numeric_limits< CppAD::AD<Base> >::epsilon(); }

          // relaxed version of machine epsilon for comparison of different
          // operations that should result in the same value
          static CppAD::AD<Base> dummy_precision(void)
          {     return 100. *
                    CppAD::numeric_limits< CppAD::AD<Base> >::epsilon();
          }

          // minimum normalized positive value
          static CppAD::AD<Base> lowest(void)
          {     return CppAD::numeric_limits< CppAD::AD<Base> >::min(); }

          // maximum finite value
          static CppAD::AD<Base> highest(void)
          {     return CppAD::numeric_limits< CppAD::AD<Base> >::max(); }

          // number of decimal digits that can be represented without change.
          static int digits10(void)
          {     return CppAD::numeric_limits< CppAD::AD<Base> >::digits10; }
     };
}

10.2.4.f: CppAD Namespace
Eigen also needs the following definitions to work properly with AD<Base> scalars:
namespace CppAD {
          // functions that return references
          template <class Base> const AD<Base>& conj(const AD<Base>& x)
          {     return x; }
          template <class Base> const AD<Base>& real(const AD<Base>& x)
          {     return x; }

          // functions that return values (note abs is defined by cppad.hpp)
          template <class Base> AD<Base> imag(const AD<Base>& x)
          {     return CppAD::AD<Base>(0.); }
          template <class Base> AD<Base> abs2(const AD<Base>& x)
          {     return x * x; }
}

Input File: cppad/example/cppad_eigen.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.4.1: Source Code for eigen_plugin.hpp

// Declaration needed, before eigen-3.3.3, so Eigen vector is a simple vector
typedef Scalar value_type;

Input File: cppad/example/eigen_plugin.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.4.2: Using Eigen Arrays: Example and Test
# include <cppad/cppad.hpp>
# include <cppad/example/cppad_eigen.hpp>
# include <cppad/speed/det_by_minor.hpp>
# include <Eigen/Dense>

bool eigen_array(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     using Eigen::Matrix;
     using Eigen::Dynamic;
     //
     typedef Matrix< AD<double> , Dynamic, 1 > a_vector;
     //
     // some temporary indices
     size_t i, j;

     // domain and range space vectors
     size_t n  = 10, m = n;
     a_vector a_x(n), a_y(m);

     // set and declare independent variables and start tape recording
     for(j = 0; j < n; j++)
          a_x[j] = double(1 + j);
     CppAD::Independent(a_x);

     // evaluate a component wise function
     a_y = a_x.array() + a_x.array().sin();

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(a_x, a_y);

     // compute the derivative of y w.r.t x using CppAD
     CPPAD_TESTVECTOR(double) x(n);
     for(j = 0; j < n; j++)
          x[j] = double(j) + 1.0 / double(j+1);
     CPPAD_TESTVECTOR(double) jac = f.Jacobian(x);

     // check Jacobian
     double eps = 100. * CppAD::numeric_limits<double>::epsilon();
     for(i = 0; i < m; i++)
     {     for(j = 0; j < n; j++)
          {     double check = 1.0 + cos(x[i]);
               if( i != j )
                    check = 0.0;
               ok &= NearEqual(jac[i * n + j], check, eps, eps);
          }
     }

     return ok;
}

Input File: example/general/eigen_array.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.4.3: Using Eigen To Compute Determinant: Example and Test
# include <cppad/example/cppad_eigen.hpp>
# include <cppad/speed/det_by_minor.hpp>
# include <Eigen/Dense>

bool eigen_det(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     using Eigen::Matrix;
     using Eigen::Dynamic;
     //
     typedef Matrix< double     , Dynamic, Dynamic > matrix;
     typedef Matrix< AD<double> , Dynamic, Dynamic > a_matrix;
     //
     typedef Matrix< double ,     Dynamic , 1>       vector;
     typedef Matrix< AD<double> , Dynamic , 1>       a_vector;
     // some temporary indices
     size_t i, j;

     // domain and range space vectors
     size_t size = 3, n  = size * size, m = 1;
     a_vector a_x(n), a_y(m);
     vector x(n);

     // set and declare independent variables and start tape recording
     for(i = 0; i < size; i++)
     {     for(j = 0; j < size; j++)
          {     // lower triangular matrix
               a_x[i * size + j] = x[i * size + j] = 0.0;
               if( j <= i )
                    a_x[i * size + j] = x[i * size + j] = double(1 + i + j);
          }
     }
     CppAD::Independent(a_x);

     // copy independent variable vector to a matrix
     a_matrix a_X(size, size);
     matrix X(size, size);
     for(i = 0; i < size; i++)
     {     for(j = 0; j < size; j++)
          {     X(i, j)   = x[i * size + j];
               // If we used a_X(i, j) = X(i, j), a_X would not depend on a_x.
               a_X(i, j) = a_x[i * size + j];
          }
     }

     // Compute the log of determinant of X
     a_y[0] = log( a_X.determinant() );

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(a_x, a_y);

     // check function value
     double eps = 100. * CppAD::numeric_limits<double>::epsilon();
     CppAD::det_by_minor<double> det(size);
     ok &= NearEqual(Value(a_y[0]) , log(det(x)), eps, eps);

     // compute the derivative of y w.r.t x using CppAD
     vector jac = f.Jacobian(x);

     // check the derivative using the formula
     // d/dX log(det(X)) = transpose( inv(X) )
     matrix inv_X = X.inverse();
     for(i = 0; i < size; i++)
     {     for(j = 0; j < size; j++)
               ok &= NearEqual(jac[i * size + j], inv_X(j, i), eps, eps);
     }

     return ok;
}

Input File: example/general/eigen_det.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.5: Gradient of Determinant Using Expansion by Minors: Example and Test
// Complex examples should suppress conversion warnings
# include <cppad/wno_conversion.hpp>

# include <cppad/cppad.hpp>
# include <cppad/speed/det_by_minor.hpp>
# include <complex>

typedef std::complex<double>     Complex;
typedef CppAD::AD<Complex>       ADComplex;
typedef CPPAD_TESTVECTOR(ADComplex)   ADVector;

// ----------------------------------------------------------------------------

bool HesMinorDet(void)
{     bool ok = true;

     using namespace CppAD;

     size_t n = 2;

     // object for computing determinants
     det_by_minor<ADComplex> Det(n);

     // independent and dependent variable vectors
     CPPAD_TESTVECTOR(ADComplex)  X(n * n);
     CPPAD_TESTVECTOR(ADComplex)  D(1);

     // value of the independent variable
     size_t i;
     for(i = 0; i < n * n; i++)
          X[i] = Complex(int(i), -int(i));

     // set the independent variables
     Independent(X);

     // compute the determinant
     D[0] = Det(X);

     // create the function object
     ADFun<Complex> f(X, D);

     // argument value
     CPPAD_TESTVECTOR(Complex)     x( n * n );
     for(i = 0; i < n * n; i++)
          x[i] = Complex(2 * i, i);

     // first derivative of the determinant
     CPPAD_TESTVECTOR(Complex) H( n * n * n * n);
     H = f.Hessian(x, 0);

     /*
     f(x)     = x[0] * x[3] - x[1] * x[2]
     f'(x)    = ( x[3], -x[2], -x[1], x[0] )
     */
     Complex zero(0., 0.);
     Complex one(1., 0.);
     Complex Htrue[]  = {
          zero, zero, zero,  one,
          zero, zero, -one, zero,
          zero, -one, zero, zero,
           one, zero, zero, zero
     };
     for( i = 0; i < n*n*n*n; i++)
          ok &= Htrue[i] == H[i];

     return ok;

}

Input File: example/general/hes_minor_det.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.6: Gradient of Determinant Using LU Factorization: Example and Test
// Complex examples should suppress conversion warnings
# include <cppad/wno_conversion.hpp>

# include <cppad/cppad.hpp>
# include <cppad/speed/det_by_lu.hpp>

// The AD complex case is used by this example, so we must
// define specializations of LeqZero and AbsGeq for the AD<Complex> case
namespace CppAD {
     CPPAD_BOOL_BINARY( std::complex<double> ,  AbsGeq   )
     CPPAD_BOOL_UNARY(  std::complex<double> ,  LeqZero )
}


bool HesLuDet(void)
{     bool ok = true;

     using namespace CppAD;
     typedef std::complex<double> Complex;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     size_t n = 2;

     // object for computing determinants
     det_by_lu< AD<Complex> > Det(n);

     // independent and dependent variable vectors
     CPPAD_TESTVECTOR(AD<Complex>)  X(n * n);
     CPPAD_TESTVECTOR(AD<Complex>)  D(1);

     // value of the independent variable
     size_t i;
     for(i = 0; i < n * n; i++)
          X[i] = Complex(int(i), -int(i) );

     // set the independent variables
     Independent(X);

     D[0]  = Det(X);

     // create the function object
     ADFun<Complex> f(X, D);

     // argument value
     CPPAD_TESTVECTOR(Complex)     x( n * n );
     for(i = 0; i < n * n; i++)
          x[i] = Complex(2 * i, i);

     // first derivative of the determinant
     CPPAD_TESTVECTOR(Complex) H( n * n * n * n );
     H = f.Hessian(x, 0);

     /*
     f(x)     = x[0] * x[3] - x[1] * x[2]
     f'(x)    = ( x[3], -x[2], -x[1], x[0] )
     */
     Complex zero(0., 0.);
     Complex one(1., 0.);
     Complex Htrue[]  = {
          zero, zero, zero,  one,
          zero, zero, -one, zero,
          zero, -one, zero, zero,
           one, zero, zero, zero
     };
     for( i = 0; i < n*n*n*n; i++)
          ok &= NearEqual( Htrue[i], H[i], eps99 , eps99 );

     return ok;
}

Input File: example/general/hes_lu_det.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.7: Interfacing to C: Example and Test
# include <cppad/cppad.hpp>  // CppAD utilities
# include <cassert>        // assert macro

namespace { // Begin empty namespace
/*
Compute the value of a sum of Gaussians defined by a and evaluated at x
     y = sum_{i=0}^{n-1} a[3*i] * exp( - ( (x - a[3*i+1]) / a[3*i+2] )^2 )
where the floating point type is a template parameter
*/
template <class Float>
Float sumGauss(const Float &x, const CppAD::vector<Float> &a)
{
     // number of components in a
     size_t na = a.size();

     // number of Gaussians
     size_t n = na / 3;

     // check the restrictions on na
     assert( na == n * 3 );

     // declare temporaries used inside of loop
     Float ex, arg;

     // initialize sum
     Float y = 0.;

     // loop with respect to Gaussians
     size_t i;
     for(i = 0; i < n; i++)
     {
          arg =   (x - a[3*i+1]) / a[3*i+2];
          ex  =   exp(-arg * arg);
          y  +=   a[3*i] * ex;
     }
     return y;
}
/*
Create a C function interface that computes both
     y = sum_{i=0}^{n-1} a[3*i] * exp( - ( (x - a[3*i+1]) / a[3*i+2] )^2 )
and its derivative with respect to the parameter vector a.
*/
extern "C"
void sumGauss(float x, float a[], float *y, float dyda[], size_t na)
{     // Note that any simple vector could replace CppAD::vector;
     // for example, std::vector, std::valarray

     // check the restrictions on na
     assert( na % 3 == 0 );  // mod(na, 3) = 0

     // use the shorthand ADfloat for the type CppAD::AD<float>
     typedef CppAD::AD<float> ADfloat;

     // vector for independent variables
     CppAD::vector<ADfloat> A(na);      // used with template function above
     CppAD::vector<float>   acopy(na);  // used for derivative calculations

     // vector for the dependent variables (there is only one)
     CppAD::vector<ADfloat> Y(1);

     // copy the independent variables from C vector to CppAD vectors
     size_t i;
     for(i = 0; i < na; i++)
          A[i] = acopy[i] = a[i];

     // declare that A is the independent variable vector
     CppAD::Independent(A);

     // value of x as an ADfloat object
     ADfloat X = x;

     // Evaluate template version of sumGauss with ADfloat as the template
     // parameter. Set the dependent variable to the resulting value
     Y[0] = sumGauss(X, A);

     // create the AD function object F : A -> Y
     CppAD::ADFun<float> F(A, Y);

     // use Value to convert Y[0] to float and return y = F(a)
     *y = CppAD::Value(Y[0]);

     // evaluate the derivative F'(a)
     CppAD::vector<float> J(na);
     J = F.Jacobian(acopy);

     // return the value of dyda = F'(a) as a C vector
     for(i = 0; i < na; i++)
          dyda[i] = J[i];

     return;
}
/*
Link CppAD::NearEqual so we do not have to use namespace notation in Interface2C
*/
bool NearEqual(float x, float y, float r, float a)
{     return CppAD::NearEqual(x, y, r, a);
}

} // End empty namespace

bool Interface2C(void)
{     // This routine is intentionally coded as if it were a C routine
     // except for the fact that it uses the predefined type bool.
     bool ok = true;

     // declare variables
     float x, a[6], y, dyda[6], tmp[6];
     size_t na, i;

     // number of parameters (3 for each Gaussian)
     na = 6;

     // number of Gaussians: n  = na / 3;

     // value of x
     x = 1.;

     // value of the parameter vector a
     for(i = 0; i < na; i++)
          a[i] = (float) (i+1);

     // evaluate function and derivative
     sumGauss(x, a, &y, dyda, na);

     // compare dyda to central difference approximation for the derivative
     for(i = 0; i < na; i++)
     {     // local variables
          float small, ai, yp, ym, dy_da;

          // We assume that the type float has at least 7 digits of
          // precision, so we choose small to be about pow(10., -7./2.).
          small  = (float) 3e-4;

          // value of this component of a
          ai    = a[i];

          // evaluate F( a + small * ei )
          a[i]  = ai + small;
          sumGauss(x, a, &yp, tmp, na);

          // evaluate F( a - small * ei )
          a[i]  = ai - small;
          sumGauss(x, a, &ym, tmp, na);

          // evaluate the central difference approximation for the partial
          dy_da = (yp - ym) / (2 * small);

          // restore this component of a
          a[i]  = ai;

          ok   &= NearEqual(dyda[i], dy_da, small, small);
     }
     return ok;
}

Input File: example/general/interface2c.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.8: Gradient of Determinant Using Expansion by Minors: Example and Test
// Complex examples should suppress conversion warnings
# include <cppad/wno_conversion.hpp>

# include <cppad/cppad.hpp>
# include <cppad/speed/det_by_minor.hpp>
# include <complex>


typedef std::complex<double>     Complex;
typedef CppAD::AD<Complex>       ADComplex;
typedef CPPAD_TESTVECTOR(ADComplex)   ADVector;

// ----------------------------------------------------------------------------

bool JacMinorDet(void)
{     bool ok = true;

     using namespace CppAD;

     size_t n = 2;

     // object for computing determinant
     det_by_minor<ADComplex> Det(n);

     // independent and dependent variable vectors
     CPPAD_TESTVECTOR(ADComplex)  X(n * n);
     CPPAD_TESTVECTOR(ADComplex)  D(1);

     // value of the independent variable
     size_t i;
     for(i = 0; i < n * n; i++)
          X[i] = Complex(int(i), -int(i));

     // set the independent variables
     Independent(X);

     // compute the determinant
     D[0] = Det(X);

     // create the function object
     ADFun<Complex> f(X, D);

     // argument value
     CPPAD_TESTVECTOR(Complex)     x( n * n );
     for(i = 0; i < n * n; i++)
          x[i] = Complex(2 * i, i);

     // first derivative of the determinant
     CPPAD_TESTVECTOR(Complex) J( n * n );
     J = f.Jacobian(x);

     /*
     f(x)     = x[0] * x[3] - x[1] * x[2]
     f'(x)    = ( x[3], -x[2], -x[1], x[0] )
     */
     Complex Jtrue[] = { x[3], -x[2], -x[1], x[0] };
     for(i = 0; i < n * n; i++)
          ok &= Jtrue[i] == J[i];

     return ok;

}

Input File: example/general/jac_minor_det.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.9: Gradient of Determinant Using Lu Factorization: Example and Test
// Complex examples should suppress conversion warnings
# include <cppad/wno_conversion.hpp>

# include <cppad/cppad.hpp>
# include <cppad/speed/det_by_lu.hpp>

// The AD complex case is used by this example, so we must
// define specializations of LeqZero and AbsGeq for the AD<Complex> case
namespace CppAD {
     CPPAD_BOOL_BINARY( std::complex<double> ,  AbsGeq   )
     CPPAD_BOOL_UNARY(  std::complex<double> ,  LeqZero )
}

bool JacLuDet(void)
{     bool ok = true;
     using namespace CppAD;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     typedef std::complex<double> Complex;
     typedef AD<Complex>          ADComplex;

     size_t n = 2;

     // object for computing determinants
     det_by_lu<ADComplex> Det(n);

     // independent and dependent variable vectors
     CPPAD_TESTVECTOR(ADComplex)  X(n * n);
     CPPAD_TESTVECTOR(ADComplex)  D(1);

     // value of the independent variable
     size_t i;
     for(i = 0; i < n * n; i++)
          X[i] = Complex(int(i), -int(i));

     // set the independent variables
     Independent(X);

     // compute the determinant
     D[0]  = Det(X);

     // create the function object
     ADFun<Complex> f(X, D);

     // argument value
     CPPAD_TESTVECTOR(Complex)     x( n * n );
     for(i = 0; i < n * n; i++)
          x[i] = Complex(2 * i, i);

     // first derivative of the determinant
     CPPAD_TESTVECTOR(Complex) J( n * n );
     J = f.Jacobian(x);

     /*
     f(x)     = x[0] * x[3] - x[1] * x[2]
     f'(x)    = ( x[3], -x[2], -x[1], x[0] )
     */
     Complex Jtrue[]  = { x[3], -x[2], -x[1], x[0] };
     for( i = 0; i < n*n; i++)
          ok &= NearEqual( Jtrue[i], J[i], eps99 , eps99 );

     return ok;
}

Input File: example/general/jac_lu_det.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.10: Using Multiple Levels of AD

10.2.10.a: Background
If f is an ADFun<Base> object, the vectors returned by 5.3: f.Forward , and 5.4: f.Reverse , have values of type Base and not AD<Base> . This reflects the fact that operations used to calculate these function values are not recorded by the tape corresponding to AD<Base> operations.
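
As a minimal illustration of this point (the function name background_sketch is ours, not part of the library), the first order forward sweep below returns plain double values, so none of its operations are recorded on the AD<double> tape:

# include <cppad/cppad.hpp>

bool background_sketch(void)
{     using CppAD::AD;
     size_t n = 1;

     // record the operation sequence for y = x * x
     CPPAD_TESTVECTOR( AD<double> ) ax(n), ay(n);
     ax[0] = 2.;
     CppAD::Independent(ax);
     ay[0] = ax[0] * ax[0];
     CppAD::ADFun<double> f(ax, ay);

     // these vectors have elements of type double, not AD<double>
     CPPAD_TESTVECTOR(double) x(n), dx(n), dy(n);
     x[0]  = 2.;
     f.Forward(0, x);             // zero order forward sweep at x
     dx[0] = 1.;
     dy    = f.Forward(1, dx);    // dy[0] = f'(x) * dx = 2 * x = 4

     return dy[0] == 4.;
}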

10.2.10.b: Motivation
Suppose that you use derivatives of one or more inner functions as part of the operations needed to compute an outer function. For example, the derivatives returned by f.Forward might be used as part of Taylor's method for solving ordinary differential equations. In addition, we might want to differentiate the solution of a differential equation with respect to parameters in the equation. This can be accomplished in the following way:
  1. The function defining the differential equation could be calculated using the class AD< AD<double> > .
  2. The operations during the calculation of Taylor's method could be done using the AD<double> class.
  3. Derivatives of the solution of the differential equation could then be calculated using the double class.


10.2.10.c: Procedure

10.2.10.c.a: First Start AD<double>
If some of the 12.4.h: parameters in the AD< AD<double> > recording depend on the 12.4.m: variables in the AD<double> recording, we must first declare these variables; i.e.,
     Independent(a1x)
where a1x is a 8.9: SimpleVector with elements of type AD<double> . This will start recording a new tape of operations performed using AD<double> class objects.

10.2.10.c.b: Start AD< AD<double> > Recording
The next step is to declare the independent variables using
     Independent(a2x)
where a2x is a 8.9: SimpleVector with elements of type AD< AD<double> > . This will start recording a new tape of operations performed using AD< AD<double> > class objects.

10.2.10.c.c: Inner Function
The next step is to calculate the inner function using AD< AD<double> > class objects. We then stop the recording using
     a1f.Dependent(a2x, a2y)
where a2y is a 8.9: SimpleVector with elements of type AD< AD<double> > and a1f is an ADFun< AD<double> > object.

10.2.10.c.d: Second Start AD<double>
If none of the 12.4.h: parameters in the AD< AD<double> > recording depend on the 12.4.m: variables in the AD<double> recording, it is preferred to delay declaring these variables until this point; i.e.,
     Independent(a1x)
where a1x is a 8.9: SimpleVector with elements of type AD<double> . This will start recording a new tape of operations performed using AD<double> class objects.

10.2.10.c.e: Outer Function
The next step is to calculate the outer function using AD<double> class objects. Note that derivatives of the inner function can be included in the calculation of the outer function using a1f . We then stop the recording of AD<double> operations using
     g.Dependent(a1x, a1y)
where a1y is a 8.9: SimpleVector with elements of type AD<double> and g is an ADFun<double> object.

10.2.10.c.f: Derivatives of Outer Function
The AD function object g can then be used to calculate the derivatives of the outer function.
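
Putting these steps together, the skeleton below is a minimal sketch of the procedure; the names match the text above, but it is illustrative rather than a tested example (see 10.2.10.1: mul_level.cpp below for a complete, tested version):

# include <cppad/cppad.hpp>

void procedure_skeleton(void)
{     using CppAD::AD;
     using CppAD::ADFun;
     size_t n = 2;

     // first start AD<double> recording
     CPPAD_TESTVECTOR( AD<double> ) a1x(n), a1y(n);
     a1x[0] = a1x[1] = 1.;
     CppAD::Independent(a1x);

     // start AD< AD<double> > recording; its parameters depend on a1x
     CPPAD_TESTVECTOR( AD< AD<double> > ) a2x(n), a2y(1);
     a2x[0] = a1x[0];
     a2x[1] = a1x[1];
     CppAD::Independent(a2x);

     // inner function, then stop the AD< AD<double> > recording
     a2y[0] = a2x[0] * a2x[1];
     ADFun< AD<double> > a1f;
     a1f.Dependent(a2x, a2y);

     // outer function: use derivatives of the inner function via a1f
     a1y = a1f.Jacobian(a1x);      // a1y has size 1 * n

     // stop the AD<double> recording
     ADFun<double> g;
     g.Dependent(a1x, a1y);
     // g can now compute derivatives of the outer function
}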

10.2.10.d: Example
The files 10.2.10.1: mul_level.cpp and 10.2.10.2: change_param.cpp contain examples and tests of this procedure. They return true if they succeed and false otherwise. The file 10.2.12: mul_level_ode.cpp is a more complex example using multiple tapes.
Input File: omh/mul_level.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.10.1: Multiple Level of AD: Example and Test

10.2.10.1.a: Purpose
In this example, we use AD< AD<double> > (level two taping) to compute values of the function @(@ f : \B{R}^n \rightarrow \B{R} @)@ where @[@ f(x) = \frac{1}{2} \left( x_0^2 + \cdots + x_{n-1}^2 \right) @]@ We then use AD<double> (level one taping) to compute the directional derivative @[@ f^{(1)} (x) * v = x_0 v_0 + \cdots + x_{n-1} v_{n-1} @]@ where @(@ v \in \B{R}^n @)@. We then use double (no taping) to compute @[@ \frac{d}{dx} \left[ f^{(1)} (x) * v \right] = v @]@ This is only meant as an example of multiple levels of taping. The example 5.4.2.2: hes_times_dir.cpp computes the same value more efficiently by using the identity: @[@ \frac{d}{dx} \left[ f^{(1)} (x) * v \right] = f^{(2)} (x) * v @]@ The example 4.7.9.3.1: mul_level_adolc.cpp computes the same values using Adolc's type adouble and CppAD's type AD<adouble>.

10.2.10.1.b: Source


# include <cppad/cppad.hpp>

namespace {
     // f(x) = |x|^2 / 2 = .5 * ( x[0]^2 + ... + x[n-1]^2 )
     template <class Type>
     Type f(const CPPAD_TESTVECTOR(Type)& x)
     {     Type sum;

          sum  = 0.;
          for(size_t i = 0; i < size_t(x.size()); i++)
               sum += x[i] * x[i];

          return .5 * sum;
     }
}

bool mul_level(void)
{     bool ok = true;                          // initialize test result

     typedef CppAD::AD<double>   a1type;    // for one level of taping
     typedef CppAD::AD<a1type>    a2type;    // for two levels of taping
     size_t n = 5;                           // dimension for example
     size_t j;                               // a temporary index variable

     // 10 times machine epsilon
     double eps = 10. * std::numeric_limits<double>::epsilon();

     CPPAD_TESTVECTOR(double) x(n);
     CPPAD_TESTVECTOR(a1type)  a1x(n), a1v(n), a1dy(1) ;
     CPPAD_TESTVECTOR(a2type)  a2x(n), a2y(1);

     // Values for the independent variables while taping the function f(x)
     for(j = 0; j < n; j++)
          a2x[j] = a1x[j] = x[j] = double(j);
     // Declare the independent variable for taping f(x)
     CppAD::Independent(a2x);

     // Use AD< AD<double> > to tape the evaluation of f(x)
     a2y[0] = f(a2x);

     // Declare a1f as the corresponding ADFun< AD<double> >
     // (make sure we do not run zero order forward during constructor)
     CppAD::ADFun<a1type> a1f;
     a1f.Dependent(a2x, a2y);

     // Values for the independent variables while taping f'(x) * v
     // Declare the independent variable for taping f'(x) * v
     // (Note we did not have to tape the creation of a1f.)
     CppAD::Independent(a1x);

     // set the argument value x for computing f'(x) * v
     a1f.Forward(0, a1x);
     // compute f'(x) * v
     for(j = 0; j < n; j++)
          a1v[j] = double(n - j);
     a1dy = a1f.Forward(1, a1v);

     // declare g as ADFun<double> function corresponding to f'(x) * v
     CppAD::ADFun<double> g;
     g.Dependent(a1x, a1dy);

     // optimize out operations not necessary for function f'(x) * v
     g.optimize();

     // Evaluate f'(x) * v
     g.Forward(0, x);

     // compute d/dx of f'(x) * v ; note that f''(x) * v = v
     CPPAD_TESTVECTOR(double) w(1);
     CPPAD_TESTVECTOR(double) dw(n);
     w[0] = 1.;
     dw   = g.Reverse(1, w);

     for(j = 0; j < n; j++)
          ok &= CppAD::NearEqual(dw[j], a1v[j], eps, eps);

     return ok;
}

Input File: example/general/mul_level.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.10.2: Computing a Jacobian With Constants that Change

10.2.10.2.a: Purpose
In this example we use two levels of taping so that a derivative can have constant parameters that can be changed. To be specific, we consider the function @(@ f : \B{R}^2 \rightarrow \B{R}^2 @)@ @[@ f(x) = p \left( \begin{array}{c} \sin( x_0 ) \\ \sin( x_1 ) \end{array} \right) @]@ where @(@ p \in \B{R} @)@ is a parameter. The Jacobian of this function is @[@ g(x,p) = p \left( \begin{array}{cc} \cos( x_0 ) & 0 \\ 0 & \cos( x_1 ) \end{array} \right) @]@ In this example we use two levels of AD to avoid computing the partial of @(@ f(x) @)@ with respect to @(@ p @)@, but still allow for the evaluation of @(@ g(x, p) @)@ at different values of @(@ p @)@.
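
The example source is in the input file below. As a rough sketch of how such a computation can be organized (the function change_param_sketch and its check are ours, not the example source), one can record f on the AD< AD<double> > tape with p a level one variable, record the Jacobian of f on the AD<double> tape, and so obtain g as a function of both x and p:

# include <cppad/cppad.hpp>

bool change_param_sketch(void)
{     using CppAD::AD;
     size_t n = 2;

     // level one independent vector holds x followed by p
     CPPAD_TESTVECTOR( AD<double> ) a1xp(n + 1);
     a1xp[0] = a1xp[1] = 0.;     // x
     a1xp[2] = 1.;               // p
     CppAD::Independent(a1xp);
     AD<double> a1p = a1xp[2];

     // level two: x is the independent vector, p enters as a parameter
     CPPAD_TESTVECTOR( AD< AD<double> > ) a2x(n), a2y(n);
     a2x[0] = a1xp[0];
     a2x[1] = a1xp[1];
     CppAD::Independent(a2x);
     a2y[0] = a1p * sin( a2x[0] );
     a2y[1] = a1p * sin( a2x[1] );
     CppAD::ADFun< AD<double> > a1f;
     a1f.Dependent(a2x, a2y);

     // record the Jacobian of f on the level one tape
     CPPAD_TESTVECTOR( AD<double> ) a1x(n), a1jac(n * n);
     a1x[0] = a1xp[0];
     a1x[1] = a1xp[1];
     a1jac  = a1f.Jacobian(a1x);

     // g : (x, p) -> f'(x), so p can change without retaping
     CppAD::ADFun<double> g;
     g.Dependent(a1xp, a1jac);

     // evaluate g at x = (0, 0) and a new value p = 2
     CPPAD_TESTVECTOR(double) xp(n + 1), jac(n * n);
     xp[0] = xp[1] = 0.;
     xp[2] = 2.;
     jac   = g.Forward(0, xp);

     // at x = 0 the Jacobian is p times the identity matrix
     double eps = 10. * std::numeric_limits<double>::epsilon();
     bool ok = true;
     ok &= CppAD::NearEqual(jac[0 * n + 0], 2., eps, eps);
     ok &= CppAD::NearEqual(jac[1 * n + 1], 2., eps, eps);
     return ok;
}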
Input File: example/general/change_param.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.11: A Stiff Ode: Example and Test
Define @(@ x : \B{R} \rightarrow \B{R}^2 @)@ by @[@ \begin{array}{rcl} x_0 (0) & = & 1 \\ x_1 (0) & = & 0 \\ x_0^\prime (t) & = & - a_0 x_0 (t) \\ x_1^\prime (t) & = & + a_0 x_0 (t) - a_1 x_1 (t) \end{array} @]@ If @(@ a_0 \gg a_1 > 0 @)@, this is a stiff Ode and the analytic solution is @[@ \begin{array}{rcl} x_0 (t) & = & \exp( - a_0 t ) \\ x_1 (t) & = & a_0 [ \exp( - a_1 t ) - \exp( - a_0 t ) ] / ( a_0 - a_1 ) \end{array} @]@ The example tests Rosen34 using the relations above:

# include <cppad/cppad.hpp>

// To print the comparison, change the 0 to 1 on the next line.
# define CPPAD_ODE_STIFF_PRINT 0

namespace {
     // --------------------------------------------------------------
     class Fun {
     private:
          CPPAD_TESTVECTOR(double) a;
     public:
          // constructor
          Fun(const CPPAD_TESTVECTOR(double)& a_) : a(a_)
          { }
          // compute f(t, x)
          void Ode(
               const double                    &t,
               const CPPAD_TESTVECTOR(double) &x,
               CPPAD_TESTVECTOR(double)       &f)
          {     f[0]  = - a[0] * x[0];
               f[1]  = + a[0] * x[0] - a[1] * x[1];
          }
          // compute partial of f(t, x) w.r.t. t
          void Ode_ind(
               const double                    &t,
               const CPPAD_TESTVECTOR(double) &x,
               CPPAD_TESTVECTOR(double)       &f_t)
          {     f_t[0] = 0.;
               f_t[1] = 0.;
          }
          // compute partial of f(t, x) w.r.t. x
          void Ode_dep(
               const double                    &t,
               const CPPAD_TESTVECTOR(double) &x,
               CPPAD_TESTVECTOR(double)       &f_x)
          {     f_x[0] = -a[0];
               f_x[1] = 0.;
               f_x[2] = +a[0];
               f_x[3] = -a[1];
          }
     };
     // --------------------------------------------------------------
     class RungeMethod {
     private:
          Fun F;
     public:
          // constructor
          RungeMethod(const CPPAD_TESTVECTOR(double) &a_) : F(a_)
          { }
          void step(
               double                     ta ,
               double                     tb ,
               CPPAD_TESTVECTOR(double) &xa ,
               CPPAD_TESTVECTOR(double) &xb ,
               CPPAD_TESTVECTOR(double) &eb )
          {     xb = CppAD::Runge45(F, 1, ta, tb, xa, eb);
          }
          size_t order(void)
          {     return 5; }
     };
     class RosenMethod {
     private:
          Fun F;
     public:
          // constructor
          RosenMethod(const CPPAD_TESTVECTOR(double) &a_) : F(a_)
          { }
          void step(
               double                     ta ,
               double                     tb ,
               CPPAD_TESTVECTOR(double) &xa ,
               CPPAD_TESTVECTOR(double) &xb ,
               CPPAD_TESTVECTOR(double) &eb )
          {     xb = CppAD::Rosen34(F, 1, ta, tb, xa, eb);
          }
          size_t order(void)
          {     return 4; }
     };
}

bool OdeStiff(void)
{     bool ok = true;     // initial return value

     CPPAD_TESTVECTOR(double) a(2);
     a[0] = 1e3;
     a[1] = 1.;
     RosenMethod rosen(a);
     RungeMethod runge(a);
     Fun          gear(a);

     CPPAD_TESTVECTOR(double) xi(2);
     xi[0] = 1.;
     xi[1] = 0.;

     CPPAD_TESTVECTOR(double) eabs(2);
     eabs[0] = 1e-6;
     eabs[1] = 1e-6;

     CPPAD_TESTVECTOR(double) ef(2);
     CPPAD_TESTVECTOR(double) xf(2);
     CPPAD_TESTVECTOR(double) maxabs(2);
     size_t                nstep;

     size_t k;
     for(k = 0; k < 3; k++)
     {
          size_t M    = 5;
          double ti   = 0.;
          double tf   = 1.;
          double smin = 1e-7;
          double sini = 1e-7;
          double smax = 1.;
          double scur = .5;
          double erel = 0.;

          if( k == 0 )
          {     xf = CppAD::OdeErrControl(rosen, ti, tf,
               xi, smin, smax, scur, eabs, erel, ef, maxabs, nstep);
          }
          else if( k == 1 )
          {     xf = CppAD::OdeErrControl(runge, ti, tf,
               xi, smin, smax, scur, eabs, erel, ef, maxabs, nstep);
          }
          else if( k == 2 )
          {     xf = CppAD::OdeGearControl(gear, M, ti, tf,
               xi, smin, smax, sini, eabs, erel, ef, maxabs, nstep);
          }
          double x0 = exp(-a[0]*tf);
          ok &= CppAD::NearEqual(x0, xf[0], 0., eabs[0]);
          ok &= CppAD::NearEqual(0., ef[0], 0., eabs[0]);

          double x1 = a[0] *
               (exp(-a[1]*tf) - exp(-a[0]*tf))/(a[0] - a[1]);
          ok &= CppAD::NearEqual(x1, xf[1], 0., eabs[1]);
          ok &= CppAD::NearEqual(0., ef[1], 0., eabs[1]);
# if CPPAD_ODE_STIFF_PRINT
          const char* method[]={ "Rosen34", "Runge45", "Gear5" };
          std::cout << std::endl;
          std::cout << "method     = " << method[k] << std::endl;
          std::cout << "nstep      = " << nstep  << std::endl;
          std::cout << "x0         = " << x0 << std::endl;
          std::cout << "xf[0]      = " << xf[0] << std::endl;
          std::cout << "x0 - xf[0] = " << x0 - xf[0] << std::endl;
          std::cout << "ef[0]      = " << ef[0] << std::endl;
          std::cout << "x1         = " << x1 << std::endl;
          std::cout << "xf[1]      = " << xf[1] << std::endl;
          std::cout << "x1 - xf[1] = " << x1 - xf[1] << std::endl;
          std::cout << "ef[1]      = " << ef[1] << std::endl;
# endif
     }

     return ok;
}

Input File: example/general/ode_stiff.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.12: Taylor's Ode Solver: A Multi-Level AD Example and Test

10.2.12.a: Purpose
This is a realistic example using two levels of AD; see 10.2.10: mul_level . The first level uses AD<double> to tape the solution of an ordinary differential equation. This solution is then differentiated with respect to a parameter vector. The second level uses AD< AD<double> > to take derivatives during the solution of the differential equation. These derivatives are used in the application of Taylor's method to the solution of the ODE. The example 10.2.13: mul_level_adolc_ode.cpp computes the same values using Adolc's type adouble and CppAD's type AD<adouble>. The example 10.2.14: ode_taylor.cpp is a simpler application of Taylor's method for solving an ODE.

10.2.12.b: ODE
For this example the ODE is defined by the function @(@ h : \B{R}^n \times \B{R}^n \rightarrow \B{R}^n @)@ where @[@ h[ x, y(t, x) ] = \left( \begin{array}{c} x_0 \\ x_1 y_0 (t, x) \\ \vdots \\ x_{n-1} y_{n-2} (t, x) \end{array} \right) = \left( \begin{array}{c} \partial_t y_0 (t , x) \\ \partial_t y_1 (t , x) \\ \vdots \\ \partial_t y_{n-1} (t , x) \end{array} \right) @]@ and the initial condition @(@ y(0, x) = 0 @)@. The value of @(@ x @)@ is fixed during the solution of the ODE and the function @(@ g : \B{R}^n \rightarrow \B{R}^n @)@ is used to define the ODE where @[@ g(y) = \left( \begin{array}{c} x_0 \\ x_1 y_0 \\ \vdots \\ x_{n-1} y_{n-2} \end{array} \right) @]@

10.2.12.c: ODE Solution
The solution for this example can be calculated by starting with the first row and then using the solution for the first row to solve the second and so on. Doing this we obtain @[@ y(t, x ) = \left( \begin{array}{c} x_0 t \\ x_1 x_0 t^2 / 2 \\ \vdots \\ x_{n-1} x_{n-2} \ldots x_0 t^n / n ! \end{array} \right) @]@

10.2.12.d: Derivative of ODE Solution
Differentiating the solution above with respect to the parameter vector @(@ x @)@, we notice that @[@ \partial_x y(t, x ) = \left( \begin{array}{cccc} y_0 (t,x) / x_0 & 0 & \cdots & 0 \\ y_1 (t,x) / x_0 & y_1 (t,x) / x_1 & 0 & \vdots \\ \vdots & \vdots & \ddots & 0 \\ y_{n-1} (t,x) / x_0 & y_{n-1} (t,x) / x_1 & \cdots & y_{n-1} (t,x) / x_{n-1} \end{array} \right) @]@ This holds because @(@ y_i (t, x) @)@ is a monomial containing each of @(@ x_0 , \ldots , x_i @)@ exactly once; hence its partial with respect to @(@ x_j @)@ is @(@ y_i (t, x) / x_j @)@ for @(@ j \leq i @)@ and zero for @(@ j > i @)@.

10.2.12.e: Taylor's Method Using AD
A p-th order Taylor method for approximating the solution of an ordinary differential equation is @[@ y(t + \Delta t , x) \approx \sum_{k=0}^p \partial_t^k y(t , x ) \frac{ \Delta t^k }{ k ! } = y^{(0)} (t , x ) + y^{(1)} (t , x ) \Delta t + \cdots + y^{(p)} (t , x ) \Delta t^p @]@ where the Taylor coefficients @(@ y^{(k)} (t, x) @)@ are defined by @[@ y^{(k)} (t, x) = \partial_t^k y(t , x ) / k ! @]@ We define the function @(@ z(t, x) @)@ by the equation @[@ z ( t , x ) = g[ y ( t , x ) ] = h [ x , y( t , x ) ] @]@ It follows that @[@ \begin{array}{rcl} \partial_t y(t, x) & = & z (t , x) \\ \partial_t^{k+1} y(t , x) & = & \partial_t^k z (t , x) \\ y^{(k+1)} ( t , x) & = & z^{(k)} (t, x) / (k+1) \end{array} @]@ where @(@ z^{(k)} (t, x) @)@ is the k-th order Taylor coefficient for @(@ z(t, x) @)@. In the example below, the Taylor coefficients @[@ y^{(0)} (t , x) , \ldots , y^{(k)} ( t , x ) @]@ are used to calculate the Taylor coefficient @(@ z^{(k)} ( t , x ) @)@ which in turn gives the value for @(@ y^{(k+1)} ( t , x ) @)@.

10.2.12.f: Source


# include <cppad/cppad.hpp>

// =========================================================================
// define types for each level
namespace { // BEGIN empty namespace
typedef CppAD::AD<double>   a1type;
typedef CppAD::AD<a1type>   a2type;

// -------------------------------------------------------------------------
// class definition for C++ function object that defines ODE
class Ode {
private:
     // copy of x that is set by the constructor and used by g(y)
     CPPAD_TESTVECTOR(a1type) a1x_;
public:
     // constructor
     Ode(const CPPAD_TESTVECTOR(a1type)& a1x) : a1x_(a1x)
     { }
     // the function g(y) is evaluated with two levels of taping
     CPPAD_TESTVECTOR(a2type) operator()
     ( const CPPAD_TESTVECTOR(a2type)& a2y) const
     {     size_t n = a2y.size();
          CPPAD_TESTVECTOR(a2type) a2g(n);
          size_t i;
          a2g[0] = a1x_[0];
          for(i = 1; i < n; i++)
               a2g[i] = a1x_[i] * a2y[i-1];

          return a2g;
     }
};

// -------------------------------------------------------------------------
// Routine that uses Taylor's method to solve ordinary differential equations
// and allows for algorithmic differentiation of the solution.
CPPAD_TESTVECTOR(a1type) taylor_ode(
     Ode                            G       ,  // function that defines the ODE
     size_t                         order   ,  // order of Taylor's method used
     size_t                         nstep   ,  // number of steps to take
     const a1type&                  a1dt    ,  // Delta t for each step
     const CPPAD_TESTVECTOR(a1type)& a1y_ini)  // y(t) at the initial time
{
     // some temporary indices
     size_t i, k, ell;

     // number of variables in the ODE
     size_t n = a1y_ini.size();

     // copies of y and g(y) with two levels of taping
     CPPAD_TESTVECTOR(a2type)   a2y(n), a2z(n);

     // y, y^{(k)} , z^{(k)}, and y^{(k+1)}
     CPPAD_TESTVECTOR(a1type)  a1y(n), a1y_k(n), a1z_k(n), a1y_kp(n);

     // initialize y
     for(i = 0; i < n; i++)
          a1y[i] = a1y_ini[i];

     // loop with respect to each step of Taylors method
     for(ell = 0; ell < nstep; ell++)
     {     // prepare to compute derivatives using a1type
          for(i = 0; i < n; i++)
               a2y[i] = a1y[i];
          CppAD::Independent(a2y);

          // evaluate ODE in a2type
          a2z = G(a2y);

          // define differentiable version of a1g: y -> z
          // that computes its derivatives using a1type objects
          CppAD::ADFun<a1type> a1g(a2y, a2z);

          // Use Taylor's method to take a step
          a1y_k            = a1y;     // initialize y^{(k)}
          a1type   a1dt_kp = a1dt;  // initialize dt^(k+1)
          for(k = 0; k <= order; k++)
          {     // evaluate k-th order Taylor coefficient of y
               a1z_k = a1g.Forward(k, a1y_k);

               for(i = 0; i < n; i++)
               {     // convert to the (k+1)-Taylor coefficient for y
                    a1y_kp[i] = a1z_k[i] / a1type(k + 1);

                    // add the term for this Taylor coefficient
                    // to the solution for y(t, x)
                    a1y[i]    += a1y_kp[i] * a1dt_kp;
               }
               // next power of t
               a1dt_kp *= a1dt;
               // next Taylor coefficient
               a1y_k   = a1y_kp;
          }
     }
     return a1y;
}
} // END empty namespace
// ==========================================================================
// Routine that tests algorithmic differentiation of solutions computed
// by the routine taylor_ode.
bool mul_level_ode(void)
{     bool ok = true;
     double eps = 100. * std::numeric_limits<double>::epsilon();

     // number of components in differential equation
     size_t n = 4;

     // some temporary indices
     size_t i, j;

     // parameter vector in both double and a1type
     CPPAD_TESTVECTOR(double)  x(n);
     CPPAD_TESTVECTOR(a1type)  a1x(n);
     for(i = 0; i < n; i++)
          a1x[i] = x[i] = double(i + 1);

     // declare the parameters as the independent variable
     CppAD::Independent(a1x);

     // arguments to taylor_ode
     Ode G(a1x);                // function that defines the ODE
     size_t   order = n;      // order of Taylor's method used
     size_t   nstep = 2;      // number of steps to take
     a1type   a1dt  = double(1.);     // Delta t for each step
     // value of y(t, x) at the initial time
     CPPAD_TESTVECTOR(a1type) a1y_ini(n);
     for(i = 0; i < n; i++)
          a1y_ini[i] = 0.;

     // integrate the differential equation
     CPPAD_TESTVECTOR(a1type) a1y_final(n);
     a1y_final = taylor_ode(G, order, nstep, a1dt, a1y_ini);

     // define differentiable function object f : x -> y_final
     // that computes its derivatives in double
     CppAD::ADFun<double> f(a1x, a1y_final);

     // check function values
     double check = 1.;
     double t     = double(nstep) * Value(a1dt);
     for(i = 0; i < n; i++)
     {     check *= x[i] * t / double(i + 1);
          ok &= CppAD::NearEqual(Value(a1y_final[i]), check, eps, eps);
     }

     // evaluate the Jacobian of f at x
     CPPAD_TESTVECTOR(double) jac ( f.Jacobian(x) );
     // There appears to be a bug in g++ version 4.4.2 because it generates
     // a warning for the equivalent form
     // CPPAD_TESTVECTOR(double) jac = f.Jacobian(x);

     // check Jacobian
     for(i = 0; i < n; i++)
     {     for(j = 0; j < n; j++)
          {     double jac_ij = jac[i * n + j];
               if( i < j )
                    check = 0.;
               else     check = Value( a1y_final[i] ) / x[j];
               ok &= CppAD::NearEqual(jac_ij, check, eps, eps);
          }
     }
     return ok;
}

Input File: example/general/mul_level_ode.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.13: Taylor's Ode Solver: A Multi-Level Adolc Example and Test

10.2.13.a: Purpose
This is a realistic example using two levels of AD; see 10.2.10: mul_level . The first level uses Adolc's adouble type to tape the solution of an ordinary differential equation. This solution is then differentiated with respect to a parameter vector. The second level uses CppAD's type AD<adouble> to take derivatives during the solution of the differential equation. These derivatives are used in the application of Taylor's method to the solution of the ODE. The example 10.2.12: mul_level_ode.cpp computes the same values using AD<double> and AD< AD<double> >. The example 10.2.14: ode_taylor.cpp is a simpler application of Taylor's method for solving an ODE.

10.2.13.b: ODE
For this example the ODE is defined by the function @(@ h : \B{R}^n \times \B{R}^n \rightarrow \B{R}^n @)@ where @[@ h[ x, y(t, x) ] = \left( \begin{array}{c} x_0 \\ x_1 y_0 (t, x) \\ \vdots \\ x_{n-1} y_{n-2} (t, x) \end{array} \right) = \left( \begin{array}{c} \partial_t y_0 (t , x) \\ \partial_t y_1 (t , x) \\ \vdots \\ \partial_t y_{n-1} (t , x) \end{array} \right) @]@ and the initial condition @(@ y(0, x) = 0 @)@. The value of @(@ x @)@ is fixed during the solution of the ODE and the function @(@ g : \B{R}^n \rightarrow \B{R}^n @)@ is used to define the ODE where @[@ g(y) = \left( \begin{array}{c} x_0 \\ x_1 y_0 \\ \vdots \\ x_{n-1} y_{n-2} \end{array} \right) @]@

10.2.13.c: ODE Solution
The solution for this example can be calculated by starting with the first row and then using the solution for the first row to solve the second and so on. Doing this we obtain @[@ y(t, x ) = \left( \begin{array}{c} x_0 t \\ x_1 x_0 t^2 / 2 \\ \vdots \\ x_{n-1} x_{n-2} \ldots x_0 t^n / n ! \end{array} \right) @]@

10.2.13.d: Derivative of ODE Solution
Differentiating the solution above with respect to the parameter vector @(@ x @)@, we notice that @[@ \partial_x y(t, x ) = \left( \begin{array}{cccc} y_0 (t,x) / x_0 & 0 & \cdots & 0 \\ y_1 (t,x) / x_0 & y_1 (t,x) / x_1 & 0 & \vdots \\ \vdots & \vdots & \ddots & 0 \\ y_{n-1} (t,x) / x_0 & y_{n-1} (t,x) / x_1 & \cdots & y_{n-1} (t,x) / x_{n-1} \end{array} \right) @]@ This holds because @(@ y_i (t, x) @)@ is a monomial containing each of @(@ x_0 , \ldots , x_i @)@ exactly once; hence its partial with respect to @(@ x_j @)@ is @(@ y_i (t, x) / x_j @)@ for @(@ j \leq i @)@ and zero for @(@ j > i @)@.

10.2.13.e: Taylor's Method Using AD
A p-th order Taylor method for approximating the solution of an ordinary differential equation is @[@ y(t + \Delta t , x) \approx \sum_{k=0}^p \partial_t^k y(t , x ) \frac{ \Delta t^k }{ k ! } = y^{(0)} (t , x ) + y^{(1)} (t , x ) \Delta t + \cdots + y^{(p)} (t , x ) \Delta t^p @]@ where the Taylor coefficients @(@ y^{(k)} (t, x) @)@ are defined by @[@ y^{(k)} (t, x) = \partial_t^k y(t , x ) / k ! @]@ We define the function @(@ z(t, x) @)@ by the equation @[@ z ( t , x ) = g[ y ( t , x ) ] = h [ x , y( t , x ) ] @]@ It follows that @[@ \begin{array}{rcl} \partial_t y(t, x) & = & z (t , x) \\ \partial_t^{k+1} y(t , x) & = & \partial_t^k z (t , x) \\ y^{(k+1)} ( t , x) & = & z^{(k)} (t, x) / (k+1) \end{array} @]@ where @(@ z^{(k)} (t, x) @)@ is the k-th order Taylor coefficient for @(@ z(t, x) @)@. In the example below, the Taylor coefficients @[@ y^{(0)} (t , x) , \ldots , y^{(k)} ( t , x ) @]@ are used to calculate the Taylor coefficient @(@ z^{(k)} ( t , x ) @)@ which in turn gives the value for @(@ y^{(k+1)} ( t , x ) @)@.

10.2.13.f: base_adolc.hpp
The file 4.7.9.3: base_adolc.hpp implements the 4.7: Base type requirements where Base is Adolc's adouble type.

10.2.13.g: Memory Management
Adolc uses raw memory arrays that depend on the number of dependent and independent variables. The 8.23: thread_alloc memory management utilities 8.23.12: create_array and 8.23.13: delete_array are used to manage this memory allocation; see the sketch below.
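As a minimal sketch of this allocation pattern (the routine name thread_alloc_sketch is illustrative and not part of the example below):

# include <cppad/utility/thread_alloc.hpp>

void thread_alloc_sketch(size_t n)
{     using CppAD::thread_alloc;

     // capacity is set to the number of doubles actually allocated (>= n)
     size_t  capacity;
     double* x = thread_alloc::create_array<double>(n, capacity);

     for(size_t i = 0; i < n; i++)
          x[i] = 0.;

     // return the memory so this thread can use it for other allocations
     thread_alloc::delete_array(x);
}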

10.2.13.h: Configuration Requirement
This example will be compiled and tested provided that adolc_prefix is specified on the 2.2: cmake command line.

10.2.13.i: Source

// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//

# include <adolc/adouble.h>
# include <adolc/taping.h>
# include <adolc/drivers/drivers.h>

// definitions not in Adolc distribution and required to use CppAD::AD<adouble>
# include <cppad/example/base_adolc.hpp>

# include <cppad/cppad.hpp>
// ==========================================================================
namespace { // BEGIN empty namespace
// define types for each level
typedef adouble           a1type;
typedef CppAD::AD<a1type> a2type;

// -------------------------------------------------------------------------
// class definition for C++ function object that defines ODE
class Ode {
private:
     // copy of x that is set by the constructor and used by g(y)
     CPPAD_TESTVECTOR(a1type) a1x_;
public:
     // constructor
     Ode(const CPPAD_TESTVECTOR(a1type)& a1x) : a1x_(a1x)
     { }
     // the function g(y) is evaluated with two levels of taping
     CPPAD_TESTVECTOR(a2type) operator()
     ( const CPPAD_TESTVECTOR(a2type)& a2y) const
     {     size_t n = a2y.size();
          CPPAD_TESTVECTOR(a2type) a2g(n);
          size_t i;
          a2g[0] = a1x_[0];
          for(i = 1; i < n; i++)
               a2g[i] = a1x_[i] * a2y[i-1];

          return a2g;
     }
};

// -------------------------------------------------------------------------
// Routine that uses Taylor's method to solve ordinary differential equations
// and allows for algorithmic differentiation of the solution.
CPPAD_TESTVECTOR(a1type) taylor_ode_adolc(
     Ode                            G       ,  // function that defines the ODE
     size_t                         order   ,  // order of Taylor's method used
     size_t                         nstep   ,  // number of steps to take
     const a1type                   &a1dt   ,  // Delta t for each step
     const CPPAD_TESTVECTOR(a1type) &a1y_ini)  // y(t) at the initial time
{
     // some temporary indices
     size_t i, k, ell;

     // number of variables in the ODE
     size_t n = a1y_ini.size();

     // copies of y and g(y) with two levels of taping
     CPPAD_TESTVECTOR(a2type)   a2y(n), Z(n);

     // y, y^{(k)} , z^{(k)}, and y^{(k+1)}
     CPPAD_TESTVECTOR(a1type)  a1y(n), a1y_k(n), a1z_k(n), a1y_kp(n);

     // initialize y
     for(i = 0; i < n; i++)
          a1y[i] = a1y_ini[i];

     // loop with respect to each step of Taylors method
     for(ell = 0; ell < nstep; ell++)
     {     // prepare to compute derivatives using a1type
          for(i = 0; i < n; i++)
               a2y[i] = a1y[i];
          CppAD::Independent(a2y);

          // evaluate ODE using a2type
          Z = G(a2y);

          // define differentiable version of a1g : y -> z
          // that computes its derivatives using a1type
          CppAD::ADFun<a1type> a1g(a2y, Z);

          // Use Taylor's method to take a step
          a1y_k            = a1y;     // initialize y^{(k)}
          a1type dt_kp = a1dt;    // initialize dt^(k+1)
          for(k = 0; k <= order; k++)
          {     // evaluate k-th order Taylor coefficient of y
               a1z_k = a1g.Forward(k, a1y_k);

               for(i = 0; i < n; i++)
               {     // convert to the (k+1)-Taylor coefficient for y
                    a1y_kp[i] = a1z_k[i] / a1type(k + 1);

                    // add the term for this Taylor coefficient
                    // to the solution for y(t, x)
                    a1y[i]    += a1y_kp[i] * dt_kp;
               }
               // next power of t
               dt_kp *= a1dt;
               // next Taylor coefficient
               a1y_k   = a1y_kp;
          }
     }
     return a1y;
}
} // END empty namespace
// ==========================================================================
// Routine that tests algorithmic differentiation of solutions computed
// by the routine taylor_ode.
bool mul_level_adolc_ode(void)
{     bool ok = true;
     double eps = 100. * std::numeric_limits<double>::epsilon();

     // number of components in differential equation
     size_t n = 4;

     // some temporary indices
     size_t i, j;

     // set up for thread_alloc memory allocator
     using CppAD::thread_alloc; // the allocator
     size_t capacity;           // capacity of an allocation

     // the vector x with length n (or greater) in double
     double* x = thread_alloc::create_array<double>(n, capacity);

     // the vector x with length n in a1type
     CPPAD_TESTVECTOR(a1type) a1x(n);
     for(i = 0; i < n; i++)
          a1x[i] = x[i] = double(i + 1);

     // declare the parameters as the independent variable
     int tag = 0;                     // Adolc setup
     int keep = 1;
     trace_on(tag, keep);
     for(i = 0; i < n; i++)
          a1x[i] <<= double(i + 1);  // a1x is independent for adouble type

     // arguments to taylor_ode_adolc
     Ode G(a1x);                // function that defines the ODE
     size_t   order = n;      // order of Taylor's method used
     size_t   nstep = 2;      // number of steps to take
     a1type   a1dt  = 1.;     // Delta t for each step
     // value of y(t, x) at the initial time
     CPPAD_TESTVECTOR(a1type) a1y_ini(n);
     for(i = 0; i < n; i++)
          a1y_ini[i] = 0.;

     // integrate the differential equation
     CPPAD_TESTVECTOR(a1type) a1y_final(n);
     a1y_final = taylor_ode_adolc(G, order, nstep, a1dt, a1y_ini);

     // declare the differentiable function f : x -> y_final
     // (corresponding to the tape of adouble operations)
     double* y_final = thread_alloc::create_array<double>(n, capacity);
     for(i = 0; i < n; i++)
          a1y_final[i] >>= y_final[i];
     trace_off();

     // check function values
     double check = 1.;
     double t     = nstep * a1dt.value();
     for(i = 0; i < n; i++)
     {     check *= x[i] * t / double(i + 1);
          ok &= CppAD::NearEqual(y_final[i], check, eps, eps);
     }

     // memory where Jacobian will be returned
     double* jac_ = thread_alloc::create_array<double>(n * n, capacity);
     double** jac = thread_alloc::create_array<double*>(n, capacity);
     for(i = 0; i < n; i++)
          jac[i] = jac_ + i * n;

     // evaluate the Jacobian of the taped function at x
     size_t m = n;              // # dependent variables
     jacobian(tag, int(m), int(n), x, jac);

     // check Jacobian
     for(i = 0; i < n; i++)
     {     for(j = 0; j < n; j++)
          {     if( i < j )
                    check = 0.;
               else     check = y_final[i] / x[j];
               ok &= CppAD::NearEqual(jac[i][j], check, eps, eps);
          }
     }

     // make memory available for other use by this thread
     thread_alloc::delete_array(x);
     thread_alloc::delete_array(y_final);
     thread_alloc::delete_array(jac_);
     thread_alloc::delete_array(jac);
     return ok;
}

Input File: example/general/mul_level_adolc_ode.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.14: Taylor's Ode Solver: An Example and Test

10.2.14.a: Purpose
This example solves an ordinary differential equation using Taylor's method; i.e., @[@ Z(t + \Delta t) \approx Z^{(0)} (t) + \frac{ Z^{(1)} (t) }{ 1 !} \Delta t + \cdots + \frac{ Z^{(p)} (t) }{ p !} ( \Delta t )^p @]@

10.2.14.b: ODE
The ODE is defined by the function @(@ H : \B{R}^n \rightarrow \B{R}^n @)@, which for this example is given by @[@ Z^{(1)} (t) = H[ Z(t) ] = \left( \begin{array}{c} 1 \\ Z_1 (t) \\ \vdots \\ Z_{n-1} (t) \end{array} \right) @]@ and the initial condition is @(@ Z(0) = 0 @)@.

10.2.14.c: ODE Solution
The solution for this example can be calculated by starting with the first row and then using the solution for the first row to solve the second and so on. Doing this we obtain @[@ Z(t) = \left( \begin{array}{c} t \\ t^2 / 2 \\ \vdots \\ t^n / n ! \end{array} \right) @]@

10.2.14.d: Forward Mode
Given the Taylor coefficients for @(@ k = 0 , \ldots , K @)@ @[@ z^{(k)} = \frac{ Z^{(k)} (t) }{ k ! } @]@ we note that @[@ \begin{array}{rcl} Z^{(1)} (t) & = & H( z^{(0)} + z^{(1)} t + \cdots + z^{(K)} t^K ) + O( t^{K+1} ) \\ & = & h^{(0)} + h^{(1)} t + \cdots + h^{(K)} t^K + O( t^{K+1} ) \end{array} @]@ where @(@ h^{(k)} @)@ is the k-th order Taylor coefficient for @(@ H( Z(t) ) @)@. Taking K-th order derivatives of both sides we obtain @[@ \begin{array}{rcl} Z^{(K+1)} (t) & = & K ! h^{(K)} \\ z^{(K+1)} & = & h^{(K)} / ( K + 1 ) \end{array} @]@ The code below uses this relationship to implement Taylor's method for approximating the solution of an ODE.

# include <cppad/cppad.hpp>

// =========================================================================
// define types for each level
namespace { // BEGIN empty namespace
     using CppAD::AD;

     CPPAD_TESTVECTOR( AD<double> ) ode(
          const CPPAD_TESTVECTOR( AD<double> )& Z )
     {     size_t n = Z.size();
          CPPAD_TESTVECTOR( AD<double> ) y(n);
          y[0] = 1;
          for(size_t k = 1; k < n; k++)
               y[k] = Z[k-1];
          return y;
     }

}

// -------------------------------------------------------------------------
// Example that uses Taylor's method to solve ordinary differential equations
bool ode_taylor(void)
{     // initialize the return value as true
     bool ok = true;

     // some temporary indices
     size_t i, j, k;

     // The ODE does not depend on the argument values,
     // so we only need to tape once; note also that ode does not depend on t
     size_t n = 5;    // number of independent and dependent variables
     CPPAD_TESTVECTOR( AD<double> ) a_x(n), a_y(n);
     CppAD::Independent( a_x );
     a_y = ode(a_x);
     CppAD::ADFun<double> H(a_x, a_y);

     // initialize the solution vector at time zero
     CPPAD_TESTVECTOR( double ) z(n);
     for(j = 0; j < n; j++)
          z[j] = 0.0;

     size_t order   = n;   // order of the Taylor method
     size_t n_step  = 4;   // number of time steps
     double dt      = 0.5; // step size in time
     // Taylor coefficients of order k
     CPPAD_TESTVECTOR( double ) hk(n), zk(n);

     // loop with respect to each step of Taylor's method
     for(size_t i_step = 0; i_step < n_step; i_step++)
     {     // Use Taylor's method to take a step
          zk           = z;     // initialize z^{(k)}  for k = 0
          double dt_kp = dt;    // initialize dt^(k+1) for k = 0
          for(k = 0; k < order; k++)
          {     // evaluate k-th order Taylor coefficient of H
               hk = H.Forward(k, zk);

               for(j = 0; j < n; j++)
               {     // convert to (k+1)-Taylor coefficient for z
                    zk[j] = hk[j] / double(k + 1);

                    // add the term for this Taylor coefficient
                    // to the solution for Z(t)
                    z[j] += zk[j] * dt_kp;
               }
               // next power of t
               dt_kp *= dt;
          }
     }

     // check solution of the ODE,
     // Taylor's method should have no truncation error for this case
     double eps   = 100. * std::numeric_limits<double>::epsilon();
     double check = 1.;
     double t     = double(n_step) * dt;
     for(i = 0; i < n; i++)
     {     check *= t / double(i + 1);
          ok &= CppAD::NearEqual(z[i], check, eps, eps);
     }

     return ok;
}

Input File: example/general/ode_taylor.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.2.15: Example Differentiating a Stack Machine Interpreter

# include <cstring>
# include <cstddef>
# include <cstdlib>
# include <cctype>
# include <cassert>
# include <stack>

# include <cppad/cppad.hpp>

namespace {
// Begin empty namespace ------------------------------------------------

bool is_number( const std::string &s )
{     char ch = s[0];
     bool number = (std::strchr("0123456789.", ch) != 0);
     return number;
}
bool is_binary( const std::string &s )
{     char ch = s[0];
     bool binary = (std::strchr("+-*/", ch) != 0);
     return binary;
}
bool is_variable( const std::string &s )
{     char ch = s[0];
     bool variable = ('a' <= ch) & (ch <= 'z');
     return variable;
}

void StackMachine(
     std::stack< std::string >          &token_stack  ,
     CppAD::vector< CppAD::AD<double> > &variable     )
{     using std::string;
     using std::stack;

     using CppAD::AD;

     stack< AD<double> > value_stack;
     string              token;
     AD<double>          value_one;
     AD<double>          value_two;

     while( ! token_stack.empty() )
     {     string s = token_stack.top();
          token_stack.pop();

          if( is_number(s) )
          {     value_one = std::atof( s.c_str() );
               value_stack.push( value_one );
          }
          else if( is_variable(s) )
          {     value_one = variable[ size_t(s[0]) - size_t('a') ];
               value_stack.push( value_one );
          }
          else if( is_binary(s) )
          {     assert( value_stack.size() >= 2 );
               value_one = value_stack.top();
               value_stack.pop();
               value_two = value_stack.top();
               value_stack.pop();

               switch( s[0] )
               {
                    case '+':
                    value_stack.push(value_one + value_two);
                    break;

                    case '-':
                    value_stack.push(value_one - value_two);
                    break;

                    case '*':
                    value_stack.push(value_one * value_two);
                    break;

                    case '/':
                    value_stack.push(value_one / value_two);
                    break;

                    default:
                    assert(0);
               }
          }
          else if( s[0] == '=' )
          {     assert( value_stack.size() >= 1 );
               assert( token_stack.size() >= 1 );
               //
               s = token_stack.top();
               token_stack.pop();
               //
               assert( is_variable( s ) );
               value_one = value_stack.top();
               value_stack.pop();
               //
               variable[ size_t(s[0]) - size_t('a') ] = value_one;
          }
          else assert(0);
     }
     return;
}

// End empty namespace -------------------------------------------------------
}

bool StackMachine(void)
{     bool ok = true;

     using std::string;
     using std::stack;

     using CppAD::AD;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
     using CppAD::vector;

     // The user's program in the stack machine language
     const char *program[] = {
          "1.0", "a", "+", "=", "b",  // b = a + 1
          "2.0", "b", "*", "=", "c",  // c = b * 2
          "3.0", "c", "-", "=", "d",  // d = c - 3
          "4.0", "d", "/", "=", "e"   // e = d / 4
     };
     size_t n_program = sizeof( program ) / sizeof( program[0] );

     // put the program in the token stack
     stack< string > token_stack;
     size_t i = n_program;
     while(i--)
          token_stack.push( program[i] );

     // domain space vector
     size_t n = 1;
     vector< AD<double> > X(n);
     X[0] = 0.;

     // declare independent variables and start tape recording
     CppAD::Independent(X);

     // x[0] corresponds to a in the stack machine
     vector< AD<double> > variable(26);
     variable[0] = X[0];

     // calculate the results of the program
     StackMachine( token_stack , variable);

     // range space vector
     size_t m = 4;
     vector< AD<double> > Y(m);
     Y[0] = variable[1];   // b = a + 1
     Y[1] = variable[2];   // c = (a + 1) * 2
     Y[2] = variable[3];   // d = (a + 1) * 2 - 3
     Y[3] = variable[4];   // e = ( (a + 1) * 2 - 3 ) / 4

     // create f : X -> Y and stop tape recording
     CppAD::ADFun<double> f(X, Y);

     // use forward mode to evaluate function at different argument value
     size_t p = 0;
     vector<double> x(n);
     vector<double> y(m);
     x[0] = 1.;
     y    = f.Forward(p, x);

     // check function values
     ok &= (y[0] == x[0] + 1.);
     ok &= (y[1] == (x[0] + 1.) * 2.);
     ok &= (y[2] == (x[0] + 1.) * 2. - 3.);
     ok &= (y[3] == ( (x[0] + 1.) * 2. - 3.) / 4.);

     // Use forward mode (because x is shorter than y) to calculate Jacobian
     p = 1;
     vector<double> dx(n);
     vector<double> dy(m);
     dx[0] = 1.;
     dy    = f.Forward(p, dx);
     ok   &= NearEqual(dy[0], 1., eps99, eps99);
     ok   &= NearEqual(dy[1], 2., eps99, eps99);
     ok   &= NearEqual(dy[2], 2., eps99, eps99);
     ok   &= NearEqual(dy[3], .5, eps99, eps99);

     // Use Jacobian routine (which automatically decides which mode to use)
     dy = f.Jacobian(x);
     ok   &= NearEqual(dy[0], 1., eps99, eps99);
     ok   &= NearEqual(dy[1], 2., eps99, eps99);
     ok   &= NearEqual(dy[2], 2., eps99, eps99);
     ok   &= NearEqual(dy[3], .5, eps99, eps99);

     return ok;
}

Input File: example/general/stack_machine.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.3: Utility Routines used by CppAD Examples

10.3.a: Contents
10.3.1: general.cpp CppAD Examples and Tests
10.3.2: speed_example.cpp Run the Speed Examples
10.3.3: lu_vec_ad.cpp Lu Factor and Solve with Recorded Pivoting

Input File: omh/example_list.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.3.1: CppAD Examples and Tests

10.3.1.a: Running Tests
To build this program and run its correctness tests see 2.3: cmake_check .

// CPPAD_HAS_* defines
# include <cppad/configure.hpp>

// system include files used for I/O
# include <iostream>

// C style asserts
# include <cassert>

// standard string
# include <string>

// memory utility
# include <cppad/utility/thread_alloc.hpp>

// test runner
# include <cppad/utility/test_boolofvoid.hpp>

// prototype external compiled tests (this line expected by bin/new_test.sh)
extern bool abort_recording(void);
extern bool fabs(void);
extern bool acosh(void);
extern bool acos(void);
extern bool ad_assign(void);
extern bool ad_ctor(void);
extern bool AddEq(void);
extern bool Add(void);
extern bool ad_fun(void);
extern bool ad_in_c(void);
extern bool ad_input(void);
extern bool ad_output(void);
extern bool asinh(void);
extern bool asin(void);
extern bool atan2(void);
extern bool atanh(void);
extern bool atan(void);
extern bool azmul(void);
extern bool base_require(void);
extern bool BenderQuad(void);
extern bool BoolFun(void);
extern bool capacity_order(void);
extern bool change_param(void);
extern bool check_for_nan(void);
extern bool compare_change(void);
extern bool Compare(void);
extern bool complex_poly(void);
extern bool CondExp(void);
extern bool Cosh(void);
extern bool Cos(void);
extern bool DivEq(void);
extern bool Div(void);
extern bool eigen_array(void);
extern bool eigen_det(void);
extern bool EqualOpSeq(void);
extern bool Erf(void);
extern bool expm1(void);
extern bool exp(void);
extern bool ForOne(void);
extern bool ForTwo(void);
extern bool forward_dir(void);
extern bool forward_order(void);
extern bool Forward(void);
extern bool fun_assign(void);
extern bool FunCheck(void);
extern bool HesLagrangian(void);
extern bool HesLuDet(void);
extern bool HesMinorDet(void);
extern bool Hessian(void);
extern bool HesTimesDir(void);
extern bool Independent(void);
extern bool Integer(void);
extern bool Interface2C(void);
extern bool interp_onetape(void);
extern bool interp_retape(void);
extern bool JacLuDet(void);
extern bool JacMinorDet(void);
extern bool Jacobian(void);
extern bool log10(void);
extern bool log1p(void);
extern bool log(void);
extern bool LuRatio(void);
extern bool LuVecADOk(void);
extern bool MulEq(void);
extern bool mul_level_adolc_ode(void);
extern bool mul_level_adolc(void);
extern bool mul_level_ode(void);
extern bool mul_level(void);
extern bool Mul(void);
extern bool NearEqualExt(void);
extern bool number_skip(void);
extern bool NumericType(void);
extern bool num_limits(void);
extern bool OdeStiff(void);
extern bool ode_taylor(void);
extern bool opt_val_hes(void);
extern bool ParVar(void);
extern bool Poly(void);
extern bool pow_int(void);
extern bool pow(void);
extern bool print_for(void);
extern bool reverse_any(void);
extern bool reverse_one(void);
extern bool reverse_three(void);
extern bool reverse_two(void);
extern bool RevOne(void);
extern bool RevTwo(void);
extern bool Rosen34(void);
extern bool runge_45_2(void);
extern bool seq_property(void);
extern bool sign(void);
extern bool Sinh(void);
extern bool Sin(void);
extern bool Sqrt(void);
extern bool StackMachine(void);
extern bool SubEq(void);
extern bool Sub(void);
extern bool Tanh(void);
extern bool Tan(void);
extern bool TapeIndex(void);
extern bool UnaryMinus(void);
extern bool UnaryPlus(void);
extern bool Value(void);
extern bool Var2Par(void);
extern bool vec_ad(void);

// main program that runs all the tests
int main(void)
{     std::string group = "example/general";
     size_t      width = 20;
     CppAD::test_boolofvoid Run(group, width);

     // This line is used by test_one.sh

     // run external compiled tests (this line expected by bin/new_test.sh)
     Run( abort_recording,   "abort_recording"  );
     Run( fabs,              "fabs"             );
     Run( acos,              "acos"             );
     Run( acosh,             "acosh"            );
     Run( ad_assign,         "ad_assign"        );
     Run( ad_ctor,           "ad_ctor"          );
     Run( Add,               "Add"              );
     Run( AddEq,             "AddEq"            );
     Run( ad_fun,            "ad_fun"           );
     Run( ad_in_c,           "ad_in_c"          );
     Run( ad_input,          "ad_input"         );
     Run( ad_output,         "ad_output"        );
     Run( asin,              "asin"             );
     Run( asinh,             "asinh"            );
     Run( atan2,             "atan2"            );
     Run( atan,              "atan"             );
     Run( atanh,             "atanh"            );
     Run( azmul,             "azmul"            );
     Run( BenderQuad,        "BenderQuad"       );
     Run( BoolFun,           "BoolFun"          );
     Run( capacity_order,    "capacity_order"   );
     Run( change_param,      "change_param"     );
     Run( compare_change,    "compare_change"   );
     Run( Compare,           "Compare"          );
     Run( complex_poly,      "complex_poly"     );
     Run( CondExp,           "CondExp"          );
     Run( Cos,               "Cos"              );
     Run( Cosh,              "Cosh"             );
     Run( Div,               "Div"              );
     Run( DivEq,             "DivEq"            );
     Run( EqualOpSeq,        "EqualOpSeq"       );
     Run( Erf,               "Erf"              );
     Run( exp,               "exp"              );
     Run( expm1,             "expm1"            );
     Run( ForOne,            "ForOne"           );
     Run( ForTwo,            "ForTwo"           );
     Run( forward_dir,       "forward_dir"      );
     Run( Forward,           "Forward"          );
     Run( forward_order,     "forward_order"    );
     Run( fun_assign,        "fun_assign"       );
     Run( FunCheck,          "FunCheck"         );
     Run( HesLagrangian,     "HesLagrangian"    );
     Run( HesLuDet,          "HesLuDet"         );
     Run( HesMinorDet,       "HesMinorDet"      );
     Run( Hessian,           "Hessian"          );
     Run( HesTimesDir,       "HesTimesDir"      );
     Run( Independent,       "Independent"      );
     Run( Integer,           "Integer"          );
     Run( Interface2C,       "Interface2C"      );
     Run( interp_onetape,    "interp_onetape"   );
     Run( interp_retape,     "interp_retape"    );
     Run( JacLuDet,          "JacLuDet"         );
     Run( JacMinorDet,       "JacMinorDet"      );
     Run( Jacobian,          "Jacobian"         );
     Run( log10,             "log10"            );
     Run( log1p,             "log1p"            );
     Run( log,               "log"              );
     Run( LuRatio,           "LuRatio"          );
     Run( LuVecADOk,         "LuVecADOk"        );
     Run( MulEq,             "MulEq"            );
     Run( mul_level,         "mul_level"        );
     Run( mul_level_ode,     "mul_level_ode"    );
     Run( Mul,               "Mul"              );
     Run( NearEqualExt,      "NearEqualExt"     );
     Run( number_skip,       "number_skip"      );
     Run( NumericType,       "NumericType"      );
     Run( num_limits,        "num_limits"       );
     Run( OdeStiff,          "OdeStiff"         );
     Run( ode_taylor,        "ode_taylor"       );
     Run( opt_val_hes,       "opt_val_hes"      );
     Run( ParVar,            "ParVar"           );
     Run( Poly,              "Poly"             );
     Run( pow_int,           "pow_int"          );
     Run( pow,               "pow"              );
     Run( reverse_any,       "reverse_any"      );
     Run( reverse_one,       "reverse_one"      );
     Run( reverse_three,     "reverse_three"    );
     Run( reverse_two,       "reverse_two"      );
     Run( RevOne,            "RevOne"           );
     Run( RevTwo,            "RevTwo"           );
     Run( Rosen34,           "Rosen34"          );
     Run( runge_45_2,        "runge_45_2"       );
     Run( seq_property,      "seq_property"     );
     Run( sign,              "sign"             );
     Run( Sinh,              "Sinh"             );
     Run( Sin,               "Sin"              );
     Run( Sqrt,              "Sqrt"             );
     Run( StackMachine,      "StackMachine"     );
     Run( SubEq,             "SubEq"            );
     Run( Sub,               "Sub"              );
     Run( Tanh,              "Tanh"             );
     Run( Tan,               "Tan"              );
     Run( TapeIndex,         "TapeIndex"        );
     Run( UnaryMinus,        "UnaryMinus"       );
     Run( UnaryPlus,         "UnaryPlus"        );
     Run( Value,             "Value"            );
     Run( Var2Par,           "Var2Par"          );
     Run( vec_ad,            "vec_ad"           );
# ifndef CPPAD_DEBUG_AND_RELEASE
     Run( check_for_nan,     "check_for_nan"    );
# endif
# if CPPAD_HAS_ADOLC
     Run( mul_level_adolc,      "mul_level_adolc"     );
     Run( mul_level_adolc_ode,  "mul_level_adolc_ode" );
# endif
# if CPPAD_HAS_EIGEN
     Run( eigen_array,       "eigen_array"      );
     Run( eigen_det,         "eigen_det"        );
# endif
     //
     // check for memory leak
     bool memory_ok = CppAD::thread_alloc::free_all();
     // print summary at end
     bool ok = Run.summary(memory_ok);
     //
     return static_cast<int>( ! ok );
}

Input File: example/general/general.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.3.2: Run the Speed Examples

10.3.2.a: Running Tests
To build this program and run its correctness tests see 2.3: cmake_check .

# include <cppad/cppad.hpp>

// various example routines
extern bool det_of_minor(void);
extern bool det_by_lu(void);
extern bool det_by_minor(void);
extern bool elapsed_seconds(void);
extern bool mat_sum_sq(void);
extern bool ode_evaluate(void);
extern bool sparse_hes_fun(void);
extern bool sparse_jac_fun(void);
extern bool speed_test(void);
extern bool time_test(void);

namespace {
     // function that runs one test
     size_t Run_ok_count    = 0;
     size_t Run_error_count = 0;
     const char* exception_list[] = {
          "elapsed_seconds",
          "speed_test",
          "time_test"
     };
     size_t n_exception = sizeof(exception_list) / sizeof(exception_list[0]);
     bool Run(bool TestOk(void), std::string name)
     {     bool ok               = true;
          std::streamsize width =  20;
          std::cout.width( width );
          std::cout.setf( std::ios_base::left );
          std::cout << name;
          bool exception = false;
          for(size_t i = 0; i < n_exception; i++)
               exception |= exception_list[i] == name;
          //
          ok &= name.size() < size_t(width);
          ok &= TestOk();
          if( ok )
          {     std::cout << "OK" << std::endl;
               Run_ok_count++;
          }
          else if ( exception )
          {     std::cout << "Error: perhaps too many other programs running";
               std::cout << std::endl;
               // no change to Run_ok_count
               ok = true;
          }
          else
          {     std::cout << "Error" << std::endl;
               Run_error_count++;
          }
          return ok;
     }
}

// main program that runs all the tests
int main(void)
{     bool ok = true;
     using std::cout;
     using std::endl;

     ok &= Run(det_of_minor,    "det_of_minor"    );
     ok &= Run(det_by_minor,    "det_by_minor"    );
     ok &= Run(det_by_lu,       "det_by_lu"       );
     ok &= Run(elapsed_seconds, "elapsed_seconds" );
     ok &= Run(mat_sum_sq,      "mat_sum_sq"      );
     ok &= Run(ode_evaluate,    "ode_evaluate"    );
     ok &= Run(sparse_hes_fun,  "sparse_hes_fun"  );
     ok &= Run(sparse_jac_fun,  "sparse_jac_fun"  );
     ok &= Run(speed_test,      "speed_test"      );
     ok &= Run(time_test,       "time_test"       );
     assert( ok || (Run_error_count > 0) );

     // check for memory leak in previous calculations
     if( ! CppAD::thread_alloc::free_all() )
     {     ok = false;
          cout << "Error: memroy leak detected" << endl;
     }

     if( ok )
     {     cout << "All " << int(Run_ok_count) << " tests passed ";
          cout << "(possibly excepting elapsed_seconds).";
     }
     else     cout << int(Run_error_count) << " tests failed.";
     cout << endl;


     return static_cast<int>( ! ok );
}

Input File: speed/example/example.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.3.3: Lu Factor and Solve with Recorded Pivoting

10.3.3.a: Syntax
int LuVecAD(
     size_t n,
     size_t m,
     VecAD<double> &Matrix,
     VecAD<double> &Rhs,
     VecAD<double> &Result,
     AD<double> &logdet)


10.3.3.b: Purpose
Solves the linear equation @[@ Matrix * Result = Rhs @]@ where Matrix is an @(@ n \times n @)@ matrix, Rhs is an @(@ n \times m @)@ matrix, and Result is an @(@ n \times m @)@ matrix.

The routine 8.14.1: LuSolve uses an arbitrary vector type, instead of 4.6: VecAD , to hold its elements. Because LuVecAD records its pivoting, the pivoting operations in an ADFun object corresponding to an LuVecAD solution will change to be optimal for the matrix being factored.

It is often the case that LuSolve, using a simple vector class with 8.9.b: elements of type double , is faster than LuVecAD; on the other hand, the corresponding 5: ADFun objects have a fixed set of pivoting operations.

10.3.3.c: Storage Convention
The matrices are stored in row major order. To be specific, if @(@ A @)@ contains the vector storage for an @(@ n \times m @)@ matrix, @(@ i @)@ is between zero and @(@ n-1 @)@, and @(@ j @)@ is between zero and @(@ m-1 @)@, @[@ A_{i,j} = A[ i * m + j ] @]@ (The length of @(@ A @)@ must be equal to @(@ n * m @)@.)
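For example, the following hypothetical helper (not part of LuVecAD) returns element @(@ (i, j) @)@ of an @(@ n \times m @)@ matrix stored in a VecAD<double> object using this convention; note that a VecAD element must be accessed with an AD index:

# include <cppad/cppad.hpp>

// sketch: return A(i, j) where A holds an n x m matrix in row major order
CppAD::AD<double> matrix_entry(
     CppAD::VecAD<double>& A, size_t i, size_t j, size_t m)
{     CppAD::AD<double> index = double( i * m + j );
     CppAD::AD<double> a_ij;
     a_ij = A[index];
     return a_ij;
}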

10.3.3.d: n
is the number of rows in Matrix , Rhs , and Result .

10.3.3.e: m
is the number of columns in Rhs and Result . It is ok for m to be zero; this is reasonable when you are only interested in the determinant of Matrix .

10.3.3.f: Matrix
On input, this is an @(@ n \times n @)@ matrix containing the variable coefficients for the equation we wish to solve. On output, the elements of Matrix have been overwritten and are not specified.

10.3.3.g: Rhs
On input, this is an @(@ n \times m @)@ matrix containing the right hand side for the equation we wish to solve. On output, the elements of Rhs have been overwritten and are not specified. If m is zero, Rhs is not used.

10.3.3.h: Result
On input, this is an @(@ n \times m @)@ matrix and the values of its elements do not matter. On output, the elements of Result contain the solution of the equation we wish to solve (unless the value returned by LuVecAD is equal to zero). If m is zero, Result is not used.

10.3.3.i: logdet
On input, the value of logdet does not matter. On output, it has been set to the log of the absolute value of the determinant of Matrix . To be more specific, if signdet is the value returned by LuVecAD, the determinant of Matrix is given by the formula @[@ det = signdet \exp( logdet ) @]@ This enables LuVecAD to use logs of absolute values.
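For example, under the conventions above the determinant can be recovered as follows (a fragment mirroring 10.3.3.1: lu_vec_ad_ok.cpp ; the variable names match the syntax above):

     AD<double> logdet;
     AD<double> signdet;
     signdet = LuVecAD(n, m, Matrix, Rhs, Result, logdet);
     AD<double> det = signdet * exp( logdet );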

10.3.3.j: Example
The file 10.3.3.1: lu_vec_ad_ok.cpp contains an example and test of LuVecAD. It returns true if it succeeds and false otherwise.
Input File: example/general/lu_vec_ad.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.3.3.1: Lu Factor and Solve With Recorded Pivoting: Example and Test

# include <cppad/cppad.hpp>
# include "lu_vec_ad.hpp"
# include <cppad/speed/det_by_minor.hpp>

bool LuVecADOk(void)
{     bool  ok = true;

     using namespace CppAD;
     typedef AD<double> ADdouble;
     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     size_t              n = 3;
     size_t              m = 2;
     double a1[] = {
          3., 0., 0., // (1,1) is first  pivot
          1., 2., 1., // (2,2) is second pivot
          1., 0., .5  // (3,3) is third  pivot
     };
     double a2[] = {
          1., 2., 1., // (1,2) is second pivot
          3., 0., 0., // (2,1) is first  pivot
          1., 0., .5  // (3,3) is third  pivot
     };
     double rhs[] = {
          1., 3.,
          2., 2.,
          3., 1.
     };

     VecAD<double>       Copy    (n * n);
     VecAD<double>       Rhs     (n * m);
     VecAD<double>       Result  (n * m);
     ADdouble            logdet;
     ADdouble            signdet;

     // routine for checking determinants using expansion by minors
     det_by_minor<ADdouble> Det(n);

     // matrix we are computing the determinant of
     CPPAD_TESTVECTOR(ADdouble) A(n * n);

     // dependent variable values
     CPPAD_TESTVECTOR(ADdouble) Y(1 + n * m);

     size_t  i;
     size_t  j;
     size_t  k;

     // Original matrix
     for(i = 0; i < n * n; i++)
          A[i] = a1[i];

     // right hand side
     for(j = 0; j < n; j++)
          for(k = 0; k < m; k++)
               Rhs[ j * m + k ] = rhs[ j * m + k ];

     // Declare independent variables
     Independent(A);

     // Copy the matrix
     ADdouble index(0);
     for(i = 0; i < n*n; i++)
     {     Copy[index] = A[i];
          index += 1.;
     }

     // Solve the equation
     signdet = LuVecAD(n, m, Copy, Rhs, Result, logdet);

     // Result is the first n * m dependent variables
     index = 0.;
     for(i = 0; i < n * m; i++)
     {     Y[i] = Result[index];
          index += 1.;
     }

     // Determinant is last component of the solution
     Y[ n * m ] = signdet * exp( logdet );

     // construct f: A -> Y
     ADFun<double> f(A, Y);

     // check determinant using minors routine
     ADdouble determinant = Det( A );
     ok &= NearEqual(Y[n * m], determinant, eps99, eps99);


     // Check solution of Rhs = A * Result
     double sum;
     for(k = 0; k < m; k++)
     {     for(i = 0; i < n; i++)
          {     sum = 0.;
               for(j = 0; j < n; j++)
                    sum += a1[i * n + j] * Value( Y[j * m + k] );
               ok &= NearEqual( rhs[i * m + k], sum, eps99, eps99);
          }
     }

     CPPAD_TESTVECTOR(double) y2(1 + n * m);
     CPPAD_TESTVECTOR(double) A2(n * n);
     for(i = 0; i < n * n; i++)
          A[i] = A2[i] = a2[i];


     y2          = f.Forward(0, A2);
     determinant = Det(A);
     ok &= NearEqual(y2[ n * m], Value(determinant), eps99, eps99);

     // Check solution of Rhs = A2 * Result
     for(k = 0; k < m; k++)
     {     for(i = 0; i < n; i++)
          {     sum = 0.;
               for(j = 0; j < n; j++)
                    sum += a2[i * n + j] * y2[j * m + k];
               ok &= NearEqual( rhs[i * m + k], sum, eps99, eps99);
          }
     }

     return ok;
}

Input File: example/general/lu_vec_ad_ok.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
10.4: List All (Except Deprecated) CppAD Examples
7.2.2: a11c_bthread.cpp A Simple Boost Thread Example and Test
7.2.1: a11c_openmp.cpp A Simple OpenMP Example and Test
7.2.3: a11c_pthread.cpp A Simple Parallel Pthread Example and Test
5.1.4.1: abort_recording.cpp Abort Current Recording: Example and Test
5.8.3.1: abs_eval.cpp abs_eval: Example and Test
5.8.3.2: abs_eval.hpp abs_eval Source Code
5.8.1.1: abs_get_started.cpp abs_normal Getting Started: Example and Test
5.8.6.1: abs_min_linear.cpp abs_min_linear: Example and Test
5.8.6.2: abs_min_linear.hpp abs_min_linear Source Code
5.8.10.1: abs_min_quad.cpp abs_min_quad: Example and Test
5.8.10.2: abs_min_quad.hpp abs_min_quad Source Code
4.4.2.1.1: acos.cpp The AD acos Function: Example and Test
4.4.2.15.1: acosh.cpp The AD acosh Function: Example and Test
4.2.1: ad_assign.cpp AD Assignment: Example and Test
4.1.1: ad_ctor.cpp AD Constructors: Example and Test
4.4.1.3.1: add.cpp AD Binary Addition: Example and Test
4.4.1.4.1: AddEq.cpp AD Compound Assignment Addition: Example and Test
10.2.1: ad_fun.cpp Creating Your Own Interface to an ADFun Object
10.2.2: ad_in_c.cpp Example and Test Linking CppAD to Languages Other than C++
4.3.4.1: ad_input.cpp AD Input Operator: Example and Test
4.3.5.1: ad_output.cpp AD Output Operator: Example and Test
4.4.2.2.1: asin.cpp The AD asin Function: Example and Test
4.4.2.16.1: asinh.cpp The AD asinh Function: Example and Test
4.4.3.1.1: atan2.cpp The AD atan2 Function: Example and Test
4.4.2.3.1: atan.cpp The AD atan Function: Example and Test
4.4.2.17.1: atanh.cpp The AD atanh Function: Example and Test
4.4.7.2.18: atomic_eigen_cholesky.cpp Atomic Eigen Cholesky Factorization: Example and Test
4.4.7.2.18.2: atomic_eigen_cholesky.hpp Atomic Eigen Cholesky Factorization Class
4.4.7.2.17: atomic_eigen_mat_inv.cpp Atomic Eigen Matrix Inverse: Example and Test
4.4.7.2.17.1: atomic_eigen_mat_inv.hpp Atomic Eigen Matrix Inversion Class
4.4.7.2.16: atomic_eigen_mat_mul.cpp Atomic Eigen Matrix Multiply: Example and Test
4.4.7.2.16.1: atomic_eigen_mat_mul.hpp Atomic Eigen Matrix Multiply Class
4.4.7.2.8.1: atomic_for_sparse_hes.cpp Atomic Forward Hessian Sparsity: Example and Test
4.4.7.2.6.1: atomic_for_sparse_jac.cpp Atomic Forward Jacobian Sparsity: Example and Test
4.4.7.2.4.1: atomic_forward.cpp Atomic Forward: Example and Test
4.4.7.2.11: atomic_get_started.cpp Getting Started with Atomic Operations: Example and Test
4.4.7.2.19: atomic_mat_mul.cpp User Atomic Matrix Multiply: Example and Test
4.4.7.2.19.1: atomic_mat_mul.hpp Matrix Multiply as an Atomic Operation
4.4.7.1.2: atomic_mul_level.cpp Atomic Operations and Multiple-Levels of AD: Example and Test
4.4.7.2.12: atomic_norm_sq.cpp Atomic Euclidean Norm Squared: Example and Test
4.4.7.2.13: atomic_reciprocal.cpp Reciprocal as an Atomic Operation: Example and Test
4.4.7.2.5.1: atomic_reverse.cpp Atomic Reverse: Example and Test
4.4.7.2.9.1: atomic_rev_sparse_hes.cpp Atomic Reverse Hessian Sparsity: Example and Test
4.4.7.2.7.1: atomic_rev_sparse_jac.cpp Atomic Reverse Jacobian Sparsity: Example and Test
4.4.7.2.14: atomic_set_sparsity.cpp Atomic Sparsity with Set Patterns: Example and Test
4.4.7.2.15: atomic_tangent.cpp Tan and Tanh as User Atomic Operations: Example and Test
4.4.3.3.1: azmul.cpp AD Absolute Zero Multiplication: Example and Test
4.7.9.3: base_adolc.hpp Enable use of AD<Base> where Base is Adolc's adouble Type
4.7.9.1: base_alloc.hpp Example AD<Base> Where Base Constructor Allocates Memory
4.7.9.6: base_complex.hpp Enable use of AD<Base> where Base is std::complex<double>
4.7.9.2: base_require.cpp Using a User Defined AD Base Type: Example and Test
12.10.1.1: bender_quad.cpp BenderQuad: Example and Test
4.5.3.1: bool_fun.cpp AD Boolean Functions: Example and Test
5.3.8.1: capacity_order.cpp Controlling Taylor Coefficient Memory Allocation: Example and Test
10.2.10.2: change_param.cpp Computing a Jacobian With Constants that Change
5.10.1: check_for_nan.cpp ADFun Checking For Nan: Example and Test
8.8.1: check_numeric_type.cpp The CheckNumericType Function: Example and Test
4.4.7.1.1: checkpoint.cpp Simple Checkpointing: Example and Test
4.4.7.1.4: checkpoint_extended_ode.cpp Checkpointing an Extended ODE Solver: Example and Test
4.4.7.1.3: checkpoint_ode.cpp Checkpointing an ODE Solver: Example and Test
8.10.1: check_simple_vector.cpp The CheckSimpleVector Function: Example and Test
2.2.2.3: colpack_hes.cpp ColPack: Sparse Hessian Example and Test
2.2.2.4: colpack_hessian.cpp ColPack: Sparse Hessian Example and Test
2.2.2.1: colpack_jac.cpp ColPack: Sparse Jacobian Example and Test
2.2.2.2: colpack_jacobian.cpp ColPack: Sparse Jacobian Example and Test
5.3.7.1: compare_change.cpp CompareChange and Re-Tape: Example and Test
4.5.1.1: compare.cpp AD Binary Comparison Operators: Example and Test
4.7.9.6.1: complex_poly.cpp Complex Polynomial: Example and Test
4.4.4.1: cond_exp.cpp Conditional Expressions: Example and Test
10.2.3: conj_grad.cpp Differentiate Conjugate Gradient Algorithm: Example and Test
4.4.2.4.1: cos.cpp The AD cos Function: Example and Test
4.4.2.5.1: cosh.cpp The AD cosh Function: Example and Test
10.2.4: cppad_eigen.hpp Enable Use of Eigen Linear Algebra Package with CppAD
8.22.1: cppad_vector.cpp CppAD::vector Template Class: Example and Test
5.5.9: dependency.cpp Computing Dependency: Example and Test
11.2.1.1: det_by_lu.cpp Determinant Using Lu Factorization: Example and Test
11.2.3.1: det_by_minor.cpp Determinant Using Expansion by Minors: Example and Test
11.2.2.1: det_of_minor.cpp Determinant of a Minor: Example and Test
4.4.1.3.4: div.cpp AD Binary Division: Example and Test
4.4.1.4.4: div_eq.cpp AD Compound Assignment Division: Example and Test
10.2.4.2: eigen_array.cpp Using Eigen Arrays: Example and Test
10.2.4.3: eigen_det.cpp Using Eigen To Compute Determinant: Example and Test
8.5.1.1: elapsed_seconds.cpp Elapsed Seconds: Example and Test
4.5.5.1: equal_op_seq.cpp EqualOpSeq: Example and Test
4.4.2.18.1: erf.cpp The AD erf Function: Example and Test
8.1.1: error_handler.cpp Replacing The CppAD Error Handler: Example and Test
4.4.2.6.1: exp.cpp The AD exp Function: Example and Test
4.4.2.19.1: expm1.cpp The AD expm1 Function: Example and Test
4.4.2.14.1: fabs.cpp AD Absolute Value Function: Example and Test
5.5.7.1: for_hes_sparsity.cpp Forward Mode Hessian Sparsity: Example and Test
5.5.1.1: for_jac_sparsity.cpp Forward Mode Jacobian Sparsity: Example and Test
5.2.3.1: for_one.cpp First Order Partial Driver: Example and Test
5.5.8.1: for_sparse_hes.cpp Forward Mode Hessian Sparsity: Example and Test
5.5.2.1: for_sparse_jac.cpp Forward Mode Jacobian Sparsity: Example and Test
5.2.5.1: for_two.cpp Subset of Second Order Partials: Example and Test
5.3.4.1: forward.cpp Forward Mode: Example and Test
5.3.5.1: forward_dir.cpp Forward Mode: Example and Test of Multiple Directions
5.3.4.2: forward_order.cpp Forward Mode: Example and Test of Multiple Orders
5.1.2.1: fun_assign.cpp ADFun Assignment: Example and Test
5.9.1: fun_check.cpp ADFun Check and Re-Tape: Example and Test
10.3.1: general.cpp CppAD Examples and Tests
10.1: get_started.cpp Getting Started Using CppAD to Compute Derivatives
7.2.8: harmonic.cpp Multi-Threading Harmonic Summation Example / Test
5.2.2.2: hes_lagrangian.cpp Hessian of Lagrangian and ADFun Default Constructor: Example and Test
10.2.6: hes_lu_det.cpp Gradient of Determinant Using LU Factorization: Example and Test
10.2.5: hes_minor_det.cpp Gradient of Determinant Using Expansion by Minors: Example and Test
5.2.2.1: hessian.cpp Hessian: Example and Test
5.4.2.2: hes_times_dir.cpp Hessian Times Direction: Example and Test
5.1.1.1: independent.cpp Independent and ADFun Constructor: Example and Test
8.24.1: index_sort.cpp Index Sort: Example and Test
4.3.2.1: integer.cpp Convert From AD to Integer: Example and Test
10.2.7: interface2c.cpp Interfacing to C: Example and Test
4.4.5.2: interp_onetape.cpp Interpolation Without Retaping: Example and Test
4.4.5.3: interp_retape.cpp Interpolation With Retaping: Example and Test
9.1: ipopt_solve_get_started.cpp Nonlinear Programming Using CppAD and Ipopt: Example and Test
9.3: ipopt_solve_ode_inverse.cpp ODE Inverse Problem Definitions: Source Code
9.2: ipopt_solve_retape.cpp Nonlinear Programming Retaping: Example and Test
10.2.9: jac_lu_det.cpp Gradient of Determinant Using Lu Factorization: Example and Test
10.2.8: jac_minor_det.cpp Gradient of Determinant Using Expansion by Minors: Example and Test
5.2.1.1: jacobian.cpp Jacobian: Example and Test
4.4.2.8.1: log10.cpp The AD log10 Function: Example and Test
4.4.2.20.1: log1p.cpp The AD log1p Function: Example and Test
4.4.2.7.1: log.cpp The AD log Function: Example and Test
5.8.5.1: lp_box.cpp abs_normal lp_box: Example and Test
5.8.5.2: lp_box.hpp lp_box Source Code
8.14.2.1: lu_factor.cpp LuFactor: Example and Test
8.14.3.1: lu_invert.cpp LuInvert: Example and Test
12.10.3.1: lu_ratio.cpp LuRatio: Example and Test
8.14.1.1: lu_solve.cpp LuSolve With Complex Arguments: Example and Test
10.3.3.1: lu_vec_ad_ok.cpp Lu Factor and Solve With Recorded Pivoting: Example and Test
11.2.6.1: mat_sum_sq.cpp Sum of the Elements of the Square of a Matrix: Example and Test
5.8.7.1: min_nso_linear.cpp abs_normal min_nso_linear: Example and Test
5.8.7.2: min_nso_linear.hpp min_nso_linear Source Code
5.8.11.1: min_nso_quad.cpp abs_normal min_nso_quad: Example and Test
5.8.11.2: min_nso_quad.hpp min_nso_quad Source Code
4.4.1.3.3: mul.cpp AD Binary Multiplication: Example and Test
4.4.1.4.3: mul_eq.cpp AD Compound Assignment Multiplication: Example and Test
4.7.9.3.1: mul_level_adolc.cpp Using Adolc with Multiple Levels of Taping: Example and Test
10.2.13: mul_level_adolc_ode.cpp Taylor's Ode Solver: A Multi-Level Adolc Example and Test
10.2.10.1: mul_level.cpp Multiple Level of AD: Example and Test
10.2.12: mul_level_ode.cpp Taylor's Ode Solver: A Multi-Level AD Example and Test
7.2.9: multi_atomic.cpp Multi-Threading User Atomic Example / Test
7.2.10: multi_newton.cpp Multi-Threaded Newton Method Example / Test
8.11.1: nan.cpp nan: Example and Test
8.2.1: near_equal.cpp NearEqual Function: Example and Test
4.5.2.1: near_equal_ext.cpp Compare AD with Base Objects: Example and Test
5.3.9.1: number_skip.cpp Number of Variables That Can be Skipped: Example and Test
8.7.1: numeric_type.cpp The NumericType: Example and Test
4.4.6.1: num_limits.cpp Numeric Limits: Example and Test
8.19.1: ode_err_control.cpp OdeErrControl: Example and Test
8.19.2: ode_err_maxabs.cpp OdeErrControl: Example and Test Using Maxabs Argument
11.2.7.1: ode_evaluate.cpp ode_evaluate: Example and test
8.21.1: ode_gear_control.cpp OdeGearControl: Example and Test
8.20.1: ode_gear.cpp OdeGear: Example and Test
10.2.11: ode_stiff.cpp A Stiff Ode: Example and Test
10.2.14: ode_taylor.cpp Taylor's Ode Solver: An Example and Test
5.7.3: optimize_compare_op.cpp Example Optimization and Comparison Operators
5.7.5: optimize_conditional_skip.cpp Example Optimization and Conditional Expressions
5.7.7: optimize_cumulative_sum.cpp Example Optimization and Cumulative Sum Operations
5.7.1: optimize_forward_active.cpp Example Optimization and Forward Activity Analysis
5.7.6: optimize_nest_conditional.cpp Example Optimization and Nested Conditional Expressions
5.7.4: optimize_print_for.cpp Example Optimization and Print Forward Operators
5.7.2: optimize_reverse_active.cpp Example Optimization and Reverse Activity Analysis
12.10.2.1: opt_val_hes.cpp opt_val_hes: Example and Test
4.5.4.1: par_var.cpp AD Parameter and Variable Functions: Example and Test
8.13.1: poly.cpp Polynomial Evaluation: Example and Test
4.4.3.2.1: pow.cpp The AD Power Function: Example and Test
8.12.1: pow_int.cpp The Pow Integer Exponent: Example and Test
4.3.6.1: print_for_cout.cpp Printing During Forward Mode: Example and Test
4.3.6.2: print_for_string.cpp Print During Zero Order Forward Mode: Example and Test
5.8.9.1: qp_box.cpp abs_normal qp_box: Example and Test
5.8.9.2: qp_box.hpp qp_box Source Code
5.8.8.1: qp_interior.cpp abs_normal qp_interior: Example and Test
5.8.8.2: qp_interior.hpp qp_interior Source Code
5.5.10: rc_sparsity.cpp Preferred Sparsity Patterns: Row and Column Indices: Example and Test
5.4.3.2: reverse_checkpoint.cpp Reverse Mode General Case (Checkpointing): Example and Test
5.4.1.1: reverse_one.cpp First Order Reverse Mode: Example and Test
5.4.3.1: reverse_three.cpp Third Order Reverse Mode: Example and Test
5.4.2.1: reverse_two.cpp Second Order Reverse Mode: Example and Test
5.5.5.1: rev_hes_sparsity.cpp Reverse Mode Hessian Sparsity: Example and Test
5.5.3.1: rev_jac_sparsity.cpp Reverse Mode Jacobian Sparsity: Example and Test
5.2.4.1: rev_one.cpp First Order Derivative Driver: Example and Test
5.5.6.1: rev_sparse_hes.cpp Reverse Mode Hessian Sparsity: Example and Test
5.5.4.1: rev_sparse_jac.cpp Reverse Mode Jacobian Sparsity: Example and Test
5.2.6.1: rev_two.cpp Second Partials Reverse Driver: Example and Test
8.16.1: Rombergmul.cpp Multi-Dimensional Romberg Integration: Example and Test
8.15.1: romberg_one.cpp One Dimensional Romberg Integration: Example and Test
8.18.1: rosen_34.cpp Rosen34: Example and Test
8.17.1: runge45_1.cpp Runge45: Example and Test
8.17.2: runge45_2.cpp Runge45: Example and Test
5.1.5.1: seq_property.cpp ADFun Sequence Properties: Example and Test
8.26.1: set_union.cpp Set Union: Example and Test
7.2.5: simple_ad_bthread.cpp A Simple Boost Threading AD: Example and Test
7.2.4: simple_ad_openmp.cpp A Simple OpenMP AD: Example and Test
7.2.6: simple_ad_pthread.cpp A Simple pthread AD: Example and Test
8.9.1: simple_vector.cpp Simple Vector Template Class: Example and Test
5.8.4.1: simplex_method.cpp abs_normal simplex_method: Example and Test
5.8.4.2: simplex_method.hpp simplex_method Source Code
4.4.2.9.1: sin.cpp The AD sin Function: Example and Test
4.4.2.10.1: sinh.cpp The AD sinh Function: Example and Test
5.6.3.1: sparse_hes.cpp Computing Sparse Hessian: Example and Test
11.2.9.1: sparse_hes_fun.cpp sparse_hes_fun: Example and test
5.6.4.1: sparse_hessian.cpp Sparse Hessian: Example and Test
5.6.1.1: sparse_jac_for.cpp Computing Sparse Jacobian Using Forward Mode: Example and Test
11.2.8.1: sparse_jac_fun.cpp sparse_jac_fun: Example and test
5.6.2.1: sparse_jacobian.cpp Sparse Jacobian: Example and Test
5.6.1.2: sparse_jac_rev.cpp Computing Sparse Jacobian Using Reverse Mode: Example and Test
8.27.1: sparse_rc.cpp sparse_rc: Example and Test
8.28.1: sparse_rcv.cpp sparse_rcv: Example and Test
5.6.4.3: sparse_sub_hes.cpp Subset of a Sparse Hessian: Example and Test
5.5.6.2: sparsity_sub.cpp Sparsity Patterns For a Subset of Variables: Example and Test
10.3.2: speed_example.cpp Run the Speed Examples
8.4.1: speed_program.cpp Example Use of SpeedTest
8.3.1: speed_test.cpp speed_test: Example and test
4.4.2.11.1: sqrt.cpp The AD sqrt Function: Example and Test
10.2.15: stack_machine.cpp Example Differentiating a Stack Machine Interpreter
4.4.1.3.2: sub.cpp AD Binary Subtraction: Example and Test
4.4.1.4.2: sub_eq.cpp AD Compound Assignment Subtraction: Example and Test
5.6.5.2: subgraph_hes2jac.cpp Sparse Hessian Using Subgraphs and Jacobian: Example and Test
5.6.5.1: subgraph_jac_rev.cpp Computing Sparse Jacobian Using Reverse Mode: Example and Test
5.4.4.1: subgraph_reverse.cpp Computing Reverse Mode on Subgraphs: Example and Test
5.5.11.1: subgraph_sparsity.cpp Subgraph Dependency Sparsity Patterns: Example and Test
5.6.4.2: sub_sparse_hes.cpp Computing Sparse Hessian for a Subset of Variables
4.4.2.12.1: tan.cpp The AD tan Function: Example and Test
4.4.2.13.1: tanh.cpp The AD tanh Function: Example and Test
4.4.5.1: tape_index.cpp Taping Array Index Operation: Example and Test
7.2.11.2: team_bthread.cpp Boost Thread Implementation of a Team of AD Threads
7.2.7: team_example.cpp Using a Team of AD Threads: Example and Test
7.2.11.1: team_openmp.cpp OpenMP Implementation of a Team of AD Threads
7.2.11.3: team_pthread.cpp Pthread Implementation of a Team of AD Threads
7.2.11: team_thread.hpp Specifications for A Team of AD Threads
8.23.1: thread_alloc.cpp Fast Multi-Threading Memory Allocator: Example and Test
7.2: thread_test.cpp Run Multi-Threading Examples and Speed Tests
8.5.2: time_test.cpp time_test: Example and test
8.25.1: to_string.cpp to_string: Example and Test
4.4.1.2.1: unary_minus.cpp AD Unary Minus Operator: Example and Test
4.4.1.1.1: unary_plus.cpp AD Unary Plus Operator: Example and Test
4.3.1.1: value.cpp Convert From AD to its Base Type: Example and Test
4.3.7.1: var2par.cpp Convert an AD Variable to a Parameter: Example and Test
4.6.1: vec_ad.cpp AD Vectors that Record Index Operations: Example and Test
8.22.2: vector_bool.cpp CppAD::vectorBool Class: Example and Test

Input File: omh/example_list.omh
10.5: Using The CppAD Test Vector Template Class

10.5.a: Syntax
CPPAD_TESTVECTOR(Scalar)

10.5.b: Purpose
Many of the CppAD 10: examples and tests use the CPPAD_TESTVECTOR template class to pass information to CppAD. This is not a true template class because its syntax uses (Scalar) instead of <Scalar> . This enables us to use
     Eigen::Matrix<Scalar, Eigen::Dynamic, 1>
as one of the possible cases for this 'template class'.

10.5.c: Choice
The user can choose, during the install procedure, which template class to use in the examples and tests; see below. This shows that any 8.9: simple vector class can be used in place of
     CPPAD_TESTVECTOR(Type)
When writing their own code, users can choose a specific simple vector they prefer; for example,
     CppAD::vector<Type>
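As a minimal sketch (the function name testvector_sketch is hypothetical), the following compiles and runs no matter which template vector class is chosen during the install procedure:

# include <cppad/cppad.hpp>

bool testvector_sketch(void)
{    // a test vector with two AD<double> elements;
     // note the parentheses around the element type
     CPPAD_TESTVECTOR(CppAD::AD<double>) ax(2);
     ax[0] = 1.0;
     ax[1] = 2.0;

     // all four possible vector classes support size()
     return size_t( ax.size() ) == 2;
}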

10.5.d: CppAD::vector
If in the 2.2.b: cmake command you specify 2.2.7: cppad_testvector to be cppad, CPPAD_CPPADVECTOR will be true. In this case, CPPAD_TESTVECTOR is defined by the following source code:

# if CPPAD_CPPADVECTOR
# define CPPAD_TESTVECTOR(Scalar) CppAD::vector< Scalar >
# endif
In this case CppAD will use its own vector for many of its examples and tests.

10.5.e: std::vector
If in the cmake command you specify cppad_testvector to be std, CPPAD_STDVECTOR will be true. In this case, CPPAD_TESTVECTOR is defined by the following source code:
# if CPPAD_STDVECTOR
# include <vector>
# define CPPAD_TESTVECTOR(Scalar) std::vector< Scalar >
# endif
In this case CppAD will use the standard vector for many of its examples and tests.

10.5.f: boost::numeric::ublas::vector
If in the cmake command you specify cppad_testvector to be boost, CPPAD_BOOSTVECTOR will be true. In this case, CPPAD_TESTVECTOR is defined by the following source code:
# if CPPAD_BOOSTVECTOR
# include <boost/numeric/ublas/vector.hpp>
# define CPPAD_TESTVECTOR(Scalar) boost::numeric::ublas::vector< Scalar >
# endif
In this case CppAD will use this boost vector for many of its examples and tests.

10.5.g: Eigen Vectors
If in the cmake command you specify cppad_testvector to be eigen, CPPAD_EIGENVECTOR will be true. In this case, CPPAD_TESTVECTOR is defined by the following source code:
# if CPPAD_EIGENVECTOR
# include <cppad/example/cppad_eigen.hpp>
# define CPPAD_TESTVECTOR(Scalar) Eigen::Matrix< Scalar , Eigen::Dynamic, 1>
# endif
In this case CppAD will use the Eigen vector for many of its examples and tests.
Input File: cppad/core/testvector.hpp
10.6: Suppress Suspect Implicit Conversion Warnings

10.6.a: Syntax
# include <cppad/wno_conversion.hpp>

10.6.b: Purpose
In many cases it is good to have warnings for implicit conversions that may lose range or precision. The include command above, placed before any other include, suppresses these warnings for a particular compilation unit (which usually corresponds to a *.cpp file).
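As a minimal sketch (the main program below is hypothetical), a compilation unit that uses this feature begins with this include before any other:

# include <cppad/wno_conversion.hpp>
# include <cppad/cppad.hpp>

int main(void)
{    double x = 1.5;
     // an implicit conversion that may lose precision; the corresponding
     // compiler warning is suppressed by wno_conversion.hpp above
     float y = x;
     return int(y) - 1;
}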
Input File: cppad/wno_conversion.hpp
11: Speed Test an Operator Overloading AD Package

11.a: Purpose
CppAD has a set of speed tests that are used to determine if certain changes improve its execution speed. These tests can also be used to compare the AD packages Adolc (https://projects.coin-or.org/ADOL-C) , CppAD (http://www.coin-or.org/CppAD/) , Fadbad (http://www.fadbad.com/) and Sacado (http://trilinos.sandia.gov/packages/sacado/) .

11.b: debug_which
Usually, one wants to compile the speed tests in release mode. This can be done by setting 2.2.s: cppad_debug_which to debug_none in the cmake command. Correctness tests are included for all the speed tests, so it is possible you will want to compile these tests for debugging; i.e., set cppad_debug_which to debug_all. The sections below explain how you can run these tests on your computer.
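For example, assuming a build subdirectory below the distribution directory, a release mode configuration might use a cmake command of the following form (all other options omitted):
     cmake -D cppad_debug_which=debug_none ..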

11.c: Contents
speed_main: 11.1: Running the Speed Test Program
speed_utility: 11.2: Speed Testing Utilities
speed_double: 11.3: Speed Test of Functions in Double
speed_adolc: 11.4: Speed Test of Derivatives Using Adolc
speed_cppad: 11.5: Speed Test Derivatives Using CppAD
speed_fadbad: 11.6: Speed Test Derivatives Using Fadbad
speed_sacado: 11.7: Speed Test Derivatives Using Sacado

Input File: omh/speed/speed.omh
11.1: Running the Speed Test Program

11.1.a: Syntax
speed/package/speed_package test seed option_list

11.1.b: Purpose
A version of this program runs the correctness tests or the speed tests for one AD package identified by package .

11.1.c: package

11.1.c.a: AD Package
The command line argument package specifies one of the AD packages. The CppAD distribution comes with support for the following packages: 11.4: adolc , 11.5: cppad , 11.6: fadbad , 11.7: sacado . You can extend this program to include other packages. Such an extension need not include all the tests. For example, 11.1.6: link_sparse_hessian just returns false for the 11.6.6: fadbad and 11.7.6: sacado packages.

11.1.c.b: double
The value package can be double in which case the function values (instead of derivatives) are computed using double precision operations. This enables one to compare the speed of computing function values in double to the speed of the derivative computations. (It is often useful to divide the speed of the derivative computation by the speed of the function evaluation in double.)

11.1.c.c: profile
In the special case where package is profile, the CppAD package is compiled and run with profiling to aid in determining where it is spending most of its time.

11.1.d: test
The argument test specifies which test to run and has the following possible values: 11.1.d.a: correct , 11.1.d.b: speed , 11.1.2: det_minor , 11.1.1: det_lu , 11.1.3: mat_mul , 11.1.4: ode , 11.1.5: poly , 11.1.6: sparse_hessian , 11.1.7: sparse_jacobian . You can experiment with changing the implementation of a particular test for a particular package.

11.1.d.a: correct
If test is equal to correct, all of the correctness tests are run.

11.1.d.b: speed
If test is equal to speed, all of the speed tests are run.

11.1.e: seed
The command line argument seed is an unsigned integer (all its characters are between 0 and 9). The random number simulator 11.2.10: uniform_01 is initialized with the call
     uniform_01(seed)
before any of the testing routines (listed above) are called.
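For example, the following hypothetical command runs all the speed tests for the cppad package, with random number seed 123 and option list onetape:
     speed/cppad/speed_cppad speed 123 onetape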

11.1.f: Global Options
The global variable global_option has prototype

     extern std::map<std::string, bool> global_option;
The syntax
     global_option["option"]
has the value true, if option is present, and false otherwise. This is true for each option that follows seed . The order of the options does not matter and the list can be empty. Each option is a separate command line argument to the main program. The documentation below specifies how 11.5: speed_cppad uses these options; see the examples in 11.4: speed_adolc for how another package might use these options.
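As a minimal sketch (the routine example_test and its body are hypothetical), a package implementation might adapt its behavior to an option as follows:

# include <cstddef>
# include <map>
# include <string>

// defined by the speed test main program
extern std::map<std::string, bool> global_option;

bool example_test(size_t size, size_t repeat)
{    // a real test would use size to construct the problem
     (void) size;

     bool onetape = global_option["onetape"];
     if( onetape )
     {    // tape the operation sequence once, before the repeat loop
     }
     while(repeat--)
     {    if( ! onetape )
          {    // retape the operation sequence for this repetition
          }
          // evaluate the derivative for this repetition
     }
     return true;
}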

11.1.f.a: onetape
If this option is present, 11.5: speed_cppad will use one taping of the operation sequence for all the repetitions of that speed test. Otherwise, the 12.4.g.b: operation sequence will be retaped for each test repetition.

All of the tests, except 11.1.1: det_lu , have the same operation sequence for each repetition. The operation sequence for det_lu may be different because it depends on the matrix for which the determinant is being calculated. For this reason, 11.5.2: cppad_det_lu.cpp returns false, to indicate that the test is not implemented, when the onetape option is present.

11.1.f.b: memory
This option is special because individual CppAD speed tests need not do anything different depending on whether it is true or false. If the memory option is present, the CppAD 8.23.9: hold_memory routine will be called by the speed test main program before any of the tests are executed. This should make the CppAD thread_alloc allocator faster. If it is not present, CppAD will use standard memory allocation. Another package might use this option for a different memory allocation method.

11.1.f.c: optimize
If this option is present, CppAD will 5.7: optimize the operation sequence before doing computations. Otherwise, this optimization is not done. Note that this option usually makes a test slower, not faster, unless it is combined with the onetape option.

11.1.f.d: atomic
If this option is present, CppAD will use a user defined 4.4.7.2: atomic operation for the test. So far, CppAD has only implemented the 11.1.3: mat_mul test as an atomic operation.

11.1.f.e: hes2jac
If this option is present, 11.5: speed_cppad will compute Hessians as the Jacobian of the gradient. This is accomplished using 10.2.10: multiple levels of AD. So far, CppAD has only implemented the 11.1.6: sparse_hessian test in this manner.

11.1.f.f: subgraph
If this option is present, 11.5: speed_cppad will compute sparse Jacobians using subgraphs. The CppAD 11.1.7: sparse_jacobian test is implemented for this option. In addition, the CppAD 11.1.6: sparse_hessian test is implemented for this option when hes2jac is present.

11.1.g: Sparsity Options
The following options only apply to the 11.1.7: sparse_jacobian and 11.1.6: sparse_hessian tests. The other tests return false when any of these options are present.

11.1.g.a: boolsparsity
If this option is present, CppAD will use 12.4.j.b: vectors of bool to compute sparsity patterns. Otherwise CppAD will use 12.4.j.c: vectors of sets .

11.1.g.b: revsparsity
If this option is present, CppAD will use reverse mode to compute sparsity patterns. Otherwise CppAD will use forward mode.

11.1.g.c: subsparsity
If this option is present, CppAD will use subgraphs to compute sparsity patterns. If either the boolsparsity or the revsparsity option is also present, the CppAD speed tests will return false; i.e., these options are not supported by 5.5.11: subgraph_sparsity .

11.1.g.d: colpack
If this option is present, CppAD will use 2.2.2: colpack to do the coloring. Otherwise, it will use its own coloring algorithm.

11.1.h: Correctness Results
One, but not both, of the following two output lines
     package_test_optionlist_available = false
     package_test_optionlist_ok = flag
is generated for each correctness test, where package and test are as above, optionlist is the list of options (in option_list ) separated by the underbar _ character (whereas they are separated by spaces in option_list ), and flag is true or false.
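For example, if the cppad package passes the det_minor correctness test when run with the single option onetape, the output contains the line
     cppad_det_minor_onetape_ok = true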

11.1.i: Speed Results
For each speed test, three lines of the following form are generated:
     package_test_optionlist_ok = flag
     package_test_size = [ size_1 , ..., size_n ]
     package_test_rate = [ rate_1 , ..., rate_n ]
The values package , test , optionlist , and flag are as in the correctness results above. The values size_1 , ..., size_n are the size arguments used for the corresponding tests. The values rate_1 , ..., rate_n are the number of times per second that a problem of the corresponding size was executed.

11.1.i.a: n_sweep
The 11.1.7: sparse_jacobian and 11.1.6: sparse_hessian tests have an extra output line with the following form
     package_sparse_test_n_sweep = [ n_sweep_1 , ..., n_sweep_n ]
where test is jacobian (hessian). The values n_sweep_1 , ..., n_sweep_n are the number of sweeps (colors) used for each sparse Jacobian (Hessian) calculation; see n_sweep for 5.6.2.i: sparse_jacobian and 5.6.4.j: sparse_hessian .

11.1.j: Link Functions
Each 11.1.c: package defines its own version of one of the link functions listed below. Each of these functions links this main program to the corresponding test:
11.1.1: link_det_lu Speed Testing Gradient of Determinant Using Lu Factorization
11.1.2: link_det_minor Speed Testing Gradient of Determinant by Minor Expansion
11.1.3: link_mat_mul Speed Testing Derivative of Matrix Multiply
11.1.4: link_ode Speed Testing the Jacobian of Ode Solution
11.1.5: link_poly Speed Testing Second Derivative of a Polynomial
11.1.6: link_sparse_hessian Speed Testing Sparse Hessian
11.1.7: link_sparse_jacobian Speed Testing Sparse Jacobian

Input File: speed/main.cpp
11.1.1: Speed Testing Gradient of Determinant Using Lu Factorization

11.1.1.a: Prototype
extern bool link_det_lu(
     size_t                  size      ,
     size_t                  repeat    ,
     CppAD::vector<double>&  matrix    ,
     CppAD::vector<double>&  gradient
);

11.1.1.b: Purpose
Each 11.1.c: package must define a version of this routine as specified below. This is used by the 11.1: speed_main program to run the corresponding speed and correctness tests.

11.1.1.c: Method
The same template routine 11.2.1: det_by_lu is used by the different AD packages.

11.1.1.d: Return Value
If this speed test is not yet supported by a particular package , the corresponding return value for link_det_lu should be false.

11.1.1.e: size
The argument size is the number of rows and columns in the matrix.

11.1.1.f: repeat
The argument repeat is the number of different matrices that the gradient (or determinant) is computed for.

11.1.1.g: matrix
The argument matrix is a vector with size*size elements. The input value of its elements does not matter. The output value of its elements is the last matrix that the gradient (or determinant) is computed for.

11.1.1.h: gradient
The argument gradient is a vector with size*size elements. The input value of its elements does not matter. The output value of its elements is the gradient of the determinant of matrix with respect to its elements.

11.1.1.h.a: double
In the case where package is double, only the first element of gradient is used and it is actually the determinant value (the gradient value is not computed).
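For illustration only, here is a minimal sketch of the double (function value only) case of this routine, assuming the 11.2.1: det_by_lu and 11.2.10: uniform_01 utilities; an AD package version would also tape and differentiate the determinant calculation:

# include <cppad/speed/det_by_lu.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/utility/vector.hpp>

bool link_det_lu(
     size_t                  size     ,
     size_t                  repeat   ,
     CppAD::vector<double>&  matrix   ,
     CppAD::vector<double>&  gradient )
{    // functor that computes the determinant of a size by size matrix
     CppAD::det_by_lu<double> det(size);

     while(repeat--)
     {    // simulate a new random matrix with size * size entries
          CppAD::uniform_01(size * size, matrix);

          // double case: first element of gradient is the determinant value
          gradient[0] = det(matrix);
     }
     return true;
}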
Input File: speed/src/link_det_lu.cpp
11.1.2: Speed Testing Gradient of Determinant by Minor Expansion

11.1.2.a: Prototype
extern bool link_det_minor(
     size_t                  size      ,
     size_t                  repeat    ,
     CppAD::vector<double>&  matrix    ,
     CppAD::vector<double>&  gradient
);

11.1.2.b: Purpose
Each 11.1.c: package must define a version of this routine as specified below. This is used by the 11.1: speed_main program to run the corresponding speed and correctness tests.

11.1.2.c: Method
The same template class 11.2.3: det_by_minor is used by the different AD packages.

11.1.2.d: Return Value
If this speed test is not yet supported by a particular package , the corresponding return value for link_det_minor should be false.

11.1.2.e: size
The argument size is the number of rows and columns in the matrix.

11.1.2.f: repeat
The argument repeat is the number of different matrices that the gradient (or determinant) is computed for.

11.1.2.g: matrix
The argument matrix is a vector with size*size elements. The input value of its elements does not matter. The output value of its elements is the last matrix that the gradient (or determinant) is computed for.

11.1.2.h: gradient
The argument gradient is a vector with size*size elements. The input value of its elements does not matter. The output value of its elements is the gradient of the determinant of matrix with respect to its elements.

11.1.2.h.a: double
In the case where package is double, only the first element of gradient is used and it is actually the determinant value (the gradient value is not computed).
Input File: speed/src/link_det_minor.cpp
11.1.3: Speed Testing Derivative of Matrix Multiply

11.1.3.a: Prototype
extern bool link_mat_mul(
     size_t                  size    ,
     size_t                  repeat  ,
     CppAD::vector<double>&  x       ,
     CppAD::vector<double>&  z       ,
     CppAD::vector<double>&  dz
);

11.1.3.b: Purpose
Each 11.1.c: package must define a version of this routine as specified below. This is used by the 11.1: speed_main program to run the corresponding speed and correctness tests.

11.1.3.c: Return Value
If this speed test is not yet supported by a particular package , the corresponding return value for link_mat_mul should be false.

11.1.3.d: size
The argument size is the number of rows and columns in the square matrix x .

11.1.3.e: repeat
The argument repeat is the number of different argument values that the derivative of z (or just the value of z ) will be computed for.

11.1.3.f: x
The argument x is a vector with x.size() = size * size elements. The input value of its elements does not matter. The output value of its elements is the last random matrix that is multiplied and then summed to form z ; @[@ x_{i,j} = x[ i * s + j ] @]@ where s = size .

11.1.3.g: z
The argument z is a vector with one element. The input value of the element does not matter. The output value of its element is the sum of the elements of y = x * x ; i.e., @[@ \begin{array}{rcl} y_{i,j} & = & \sum_{k=0}^{s-1} x_{i,k} x_{k, j} \\ z & = & \sum_{i=0}^{s-1} \sum_{j=0}^{s-1} y_{i,j} \end{array} @]@

11.1.3.h: dz
The argument dz is a vector with dz.size() = size * size . The input values of its elements do not matter. The output values of its elements form the derivative of z with respect to x .
Input File: speed/src/link_mat_mul.cpp
11.1.4: Speed Testing the Jacobian of Ode Solution

11.1.4.a: Prototype
extern bool link_ode(
     size_t                  size      ,
     size_t                  repeat    ,
     CppAD::vector<double>&  x         ,
     CppAD::vector<double>&  jacobian
);

11.1.4.b: Purpose
Each 11.1.c: package must define a version of this routine as specified below. This is used by the 11.1: speed_main program to run the corresponding speed and correctness tests.

11.1.4.c: Method
The same template routine 11.2.7: ode_evaluate is used by the different AD packages.

11.1.4.d: f
The function @(@ f : \B{R}^n \rightarrow \B{R}^n @)@ is defined and computed by evaluating 11.2.7: ode_evaluate with a call of the form
     ode_evaluate(x, p, fp)
with p equal to zero. Calls with the value p equal to one are used to check the derivative values.

11.1.4.e: Return Value
If this speed test is not yet supported by a particular package , the corresponding return value for link_ode should be false.

11.1.4.f: size
The argument size is the number of variables in the ordinary differential equations which is also equal to @(@ n @)@.

11.1.4.g: repeat
The argument repeat is the number of times the Jacobian is computed.

11.1.4.h: x
The argument x is a vector with @(@ n @)@ elements. The input value of the elements of x does not matter. On output, it has been set to the argument value for which the function, or its derivative, is being evaluated. The value of this vector must change with each repetition.

11.1.4.i: jacobian
The argument jacobian is a vector with @(@ n^2 @)@ elements. The input value of its elements does not matter. The output value of its elements is the Jacobian of the function @(@ f(x) @)@ that corresponds to output value of x . To be more specific, for @(@ i = 0 , \ldots , n-1 @)@ and @(@ j = 0 , \ldots , n-1 @)@, @[@ \D{f[i]}{x[j]} (x) = jacobian [ i \cdot n + j ] @]@

11.1.4.i.a: double
In the case where package is double, only the first @(@ n @)@ elements of jacobian are modified and they are set to the function value @(@ f(x) @)@ corresponding to the output value of x .
Input File: speed/src/link_ode.cpp
11.1.5: Speed Testing Second Derivative of a Polynomial

11.1.5.a: Prototype
extern bool link_poly(
     size_t                  size    ,
     size_t                  repeat  ,
     CppAD::vector<double>&  a       ,
     CppAD::vector<double>&  z       ,
     CppAD::vector<double>&  ddp
);

11.1.5.b: Purpose
Each 11.1.c: package must define a version of this routine as specified below. This is used by the 11.1: speed_main program to run the corresponding speed and correctness tests.

11.1.5.c: Method
The same template routine 8.13: Poly is used by the different AD packages.

11.1.5.d: Return Value
If this speed test is not yet supported by a particular package , the corresponding return value for link_poly should be false.

11.1.5.e: size
The argument size is the order of the polynomial (the number of coefficients in the polynomial).

11.1.5.f: repeat
The argument repeat is the number of different argument values that the second derivative (or just the polynomial) will be computed at.

11.1.5.g: a
The argument a is a vector with size elements. The input value of its elements does not matter. The output value of its elements is the coefficients of the polynomial that is differentiated (i-th element is coefficient of order i ).

11.1.5.h: z
The argument z is a vector with one element. The input value of the element does not matter. The output value of its element is the polynomial argument value where the last second derivative (or polynomial value) was computed.

11.1.5.i: ddp
The argument ddp is a vector with one element. The input value of its element does not matter. The output value of its element is the second derivative of the polynomial with respect to its argument value.

11.1.5.i.a: double
In the case where package is double, the output value of the element of ddp is the polynomial value (the second derivative is not computed).
Input File: speed/src/link_poly.cpp
11.1.6: Speed Testing Sparse Hessian

11.1.6.a: Prototype
extern bool link_sparse_hessian(
     size_t                         size      ,
     size_t                         repeat    ,
     CppAD::vector<double>&         x         ,
     const CppAD::vector<size_t>&   row       ,
     const CppAD::vector<size_t>&   col       ,
     CppAD::vector<double>&         hessian   ,
     size_t&                        n_sweep
);

11.1.6.b: Method
Given a row index vector @(@ row @)@ and a column index vector @(@ col @)@, the corresponding function @(@ f : \B{R}^n \rightarrow \B{R} @)@ is defined by 11.2.9: sparse_hes_fun . The non-zero entries in the Hessian of this function have one of the following forms: @[@ \DD{f}{x[row[k]]}{x[row[k]]} \; , \; \DD{f}{x[row[k]]}{x[col[k]]} \; , \; \DD{f}{x[col[k]]}{x[row[k]]} \; , \; \DD{f}{x[col[k]]}{x[col[k]]} @]@ for some @(@ k @)@ between zero and @(@ K-1 @)@. All the other terms of the Hessian are zero.

11.1.6.c: size
The argument size , referred to as @(@ n @)@ below, is the dimension of the domain space for @(@ f(x) @)@.

11.1.6.d: repeat
The argument repeat is the number of times to repeat the test (with a different value for x corresponding to each repetition).

11.1.6.e: x
The argument x has prototype
     CppAD::vector<double>& x
and its size is @(@ n @)@; i.e., x.size() == size . The input value of the elements of x does not matter. On output, it has been set to the argument value for which the function, or its derivative, is being evaluated. The value of this vector need not change with each repetition.

11.1.6.f: row
The argument row has prototype
     const CppAD::vector<size_t>& row
Its size defines the value @(@ K @)@. It contains the row indices for the corresponding function @(@ f(x) @)@. All the elements of row are between zero and @(@ n-1 @)@.

11.1.6.g: col
The argument col has prototype
     const CppAD::vector<size_t>& col
Its size must be the same as row ; i.e., @(@ K @)@. It contains the column indices for the corresponding function @(@ f(x) @)@. All the elements of col are between zero and @(@ n-1 @)@. There are no duplicated entries requested; to be specific, if k1 != k2 then
     ( row[k1] , col[k1] ) != ( row[k2] , col[k2] )
Furthermore, the entries are lower triangular; i.e.,
     col[k] <= row[k]

11.1.6.h: hessian
The argument hessian has prototype
     CppAD::vector<double>&  hessian
and its size is K . The input value of its elements does not matter. The output value of its elements is the Hessian of the function @(@ f(x) @)@. To be more specific, for @(@ k = 0 , \ldots , K-1 @)@, @[@ \DD{f}{ x[ \R{row}[k] ] }{ x[ \R{col}[k] ]} = \R{hessian} [k] @]@

11.1.6.i: n_sweep
The input value of n_sweep does not matter. On output, it is the value 5.6.4.j: n_sweep corresponding to the evaluation of hessian . This is also the number of colors corresponding to the 5.6.4.i.a: coloring method , which can be set to 11.1.g.d: colpack , and is otherwise cppad.

11.1.6.i.a: double
In the case where package is double, only the first element of hessian is used and it is actually the value of @(@ f(x) @)@ (derivatives are not computed).
Input File: speed/src/link_sparse_hessian.cpp
11.1.7: Speed Testing Sparse Jacobian

11.1.7.a: Prototype
extern bool link_sparse_jacobian(
     size_t                         size      ,
     size_t                         repeat    ,
     size_t                         m         ,
     const CppAD::vector<size_t>&   row       ,
     const CppAD::vector<size_t>&   col       ,
     CppAD::vector<double>&         x         ,
     CppAD::vector<double>&         jacobian  ,
     size_t&                        n_sweep
);

11.1.7.b: Method
Given a range space dimension m , the row index vector @(@ row @)@, and the column index vector @(@ col @)@, a corresponding function @(@ f : \B{R}^n \rightarrow \B{R}^m @)@ is defined by 11.2.8: sparse_jac_fun . The non-zero entries in the Jacobian of this function have the form @[@ \D{f[row[k]]}{x[col[k]]} @]@ for some @(@ k @)@ between zero and @(@ K-1 @)@, where K = row.size() . All the other terms of the Jacobian are zero.

11.1.7.c: size
The argument size , referred to as @(@ n @)@ below, is the dimension of the domain space for @(@ f(x) @)@.

11.1.7.d: repeat
The argument repeat is the number of times to repeat the test (with a different value for x corresponding to each repetition).

11.1.7.e: m
Is the dimension of the range space for the function @(@ f(x) @)@.

11.1.7.f: row
The size of the vector row defines the value @(@ K @)@. All the elements of row are between zero and @(@ m-1 @)@.

11.1.7.g: col
The argument col is a vector with size @(@ K @)@. The input value of its elements does not matter. On output, it has been set to the column index vector for the last repetition. All the elements of col are between zero and @(@ n-1 @)@. There are no duplicate row and column entries; i.e., if j != k ,
     row[j] != row[k] || col[j] != col[k]

11.1.7.h: x
The argument x has prototype
     CppAD::vector<double>& x
and its size is @(@ n @)@; i.e., x.size() == size . The input value of the elements of x does not matter. On output, it has been set to the argument value for which the function, or its derivative, is being evaluated and placed in jacobian . The value of this vector need not change with each repetition.

11.1.7.i: jacobian
The argument jacobian has prototype
     CppAD::vector<double>& jacobian
and its size is K . The input value of its elements does not matter. The output value of its elements is the Jacobian of the function @(@ f(x) @)@. To be more specific, for @(@ k = 0 , \ldots , K - 1 @)@, @[@ \D{f[ \R{row}[k] ]}{x[ \R{col}[k] ]} (x) = \R{jacobian} [k] @]@

11.1.7.j: n_sweep
The input value of n_sweep does not matter. On output, it is the value 5.6.2.i: n_sweep corresponding to the evaluation of jacobian . This is also the number of colors corresponding to the 5.6.2.h.a: coloring method , which can be set to 11.1.g.d: colpack , and is otherwise cppad.

11.1.7.j.a: double
In the case where package is double, only the first @(@ m @)@ elements of jacobian are used and they are set to the value of @(@ f(x) @)@.
Input File: speed/src/link_sparse_jacobian.cpp
11.1.8: Microsoft Version of Elapsed Number of Seconds

11.1.8.a: Syntax
s = microsoft_timer()

11.1.8.b: Purpose
This routine is accurate to within .02 seconds (see 8.5.1: elapsed_seconds , which uses this routine when the preprocessor symbol _MSC_VER is defined). It does not necessarily work for time intervals that are greater than a day. It uses ::GetSystemTime for timing.

11.1.8.c: s
is a double equal to the number of seconds since the first call to microsoft_timer.

11.1.8.d: Linking
The source code for this routine is located in speed/src/microsoft_timer.cpp. The preprocessor symbol _MSC_VER must be defined, or this routine is not compiled.
Input File: speed/src/microsoft_timer.cpp
11.2: Speed Testing Utilities

11.2.a: Speed Main Program
11.1: speed_main Running the Speed Test Program

11.2.b: Speed Utility Routines
11.2.1: det_by_lu Determinant Using Expansion by Lu Factorization
11.2.3: det_by_minor Determinant Using Expansion by Minors
11.2.2: det_of_minor Determinant of a Minor
11.2.4: det_33 Check Determinant of 3 by 3 matrix
11.2.5: det_grad_33 Check Gradient of Determinant of 3 by 3 matrix
11.2.6: mat_sum_sq Sum Elements of a Matrix Times Itself
11.2.7: ode_evaluate Evaluate a Function Defined in Terms of an ODE
11.2.8: sparse_jac_fun Evaluate a Function That Has a Sparse Jacobian
11.2.9: sparse_hes_fun Evaluate a Function That Has a Sparse Hessian
11.2.10: uniform_01 Simulate a [0,1] Uniform Random Variate

11.2.c: Library Routines
8.14.2: LuFactor LU Factorization of A Square Matrix
8.14.3: LuInvert Invert an LU Factored Equation
8.14.1: LuSolve Compute Determinant and Solve Linear Equations
8.13: Poly Evaluate a Polynomial or its Derivative

11.2.d: Source Code
11.2.1.2: det_by_lu.hpp Source: det_by_lu
11.2.3.2: det_by_minor.hpp Source: det_by_minor
11.2.5.1: det_grad_33.hpp Source: det_grad_33
11.2.2.2: det_of_minor.hpp Source: det_of_minor
8.14.2.2: lu_factor.hpp Source: LuFactor
8.14.3.2: lu_invert.hpp Source: LuInvert
8.14.1.2: lu_solve.hpp Source: LuSolve
11.2.6.2: mat_sum_sq.hpp Source: mat_sum_sq
8.13.2: poly.hpp Source: Poly
11.2.8.2: sparse_jac_fun.hpp Source: sparse_jac_fun
11.2.9.2: sparse_hes_fun.hpp Source: sparse_hes_fun
11.2.10.1: uniform_01.hpp Source: uniform_01

Input File: omh/speed/speed_utility.omh
11.2.1: Determinant Using Expansion by Lu Factorization

11.2.1.a: Syntax
# include <cppad/speed/det_by_lu.hpp>
det_by_lu<Scalar> det(n)
d = det(a)

11.2.1.b: Inclusion
The template class det_by_lu is defined in the CppAD namespace by including the file cppad/speed/det_by_lu.hpp (relative to the CppAD distribution directory).

11.2.1.c: Constructor
The syntax
     det_by_lu<Scalar> det(n)
constructs the object det which can be used for evaluating the determinant of n by n matrices using LU factorization.

11.2.1.d: Scalar
The type Scalar can be any 8.7: NumericType .

11.2.1.e: n
The argument n has prototype
     size_t n

11.2.1.f: det
The syntax
     d = det(a)
returns the determinant of the matrix @(@ A @)@ using LU factorization.

11.2.1.f.a: a
The argument a has prototype
     const Vector &a
It must be a Vector with length @(@ n * n @)@ and with elements of type Scalar . The elements of the @(@ n \times n @)@ matrix @(@ A @)@ are defined, for @(@ i = 0 , \ldots , n-1 @)@ and @(@ j = 0 , \ldots , n-1 @)@, by @[@ A_{i,j} = a[ i * n + j] @]@

11.2.1.f.b: d
The return value d has prototype
     Scalar d

11.2.1.g: Vector
If y is a Vector object, it must support the syntax
     y[i]
where i has type size_t with value less than @(@ n * n @)@. This must return a Scalar value corresponding to the i-th element of the vector y . This is the only requirement of the type Vector .

11.2.1.h: Example
The file 11.2.1.1: det_by_lu.cpp contains an example and test of det_by_lu.hpp. It returns true if it succeeds and false otherwise.

11.2.1.i: Source Code
The file 11.2.1.2: det_by_lu.hpp contains the source for this template function.
Input File: cppad/speed/det_by_lu.hpp
11.2.1.1: Determinant Using Lu Factorization: Example and Test

# include <cppad/cppad.hpp>
# include <cppad/speed/det_by_lu.hpp>

bool det_by_lu()
{     bool ok = true;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     // dimension of the matrix
     size_t n = 3;

     // construct the determinant object
     CppAD::det_by_lu<double> Det(n);

     double  a[] = {
          1., 2., 3.,  // a[0] a[1] a[2]
          3., 2., 1.,  // a[3] a[4] a[5]
          2., 1., 2.   // a[6] a[7] a[8]
     };
     CPPAD_TESTVECTOR(double) A(9);
     size_t i;
     for(i = 0; i < 9; i++)
          A[i] = a[i];


     // evaluate the determinant
     double det = Det(A);

     double check;
     check = a[0]*(a[4]*a[8] - a[5]*a[7])
           - a[1]*(a[3]*a[8] - a[5]*a[6])
           + a[2]*(a[3]*a[7] - a[4]*a[6]);

     ok = CppAD::NearEqual(det, check, eps99, eps99);

     return ok;
}

Input File: speed/example/det_by_lu.cpp
11.2.1.2: Source: det_by_lu
# ifndef CPPAD_DET_BY_LU_HPP
# define CPPAD_DET_BY_LU_HPP
# include <cppad/utility/vector.hpp>
# include <cppad/utility/lu_solve.hpp>

// BEGIN CppAD namespace
namespace CppAD {

template <class Scalar>
class det_by_lu {
private:
     const size_t m_;
     const size_t n_;
     CppAD::vector<Scalar> A_;
     CppAD::vector<Scalar> B_;
     CppAD::vector<Scalar> X_;
public:
     det_by_lu(size_t n) : m_(0), n_(n), A_(n * n)
     {     }

     template <class Vector>
     inline Scalar operator()(const Vector &x)
     {

          Scalar       logdet;
          Scalar       det;
          int          signdet;
          size_t       i;

          // copy matrix so it is not overwritten
          for(i = 0; i < n_ * n_; i++)
               A_[i] = x[i];

          // compute log determinant
          signdet = CppAD::LuSolve(
               n_, m_, A_, B_, X_, logdet);

/*
          // Do not do this for speed test because it makes floating
          // point operation sequence very simple.
          if( signdet == 0 )
               det = 0;
          else     det =  Scalar( signdet ) * exp( logdet );
*/

          // convert to determinant
          det     = Scalar( signdet ) * exp( logdet );

# ifdef FADBAD
          // Fadbad requires temporaries to be set to constants
          for(i = 0; i < n_ * n_; i++)
               A_[i] = 0;
# endif

          return det;
     }
};
} // END CppAD namespace
# endif

Input File: omh/det_by_lu_hpp.omh
11.2.2: Determinant of a Minor

11.2.2.a: Syntax
# include <cppad/speed/det_of_minor.hpp>
d = det_of_minor(a, m, n, r, c)

11.2.2.b: Inclusion
The template function det_of_minor is defined in the CppAD namespace by including the file cppad/speed/det_of_minor.hpp (relative to the CppAD distribution directory).

11.2.2.c: Purpose
This template function returns the determinant of a minor of the matrix @(@ A @)@ using expansion by minors. The elements of the @(@ n \times n @)@ minor @(@ M @)@ of the matrix @(@ A @)@ are defined, for @(@ i = 0 , \ldots , n-1 @)@ and @(@ j = 0 , \ldots , n-1 @)@, by @[@ M_{i,j} = A_{R(i), C(j)} @]@ where the function @(@ R(i) @)@ is defined by the 11.2.2.h: argument r and @(@ C(j) @)@ is defined by the 11.2.2.i: argument c .

This template function is for example and testing purposes only. Expansion by minors is chosen as an example because it uses a lot of floating point operations yet does not require much source code (on the order of m factorial floating point operations and about 70 lines of source code including comments). This is not an efficient method for computing a determinant; for example, using an LU factorization would be better.

11.2.2.d: Determinant of A
If the following conditions hold, the minor is the entire matrix @(@ A @)@ and hence det_of_minor will return the determinant of @(@ A @)@:
  1. @(@ n = m @)@.
  2. for @(@ i = 0 , \ldots , m-1 @)@, @(@ r[i] = i+1 @)@, and @(@ r[m] = 0 @)@.
  3. for @(@ j = 0 , \ldots , m-1 @)@, @(@ c[j] = j+1 @)@, and @(@ c[m] = 0 @)@.


11.2.2.e: a
The argument a has prototype
     const std::vector<Scalar>& a
and is a vector with size @(@ m * m @)@ (see description of 11.2.2.k: Scalar below). The elements of the @(@ m \times m @)@ matrix @(@ A @)@ are defined, for @(@ i = 0 , \ldots , m-1 @)@ and @(@ j = 0 , \ldots , m-1 @)@, by @[@ A_{i,j} = a[ i * m + j] @]@

11.2.2.f: m
The argument m has prototype
     size_t m
and is the number of rows (and columns) in the square matrix @(@ A @)@.

11.2.2.g: n
The argument n has prototype
     size_t n
and is the number of rows (and columns) in the square minor @(@ M @)@.

11.2.2.h: r
The argument r has prototype
     std::vector<size_t>& r
and is a vector with @(@ m + 1 @)@ elements. This vector defines the function @(@ R(i) @)@ which specifies the rows of the minor @(@ M @)@. To be specific, the function @(@ R(i) @)@ for @(@ i = 0, \ldots , n-1 @)@ is defined by @[@ \begin{array}{rcl} R(0) & = & r[m] \\ R(i+1) & = & r[ R(i) ] \end{array} @]@ All the elements of r must have value less than or equal to m . The elements of vector r are modified during the computation, and restored to their original value before the return from det_of_minor.
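As an illustration (the values here are chosen for this sketch and are not part of the specification), suppose @(@ m = 3 @)@ and r = { 1, 2, 3, 0 } . Then
     R(0) = r[3] = 0 ,  R(1) = r[R(0)] = r[0] = 1 ,  R(2) = r[R(1)] = r[1] = 2
so the minor uses rows 0, 1, 2 of @(@ A @)@. Setting r[3] = 1 instead skips row 0, since then @(@ R(0) = 1 @)@ and @(@ R(1) = r[1] = 2 @)@. The argument c below encodes the column selection in the same way.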

11.2.2.i: c
The argument c has prototype
     std::vector<size_t>& c
and is a vector with @(@ m + 1 @)@ elements. This vector defines the function @(@ C(j) @)@ which specifies the columns of the minor @(@ M @)@. To be specific, the function @(@ C(j) @)@ for @(@ j = 0, \ldots , n-1 @)@ is defined by @[@ \begin{array}{rcl} C(0) & = & c[m] \\ C(j+1) & = & c[ C(j) ] \end{array} @]@ All the elements of c must have value less than or equal to m . The elements of vector c are modified during the computation, and restored to their original value before the return from det_of_minor.

11.2.2.j: d
The result d has prototype
     Scalar d
and is equal to the determinant of the minor @(@ M @)@.

11.2.2.k: Scalar
If x and y are objects of type Scalar and i is an object of type int, then Scalar must support the following operations:

     Syntax      Description                              Result Type
     Scalar x    default constructor for Scalar object.
     x = i       set value of x to current value of i
     x = y       set value of x to current value of y
     x + y       value of x plus y                        Scalar
     x - y       value of x minus y                       Scalar
     x * y       value of x times value of y              Scalar
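As a minimal sketch (the type MyScalar below is hypothetical and not part of CppAD), a class satisfying these requirements, plus construction from an int constant which the source code in 11.2.2.2: det_of_minor.hpp also uses via Scalar(0), might look like:

     class MyScalar {
     public:
          double value;
          MyScalar(void) : value(0.) { }                // Scalar x
          MyScalar(int i) : value(i) { }                // Scalar(0) in the source
          MyScalar& operator=(int i)                    // x = i
          {     value = double(i); return *this; }
          // x = y uses the compiler generated assignment
          MyScalar operator+(const MyScalar& y) const   // x + y
          {     MyScalar z; z.value = value + y.value; return z; }
          MyScalar operator-(const MyScalar& y) const   // x - y
          {     MyScalar z; z.value = value - y.value; return z; }
          MyScalar operator*(const MyScalar& y) const   // x * y
          {     MyScalar z; z.value = value * y.value; return z; }
     };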

11.2.2.l: Example
The file 11.2.2.1: det_of_minor.cpp contains an example and test of det_of_minor.hpp. It returns true if it succeeds and false otherwise.

11.2.2.m: Source Code
The file 11.2.2.2: det_of_minor.hpp contains the source for this template function.
Input File: cppad/speed/det_of_minor.hpp
11.2.2.1: Determinant of a Minor: Example and Test
# include <vector>
# include <cstddef>
# include <cppad/speed/det_of_minor.hpp>

bool det_of_minor()
{     bool   ok = true;
     size_t i;

     // dimension of the matrix A
     size_t m = 3;
     // index vectors set so minor is the entire matrix A
     std::vector<size_t> r(m + 1);
     std::vector<size_t> c(m + 1);
     for(i= 0; i < m; i++)
     {     r[i] = i+1;
          c[i] = i+1;
     }
     r[m] = 0;
     c[m] = 0;
     // values in the matrix A
     double  data[] = {
          1., 2., 3.,
          3., 2., 1.,
          2., 1., 2.
     };
     // construct vector a with the values of the matrix A
     std::vector<double> a(data, data + 9);

     // evaluate the determinant of A
     size_t n   = m; // minor has same dimension as A
     double det = CppAD::det_of_minor(a, m, n, r, c);

     // check the value of the determinant of A
     ok &= (det == (double) (1*(2*2-1*1) - 2*(3*2-1*2) + 3*(3*1-2*2)) );

     // minor where row 0 and column 1 are removed
     r[m] = 1;  // skip row index 0 by starting at row index 1
     c[0] = 2;  // skip column index 1 by pointing from index 0 to index 2
     // evaluate determinant of the minor
     n   = m - 1; // dimension of the minor
     det = CppAD::det_of_minor(a, m, n, r, c);

     // check the value of the determinant of the minor
     ok &= (det == (double) (3*2-1*2) );

     return ok;
}

Input File: speed/example/det_of_minor.cpp
11.2.2.2: Source: det_of_minor
# ifndef CPPAD_DET_OF_MINOR_HPP
# define CPPAD_DET_OF_MINOR_HPP
# include <vector>
# include <cstddef>

namespace CppAD { // BEGIN CppAD namespace
template <class Scalar>
Scalar det_of_minor(
     const std::vector<Scalar>& a  ,
     size_t                     m  ,
     size_t                     n  ,
     std::vector<size_t>&       r  ,
     std::vector<size_t>&       c  )
{
     const size_t R0 = r[m]; // R(0)
     size_t       Cj = c[m]; // C(j)    (case j = 0)
     size_t       Cj1 = m;   // C(j-1)  (case j = 0)

     // check for 1 by 1 case
     if( n == 1 ) return a[ R0 * m + Cj ];

     // initialize determinant of the minor M
     Scalar detM = Scalar(0);

     // initialize sign of factor for next sub-minor
     int s = 1;

     // remove row with index 0 in M from all the sub-minors of M
     r[m] = r[R0];

     // for each column of M
     for(size_t j = 0; j < n; j++)
     {     // element with index (0,j) in the minor M
          Scalar M0j = a[ R0 * m + Cj ];

          // remove column with index j in M to form next sub-minor S of M
          c[Cj1] = c[Cj];

          // compute determinant of the current sub-minor S
          Scalar detS = det_of_minor(a, m, n - 1, r, c);

          // restore column Cj to representation of M as a minor of A
          c[Cj1] = Cj;

          // include this sub-minor term in the summation
          if( s > 0 )
               detM = detM + M0j * detS;
          else     detM = detM - M0j * detS;

          // advance to next column of M
          Cj1 = Cj;
          Cj  = c[Cj];
          s   = - s;
     }

     // restore row zero to the minor representation for M
     r[m] = R0;

     // return the determinant of the minor M
     return detM;
}
} // END CppAD namespace
# endif

Input File: omh/det_of_minor_hpp.omh
11.2.3: Determinant Using Expansion by Minors

11.2.3.a: Syntax
# include <cppad/speed/det_by_minor.hpp>
det_by_minor<Scalar> det(n)
d = det(a)

11.2.3.b: Inclusion
The template class det_by_minor is defined in the CppAD namespace by including the file cppad/speed/det_by_minor.hpp (relative to the CppAD distribution directory).

11.2.3.c: Constructor
The syntax
     det_by_minor<Scalar> det(n)
constructs the object det which can be used for evaluating the determinant of n by n matrices using expansion by minors.

11.2.3.d: Scalar
The type Scalar must satisfy the same conditions as in the function 11.2.2.k: det_of_minor .

11.2.3.e: n
The argument n has prototype
     size_t n
It is the row and column dimension of the matrix @(@ A @)@.

11.2.3.f: det
The syntax
     d = det(a)
returns the determinant of the matrix A using expansion by minors.

11.2.3.f.a: a
The argument a has prototype
     const Vector& a
It must be a Vector with length @(@ n * n @)@ and with elements of type Scalar . The elements of the @(@ n \times n @)@ matrix @(@ A @)@ are defined, for @(@ i = 0 , \ldots , n-1 @)@ and @(@ j = 0 , \ldots , n-1 @)@, by @[@ A_{i,j} = a[ i * n + j ] @]@

11.2.3.f.b: d
The return value d has prototype
     Scalar d
It is equal to the determinant of @(@ A @)@.

11.2.3.g: Vector
If y is a Vector object, it must support the syntax
     y[i]
where i has type size_t with value less than @(@ n * n @)@. This must return a Scalar value corresponding to the i-th element of the vector y . This is the only requirement of the type Vector .
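Because this is the only requirement, a plain C array also satisfies it. A minimal sketch (the matrix values are arbitrary):

     CppAD::det_by_minor<double> det(3);
     double a[9] = {
          1., 2., 3.,
          3., 2., 1.,
          2., 1., 2.
     };
     double d = det(a); // Vector is deduced as double[9]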

11.2.3.h: Example
The file 11.2.3.1: det_by_minor.cpp contains an example and test of det_by_minor.hpp. It returns true if it succeeds and false otherwise.

11.2.3.i: Source Code
The file 11.2.3.2: det_by_minor.hpp contains the source for this template class.
Input File: cppad/speed/det_by_minor.hpp
11.2.3.1: Determinant Using Expansion by Minors: Example and Test

# include <cppad/cppad.hpp>
# include <cppad/speed/det_by_minor.hpp>

bool det_by_minor()
{     bool ok = true;

     // dimension of the matrix
     size_t n = 3;

     // construct the determinant object
     CppAD::det_by_minor<double> Det(n);

     double  a[] = {
          1., 2., 3.,  // a[0] a[1] a[2]
          3., 2., 1.,  // a[3] a[4] a[5]
          2., 1., 2.   // a[6] a[7] a[8]
     };
     CPPAD_TESTVECTOR(double) A(9);
     size_t i;
     for(i = 0; i < 9; i++)
          A[i] = a[i];


     // evaluate the determinant
     double det = Det(A);

     double check;
     check = a[0]*(a[4]*a[8] - a[5]*a[7])
           - a[1]*(a[3]*a[8] - a[5]*a[6])
           + a[2]*(a[3]*a[7] - a[4]*a[6]);

     ok = det == check;

     return ok;
}

Input File: speed/example/det_by_minor.cpp
11.2.3.2: Source: det_by_minor
# ifndef CPPAD_DET_BY_MINOR_HPP
# define CPPAD_DET_BY_MINOR_HPP
# include <cppad/speed/det_of_minor.hpp>
# include <vector>

// BEGIN CppAD namespace
namespace CppAD {

template <class Scalar>
class det_by_minor {
private:
     size_t              m_;

     // made mutable because modified and then restored
     mutable std::vector<size_t> r_;
     mutable std::vector<size_t> c_;

     // made mutable because its value does not matter
     mutable std::vector<Scalar> a_;
public:
     det_by_minor(size_t m) : m_(m) , r_(m + 1) , c_(m + 1), a_(m * m)
     {
          size_t i;

          // values for r and c that correspond to entire matrix
          for(i = 0; i < m; i++)
          {     r_[i] = i+1;
               c_[i] = i+1;
          }
          r_[m] = 0;
          c_[m] = 0;
     }

     template <class Vector>
     inline Scalar operator()(const Vector &x) const
     {     size_t i = m_ * m_;
          while(i--)
               a_[i] = x[i];
          return det_of_minor(a_, m_, m_, r_, c_);
     }

};

} // END CppAD namespace
# endif

Input File: omh/det_by_minor_hpp.omh
11.2.4: Check Determinant of 3 by 3 matrix

11.2.4.a: Syntax
# include <cppad/speed/det_33.hpp>
ok = det_33(x, d)

11.2.4.b: Purpose
This routine can be used to check a method for computing the determinant of a matrix.

11.2.4.c: Inclusion
The template function det_33 is defined in the CppAD namespace by including the file cppad/speed/det_33.hpp (relative to the CppAD distribution directory).

11.2.4.d: x
The argument x has prototype
     const Vector& x
It contains the elements of the matrix @(@ X @)@ in row major order; i.e., @[@ X_{i,j} = x [ i * 3 + j ] @]@

11.2.4.e: d
The argument d has prototype
     const Vector& d
It is tested to see if d[0] is equal to @(@ \det ( X ) @)@.

11.2.4.f: Vector
If y is a Vector object, it must support the syntax
     y[i]
where i has type size_t with value less than 9. This must return a double value corresponding to the i-th element of the vector y . This is the only requirement of the type Vector . (Note that only the first element of the vector d is used.)

11.2.4.g: ok
The return value ok has prototype
     bool ok
It is true if the determinant d[0] passes the test and false otherwise.
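As a minimal sketch of such a check (here 11.2.3: det_by_minor stands in for whatever determinant method is being tested, and the matrix values are arbitrary):

     # include <cppad/utility/vector.hpp>
     # include <cppad/speed/det_33.hpp>
     # include <cppad/speed/det_by_minor.hpp>

     bool my_det_check(void)
     {     CppAD::det_by_minor<double> det(3);
          CppAD::vector<double> x(9), d(1);
          for(size_t i = 0; i < 9; i++)
               x[i] = double(i * i + 1); // arbitrary 3 by 3 matrix
          d[0] = det(x);                 // value produced by the method
          return CppAD::det_33(x, d);    // compare with hand coded value
     }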

11.2.4.h: Source Code
The file 11.2.4.1: det_33.hpp contains the source code for this template function.
Input File: cppad/speed/det_33.hpp
11.2.4.1: Source: det_33
# ifndef CPPAD_DET_33_HPP
# define CPPAD_DET_33_HPP
# include <cppad/utility/near_equal.hpp>
namespace CppAD {
template <class Vector>
     bool det_33(const Vector &x, const Vector &d)
     {     bool ok = true;
          double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

          // use expansion by minors to compute the determinant by hand
          double check = 0.;
          check += x[0] * ( x[4] * x[8] - x[5] * x[7] );
          check -= x[1] * ( x[3] * x[8] - x[5] * x[6] );
          check += x[2] * ( x[3] * x[7] - x[4] * x[6] );

          ok &= CppAD::NearEqual(check, d[0], eps99, eps99);

          return ok;
     }
}
# endif

Input File: omh/det_33_hpp.omh
11.2.5: Check Gradient of Determinant of 3 by 3 matrix

11.2.5.a: Syntax
# include <cppad/speed/det_grad_33.hpp>
ok = det_grad_33(x, g)

11.2.5.b: Purpose
This routine can be used to check a method for computing the gradient of the determinant of a matrix.

11.2.5.c: Inclusion
The template function det_grad_33 is defined in the CppAD namespace by including the file cppad/speed/det_grad_33.hpp (relative to the CppAD distribution directory).

11.2.5.d: x
The argument x has prototype
     const Vector& x
It contains the elements of the matrix @(@ X @)@ in row major order; i.e., @[@ X_{i,j} = x [ i * 3 + j ] @]@

11.2.5.e: g
The argument g has prototype
     const Vector& g
It contains the elements of the gradient of @(@ \det ( X ) @)@ in row major order; i.e., @[@ \D{\det (X)}{X(i,j)} = g [ i * 3 + j ] @]@

11.2.5.f: Vector
If y is a Vector object, it must support the syntax
     y[i]
where i has type size_t with value less than 9. This must return a double value corresponding to the i-th element of the vector y . This is the only requirement of the type Vector .

11.2.5.g: ok
The return value ok has prototype
     bool ok
It is true if the gradient g passes the test and false otherwise.
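As a minimal sketch of such a check (here the gradient is computed by an ADFun object; any other method could be substituted, and the matrix values are arbitrary):

     # include <cppad/cppad.hpp>
     # include <cppad/speed/det_grad_33.hpp>
     # include <cppad/speed/det_by_minor.hpp>

     bool my_grad_check(void)
     {     typedef CppAD::AD<double> ADdouble;
          CppAD::vector<ADdouble> a_x(9), a_y(1);
          CppAD::vector<double>   x(9), g(9);
          for(size_t i = 0; i < 9; i++)
               a_x[i] = x[i] = double(i + 2);
          CppAD::Independent(a_x);
          CppAD::det_by_minor<ADdouble> det(3);
          a_y[0] = det(a_x);                 // record det(X)
          CppAD::ADFun<double> f(a_x, a_y);  // f : R^9 -> R
          g = f.Jacobian(x);                 // gradient of det(X) at x
          return CppAD::det_grad_33(x, g);   // compare with hand coded gradient
     }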

11.2.5.h: Source Code
The file 11.2.5.1: det_grad_33.hpp contains the source code for this template function.
Input File: cppad/speed/det_grad_33.hpp
11.2.5.1: Source: det_grad_33
# ifndef CPPAD_DET_GRAD_33_HPP
# define CPPAD_DET_GRAD_33_HPP
# include <limits>
# include <cppad/utility/near_equal.hpp>
namespace CppAD {
template <class Vector>
     bool det_grad_33(const Vector &x, const Vector &g)
     {     bool ok = true;
          typedef typename Vector::value_type Float;
          Float eps = 10. * Float( std::numeric_limits<double>::epsilon() );

          // use expansion by minors to compute the derivative by hand
          double check[9];
          check[0] = + ( x[4] * x[8] - x[5] * x[7] );
          check[1] = - ( x[3] * x[8] - x[5] * x[6] );
          check[2] = + ( x[3] * x[7] - x[4] * x[6] );
          //
          check[3] = - ( x[1] * x[8] - x[2] * x[7] );
          check[4] = + ( x[0] * x[8] - x[2] * x[6] );
          check[5] = - ( x[0] * x[7] - x[1] * x[6] );
          //
          check[6] = + ( x[1] * x[5] - x[2] * x[4] );
          check[7] = - ( x[0] * x[5] - x[2] * x[3] );
          check[8] = + ( x[0] * x[4] - x[1] * x[3] );
          //
          for(size_t i = 0; i < 3 * 3; i++)
               ok &= CppAD::NearEqual(check[i], g[i], eps, eps);

          return ok;
     }
}
# endif

Input File: omh/det_grad_33_hpp.omh
11.2.6: Sum Elements of a Matrix Times Itself

11.2.6.a: Syntax
# include <cppad/speed/mat_sum_sq.hpp>
mat_sum_sq(n, x, y, z)

11.2.6.b: Purpose
This routine is intended for use with the matrix multiply speed tests; to be specific, it computes @[@ \begin{array}{rcl} y_{i,j} & = & \sum_{k=0}^{n-1} x_{i,k} x_{k,j} \\ z_0 & = & \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} y_{i,j} \end{array} @]@ see 11.1.3: link_mat_mul .

11.2.6.c: Inclusion
The template function mat_sum_sq is defined in the CppAD namespace by including the file cppad/speed/mat_sum_sq.hpp (relative to the CppAD distribution directory).

11.2.6.d: n
This argument has prototype
     size_t n
It specifies the size of the matrices.

11.2.6.e: x
The argument x has prototype
     const Vector& x
and x.size() == n * n . It contains the elements of @(@ x @)@ in row major order; i.e., @[@ x_{i,j} = x [ i * n + j ] @]@

11.2.6.f: y
The argument y has prototype
     Vector& y
and y.size() == n * n . The input value of its elements does not matter. Upon return, @[@ \begin{array}{rcl} y_{i,j} & = & \sum_{k=0}^{n-1} x_{i,k} x_{k,j} \\ y[ i * n + j ] & = & y_{i,j} \end{array} @]@

11.2.6.g: z
The argument z has prototype
     Vector& z
The input value of its elements does not matter. Upon return @[@ \begin{array}{rcl} z_0 & = & \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} y_{i,j} \\ z[0] & = & z_0 \end{array} @]@

11.2.6.h: Vector
The type Vector is any 8.9: SimpleVector , or it can be a raw pointer to the vector elements. The element type must support addition, multiplication, and assignment to both its own type and to a double value.
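As a minimal sketch using raw pointers for the Vector type (the matrix values are arbitrary):

     # include <cppad/speed/mat_sum_sq.hpp>

     void pointer_example(void)
     {     double x[4] = { 1., 2., 3., 4. }; // 2 by 2 matrix in row major order
          double y[4], z[1];
          double *x_p = x, *y_p = y, *z_p = z;
          CppAD::mat_sum_sq(2, x_p, y_p, z_p); // Vector is deduced as double*
          // now y = { 7., 10., 15., 22. } and z[0] == 54.
     }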

11.2.6.i: Example
The file 11.2.6.1: mat_sum_sq.cpp contains an example and test of mat_sum_sq.hpp. It returns true if it succeeds and false otherwise.

11.2.6.j: Source Code
The file 11.2.6.2: mat_sum_sq.hpp contains the source for this template function.
Input File: cppad/speed/mat_sum_sq.hpp
11.2.6.1: Sum of the Elements of the Square of a Matrix: Example and Test
# include <vector>
# include <cstddef>
# include <cppad/utility/vector.hpp>
# include <cppad/speed/mat_sum_sq.hpp>

bool mat_sum_sq()
{     bool   ok = true;
     double x_00, x_01, x_10, x_11, check;

     // dimension of the matrices x, y, and the result z
     size_t n = 2;
     CppAD::vector<double> x(n * n), y(n * n), z(1);

     // x = [ 1 2 ; 3 4 ]
     x[0] = x_00 = 1.;
     x[1] = x_01 = 2.;
     x[2] = x_10 = 3.;
     x[3] = x_11 = 4.;

     // compute y = x * x and z = sum of elements in y
     CppAD::mat_sum_sq(n, x, y, z);

     // check y_00
     check = x_00 * x_00 + x_01 * x_10;
     ok   &= (check == y[0]);

     // check y_01
     check = x_00 * x_01 + x_01 * x_11;
     ok   &= (check == y[1]);

     // check y_10
     check = x_10 * x_00 + x_11 * x_10;
     ok   &= (check == y[2]);

     // check y_11
     check = x_10 * x_01 + x_11 * x_11;
     ok   &= (check == y[3]);

     // check z
     check = y[0] + y[1] + y[2] + y[3];
     ok   &= (check == z[0]);

     return ok;
}

Input File: speed/example/mat_sum_sq.cpp
11.2.6.2: Source: mat_sum_sq
# ifndef CPPAD_MAT_SUM_SQ_HPP
# define CPPAD_MAT_SUM_SQ_HPP
# include <cstddef>
//
namespace CppAD {
     template <class Vector>
     void mat_sum_sq(size_t n, Vector& x , Vector& y , Vector& z)
     {     size_t i, j, k;
          // Very simple computation of y = x * x for speed comparison
          for(i = 0; i < n; i++)
          {     for(j = 0; j < n; j++)
               {     y[i * n + j] = 0.;
                    for(k = 0; k < n; k++)
                         y[i * n + j] += x[i * n + k] * x[k * n + j];
               }
          }
          z[0] = 0.;
          for(i = 0; i < n; i++)
          {     for(j = 0; j < n; j++)
                    z[0] += y[i * n + j];
          }
          return;
     }

}
# endif

Input File: omh/mat_sum_sq_hpp.omh
11.2.7: Evaluate a Function Defined in Terms of an ODE

11.2.7.a: Syntax
# include <cppad/speed/ode_evaluate.hpp>
ode_evaluate(x, p, fp)

11.2.7.b: Purpose
This routine evaluates a function @(@ f : \B{R}^n \rightarrow \B{R}^n @)@ defined by @[@ f(x) = y(x, 1) @]@ where @(@ y(x, t) @)@ solves the ordinary differential equation @[@ \begin{array}{rcl} y(x, 0) & = & x \\ \partial_t y (x, t ) & = & g[ y(x,t) , t ] \end{array} @]@ where @(@ g : \B{R}^n \times \B{R} \rightarrow \B{R}^n @)@ is an unspecified function.

11.2.7.c: Inclusion
The template function ode_evaluate is defined in the CppAD namespace by including the file cppad/speed/ode_evaluate.hpp (relative to the CppAD distribution directory).

11.2.7.d: Float

11.2.7.d.a: Operation Sequence
The type Float must be a 8.7: NumericType . The Float 12.4.g.b: operation sequence for this routine does not depend on the value of the argument x , hence it does not need to be retaped for each value of @(@ x @)@.
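For instance, the following sketch (the argument values are arbitrary) records the operation sequence once and then evaluates at a different argument value without retaping:

     # include <cppad/cppad.hpp>
     # include <cppad/speed/ode_evaluate.hpp>

     void tape_once(void)
     {     size_t n = 3;
          CppAD::vector< CppAD::AD<double> > a_x(n), a_y(n);
          CppAD::vector<double> x(n), y(n);
          for(size_t j = 0; j < n; j++)
               a_x[j] = double(j + 1);
          CppAD::Independent(a_x);
          CppAD::ode_evaluate(a_x, 0, a_y);  // p == 0: compute y(x, 1)
          CppAD::ADFun<double> f(a_x, a_y);
          for(size_t j = 0; j < n; j++)
               x[j] = double(2 * j + 1);     // a different argument value
          y = f.Forward(0, x);               // no retape required
     }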

11.2.7.d.b: fabs
If y and z are Float objects, the syntax
     y = fabs(z)
must be supported. Note that it does not matter if the operation sequence for fabs depends on z because the corresponding results are not actually used by ode_evaluate; see fabs in 8.17.l.a: Runge45 .

11.2.7.e: x
The argument x has prototype
     const CppAD::vector<Float>& x
It contains the argument value for which the function, or its derivative, is being evaluated. The value @(@ n @)@ is determined by the size of the vector x .

11.2.7.f: p
The argument p has prototype
     size_t p

11.2.7.f.a: p == 0
In this case a numerical method is used to solve the ODE and obtain an accurate approximation for @(@ y(x, 1) @)@. This numerical method has a fixed operation sequence that does not depend on x .

11.2.7.f.b: p == 1
In this case an analytic solution for the partial derivative @(@ \partial_x y(x, 1) @)@ is returned.

11.2.7.g: fp
The argument fp has prototype
     CppAD::vector<Float>& fp
The input value of the elements of fp does not matter.

11.2.7.g.a: Function
If p is zero, fp has size equal to @(@ n @)@ and contains the value of @(@ y(x, 1) @)@.

11.2.7.g.b: Gradient
If p is one, fp has size equal to @(@ n^2 @)@ and for @(@ i = 0 , \ldots , n-1 @)@, @(@ j = 0 , \ldots , n-1 @)@ @[@ \D{y[i]}{x[j]} (x, 1) = fp [ i \cdot n + j ] @]@

11.2.7.h: Example
The file 11.2.7.1: ode_evaluate.cpp contains an example and test of ode_evaluate.hpp. It returns true if it succeeds and false otherwise.

11.2.7.i: Source Code
The file 11.2.7.2: ode_evaluate.hpp contains the source code for this template function.
Input File: cppad/speed/ode_evaluate.hpp
11.2.7.1: ode_evaluate: Example and test
# include <cppad/speed/ode_evaluate.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/cppad.hpp>

bool ode_evaluate(void)
{     using CppAD::NearEqual;
     using CppAD::AD;

     bool ok = true;

     size_t n = 3;
     CppAD::vector<double>       x(n);
     CppAD::vector<double>       ym(n * n);
     CppAD::vector< AD<double> > X(n);
     CppAD::vector< AD<double> > Ym(n);

     // choose x
     size_t j;
     for(j = 0; j < n; j++)
     {     x[j] = double(j + 1);
          X[j] = x[j];
     }

     // declare independent variables
     Independent(X);

     // evaluate function
     size_t m = 0;
     CppAD::ode_evaluate(X, m, Ym);

     // evaluate derivative
     m = 1;
     CppAD::ode_evaluate(x, m, ym);

     // use AD to evaluate derivative
     CppAD::ADFun<double>   F(X, Ym);
     CppAD::vector<double>  dy(n * n);
     dy = F.Jacobian(x);

     size_t k;
     for(k = 0; k < n * n; k++)
          ok &= NearEqual(ym[k], dy[k] , 1e-7, 1e-7);

     return ok;
}

Input File: speed/example/ode_evaluate.cpp
11.2.7.2: Source: ode_evaluate
# ifndef CPPAD_ODE_EVALUATE_HPP
# define CPPAD_ODE_EVALUATE_HPP
# include <cppad/utility/vector.hpp>
# include <cppad/utility/ode_err_control.hpp>
# include <cppad/utility/runge_45.hpp>

namespace CppAD {

     template <class Float>
     class ode_evaluate_fun {
     public:
          // Given that y_i (0) = x_i,
          // the following y_i (t) satisfy the ODE below:
          // y_0 (t) = x[0]
          // y_1 (t) = x[1] + x[0] * t
          // y_2 (t) = x[2] + x[1] * t + x[0] * t^2/2
          // y_3 (t) = x[3] + x[2] * t + x[1] * t^2/2 + x[0] * t^3 / 3!
          // ...
          void Ode(
               const Float&                    t,
               const CppAD::vector<Float>&     y,
               CppAD::vector<Float>&           f)
          {     size_t n  = y.size();
               f[0]      = 0.;
               for(size_t k = 1; k < n; k++)
                    f[k] = y[k-1];
          }
     };
     //
     template <class Float>
     void ode_evaluate(
          const CppAD::vector<Float>& x  ,
          size_t                      p  ,
          CppAD::vector<Float>&       fp )
     {     using CppAD::vector;
          typedef vector<Float> VectorFloat;

          size_t n = x.size();
          CPPAD_ASSERT_KNOWN( p == 0 || p == 1,
               "ode_evaluate: p is not zero or one"
          );
          CPPAD_ASSERT_KNOWN(
               ((p==0) & (fp.size()==n)) || ((p==1) & (fp.size()==n*n)),
               "ode_evaluate: the size of fp is not correct"
          );
          if( p == 0 )
          {     // function that defines the ode
               ode_evaluate_fun<Float> F;

               // number of Runge45 steps to use
               size_t M = 10;

               // initial and final time
               Float ti = 0.0;
               Float tf = 1.0;

               // initial value for y(x, t); i.e. y(x, 0)
               // (is a reference to x)
               const VectorFloat& yi = x;

               // final value for y(x, t); i.e., y(x, 1)
               // (is a reference to fp)
               VectorFloat& yf = fp;

               // Use fourth order Runge-Kutta to solve ODE
               yf = CppAD::Runge45(F, M, ti, tf, yi);

               return;
          }
          /* Compute derivative of y(x, 1) w.r.t. x
          y_0 (x, t) = x[0]
          y_1 (x, t) = x[1] + x[0] * t
          y_2 (x, t) = x[2] + x[1] * t + x[0] * t^2/2
          y_3 (x, t) = x[3] + x[2] * t + x[1] * t^2/2 + x[0] * t^3 / 3!
          ...
          */
          size_t i, j, k;
          for(i = 0; i < n; i++)
          {     for(j = 0; j < n; j++)
                    fp[ i * n + j ] = 0.0;
          }
          size_t factorial = 1;
          for(k = 0; k < n; k++)
          {     if( k > 1 )
                    factorial *= k;
               for(i = k; i < n; i++)
               {     // partial w.r.t x[i-k] of x[i-k] * t^k / k!
                    j = i - k;
                    fp[ i * n + j ] += 1.0 / Float(factorial);
               }
          }
     }
}
# endif

Input File: omh/ode_evaluate.omh
11.2.8: Evaluate a Function That Has a Sparse Jacobian

11.2.8.a: Syntax
# include <cppad/speed/sparse_jac_fun.hpp>
sparse_jac_fun(m, n, x, row, col, p, fp)

11.2.8.b: Purpose
This routine evaluates @(@ f(x) @)@ and @(@ f^{(1)} (x) @)@ where the Jacobian @(@ f^{(1)} (x) @)@ is sparse. The function @(@ f : \B{R}^n \rightarrow \B{R}^m @)@ only depends on the size and contents of the index vectors row and col . The non-zero entries in the Jacobian of this function have one of the following forms: @[@ \D{ f[row[k]]}{x[col[k]]} @]@ for some @(@ k @)@ between zero and @(@ K-1 @)@. All the other terms of the Jacobian are zero.

11.2.8.c: Inclusion
The template function sparse_jac_fun is defined in the CppAD namespace by including the file cppad/speed/sparse_jac_fun.hpp (relative to the CppAD distribution directory).

11.2.8.d: Float
The type Float must be a 8.7: NumericType . In addition, if y and z are Float objects,
     y = exp(z)
must set y equal to the exponential of z ; i.e., the derivative of y with respect to z is equal to y .

11.2.8.e: FloatVector
The type FloatVector is any 8.9: SimpleVector , or it can be a raw pointer, with elements of type Float .

11.2.8.f: n
The argument n has prototype
     size_t n
It specifies the dimension for the domain space for @(@ f(x) @)@.

11.2.8.g: m
The argument m has prototype
     size_t m
It specifies the dimension for the range space for @(@ f(x) @)@.

11.2.8.h: x
The argument x has prototype
     const FloatVector& x
It contains the argument value for which the function, or its derivative, is being evaluated. We use @(@ n @)@ to denote the size of the vector x .

11.2.8.i: row
The argument row has prototype
     const CppAD::vector<size_t>& row
It specifies indices in the range of @(@ f(x) @)@ for non-zero components of the Jacobian (see 11.2.8.b: purpose above). The value @(@ K @)@ is defined by K = row.size() . All the elements of row must be between zero and m-1 .

11.2.8.j: col
The argument col has prototype
     const CppAD::vector<size_t>& col
and its size must be @(@ K @)@; i.e., the same as row . It specifies the component of @(@ x @)@ for the non-zero Jacobian terms. All the elements of col must be between zero and n-1 .

11.2.8.k: p
The argument p has prototype
     size_t p
It is either zero or one and specifies the order of the derivative of @(@ f @)@ that is being evaluated, i.e., @(@ f^{(p)} (x) @)@ is evaluated.

11.2.8.l: fp
The argument fp has prototype
     FloatVector& fp
If p is zero, its size is m ; otherwise its size is K . The input value of the elements of fp does not matter.

11.2.8.l.a: Function
If p is zero, fp has size @(@ m @)@ and (fp[0], ... , fp[m-1]) is the value of @(@ f(x) @)@.

11.2.8.l.b: Jacobian
If p is one, fp has size K and for @(@ k = 0 , \ldots , K-1 @)@, @[@ \D{f[ \R{row}[k] ]}{x[ \R{col}[k] ]} = fp [k] @]@

11.2.8.m: Example
The file 11.2.8.1: sparse_jac_fun.cpp contains an example and test of sparse_jac_fun.hpp. It returns true if it succeeds and false otherwise.

11.2.8.n: Source Code
The file 11.2.8.2: sparse_jac_fun.hpp contains the source code for this template function.
Input File: cppad/speed/sparse_jac_fun.hpp
11.2.8.1: sparse_jac_fun: Example and test
# include <cppad/speed/sparse_jac_fun.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/cppad.hpp>

bool sparse_jac_fun(void)
{     using CppAD::NearEqual;
     using CppAD::AD;

     bool ok = true;

     size_t j, k;
     double eps = CppAD::numeric_limits<double>::epsilon();
     size_t n   = 3;
     size_t m   = 4;
     size_t K   = 5;
     CppAD::vector<size_t>       row(K), col(K);
     CppAD::vector<double>       x(n),   yp(K);
     CppAD::vector< AD<double> > a_x(n), a_y(m);

     // choose x
     for(j = 0; j < n; j++)
          a_x[j] = x[j] = double(j + 1);

     // choose row, col
     for(k = 0; k < K; k++)
     {     row[k] = k % m;
          col[k] = (K - k) % n;
     }

     // declare independent variables
     Independent(a_x);

     // evaluate function
     size_t order = 0;
     CppAD::sparse_jac_fun< AD<double> >(m, n, a_x, row, col, order, a_y);

     // evaluate derivative
     order = 1;
     CppAD::sparse_jac_fun<double>(m, n, x, row, col, order, yp);

     // use AD to evaluate derivative
     CppAD::ADFun<double>   f(a_x, a_y);
     CppAD::vector<double>  jac(m * n);
     jac = f.Jacobian(x);

     for(k = 0; k < K; k++)
     {     size_t index = row[k] * n + col[k];
          ok &= NearEqual(jac[index], yp[k] , eps, eps);
     }
     return ok;
}

Input File: speed/example/sparse_jac_fun.cpp
11.2.8.2: Source: sparse_jac_fun
# ifndef CPPAD_SPARSE_JAC_FUN_HPP
# define CPPAD_SPARSE_JAC_FUN_HPP
# include <cppad/core/cppad_assert.hpp>
# include <cppad/utility/check_numeric_type.hpp>
# include <cppad/utility/vector.hpp>

// following needed by gcc under fedora 17 so that exp(double) is defined
# include <cppad/base_require.hpp>

namespace CppAD {
     template <class Float, class FloatVector>
     void sparse_jac_fun(
          size_t                       m    ,
          size_t                       n    ,
          const FloatVector&           x    ,
          const CppAD::vector<size_t>& row  ,
          const CppAD::vector<size_t>& col  ,
          size_t                       p    ,
          FloatVector&                 fp   )
     {
          // check numeric type specifications
          CheckNumericType<Float>();
          // check value of p
          CPPAD_ASSERT_KNOWN(
               p == 0 || p == 1,
               "sparse_jac_fun: p != 0 and p != 1"
          );
          size_t K = row.size();
          CPPAD_ASSERT_KNOWN(
               K >= m,
               "sparse_jac_fun: row.size() < m"
          );
          size_t i, j, k;

          if( p == 0 )
               for(i = 0; i < m; i++)
                    fp[i] = Float(0);

          Float t;
          for(k = 0; k < K; k++)
          {     i    = row[k];
               j    = col[k];
               t    = exp( x[j] * x[j] / 2.0 );
               switch(p)
               {
                    case 0:
                    fp[i] += t;
                    break;

                    case 1:
                    fp[k] = t * x[j];
                    break;
               }
          }
     }
}
# endif

Input File: omh/sparse_jac_fun.omh
11.2.9: Evaluate a Function That Has a Sparse Hessian

11.2.9.a: Syntax
# include <cppad/speed/sparse_hes_fun.hpp>
sparse_hes_fun(n, x, row, col, p, fp)

11.2.9.b: Purpose
This routine evaluates @(@ f(x) @)@, @(@ f^{(1)} (x) @)@, or @(@ f^{(2)} (x) @)@ where the Hessian @(@ f^{(2)} (x) @)@ is sparse. The function @(@ f : \B{R}^n \rightarrow \B{R} @)@ only depends on the size and contents of the index vectors row and col . The non-zero entries in the Hessian of this function have one of the following forms: @[@ \DD{f}{x[row[k]]}{x[row[k]]} \; , \; \DD{f}{x[row[k]]}{x[col[k]]} \; , \; \DD{f}{x[col[k]]}{x[row[k]]} \; , \; \DD{f}{x[col[k]]}{x[col[k]]} @]@ for some @(@ k @)@ between zero and @(@ K-1 @)@. All the other terms of the Hessian are zero.

11.2.9.c: Inclusion
The template function sparse_hes_fun is defined in the CppAD namespace by including the file cppad/speed/sparse_hes_fun.hpp (relative to the CppAD distribution directory).

11.2.9.d: Float
The type Float must be a 8.7: NumericType . In addition, if y and z are Float objects,
     y = exp(z)
must set y equal to the exponential of z ; i.e., the derivative of y with respect to z is equal to y .

11.2.9.e: FloatVector
The type FloatVector is any 8.9: SimpleVector , or it can be a raw pointer, with elements of type Float .

11.2.9.f: n
The argument n has prototype
     size_t n
It specifies the dimension for the domain space for @(@ f(x) @)@.

11.2.9.g: x
The argument x has prototype
     const FloatVector& x
It contains the argument value for which the function, or its derivative, is being evaluated. We use @(@ n @)@ to denote the size of the vector x .

11.2.9.h: row
The argument row has prototype
     const CppAD::vector<size_t>& row
It specifies the first index of @(@ x @)@ for each non-zero Hessian term (see 11.2.9.b: purpose above). All the elements of row must be between zero and n-1 . The value @(@ K @)@ is defined by K = row.size() .

11.2.9.i: col
The argument col has prototype
     const CppAD::vector<size_t>& col
and its size must be @(@ K @)@; i.e., the same as for row . It specifies the second index of @(@ x @)@ for the non-zero Hessian terms. All the elements of col must be between zero and n-1 . Duplicate entries are not allowed; to be specific, if k1 != k2 then
     ( row[k1] , col[k1] ) != ( row[k2] , col[k2] )

11.2.9.j: p
The argument p has prototype
     size_t p
It is either zero or two and specifies the order of the derivative of @(@ f @)@ that is being evaluated, i.e., @(@ f^{(p)} (x) @)@ is evaluated.

11.2.9.k: fp
The argument fp has prototype
     FloatVector& fp
The input value of the elements of fp does not matter.

11.2.9.k.a: Function
If p is zero, fp has size one and fp[0] is the value of @(@ f(x) @)@.

11.2.9.k.b: Hessian
If p is two, fp has size K and for @(@ k = 0 , \ldots , K-1 @)@, @[@ \DD{f}{ x[ \R{row}[k] ] }{ x[ \R{col}[k] ]} = fp [k] @]@

11.2.9.l: Example
The file 11.2.9.1: sparse_hes_fun.cpp contains an example and test of sparse_hes_fun.hpp. It returns true if it succeeds and false otherwise.

11.2.9.m: Source Code
The file 11.2.9.2: sparse_hes_fun.hpp contains the source code for this template function.
Input File: cppad/speed/sparse_hes_fun.hpp
11.2.9.1: sparse_hes_fun: Example and test
# include <cppad/speed/sparse_hes_fun.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/cppad.hpp>

bool sparse_hes_fun(void)
{     using CppAD::NearEqual;
     bool ok = true;

     typedef CppAD::AD<double> ADScalar;

     size_t j, k;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();
     size_t n   = 5;
     size_t m   = 1;
     size_t K   = 2 * n;
     CppAD::vector<size_t>       row(K),  col(K);
     CppAD::vector<double>       x(n),    ypp(K);
     CppAD::vector<ADScalar>     a_x(n),  a_y(m);

     // choose x
     for(j = 0; j < n; j++)
          a_x[j] = x[j] = double(j + 1);

     // choose row, col
     for(k = 0; k < K; k++)
     {     row[k] = k % 3;
          col[k] = k / 3;
     }
     for(k = 0; k < K; k++)
     {     for(size_t k1 = 0; k1 < K; k1++)
               assert( k == k1 || row[k] != row[k1] || col[k] != col[k1] );
     }

     // declare independent variables
     Independent(a_x);

     // evaluate function
     size_t order = 0;
     CppAD::sparse_hes_fun<ADScalar>(n, a_x, row, col, order, a_y);

     // evaluate Hessian
     order = 2;
     CppAD::sparse_hes_fun<double>(n, x, row, col, order, ypp);

     // use AD to evaluate Hessian
     CppAD::ADFun<double>   f(a_x, a_y);
     CppAD::vector<double>  hes(n * n);
     // compute Hessian of f_0 (x)
     hes = f.Hessian(x, 0);

     for(k = 0; k < K; k++)
     {     size_t index = row[k] * n + col[k];
          ok &= NearEqual(hes[index], ypp[k] , eps, eps);
     }
     return ok;
}

Input File: speed/example/sparse_hes_fun.cpp
11.2.9.2: Source: sparse_hes_fun
# ifndef CPPAD_SPARSE_HES_FUN_HPP
# define CPPAD_SPARSE_HES_FUN_HPP
# include <cppad/core/cppad_assert.hpp>
# include <cppad/utility/check_numeric_type.hpp>
# include <cppad/utility/vector.hpp>

// following needed by gcc under fedora 17 so that exp(double) is defined
# include <cppad/base_require.hpp>

namespace CppAD {
     template <class Float, class FloatVector>
     void sparse_hes_fun(
          size_t                       n    ,
          const FloatVector&           x    ,
          const CppAD::vector<size_t>& row  ,
          const CppAD::vector<size_t>& col  ,
          size_t                       p    ,
          FloatVector&                fp    )
     {
          // check numeric type specifications
          CheckNumericType<Float>();

          // check value of p
          CPPAD_ASSERT_KNOWN(
               p == 0 || p == 2,
               "sparse_hes_fun: p != 0 and p != 2"
          );

          size_t K = row.size();
          size_t i, j, k;
          if( p == 0 )
               fp[0] = Float(0);
          else
          {     for(k = 0; k < K; k++)
                    fp[k] = Float(0);
          }

          // determine which diagonal entries are present in row[k], col[k]
          CppAD::vector<size_t> diagonal(n);
          for(i = 0; i < n; i++)
               diagonal[i] = K;   // no diagonal entry for this row
          for(k = 0; k < K; k++)
          {     if( row[k] == col[k] )
               {     CPPAD_ASSERT_UNKNOWN( diagonal[row[k]] == K );
                    // index of the diagonal entry
                    diagonal[ row[k] ] = k;
               }
          }

          // determine which entries must be multiplied by a factor of two
          CppAD::vector<Float> factor(K);
          for(k = 0; k < K; k++)
          {     factor[k] = Float(1);
               for(size_t k1 = 0; k1 < K; k1++)
               {     bool reflected = true;
                    reflected &= k != k1;
                    reflected &= row[k] != col[k];
                    reflected &= row[k] == col[k1];
                    reflected &= col[k] == row[k1];
                    if( reflected )
                         factor[k] = Float(2);
               }
          }

          Float t;
          for(k = 0; k < K; k++)
          {     i    = row[k];
               j    = col[k];
               t    = exp( x[i] * x[j] );
               switch(p)
               {
                    case 0:
                    fp[0] += t;
                    break;

                    case 2:
                    if( i == j )
                    {     // dt_dxi = 2.0 * xi * t
                         fp[k] += ( Float(2) + Float(4) * x[i] * x[i] ) * t;
                    }
                    else
                    {     // dt_dxi = xj * t
                         fp[k] += factor[k] * ( Float(1) + x[i] * x[j] ) * t;
                         if( diagonal[i] != K )
                         {     size_t ki = diagonal[i];
                              fp[ki] += x[j] * x[j] * t;
                         }
                         if( diagonal[j] != K )
                         {     size_t kj = diagonal[j];
                              fp[kj] += x[i] * x[i] * t;
                         }
                    }
                    break;
               }
          }

     }
}
# endif

Input File: omh/sparse_hes_fun.omh
11.2.10: Simulate a [0,1] Uniform Random Variate

11.2.10.a: Syntax
# include <cppad/speed/uniform_01.hpp>
uniform_01(seed)
uniform_01(n, x)

11.2.10.b: Purpose
This routine is used to create random values for speed testing purposes.

11.2.10.c: Inclusion
The template function uniform_01 is defined in the CppAD namespace by including the file cppad/speed/uniform_01.hpp (relative to the CppAD distribution directory).

11.2.10.d: seed
The argument seed has prototype
     size_t seed
It specifies a seed for the uniform random number generator.

11.2.10.e: n
The argument n has prototype
     size_t n
It specifies the number of elements in the random vector x .

11.2.10.f: x
The argument x has prototype
     Vector& x
The input value of the elements of x does not matter. Upon return, the elements of x are set to values randomly sampled over the interval [0,1].

11.2.10.g: Vector
If y is a double value, the object x must support the syntax
     x[i] = y
where i has type size_t with value less than or equal to @(@ n-1 @)@. This is the only requirement of the type Vector .
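As a minimal sketch (the seed value 123 and size 10 are arbitrary):

     # include <cppad/utility/vector.hpp>
     # include <cppad/speed/uniform_01.hpp>

     void fill_random(void)
     {     CppAD::uniform_01(123);  // seed the random number generator
          size_t n = 10;
          CppAD::vector<double> x(n);
          CppAD::uniform_01(n, x); // each x[i] is now in [0,1]
     }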

11.2.10.h: Source Code
The file 11.2.10.1: uniform_01.hpp contains the source code for this template function.
Input File: cppad/speed/uniform_01.hpp
11.2.10.1: Source: uniform_01
# ifndef CPPAD_UNIFORM_01_HPP
# define CPPAD_UNIFORM_01_HPP
# include <cstdlib>

namespace CppAD {
     inline void uniform_01(size_t seed)
     {     std::srand( (unsigned int) seed); }

     template <class Vector>
     void uniform_01(size_t n, Vector &x)
     {     static double factor = 1. / double(RAND_MAX);
          while(n--)
               x[n] = std::rand() * factor;
     }
}
# endif

Input File: omh/uniform_01_hpp.omh
11.3: Speed Test of Functions in Double

11.3.a: Purpose
CppAD has a set of speed tests for just calculating functions (in double precision instead of an AD type). This section links to the source code for the function value speed tests.

11.3.b: Running Tests
To build these speed tests, and run their correctness tests, execute the following commands starting in the 2.2.b.a: build directory :
     cd speed/double
     make check_speed_double VERBOSE=1
You can then run the corresponding speed tests with the following command
     ./speed_double speed seed
where seed is a positive integer. See 11.1: speed_main for more options.
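For example, with the (arbitrary) seed value 123:
     ./speed_double speed 123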

11.3.c: Contents
11.3.1: Double Speed: Determinant by Minor Expansion
11.3.2: Double Speed: Determinant Using Lu Factorization
11.3.3: CppAD Speed: Matrix Multiplication (Double Version)
11.3.4: Double Speed: Ode Solution
11.3.5: Double Speed: Evaluate a Polynomial
11.3.6: Double Speed: Sparse Hessian
11.3.7: Double Speed: Sparse Jacobian

Input File: omh/speed/speed_double.omh
11.3.1: Double Speed: Determinant by Minor Expansion

11.3.1.a: Specifications
See 11.1.2: link_det_minor .

11.3.1.b: Implementation
# include <cppad/utility/vector.hpp>
# include <cppad/speed/det_by_minor.hpp>
# include <cppad/speed/uniform_01.hpp>

// Note that CppAD uses global_option["memory"] at the main program level
# include <map>
extern std::map<std::string, bool> global_option;

bool link_det_minor(
     size_t                     size     ,
     size_t                     repeat   ,
     CppAD::vector<double>     &matrix   ,
     CppAD::vector<double>     &det      )
{
     if(global_option["onetape"]||global_option["atomic"]||global_option["optimize"])
          return false;
     // -----------------------------------------------------
     // setup
     CppAD::det_by_minor<double>   Det(size);
     size_t n = size * size; // number of independent variables

     // ------------------------------------------------------
     while(repeat--)
     {     // get the next matrix
          CppAD::uniform_01(n, matrix);

          // computation of the determinant
          det[0] = Det(matrix);
     }
     return true;
}

Input File: speed/double/det_minor.cpp
11.3.2: Double Speed: Determinant Using Lu Factorization

11.3.2.a: Specifications
See 11.1.1: link_det_lu .

11.3.2.b: Implementation
# include <cppad/utility/vector.hpp>
# include <cppad/speed/det_by_lu.hpp>
# include <cppad/speed/uniform_01.hpp>

// Note that CppAD uses global_option["memory"] at the main program level
# include <map>
extern std::map<std::string, bool> global_option;

bool link_det_lu(
     size_t                           size     ,
     size_t                           repeat   ,
     CppAD::vector<double>           &matrix   ,
     CppAD::vector<double>           &det      )
{
     if(global_option["onetape"]||global_option["atomic"]||global_option["optimize"])
          return false;
     // -----------------------------------------------------
     // setup
     CppAD::det_by_lu<double>  Det(size);
     size_t n = size * size; // number of independent variables

     // ------------------------------------------------------

     while(repeat--)
     {     // get the next matrix
          CppAD::uniform_01(n, matrix);

          // computation of the determinant
          det[0] = Det(matrix);
     }
     return true;
}

Input File: speed/double/det_lu.cpp
11.3.3: CppAD Speed: Matrix Multiplication (Double Version)

11.3.3.a: Specifications
See 11.1.3: link_mat_mul .

11.3.3.b: Implementation
# include <cppad/cppad.hpp>
# include <cppad/speed/mat_sum_sq.hpp>
# include <cppad/speed/uniform_01.hpp>

// Note that CppAD uses global_option["memory"] at the main program level
# include <map>
extern std::map<std::string, bool> global_option;

bool link_mat_mul(
     size_t                           size     ,
     size_t                           repeat   ,
     CppAD::vector<double>&           x        ,
     CppAD::vector<double>&           z        ,
     CppAD::vector<double>&           dz
)
{
     if(global_option["onetape"]||global_option["atomic"]||global_option["optimize"])
          return false;
     // -----------------------------------------------------
     size_t n = size * size; // number of independent variables
     CppAD::vector<double> y(n);

     while(repeat--)
     {     // get the next matrix
          CppAD::uniform_01(n, x);

          // do computation
          mat_sum_sq(size, x, y, z);

     }
     return true;
}

Input File: speed/double/mat_mul.cpp
11.3.4: Double Speed: Ode Solution

11.3.4.a: Specifications
See 11.1.4: link_ode .

11.3.4.b: Implementation
# include <cstring>
# include <cppad/utility/vector.hpp>
# include <cppad/speed/ode_evaluate.hpp>
# include <cppad/speed/uniform_01.hpp>

// Note that CppAD uses global_option["memory"] at the main program level
# include <map>
extern std::map<std::string, bool> global_option;

bool link_ode(
     size_t                     size       ,
     size_t                     repeat     ,
     CppAD::vector<double>      &x         ,
     CppAD::vector<double>      &jacobian
)
{
     if(global_option["onetape"]||global_option["atomic"]||global_option["optimize"])
          return false;
     // -------------------------------------------------------------
     // setup
     assert( x.size() == size );

     size_t n = size;

     size_t m = 0;         // derivative order zero: ode_evaluate returns function values
     CppAD::vector<double> f(n);

     while(repeat--)
     {     // choose next x value
          uniform_01(n, x);

          // evaluate function
          CppAD::ode_evaluate(x, m, f);

     }
     // the double version returns function values in place of derivatives
     size_t i;
     for(i = 0; i < n; i++)
          jacobian[i] = f[i];
     return true;
}

Input File: speed/double/ode.cpp
11.3.5: Double Speed: Evaluate a Polynomial

11.3.5.a: Specifications
See 11.1.5: link_poly .

11.3.5.b: Implementation
# include <cppad/cppad.hpp>
# include <cppad/speed/uniform_01.hpp>

// Note that CppAD uses global_option["memory"] at the main program level
# include <map>
extern std::map<std::string, bool> global_option;

bool link_poly(
     size_t                     size     ,
     size_t                     repeat   ,
     CppAD::vector<double>     &a        ,  // coefficients of polynomial
     CppAD::vector<double>     &z        ,  // polynomial argument value
     CppAD::vector<double>     &p        )  // second derivative w.r.t z
{
     if( global_option["onetape"] || global_option["atomic"] || global_option["optimize"] )
          return false;
     // -----------------------------------------------------
     // setup

     // ------------------------------------------------------
     while(repeat--)
     {     // get the next argument value
          CppAD::uniform_01(1, z);

          // evaluate the polynomial at the new argument value
          p[0] = CppAD::Poly(0, a, z[0]);
     }
     return true;
}
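
CppAD::Poly(k, a, z) returns the k-th derivative of the polynomial with coefficients a evaluated at z, so the k = 0 call above is just the polynomial value. As a reference, a minimal sketch of that zero order case using Horner's method:

# include <cppad/utility/vector.hpp>

// sketch of the value CppAD::Poly(0, a, z) returns:
// p(z) = a[0] + a[1] * z + ... + a[d] * z^d, by Horner's method
double poly_value(const CppAD::vector<double>& a, double z)
{     size_t d = a.size() - 1;
     double p = a[d];
     for(size_t i = d; i > 0; i--)
          p = p * z + a[i-1];
     return p;
}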

Input File: speed/double/poly.cpp
11.3.6: Double Speed: Sparse Hessian

11.3.6.a: Specifications
See 11.1.6: link_sparse_hessian .

11.3.6.b: Implementation
# include <cppad/utility/vector.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/speed/sparse_hes_fun.hpp>

// Note that CppAD uses global_option["memory"] at the main program level
# include <map>
extern std::map<std::string, bool> global_option;

bool link_sparse_hessian(
     size_t                           size     ,
     size_t                           repeat   ,
     const CppAD::vector<size_t>&     row      ,
     const CppAD::vector<size_t>&     col      ,
     CppAD::vector<double>&           x        ,
     CppAD::vector<double>&           hessian  ,
     size_t&                          n_sweep  )
{
     if( global_option["onetape"] || global_option["atomic"] || global_option["optimize"] || global_option["boolsparsity"] )
          return false;
     // -----------------------------------------------------
     // setup
     using CppAD::vector;
     size_t order = 0;          // derivative order corresponding to function
     size_t n     = size;       // argument space dimension
     size_t m     = 1;          // range space dimension
     vector<double> y(m);       // function value

     // choose a value for x
     CppAD::uniform_01(n, x);

     // ------------------------------------------------------

     while(repeat--)
     {
          // computation of the function
          CppAD::sparse_hes_fun<double>(n, x, row, col, order, y);
     }
     // the double version returns the function value in place of the Hessian
     hessian[0] = y[0];

     return true;
}

Input File: speed/double/sparse_hessian.cpp
11.3.7: Double Speed: Sparse Jacobian

11.3.7.a: Specifications
See 11.1.7: link_sparse_jacobian .

11.3.7.b: Implementation
# include <cppad/utility/vector.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/speed/sparse_jac_fun.hpp>

// Note that CppAD uses global_option["memory"] at the main program level
# include <map>
extern std::map<std::string, bool> global_option;

bool link_sparse_jacobian(
     size_t                           size     ,
     size_t                           repeat   ,
     size_t                           m        ,
     const CppAD::vector<size_t>&     row      ,
     const CppAD::vector<size_t>&     col      ,
           CppAD::vector<double>&     x        ,
           CppAD::vector<double>&     jacobian ,
           size_t&                    n_sweep  )
{
     if( global_option["onetape"] || global_option["atomic"] || global_option["optimize"] || global_option["boolsparsity"] )
          return false;
     // -----------------------------------------------------
     // setup
     using CppAD::vector;
     size_t i;
     size_t order = 0;          // order for computing function value
     size_t n     = size;       // argument space dimension
     vector<double> yp(m);      // function value yp = f(x)

     // ------------------------------------------------------
     while(repeat--)
     {     // choose a value for x
          CppAD::uniform_01(n, x);

          // computation of the function
          CppAD::sparse_jac_fun<double>(m, n, x, row, col, order, yp);
     }
     // the double version returns function values in place of derivatives
     for(i = 0; i < m; i++)
          jacobian[i] = yp[i];

     return true;
}

Input File: speed/double/sparse_jacobian.cpp
11.4: Speed Test of Derivatives Using Adolc

11.4.a: Purpose
CppAD has a set of speed tests that are used to compare Adolc with other AD packages. This section links to the source code for the Adolc speed tests (any suggestions to make the Adolc results faster are welcome).

11.4.b: adolc_prefix
To run these tests, you must include the 2.2.1: adolc_prefix in your 2.2.b: cmake command .

11.4.c: Running Tests
To build these speed tests, and run their correctness tests, execute the following commands starting in the 2.2.b.a: build directory :
     cd speed/adolc
     make check_speed_adolc VERBOSE=1
You can then run the corresponding speed tests with the following command
     ./speed_adolc speed seed
where seed is a positive integer. See 11.1: speed_main for more options.

11.4.d: Contents
11.4.1: Adolc Speed: Gradient of Determinant by Minor Expansion
11.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization
11.4.3: Adolc Speed: Matrix Multiplication
11.4.4: Adolc Speed: Ode
11.4.5: Adolc Speed: Second Derivative of a Polynomial
11.4.6: Adolc Speed: Sparse Hessian
11.4.7: Adolc Speed: Sparse Jacobian
11.4.8: Adolc Test Utility: Allocate and Free Memory For a Matrix

Input File: omh/speed/speed_adolc.omh
11.4.1: Adolc Speed: Gradient of Determinant by Minor Expansion

11.4.1.a: Specifications
See 11.1.2: link_det_minor .

11.4.1.b: Implementation
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//
# include <adolc/adolc.h>
# include <cppad/utility/vector.hpp>
# include <cppad/speed/det_by_minor.hpp>
# include <cppad/speed/uniform_01.hpp>

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

bool link_det_minor(
     size_t                     size     ,
     size_t                     repeat   ,
     CppAD::vector<double>     &matrix   ,
     CppAD::vector<double>     &gradient )
{
     // speed test global option values
     if( global_option["atomic"] )
          return false;
     if( global_option["memory"] || global_option["optimize"] )
          return false;
     // -----------------------------------------------------
     // setup
     typedef adouble    ADScalar;
     typedef ADScalar*  ADVector;

     int tag  = 0;         // tape identifier
     int m    = 1;         // number of dependent variables
     int n    = size*size; // number of independent variables
     double f;             // function value
     int j;                // temporary index

     // set up for thread_alloc memory allocator (fast and checks for leaks)
     using CppAD::thread_alloc; // the allocator
     size_t capacity;           // capacity of an allocation

     // object for computing determinant
     CppAD::det_by_minor<ADScalar> Det(size);

     // AD value of determinant
     ADScalar   detA;

     // AD version of matrix
     ADVector A   = thread_alloc::create_array<ADScalar>(size_t(n), capacity);

     // vectors of reverse mode weights
     double* u    = thread_alloc::create_array<double>(size_t(m), capacity);
     u[0] = 1.;

     // vector with matrix value
     double* mat  = thread_alloc::create_array<double>(size_t(n), capacity);

     // vector to receive gradient result
     double* grad = thread_alloc::create_array<double>(size_t(n), capacity);

     // ----------------------------------------------------------------------
     if( ! global_option["onetape"] ) while(repeat--)
     {     // choose a matrix
          CppAD::uniform_01(n, mat);

          // declare independent variables
          int keep = 1; // keep forward mode results
          trace_on(tag, keep);
          for(j = 0; j < n; j++)
               A[j] <<= mat[j];

          // AD computation of the determinant
          detA = Det(A);

          // create function object f : A -> detA
          detA >>= f;
          trace_off();

          // evaluate and return gradient using reverse mode
          fos_reverse(tag, m, n, u, grad);
     }
     else
     {
          // choose a matrix
          CppAD::uniform_01(n, mat);

          // declare independent variables
          int keep = 0; // do not keep forward mode results in buffer
          trace_on(tag, keep);
          for(j = 0; j < n; j++)
               A[j] <<= mat[j];

          // AD computation of the determinant
          detA = Det(A);

          // create function object f : A -> detA
          detA >>= f;
          trace_off();

          while(repeat--)
          {     // get the next matrix
               CppAD::uniform_01(n, mat);

               // evaluate the determinant at the new matrix value
               keep = 1; // keep this forward mode result
               zos_forward(tag, m, n, keep, mat, &f);

               // evaluate and return gradient using reverse mode
               fos_reverse(tag, m, n, u, grad);
          }
     }
     // --------------------------------------------------------------------

     // return matrix and gradient
     for(j = 0; j < n; j++)
     {     matrix[j] = mat[j];
          gradient[j] = grad[j];
     }

     // tear down
     thread_alloc::delete_array(grad);
     thread_alloc::delete_array(mat);
     thread_alloc::delete_array(u);
     thread_alloc::delete_array(A);
     return true;
}
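
The Adolc calling sequence used above (trace_on, <<=, >>=, trace_off, then a reverse sweep) is easier to see on a trivial function. The following minimal example, which is not part of the test suite, tapes f(x) = x0 * x1 and recovers its gradient:

# include <adolc/adolc.h>

// tape f(x) = x0 * x1 and compute its gradient at x = (2, 3)
void adolc_gradient_example(double grad[2])
{     short  tag  = 1;     // tape identifier
     int    keep = 1;     // keep forward values for the reverse sweep
     double x[2] = {2.0, 3.0};
     double f;            // function value
     double u[1] = {1.0}; // reverse mode weight

     adouble X0, X1, F;
     trace_on(tag, keep);
     X0 <<= x[0];         // declare independent variables
     X1 <<= x[1];
     F   = X0 * X1;
     F  >>= f;            // declare the dependent variable
     trace_off();

     // first order reverse sweep; expect grad = (3, 2)
     fos_reverse(tag, 1, 2, u, grad);
}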

Input File: speed/adolc/det_minor.cpp
11.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization

11.4.2.a: Specifications
See 11.1.1: link_det_lu .

11.4.2.b: Implementation
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//
# include <adolc/adolc.h>

# include <cppad/speed/det_by_lu.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/utility/track_new_del.hpp>

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

bool link_det_lu(
     size_t                     size     ,
     size_t                     repeat   ,
     CppAD::vector<double>     &matrix   ,
     CppAD::vector<double>     &gradient )
{
     // speed test global option values
     if( global_option["onetape"] || global_option["atomic"] )
          return false;
     if( global_option["memory"] || global_option["optimize"] )
          return false;
     // -----------------------------------------------------
     // setup
     int tag  = 0;         // tape identifier
     int keep = 1;         // keep forward mode results in buffer
     int m    = 1;         // number of dependent variables
     int n    = size*size; // number of independent variables
     double f;             // function value
     int j;                // temporary index

     // set up for thread_alloc memory allocator (fast and checks for leaks)
     using CppAD::thread_alloc; // the allocator
     size_t size_min;           // requested number of elements
     size_t size_out;           // capacity of an allocation

     // object for computing determinant
     typedef adouble            ADScalar;
     typedef ADScalar*          ADVector;
     CppAD::det_by_lu<ADScalar> Det(size);

     // AD value of determinant
     ADScalar   detA;

     // AD version of matrix
     size_min    = n;
     ADVector A  = thread_alloc::create_array<ADScalar>(size_min, size_out);

     // vectors of reverse mode weights
     size_min    = m;
     double* u   = thread_alloc::create_array<double>(size_min, size_out);
     u[0] = 1.;

     // vector with matrix value
     size_min     = n;
     double* mat  = thread_alloc::create_array<double>(size_min, size_out);

     // vector to receive gradient result
     size_min     = n;
     double* grad = thread_alloc::create_array<double>(size_min, size_out);
     // ------------------------------------------------------
     while(repeat--)
     {     // get the next matrix
          CppAD::uniform_01(n, mat);

          // declare independent variables
          trace_on(tag, keep);
          for(j = 0; j < n; j++)
               A[j] <<= mat[j];

          // AD computation of the determinant
          detA = Det(A);

          // create function object f : A -> detA
          detA >>= f;
          trace_off();

          // evaluate and return gradient using reverse mode
          fos_reverse(tag, m, n, u, grad);
     }
     // ------------------------------------------------------

     // return matrix and gradient
     for(j = 0; j < n; j++)
     {     matrix[j] = mat[j];
          gradient[j] = grad[j];
     }
     // tear down
     thread_alloc::delete_array(grad);
     thread_alloc::delete_array(mat);
     thread_alloc::delete_array(u);
     thread_alloc::delete_array(A);

     return true;
}

Input File: speed/adolc/det_lu.cpp
11.4.3: Adolc Speed: Matrix Multiplication

11.4.3.a: Specifications
See 11.1.3: link_mat_mul .

11.4.3.b: Implementation
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//
# include <adolc/adolc.h>
# include <cppad/utility/vector.hpp>
# include <cppad/speed/mat_sum_sq.hpp>
# include <cppad/speed/uniform_01.hpp>

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

bool link_mat_mul(
     size_t                           size     ,
     size_t                           repeat   ,
     CppAD::vector<double>&           x        ,
     CppAD::vector<double>&           z        ,
     CppAD::vector<double>&           dz       )
{
     // speed test global option values
     if( global_option["memory"] || global_option["atomic"] || global_option["optimize"] )
          return false;
     // -----------------------------------------------------
     // setup
     typedef adouble    ADScalar;
     typedef ADScalar*  ADVector;

     int tag  = 0;         // tape identifier
     int m    = 1;         // number of dependent variables
     int n    = size*size; // number of independent variables
     double f;             // function value
     int j;                // temporary index

     // set up for thread_alloc memory allocator (fast and checks for leaks)
     using CppAD::thread_alloc; // the allocator
     size_t capacity;           // capacity of an allocation

     // AD domain space vector
     ADVector X = thread_alloc::create_array<ADScalar>(size_t(n), capacity);

     // Product matrix
     ADVector Y = thread_alloc::create_array<ADScalar>(size_t(n), capacity);

     // AD range space vector
     ADVector Z = thread_alloc::create_array<ADScalar>(size_t(m), capacity);

     // vector with matrix value
     double* mat = thread_alloc::create_array<double>(size_t(n), capacity);

     // vector of reverse mode weights
     double* u  = thread_alloc::create_array<double>(size_t(m), capacity);
     u[0] = 1.;

     // gradient
     double* grad = thread_alloc::create_array<double>(size_t(n), capacity);

     // ----------------------------------------------------------------------
     if( ! global_option["onetape"] ) while(repeat--)
     {     // choose a matrix
          CppAD::uniform_01(n, mat);

          // declare independent variables
          int keep = 1; // keep forward mode results
          trace_on(tag, keep);
          for(j = 0; j < n; j++)
               X[j] <<= mat[j];

          // do computations
          CppAD::mat_sum_sq(size, X, Y, Z);

          // create function object f : X -> Z
          Z[0] >>= f;
          trace_off();

          // evaluate and return gradient using reverse mode
          fos_reverse(tag, m, n, u, grad);
     }
     else
     {     // choose a matrix
          CppAD::uniform_01(n, mat);

          // declare independent variables
          int keep = 0; // do not keep forward mode results
          trace_on(tag, keep);
          for(j = 0; j < n; j++)
               X[j] <<= mat[j];

          // do computations
          CppAD::mat_sum_sq(size, X, Y, Z);

          // create function object f : X -> Z
          Z[0] >>= f;
          trace_off();

          while(repeat--)
          {     // choose a matrix
               CppAD::uniform_01(n, mat);

               // evaluate the function at the new matrix value
               keep = 1; // keep this forward mode result
               zos_forward(tag, m, n, keep, mat, &f);

               // evaluate and return gradient using reverse mode
               fos_reverse(tag, m, n, u, grad);
          }
     }
     // return function, matrix, and gradient
     z[0] = f;
     for(j = 0; j < n; j++)
     {     x[j]  = mat[j];
          dz[j] = grad[j];
     }

     // tear down
     thread_alloc::delete_array(X);
     thread_alloc::delete_array(Y);
     thread_alloc::delete_array(Z);
     thread_alloc::delete_array(mat);
     thread_alloc::delete_array(u);
     thread_alloc::delete_array(grad);

     return true;
}
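
In the onetape case above, the tape is recorded once and then replayed at each new argument value. Distilled from that loop, the replay is a zero order forward sweep (keep = 1 stores the values that the reverse sweep needs) followed by a first order reverse sweep:

# include <adolc/adolc.h>

// replay an existing tape (tag) at a new argument value; returns the
// function value in f and the gradient (for weights u) in grad
void replay_gradient(
     short tag, int m, int n, double* x_new, double* u,
     double* f, double* grad )
{     int keep = 1; // keep forward results for the reverse sweep
     zos_forward(tag, m, n, keep, x_new, f);
     fos_reverse(tag, m, n, u, grad);
}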


Input File: speed/adolc/mat_mul.cpp
11.4.4: Adolc Speed: Ode

11.4.4.a: Specifications
See 11.1.4: link_ode .

11.4.4.b: Implementation
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//
# include <adolc/adolc.h>

# include <cppad/utility/vector.hpp>
# include <cppad/speed/ode_evaluate.hpp>
# include <cppad/speed/uniform_01.hpp>

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

bool link_ode(
     size_t                     size       ,
     size_t                     repeat     ,
     CppAD::vector<double>      &x         ,
     CppAD::vector<double>      &jac
)
{
     // speed test global option values
     if( global_option["atomic"] )
          return false;
     if( global_option["memory"] || global_option["optimize"] )
          return false;
     // -------------------------------------------------------------
     // setup
     assert( x.size() == size );
     assert( jac.size() == size * size );

     typedef CppAD::vector<adouble> ADVector;
     typedef CppAD::vector<double>  DblVector;

     size_t i, j;
     int tag    = 0;       // tape identifier
     int keep   = 0;       // do not keep forward mode results
     size_t p   = 0;       // use ode to calculate function values
     size_t n   = size;    // number of independent variables
     size_t m   = n;       // number of dependent variables
     ADVector  X(n), Y(m); // independent and dependent variables
     DblVector f(m);       // function value

     // set up for thread_alloc memory allocator (fast and checks for leaks)
     using CppAD::thread_alloc; // the allocator
     size_t size_min;           // requested number of elements
     size_t size_out;           // capacity of an allocation

     // raw memory for use with adolc
     size_min = n;
     double *x_raw   = thread_alloc::create_array<double>(size_min, size_out);
     size_min = m * n;
     double *jac_raw = thread_alloc::create_array<double>(size_min, size_out);
     size_min = m;
     double **jac_ptr = thread_alloc::create_array<double*>(size_min, size_out);
     for(i = 0; i < m; i++)
          jac_ptr[i] = jac_raw + i * n;

     // -------------------------------------------------------------
     if( ! global_option["onetape"] ) while(repeat--)
     {     // choose next x value
          uniform_01(n, x);

          // declare independent variables
          trace_on(tag, keep);
          for(j = 0; j < n; j++)
               X[j] <<= x[j];

          // evaluate function
          CppAD::ode_evaluate(X, p, Y);

          // create function object f : X -> Y
          for(i = 0; i < m; i++)
               Y[i] >>= f[i];
          trace_off();

          // evaluate the Jacobian
          for(j = 0; j < n; j++)
               x_raw[j] = x[j];
          jacobian(tag, m, n, x_raw, jac_ptr);
     }
     else
     {     // choose next x value
          uniform_01(n, x);

          // declare independent variables
          trace_on(tag, keep);
          for(j = 0; j < n; j++)
               X[j] <<= x[j];

          // evaluate function
          CppAD::ode_evaluate(X, p, Y);

          // create function object f : X -> Y
          for(i = 0; i < m; i++)
               Y[i] >>= f[i];
          trace_off();

          while(repeat--)
          {     // get next argument value
               uniform_01(n, x);
               for(j = 0; j < n; j++)
                    x_raw[j] = x[j];

               // evaluate jacobian
               jacobian(tag, m, n, x_raw, jac_ptr);
          }
     }
     // convert return value to a simple vector
     for(i = 0; i < m; i++)
     {     for(j = 0; j < n; j++)
               jac[i * n + j] = jac_ptr[i][j];
     }
     // ----------------------------------------------------------------------
     // tear down
     thread_alloc::delete_array(x_raw);
     thread_alloc::delete_array(jac_raw);
     thread_alloc::delete_array(jac_ptr);

     return true;
}
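
The jacobian driver used above runs the necessary forward and reverse sweeps internally, which is why the tape can be recorded with keep = 0 here. A minimal sketch of the driver on its own, assuming a function with m dependent and n independent variables has already been taped under tag:

# include <adolc/adolc.h>
# include <cppad/utility/thread_alloc.hpp>

// evaluate the m x n Jacobian of a previously taped function at x,
// returning it row-major in jac (which must have size m * n)
void eval_jacobian(short tag, size_t m, size_t n, double* x, double* jac)
{     using CppAD::thread_alloc;
     size_t capacity;
     double** jac_ptr = thread_alloc::create_array<double*>(m, capacity);
     for(size_t i = 0; i < m; i++)
          jac_ptr[i] = jac + i * n; // row i of the result
     jacobian(tag, int(m), int(n), x, jac_ptr);
     thread_alloc::delete_array(jac_ptr);
}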

Input File: speed/adolc/ode.cpp
11.4.5: Adolc Speed: Second Derivative of a Polynomial

11.4.5.a: Specifications
See 11.1.5: link_poly .

11.4.5.b: Implementation
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//
# include <vector>
# include <adolc/adolc.h>

# include <cppad/speed/uniform_01.hpp>
# include <cppad/utility/poly.hpp>
# include <cppad/utility/vector.hpp>
# include <cppad/utility/thread_alloc.hpp>
# include "adolc_alloc_mat.hpp"

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

bool link_poly(
     size_t                     size     ,
     size_t                     repeat   ,
     CppAD::vector<double>     &a        ,  // coefficients of polynomial
     CppAD::vector<double>     &z        ,  // polynomial argument value
     CppAD::vector<double>     &ddp      )  // second derivative w.r.t z
{
     if( global_option["atomic"] )
          return false;
     if( global_option["memory"] || global_option["optimize"] )
          return false;
     // -----------------------------------------------------
     // setup
     size_t i;
     int tag  = 0;  // tape identifier
     int keep = 0;  // do not keep forward mode results in buffer
     int m    = 1;  // number of dependent variables
     int n    = 1;  // number of independent variables
     int d    = 2;  // highest derivative degree
     double f;      // function value

     // set up for thread_alloc memory allocator (fast and checks for leaks)
     using CppAD::thread_alloc; // the allocator
     size_t capacity;           // capacity of an allocation

     // choose a vector of polynomial coefficients
     CppAD::uniform_01(size, a);

     // AD copy of the polynomial coefficients
     std::vector<adouble> A(size);
     for(i = 0; i < size; i++)
          A[i] = a[i];

     // domain and range space AD values
     adouble Z, P;

     // allocate arguments to hos_forward
     double* x0 = thread_alloc::create_array<double>(size_t(n), capacity);
     double* y0 = thread_alloc::create_array<double>(size_t(m), capacity);
     double** x = adolc_alloc_mat(size_t(n), size_t(d));
     double** y = adolc_alloc_mat(size_t(m), size_t(d));

     // Taylor coefficient for argument
     x[0][0] = 1.;  // first order
     x[0][1] = 0.;  // second order

     // ----------------------------------------------------------------------
     if( ! global_option["onetape"] ) while(repeat--)
     {     // choose an argument value
          CppAD::uniform_01(1, z);

          // declare independent variables
          trace_on(tag, keep);
          Z <<= z[0];

          // AD computation of the function value
          P = CppAD::Poly(0, A, Z);

          // create function object f : Z -> P
          P >>= f;
          trace_off();

          // set the argument value
          x0[0] = z[0];

          // evaluate the polynomial at the new argument value
          hos_forward(tag, m, n, d, keep, x0, x, y0, y);

          // second derivative is twice second order Taylor coef
          ddp[0] = 2. * y[0][1];
     }
     else
     {
          // choose an argument value
          CppAD::uniform_01(1, z);

          // declare independent variables
          trace_on(tag, keep);
          Z <<= z[0];

          // AD computation of the function value
          P = CppAD::Poly(0, A, Z);

          // create function object f : Z -> P
          P >>= f;
          trace_off();

          while(repeat--)
          {     // get the next argument value
               CppAD::uniform_01(1, z);
               x0[0] = z[0];

               // evaluate the polynomial at the new argument value
               hos_forward(tag, m, n, d, keep, x0, x, y0, y);

               // second derivative is twice second order Taylor coef
               ddp[0] = 2. * y[0][1];
          }
     }
     // ------------------------------------------------------
     // tear down
     adolc_free_mat(x);
     adolc_free_mat(y);
     thread_alloc::delete_array(x0);
     thread_alloc::delete_array(y0);

     return true;
}
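
The factor of two in ddp[0] = 2. * y[0][1] comes from the relation between Taylor coefficients and derivatives. The coefficients set above correspond to the argument @(@ z + t @)@ and @(@ P(z + t) = P(z) + P^{(1)}(z) \, t + \frac{1}{2} P^{(2)}(z) \, t^2 + \cdots @)@ so y[0][1], the second order coefficient returned by hos_forward, equals @(@ P^{(2)}(z) / 2 @)@; multiplying by two recovers the second derivative.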

Input File: speed/adolc/poly.cpp
11.4.6: Adolc Speed: Sparse Hessian

11.4.6.a: Specifications
See 11.1.6: link_sparse_hessian .

11.4.6.b: Implementation
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//
# include <adolc/adolc.h>
# include <adolc/adolc_sparse.h>
# include <cppad/utility/vector.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/utility/thread_alloc.hpp>
# include <cppad/speed/sparse_hes_fun.hpp>

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

bool link_sparse_hessian(
     size_t                           size     ,
     size_t                           repeat   ,
     const CppAD::vector<size_t>&     row      ,
     const CppAD::vector<size_t>&     col      ,
     CppAD::vector<double>&           x_return ,
     CppAD::vector<double>&           hessian  ,
     size_t&                          n_sweep )
{
     if( global_option["atomic"] || (! global_option["colpack"]) )
          return false;
     if( global_option["memory"] || global_option["optimize"] || global_option["boolsparsity"] )
          return false;
     // -----------------------------------------------------
     // setup
     typedef unsigned int*    SizeVector;
     typedef double*          DblVector;
     typedef adouble          ADScalar;
     typedef ADScalar*        ADVector;


     size_t i, j, k;         // temporary indices
     size_t order = 0;    // derivative order corresponding to function
     size_t m = 1;        // number of dependent variables
     size_t n = size;     // number of independent variables

     // setup for thread_alloc memory allocator (fast and checks for leaks)
     using CppAD::thread_alloc; // the allocator
     size_t capacity;           // capacity of an allocation

     // tape identifier
     int tag  = 0;
     // AD domain space vector
     ADVector a_x = thread_alloc::create_array<ADScalar>(n, capacity);
     // AD range space vector
     ADVector a_y = thread_alloc::create_array<ADScalar>(m, capacity);
     // double argument value
     DblVector x = thread_alloc::create_array<double>(n, capacity);
     // double function value
     double f;

     // options that control sparse_hess
     int        options[2];
     options[0] = 0; // safe mode
     options[1] = 0; // indirect recovery

     // structure that holds some of the work done by sparse_hess
     int        nnz;                   // number of non-zero values
     SizeVector rind   = CPPAD_NULL;   // row indices
     SizeVector cind   = CPPAD_NULL;   // column indices
     DblVector  values = CPPAD_NULL;   // Hessian values

     // ----------------------------------------------------------------------
     if( ! global_option["onetape"] ) while(repeat--)
     {     // choose a value for x
          CppAD::uniform_01(n, x);

          // declare independent variables
          int keep = 0; // do not keep forward mode results
          trace_on(tag, keep);
          for(j = 0; j < n; j++)
               a_x[j] <<= x[j];

          // AD computation of f (x)
          CppAD::sparse_hes_fun<ADScalar>(n, a_x, row, col, order, a_y);

          // create function object f : x -> y
          a_y[0] >>= f;
          trace_off();

          // is this a repeat call with the same sparsity pattern
          int same_pattern = 0;

          // calculate the hessian at this x
          rind   = CPPAD_NULL;
          cind   = CPPAD_NULL;
          values = CPPAD_NULL;
          sparse_hess(tag, int(n),
               same_pattern, x, &nnz, &rind, &cind, &values, options
          );
          // only needed last time through loop
          if( repeat == 0 )
          {     size_t K = row.size();
               for(int ell = 0; ell < nnz; ell++)
               {     i = size_t(rind[ell]);
                    j = size_t(cind[ell]);
                    for(k = 0; k < K; k++)
                    {     if( (row[k]==i && col[k]==j) || (row[k]==j && col[k]==i) )
                              hessian[k] = values[ell];
                    }
               }
          }

          // free raw memory allocated by sparse_hess
          free(rind);
          free(cind);
          free(values);
     }
     else
     {     // choose a value for x
          CppAD::uniform_01(n, x);

          // declare independent variables
          int keep = 0; // do not keep forward mode results
          trace_on(tag, keep);
          for(j = 0; j < n; j++)
               a_x[j] <<= x[j];

          // AD computation of f (x)
          CppAD::sparse_hes_fun<ADScalar>(n, a_x, row, col, order, a_y);

          // create function object f : x -> y
          a_y[0] >>= f;
          trace_off();

          // is this a repeat call with the same sparsity pattern
          int same_pattern = 0;

          while(repeat--)
          {     // choose a value for x
               CppAD::uniform_01(n, x);

               // calculate the hessian at this x
               sparse_hess(tag, int(n),
                    same_pattern, x, &nnz, &rind, &cind, &values, options
               );
               same_pattern = 1;
          }
          size_t K = row.size();
          for(int ell = 0; ell < nnz; ell++)
          {     i = size_t(rind[ell]);
               j = size_t(cind[ell]);
               for(k = 0; k < K; k++)
               {     if( (row[k]==i && col[k]==j) || (row[k]==j && col[k]==i) )
                         hessian[k] = values[ell];
               }
          }
          // free raw memory allocated by sparse_hess
          free(rind);
          free(cind);
          free(values);
     }
     // --------------------------------------------------------------------
     // return argument
     for(j = 0; j < n; j++)
          x_return[j] = x[j];

     // do not know how to return number of sweeps used
     n_sweep = 0;

     // tear down
     thread_alloc::delete_array(a_x);
     thread_alloc::delete_array(a_y);
     thread_alloc::delete_array(x);
     return true;

}
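
Distilled from the listing above, the sparse_hess calling pattern: on a call with same_pattern equal to zero, the driver computes the sparsity pattern and allocates the index and value arrays itself (with malloc), so the caller must release them with free:

# include <adolc/adolc.h>
# include <adolc/adolc_sparse.h>
# include <cstdlib>

// sparse Hessian of a previously taped scalar function at x
void eval_sparse_hessian(short tag, int n, double* x)
{     int           options[2] = {0, 0}; // safe mode, indirect recovery
     int           nnz;                 // number of non-zero entries
     unsigned int* rind   = NULL;       // row indices (driver allocated)
     unsigned int* cind   = NULL;       // column indices
     double*       values = NULL;       // Hessian values
     int same_pattern = 0;              // first call: compute the pattern

     sparse_hess(tag, n, same_pattern,
          x, &nnz, &rind, &cind, &values, options
     );

     // ... use the nnz triples (rind[ell], cind[ell], values[ell]) ...

     free(rind);
     free(cind);
     free(values);
}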

Input File: speed/adolc/sparse_hessian.cpp
11.4.7: Adolc Speed: Sparse Jacobian

11.4.7.a: Specifications
See 11.1.7: link_sparse_jacobian .

11.4.7.b: Implementation
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//
# include <adolc/adolc.h>
# include <adolc/adolc_sparse.h>
# include <cppad/utility/vector.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/speed/sparse_jac_fun.hpp>

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

bool link_sparse_jacobian(
     size_t                           size     ,
     size_t                           repeat   ,
     size_t                           m        ,
     const CppAD::vector<size_t>&     row      ,
     const CppAD::vector<size_t>&     col      ,
           CppAD::vector<double>&     x_return ,
           CppAD::vector<double>&     jacobian ,
           size_t&                    n_sweep  )
{
     if( global_option["atomic"] || (! global_option["colpack"]) )
          return false;
     if( global_option["memory"] || global_option["optimize"] )
          return false;
     // -----------------------------------------------------
     // setup
     typedef unsigned int*    SizeVector;
     typedef double*          DblVector;
     typedef adouble          ADScalar;
     typedef ADScalar*        ADVector;

     size_t i, j, k;            // temporary indices
     size_t n = size;           // number of independent variables
     size_t order = 0;          // derivative order corresponding to function

     // set up for thread_alloc memory allocator (fast and checks for leaks)
     using CppAD::thread_alloc; // the allocator
     size_t capacity;           // capacity of an allocation

     // tape identifier
     int tag  = 0;
     // AD domain space vector
     ADVector a_x = thread_alloc::create_array<ADScalar>(n, capacity);
     // AD range space vector
     ADVector a_y = thread_alloc::create_array<ADScalar>(m, capacity);
     // argument value in double
     DblVector x = thread_alloc::create_array<double>(n, capacity);
     // function value in double
     DblVector y = thread_alloc::create_array<double>(m, capacity);


     // options that control sparse_jac
     int        options[4];
     if( global_option["boolsparsity"] )
          options[0] = 1;  // sparsity by propagation of bit pattern
     else
          options[0] = 0;  // sparsity pattern by index domains
     options[1] = 0; // (0 = safe mode, 1 = tight mode)
     options[2] = 0; // see changing to -1 and back to 0 below
     options[3] = 0; // (0 = column compression, 1 = row compression)

     // structure that holds some of the work done by sparse_jac
     int        nnz;                   // number of non-zero values
     SizeVector rind   = CPPAD_NULL;   // row indices
     SizeVector cind   = CPPAD_NULL;   // column indices
     DblVector  values = CPPAD_NULL;   // Jacobian values

     // choose a value for x
     CppAD::uniform_01(n, x);

     // declare independent variables
     int keep = 0; // do not keep forward mode results
     trace_on(tag, keep);
     for(j = 0; j < n; j++)
          a_x[j] <<= x[j];

     // AD computation of f (x)
     CppAD::sparse_jac_fun<ADScalar>(m, n, a_x, row, col, order, a_y);

     // create function object f : x -> y
     for(i = 0; i < m; i++)
          a_y[i] >>= y[i];
     trace_off();

     // Retrieve n_sweep using undocumented feature of sparsedrivers.cpp
     int same_pattern = 0;
     options[2]       = -1;
     n_sweep = sparse_jac(tag, int(m), int(n),
          same_pattern, x, &nnz, &rind, &cind, &values, options
     );
     options[2]       = 0;
     // ----------------------------------------------------------------------
     if( ! global_option["onetape"] ) while(repeat--)
     {     // choose a value for x
          CppAD::uniform_01(n, x);

          // declare independent variables
          trace_on(tag, keep);
          for(j = 0; j < n; j++)
               a_x[j] <<= x[j];

          // AD computation of f (x)
          CppAD::sparse_jac_fun<ADScalar>(m, n, a_x, row, col, order, a_y);

          // create function object f : x -> y
          for(i = 0; i < m; i++)
               a_y[i] >>= y[i];
          trace_off();

          // is this a repeat call with the same sparsity pattern
          same_pattern = 0;

          // calculate the jacobian at this x
          rind   = CPPAD_NULL;
          cind   = CPPAD_NULL;
          values = CPPAD_NULL;
          sparse_jac(tag, int(m), int(n),
               same_pattern, x, &nnz, &rind, &cind, &values, options
          );
          // only needed last time through loop
          if( repeat == 0 )
          {     size_t K = row.size();
               for(int ell = 0; ell < nnz; ell++)
               {     i = size_t(rind[ell]);
                    j = size_t(cind[ell]);
                    for(k = 0; k < K; k++)
                    {     if( row[k]==i && col[k]==j )
                              jacobian[k] = values[ell];
                    }
               }
          }

          // free raw memory allocated by sparse_jac
          free(rind);
          free(cind);
          free(values);
     }
     else
     {     while(repeat--)
          {     // choose a value for x
               CppAD::uniform_01(n, x);

               // calculate the jacobian at this x
               sparse_jac(tag, int(m), int(n),
                    same_pattern, x, &nnz, &rind, &cind, &values, options
               );
               same_pattern = 1;
          }
          size_t K = row.size();
          for(int ell = 0; ell < nnz; ell++)
          {     i = size_t(rind[ell]);
               j = size_t(cind[ell]);
               for(k = 0; k < K; k++)
               {     if( row[k]==i && col[k]==j )
                         jacobian[k] = values[ell];
               }
          }

          // free raw memory allocated by sparse_jac
          free(rind);
          free(cind);
          free(values);
     }
     // --------------------------------------------------------------------
     // return argument
     for(j = 0; j < n; j++)
          x_return[j] = x[j];

     // tear down
     thread_alloc::delete_array(a_x);
     thread_alloc::delete_array(a_y);
     thread_alloc::delete_array(x);
     thread_alloc::delete_array(y);
     return true;
}

Input File: speed/adolc/sparse_jacobian.cpp
11.4.8: Adolc Test Utility: Allocate and Free Memory For a Matrix

11.4.8.a: Syntax
mat = adolc_alloc_mat(m, n)
adolc_free_mat(mat)

11.4.8.b: Purpose
Use the 8.23: thread_alloc memory allocator to allocate and free memory that can be used as a matrix with the Adolc package.

11.4.8.c: m
Is the number of rows in the matrix.

11.4.8.d: n
Is the number of columns in the matrix.

11.4.8.e: mat
Is the matrix. To be specific, between a call to adolc_alloc_mat and the corresponding call to adolc_free_mat, for i = 0, ..., m-1 and j = 0, ..., n-1, mat[i][j] is the element in row i and column j.
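
A minimal sketch of how this pair of routines can be implemented with thread_alloc; this is a sketch consistent with the purpose above, not the alloc_mat.cpp source. One contiguous block holds the values and a separate array holds the row pointers into it:

# include <cppad/utility/thread_alloc.hpp>

// sketch: matrix as an array of row pointers into one contiguous block
double** adolc_alloc_mat(size_t m, size_t n)
{     using CppAD::thread_alloc;
     size_t capacity; // actual capacity of each allocation
     double** mat = thread_alloc::create_array<double*>(m, capacity);
     double*  vec = thread_alloc::create_array<double>(m * n, capacity);
     for(size_t i = 0; i < m; i++)
          mat[i] = vec + i * n;
     return mat;
}
void adolc_free_mat(double** mat)
{     using CppAD::thread_alloc;
     thread_alloc::delete_array(mat[0]); // the contiguous value block
     thread_alloc::delete_array(mat);    // the row pointers
}
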
Input File: speed/adolc/alloc_mat.cpp
11.5: Speed Test of Derivatives Using CppAD

11.5.a: Purpose
CppAD has a set of speed tests that are used to determine if certain changes improve its execution speed (and to compare CppAD with other AD packages). This section links to the source code for the CppAD speed tests (any suggestions to make the CppAD results faster are welcome).

11.5.b: Running Tests
To build these speed tests, and run their correctness tests, execute the following commands starting in the 2.2.b.a: build directory :
     cd speed/cppad
     make check_speed_cppad VERBOSE=1
You can then run the corresponding speed tests with the following command
     ./speed_cppad speed seed
where seed is a positive integer. See 11.1: speed_main for more options.

11.5.c: Contents
11.5.1: CppAD Speed: Gradient of Determinant by Minor Expansion
11.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization
11.5.3: CppAD Speed: Matrix Multiplication
11.5.4: CppAD Speed: Gradient of Ode Solution
11.5.5: CppAD Speed: Second Derivative of a Polynomial
11.5.6: CppAD Speed: Sparse Hessian
11.5.7: CppAD Speed: Sparse Jacobian

Input File: omh/speed/speed_cppad.omh
11.5.1: CppAD Speed: Gradient of Determinant by Minor Expansion

11.5.1.a: Specifications
See 11.1.2: link_det_minor .

11.5.1.b: Implementation
# include <cppad/cppad.hpp>
# include <cppad/speed/det_by_minor.hpp>
# include <cppad/speed/uniform_01.hpp>

// Note that CppAD uses global_option["memory"] at the main program level
# include <map>
extern std::map<std::string, bool> global_option;

bool link_det_minor(
     size_t                     size     ,
     size_t                     repeat   ,
     CppAD::vector<double>     &matrix   ,
     CppAD::vector<double>     &gradient )
{
     // --------------------------------------------------------------------
     // check global options
     const char* valid[] = { "memory", "onetape", "optimize"};
     size_t n_valid = sizeof(valid) / sizeof(valid[0]);
     typedef std::map<std::string, bool>::iterator iterator;
     //
     for(iterator itr=global_option.begin(); itr!=global_option.end(); ++itr)
     {     if( itr->second )
          {     bool ok = false;
               for(size_t i = 0; i < n_valid; i++)
                    ok |= itr->first == valid[i];
               if( ! ok )
                    return false;
          }
     }
     // --------------------------------------------------------------------

     // optimization options: no conditional skips or compare operators
     std::string options="no_compare_op";
     // -----------------------------------------------------
     // setup

     // object for computing determinant
     typedef CppAD::AD<double>       ADScalar;
     typedef CppAD::vector<ADScalar> ADVector;
     CppAD::det_by_minor<ADScalar>   Det(size);

     size_t i;               // temporary index
     size_t m = 1;           // number of dependent variables
     size_t n = size * size; // number of independent variables
     ADVector   A(n);        // AD domain space vector
     ADVector   detA(m);     // AD range space vector

     // vectors of reverse mode weights
     CppAD::vector<double> w(1);
     w[0] = 1.;

     // the AD function object
     CppAD::ADFun<double> f;

     // ---------------------------------------------------------------------
     if( ! global_option["onetape"] ) while(repeat--)
     {
          // choose a matrix
          CppAD::uniform_01(n, matrix);
          for( i = 0; i < size * size; i++)
               A[i] = matrix[i];

          // declare independent variables
          Independent(A);

          // AD computation of the determinant
          detA[0] = Det(A);

          // create function object f : A -> detA
          f.Dependent(A, detA);

          if( global_option["optimize"] )
               f.optimize(options);

          // skip comparison operators
          f.compare_change_count(0);

          // evaluate the determinant at the new matrix value
          f.Forward(0, matrix);

          // evaluate and return gradient using reverse mode
          gradient = f.Reverse(1, w);
     }
     else
     {
          // choose a matrix
          CppAD::uniform_01(n, matrix);
          for( i = 0; i < size * size; i++)
               A[i] = matrix[i];

          // declare independent variables
          Independent(A);

          // AD computation of the determinant
          detA[0] = Det(A);

          // create function object f : A -> detA
          f.Dependent(A, detA);

          if( global_option["optimize"] )
               f.optimize(options);

          // skip comparison operators
          f.compare_change_count(0);

          // ------------------------------------------------------
          while(repeat--)
          {     // get the next matrix
               CppAD::uniform_01(n, matrix);

               // evaluate the determinant at the new matrix value
               f.Forward(0, matrix);

               // evaluate and return gradient using reverse mode
               gradient = f.Reverse(1, w);
          }
     }
     return true;
}
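
For comparison with the minimal Adolc example earlier in this chapter, the following example (not part of the test suite) shows the corresponding CppAD calling sequence: it tapes f(x) = x0 * x1 and recovers its gradient:

# include <cppad/cppad.hpp>

// tape f(x) = x0 * x1 with CppAD and compute its gradient at x = (2, 3)
void cppad_gradient_example(CppAD::vector<double>& grad)
{     using CppAD::AD;
     CppAD::vector< AD<double> > X(2), Y(1);
     X[0] = 2.0;
     X[1] = 3.0;
     CppAD::Independent(X);        // start recording operations on X
     Y[0] = X[0] * X[1];
     CppAD::ADFun<double> f(X, Y); // stop recording, f : X -> Y

     CppAD::vector<double> x(2), w(1);
     x[0] = 2.0;
     x[1] = 3.0;
     w[0] = 1.0;
     f.Forward(0, x);              // zero order forward sweep
     grad = f.Reverse(1, w);       // expect grad = (3, 2)
}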

Input File: speed/cppad/det_minor.cpp
11.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization

11.5.2.a: Specifications
See 11.1.1: link_det_lu .

11.5.2.b: Implementation
# include <cppad/speed/det_by_lu.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/cppad.hpp>

// Note that CppAD uses global_option["memory"] at the main program level
# include <map>
extern std::map<std::string, bool> global_option;

bool link_det_lu(
     size_t                           size     ,
     size_t                           repeat   ,
     CppAD::vector<double>           &matrix   ,
     CppAD::vector<double>           &gradient )
{
     // --------------------------------------------------------------------
     // check global options
     const char* valid[] = { "memory", "optimize"};
     size_t n_valid = sizeof(valid) / sizeof(valid[0]);
     typedef std::map<std::string, bool>::iterator iterator;
     //
     for(iterator itr=global_option.begin(); itr!=global_option.end(); ++itr)
     {     if( itr->second )
          {     bool ok = false;
               for(size_t i = 0; i < n_valid; i++)
                    ok |= itr->first == valid[i];
               if( ! ok )
                    return false;
          }
     }
     // --------------------------------------------------------------------

     // optimization options: no conditional skips or compare operators
     std::string options="no_compare_op";
     // -----------------------------------------------------
     // setup
     typedef CppAD::AD<double>           ADScalar;
     typedef CppAD::vector<ADScalar>     ADVector;
     CppAD::det_by_lu<ADScalar>          Det(size);

     size_t i;               // temporary index
     size_t m = 1;           // number of dependent variables
     size_t n = size * size; // number of independent variables
     ADVector   A(n);        // AD domain space vector
     ADVector   detA(m);     // AD range space vector
     CppAD::ADFun<double> f; // AD function object

     // vectors of reverse mode weights
     CppAD::vector<double> w(1);
     w[0] = 1.;

     // ------------------------------------------------------
     while(repeat--)
     {     // get the next matrix
          CppAD::uniform_01(n, matrix);
          for( i = 0; i < n; i++)
               A[i] = matrix[i];

          // declare independent variables
          Independent(A);

          // AD computation of the determinant
          detA[0] = Det(A);

          // create function object f : A -> detA
          f.Dependent(A, detA);
          if( global_option["optimize"] )
               f.optimize(options);

          // skip comparison operators
          f.compare_change_count(0);

          // evaluate and return gradient using reverse mode
          f.Forward(0, matrix);
          gradient = f.Reverse(1, w);
     }
     return true;
}

Input File: speed/cppad/det_lu.cpp
11.5.3: CppAD Speed: Matrix Multiplication

11.5.3.a: Specifications
See 11.1.3: link_mat_mul .

11.5.3.b: Implementation
# include <cppad/cppad.hpp>
# include <cppad/speed/mat_sum_sq.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/example/mat_mul.hpp>

// Note that CppAD uses global_option["memory"] at the main program level
# include <map>
extern std::map<std::string, bool> global_option;

bool link_mat_mul(
     size_t                           size     ,
     size_t                           repeat   ,
     CppAD::vector<double>&           x        ,
     CppAD::vector<double>&           z        ,
     CppAD::vector<double>&           dz
)
{
     // --------------------------------------------------------------------
     // check global options
     const char* valid[] = { "memory", "onetape", "optimize", "atomic"};
     size_t n_valid = sizeof(valid) / sizeof(valid[0]);
     typedef std::map<std::string, bool>::iterator iterator;
     //
     for(iterator itr=global_option.begin(); itr!=global_option.end(); ++itr)
     {     if( itr->second )
          {     bool ok = false;
               for(size_t i = 0; i < n_valid; i++)
                    ok |= itr->first == valid[i];
               if( ! ok )
                    return false;
          }
     }
     // --------------------------------------------------------------------
     // optimization options: no conditional skips or compare operators
     std::string options="no_compare_op";
     // -----------------------------------------------------
     // setup
     typedef CppAD::AD<double>           ADScalar;
     typedef CppAD::vector<ADScalar>     ADVector;

     size_t j;               // temporary index
     size_t m = 1;           // number of dependent variables
     size_t n = size * size; // number of independent variables
     ADVector   X(n);        // AD domain space vector
     ADVector   Y(n);        // Store product matrix
     ADVector   Z(m);        // AD range space vector
     CppAD::ADFun<double> f; // AD function object

     // vectors of reverse mode weights
     CppAD::vector<double> w(1);
     w[0] = 1.;

     // user atomic information
     CppAD::vector<ADScalar> ax(3 + 2 * n), ay(n);
     atomic_mat_mul atom_mul;
     //
     if( global_option["boolsparsity"] )
          atom_mul.option( CppAD::atomic_base<double>::pack_sparsity_enum );
     else
          atom_mul.option( CppAD::atomic_base<double>::set_sparsity_enum );
     // ------------------------------------------------------
     if( ! global_option["onetape"] ) while(repeat--)
     {     // get the next matrix
          CppAD::uniform_01(n, x);
          for( j = 0; j < n; j++)
               X[j] = x[j];

          // declare independent variables
          Independent(X);

          // do computations
          if( ! global_option["atomic"] )
               mat_sum_sq(size, X, Y, Z);
          else
          {     ax[0] = ADScalar( size ); // number of rows in left matrix
               ax[1] = ADScalar( size ); // rows in left and columns in right
               ax[2] = ADScalar( size ); // number of columns in right matrix
               for(j = 0; j < n; j++)
               {     ax[3 + j]     = X[j];
                    ax[3 + n + j] = X[j];
               }
               // Y = X * X
               atom_mul(ax, ay);
               Z[0] = 0.;
               for(j = 0; j < n; j++)
                    Z[0] += ay[j];
          }
          // create function object f : X -> Z
          f.Dependent(X, Z);

          if( global_option["optimize"] )
               f.optimize(options);

          // skip comparison operators
          f.compare_change_count(0);

          // evaluate and return gradient using reverse mode
          z  = f.Forward(0, x);
          dz = f.Reverse(1, w);
     }
     else
     {     // get the next matrix
          CppAD::uniform_01(n, x);
          for(j = 0; j < n; j++)
               X[j] = x[j];

          // declare independent variables
          Independent(X);

          // do computations
          if( ! global_option["atomic"] )
               mat_sum_sq(size, X, Y, Z);
          else
          {     ax[0] = ADScalar( size ); // number of rows in left matrix
               ax[1] = ADScalar( size ); // rows in left and columns in right
               ax[2] = ADScalar( size ); // number of columns in right matrix
               for(j = 0; j < n; j++)
               {     ax[3 + j]     = X[j];
                    ax[3 + n + j] = X[j];
               }
               // Y = X * X
               atom_mul(ax, ay);
               Z[0] = 0.;
               for(j = 0; j < n; j++)
                    Z[0] += ay[j];
          }

          // create function object f : X -> Z
          f.Dependent(X, Z);

          if( global_option["optimize"] )
               f.optimize(options);

          // skip comparison operators
          f.compare_change_count(0);

          while(repeat--)
          {     // get the next matrix
               CppAD::uniform_01(n, x);

               // evaluate and return gradient using reverse mode
               z  = f.Forward(0, x);
               dz = f.Reverse(1, w);
          }
     }
     // --------------------------------------------------------------------
     // Free temporary work space (any future atomic_mat_mul constructors
     // would create new temporary work space.)
     CppAD::user_atomic<double>::clear();

     return true;
}

Input File: speed/cppad/mat_mul.cpp
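
Both branches above use the same derivative pattern: a zero order forward sweep evaluates the function and a first order reverse sweep returns @(@ w^\R{T} f'(x) @)@ for the weight vector w. The following is a minimal self-contained sketch of this pattern (the function and names below are illustrative, not part of the test source):

# include <cppad/cppad.hpp>

// sketch only: gradient of f(x) = x0 * x1 + x1 computed by a zero order
// forward sweep followed by a first order reverse sweep
bool gradient_sketch(void)
{     using CppAD::AD;
      size_t n = 2;
      CppAD::vector< AD<double> > aX(n), aZ(1);
      aX[0] = 1.0;
      aX[1] = 1.0;
      CppAD::Independent(aX);          // start recording
      aZ[0] = aX[0] * aX[1] + aX[1];
      CppAD::ADFun<double> f(aX, aZ);  // stop recording, f : X -> Z

      CppAD::vector<double> x(n), w(1), dz(n);
      x[0] = 3.0;
      x[1] = 4.0;
      f.Forward(0, x);                 // zero order forward at x
      w[0] = 1.0;
      dz   = f.Reverse(1, w);          // dz = w^T * f'(x)
      return dz[0] == 4.0 && dz[1] == 4.0; // [ x1 , x0 + 1 ]
}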
11.5.4: CppAD Speed: Gradient of Ode Solution

11.5.4.a: Specifications
See 11.1.4: link_ode .

11.5.4.b: Implementation
# include <cppad/cppad.hpp>
# include <cppad/speed/ode_evaluate.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cassert>

// Note that CppAD uses global_option["memory"] at the main program level
# include <map>
extern std::map<std::string, bool> global_option;

bool link_ode(
     size_t                     size       ,
     size_t                     repeat     ,
     CppAD::vector<double>      &x         ,
     CppAD::vector<double>      &jacobian
)
{
     // --------------------------------------------------------------------
     // check global options
     const char* valid[] = { "memory", "onetape", "optimize"};
     size_t n_valid = sizeof(valid) / sizeof(valid[0]);
     typedef std::map<std::string, bool>::iterator iterator;
     //
     for(iterator itr=global_option.begin(); itr!=global_option.end(); ++itr)
     {     if( itr->second )
          {     bool ok = false;
               for(size_t i = 0; i < n_valid; i++)
                    ok |= itr->first == valid[i];
               if( ! ok )
                    return false;
          }
     }
     // --------------------------------------------------------------------
     // optimization options: no conditional skips or compare operators
     std::string options="no_compare_op";
     // --------------------------------------------------------------------
     // setup
     assert( x.size() == size );
     assert( jacobian.size() == size * size );

     typedef CppAD::AD<double>       ADScalar;
     typedef CppAD::vector<ADScalar> ADVector;

     size_t j;
     size_t p = 0;              // use ode to calculate function values
     size_t n = size;           // number of independent variables
     size_t m = n;              // number of dependent variables
     ADVector  X(n), Y(m);      // independent and dependent variables
     CppAD::ADFun<double>  f;   // AD function

     // -------------------------------------------------------------
     if( ! global_option["onetape"] ) while(repeat--)
     {     // choose next x value
          uniform_01(n, x);
          for(j = 0; j < n; j++)
               X[j] = x[j];

          // declare the independent variable vector
          Independent(X);

          // evaluate function
          CppAD::ode_evaluate(X, p, Y);

          // create function object f : X -> Y
          f.Dependent(X, Y);

          if( global_option["optimize"] )
               f.optimize(options);

          // skip comparison operators
          f.compare_change_count(0);

          jacobian = f.Jacobian(x);
     }
     else
     {     // choose an x value
          uniform_01(n, x);
          for(j = 0; j < n; j++)
               X[j] = x[j];

          // declare the independent variable vector
          Independent(X);

          // evaluate function
          CppAD::ode_evaluate(X, p, Y);

          // create function object f : X -> Y
          f.Dependent(X, Y);

          if( global_option["optimize"] )
               f.optimize(options);

          // skip comparison operators
          f.compare_change_count(0);

          while(repeat--)
          {     // get next argument value
               uniform_01(n, x);

               // evaluate jacobian
               jacobian = f.Jacobian(x);
          }
     }
     return true;
}

Input File: speed/cppad/ode.cpp
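
The easy driver f.Jacobian used above returns the dense Jacobian in row-major order and chooses between forward and reverse mode internally. A minimal sketch of the same call on an illustrative function (not part of the test source):

# include <cppad/cppad.hpp>

// sketch only: dense Jacobian of f(x) = [ x0 + x1 , x0 * x1 ]
bool jacobian_sketch(void)
{     using CppAD::AD;
      size_t n = 2, m = 2;
      CppAD::vector< AD<double> > aX(n), aY(m);
      aX[0] = 0.0;
      aX[1] = 0.0;
      CppAD::Independent(aX);
      aY[0] = aX[0] + aX[1];
      aY[1] = aX[0] * aX[1];
      CppAD::ADFun<double> f(aX, aY);

      CppAD::vector<double> x(n), jac(m * n);
      x[0] = 2.0;
      x[1] = 3.0;
      jac  = f.Jacobian(x); // element i * n + j is partial y[i] / partial x[j]
      // expect [ 1 , 1 , x1 , x0 ] = [ 1 , 1 , 3 , 2 ]
      return jac[0]==1.0 && jac[1]==1.0 && jac[2]==3.0 && jac[3]==2.0;
}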
11.5.5: CppAD Speed: Second Derivative of a Polynomial

11.5.5.a: Specifications
See 11.1.5: link_poly .

11.5.5.b: Implementation
# include <cppad/cppad.hpp>
# include <cppad/speed/uniform_01.hpp>

// Note that CppAD uses global_option["memory"] at the main program level
# include <map>
extern std::map<std::string, bool> global_option;

bool link_poly(
     size_t                     size     ,
     size_t                     repeat   ,
     CppAD::vector<double>     &a        ,  // coefficients of polynomial
     CppAD::vector<double>     &z        ,  // polynomial argument value
     CppAD::vector<double>     &ddp      )  // second derivative w.r.t z
{
     // --------------------------------------------------------------------
     // check global options
     const char* valid[] = { "memory", "onetape", "optimize"};
     size_t n_valid = sizeof(valid) / sizeof(valid[0]);
     typedef std::map<std::string, bool>::iterator iterator;
     //
     for(iterator itr=global_option.begin(); itr!=global_option.end(); ++itr)
     {     if( itr->second )
          {     bool ok = false;
               for(size_t i = 0; i < n_valid; i++)
                    ok |= itr->first == valid[i];
               if( ! ok )
                    return false;
          }
     }
     // --------------------------------------------------------------------
     // optimization options: no conditional skips or compare operators
     std::string options="no_compare_op";
     // -----------------------------------------------------
     // setup
     typedef CppAD::AD<double>     ADScalar;
     typedef CppAD::vector<ADScalar> ADVector;

     size_t i;      // temporary index
     size_t m = 1;  // number of dependent variables
     size_t n = 1;  // number of independent variables
     ADVector Z(n); // AD domain space vector
     ADVector P(m); // AD range space vector

     // choose the polynomial coefficients
     CppAD::uniform_01(size, a);

     // AD copy of the polynomial coefficients
     ADVector A(size);
     for(i = 0; i < size; i++)
          A[i] = a[i];

     // forward mode first and second differentials
     CppAD::vector<double> p(1), dp(1), dz(1), ddz(1);
     dz[0]  = 1.;
     ddz[0] = 0.;

     // AD function object
     CppAD::ADFun<double> f;

     // --------------------------------------------------------------------
     if( ! global_option["onetape"] ) while(repeat--)
     {
          // choose an argument value
          CppAD::uniform_01(1, z);
          Z[0] = z[0];

          // declare independent variables
          Independent(Z);

          // AD computation of the function value
          P[0] = CppAD::Poly(0, A, Z[0]);

          // create function object f : Z -> P
          f.Dependent(Z, P);

          if( global_option["optimize"] )
               f.optimize(options);

          // skip comparison operators
          f.compare_change_count(0);

          // pre-allocate memory for three forward mode calculations
          f.capacity_order(3);

          // evaluate the polynomial
          p = f.Forward(0, z);

          // evaluate first order Taylor coefficient
          dp = f.Forward(1, dz);

          // second derivative is twice second order Taylor coef
          ddp     = f.Forward(2, ddz);
          ddp[0] *= 2.;
     }
     else
     {
          // choose an argument value
          CppAD::uniform_01(1, z);
          Z[0] = z[0];

          // declare independent variables
          Independent(Z);

          // AD computation of the function value
          P[0] = CppAD::Poly(0, A, Z[0]);

          // create function object f : Z -> P
          f.Dependent(Z, P);

          if( global_option["optimize"] )
               f.optimize(options);

          // skip comparison operators
          f.compare_change_count(0);

          while(repeat--)
          {     // sufficient memory is allocated by the second repetition

               // get the next argument value
               CppAD::uniform_01(1, z);

               // evaluate the polynomial at the new argument value
               p = f.Forward(0, z);

               // evaluate first order Taylor coefficient
               dp = f.Forward(1, dz);

               // second derivative is twice second order Taylor coef
               ddp     = f.Forward(2, ddz);
               ddp[0] *= 2.;
          }
     }
     return true;
}

Input File: speed/cppad/poly.cpp
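
The factor of two above follows from the relation between Taylor coefficients and derivatives: forward mode computes the coefficients @(@ y_k @)@ in @(@ f( z_0 + t ) = y_0 + y_1 t + y_2 t^2 + \cdots @)@ and hence @(@ y_2 = f''( z_0 ) / 2 @)@; i.e., the second derivative is @(@ 2 y_2 @)@. In the listing, dz and ddz are the first and second order coefficients of the expansion argument.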
11.5.6: CppAD Speed: Sparse Hessian

11.5.6.a: Specifications
See 11.1.6: link_sparse_hessian .

11.5.6.b: Implementation
# include <cppad/cppad.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/speed/sparse_hes_fun.hpp>

// Note that CppAD uses global_option["memory"] at the main program level
# include <map>
extern std::map<std::string, bool> global_option;

namespace {
     // typedefs
     using CppAD::vector;
     typedef CppAD::AD<double>                     a1double;
     typedef CppAD::AD<a1double>                   a2double;
     typedef vector<bool>                          b_vector;
     typedef vector<size_t>                        s_vector;
     typedef vector<double>                        d_vector;
     typedef vector<a1double>                      a1vector;
     typedef vector<a2double>                      a2vector;
     typedef CppAD::sparse_rc<s_vector>            sparsity_pattern;
     typedef CppAD::sparse_rcv<s_vector, d_vector> sparse_matrix;
     // ------------------------------------------------------------------------
     void create_fun(
          const d_vector&             x        ,
          const s_vector&             row      ,
          const s_vector&             col      ,
          CppAD::ADFun<double>&       fun      )
     {
          // initialize a1double version of independent variables
          size_t n = x.size();
          a1vector a1x(n);
          for(size_t j = 0; j < n; j++)
               a1x[j] = x[j];
          //
          // optimization options
          std::string optimize_options="no_compare_op";
          //
          // order of derivative in sparse_hes_fun
          size_t order = 0;
          //
          if( ! global_option["hes2jac"] )
          {
               // declare independent variables
               Independent(a1x);
               //
               // AD computation of y
               a1vector a1y(1);
               CppAD::sparse_hes_fun<a1double>(n, a1x, row, col, order, a1y);
               //
               // create function object f : X -> Y
               fun.Dependent(a1x, a1y);
               //
               if( global_option["optimize"] )
                    fun.optimize(optimize_options);
               //
               // skip comparison operators
               fun.compare_change_count(0);
               //
          // fun corresponds to f(x)
               return;
          }
          // declare independent variables for f(x)
          a2vector a2x(n);
          for(size_t j = 0; j < n; j++)
               a2x[j] = a1x[j];
          Independent(a2x);
          //
          // a2double computation of y
          a2vector a2y(1);
          CppAD::sparse_hes_fun<a2double>(n, a2x, row, col, order, a2y);
          //
          // create function object corresponding to y = f(x)
          CppAD::ADFun<a1double> a1f;
          a1f.Dependent(a2x, a2y);
          //
          // declare independent variables for g(x)
          Independent(a1x);
          //
          // a1double computation of z
          a1vector a1w(1), a1z(n);
          a1w[0] = 1.0;
          a1f.Forward(0, a1x);
          a1z = a1f.Reverse(1, a1w);
          //
          // create function object z = g(x) = f'(x)
          fun.Dependent(a1x, a1z);
          //
          if( global_option["optimize"] )
               fun.optimize(optimize_options);
          //
          // skip comparison operators
          fun.compare_change_count(0);
          //
          // fun corresponds to g(x)
          return;
     }
     // ------------------------------------------------------------------------
     void calc_sparsity(
          sparsity_pattern&      sparsity ,
          CppAD::ADFun<double>&  fun      )
     {
          size_t n = fun.Domain();
          size_t m = fun.Range();
          //
          bool transpose     = false;
          //
          if( global_option["subsparsity"] )
          {     CPPAD_ASSERT_UNKNOWN( global_option["hes2jac"] );
               CPPAD_ASSERT_UNKNOWN( n == m );
               b_vector select_domain(n), select_range(m);
               for(size_t j = 0; j < n; ++j)
                    select_domain[j] = true;
               for(size_t i = 0; i < m; ++i)
                    select_range[i] = true;
               //
               // fun corresponds to g(x)
               fun.subgraph_sparsity(
                    select_domain, select_range, transpose, sparsity
               );
               return;
          }
          bool dependency    = false;
          bool reverse       = global_option["revsparsity"];
          bool internal_bool = global_option["boolsparsity"];
          //
          if( ! global_option["hes2jac"] )
          {     // fun corresponds to f(x)
               //
               CPPAD_ASSERT_UNKNOWN( m == 1 );
               //
               b_vector select_range(m);
               select_range[0] = true;
               //
               if( reverse )
               {     sparsity_pattern identity;
                    identity.resize(n, n, n);
                    for(size_t k = 0; k < n; k++)
                         identity.set(k, k, k);
                    fun.for_jac_sparsity(
                         identity, transpose, dependency, internal_bool, sparsity
                    );
                    fun.rev_hes_sparsity(
                         select_range, transpose, internal_bool, sparsity
                    );
               }
               else
               {     b_vector select_domain(n);
                    for(size_t j = 0; j < n; j++)
                         select_domain[j] = true;
                    fun.for_hes_sparsity(
                         select_domain, select_range, internal_bool, sparsity
                    );
               }
               return;
          }
          // fun corresponds to g(x)
          CPPAD_ASSERT_UNKNOWN( m == n );
          //
          // sparsity pattern for identity matrix
          sparsity_pattern eye;
          eye.resize(n, n, n);
          for(size_t k = 0; k < n; k++)
               eye.set(k, k, k);
          //
          if( reverse )
          {     fun.rev_jac_sparsity(
                    eye, transpose, dependency, internal_bool, sparsity
               );
          }
          else
          {     fun.for_jac_sparsity(
                    eye, transpose, dependency, internal_bool, sparsity
               );
          }
          return;
     }
     // ------------------------------------------------------------------------
     size_t calc_hessian(
          d_vector&               hessian  ,
          const d_vector&         x        ,
          sparse_matrix&          subset   ,
          const sparsity_pattern& sparsity ,
          CppAD::sparse_jac_work& jac_work ,
          CppAD::sparse_hes_work& hes_work ,
          CppAD::ADFun<double>&   fun      )
     {     size_t n_sweep;
          //
          if( ! global_option["hes2jac"] )
          {     // fun corresponds to f(x)
               //
               // coloring method
               std::string coloring = "cppad.symmetric";
# if CPPAD_HAS_COLPACK
               if( global_option["colpack"] )
                    coloring = "colpack.symmetric";
# endif
               // only one function component
               d_vector w(1);
               w[0] = 1.0;
               //
               // compute hessian
               n_sweep = fun.sparse_hes(
                    x, w, subset, sparsity, coloring, hes_work
               );
          }
          else
          {     // fun corresponds to g(x)
               //
               if( global_option["subgraph"] )
               {     fun.subgraph_jac_rev(x, subset);
                    n_sweep = 0;
               }
               else
               {
                    //
                    // coloring method
                    std::string coloring = "cppad";
# if CPPAD_HAS_COLPACK
                    if( global_option["colpack"] )
                         coloring = "colpack";
# endif
                    size_t group_max = 1;
                    n_sweep = fun.sparse_jac_for(
                         group_max, x, subset, sparsity, coloring, jac_work
                    );
               }
          }
          // return result
          const d_vector& val( subset.val() );
          size_t nnz = subset.nnz();
          for(size_t k = 0; k < nnz; k++)
               hessian[k] = val[k];
          //
          return n_sweep;
     }
}

bool link_sparse_hessian(
     size_t                           size     ,
     size_t                           repeat   ,
     const CppAD::vector<size_t>&     row      ,
     const CppAD::vector<size_t>&     col      ,
     CppAD::vector<double>&           x        ,
     CppAD::vector<double>&           hessian  ,
     size_t&                          n_sweep  )
{
     // --------------------------------------------------------------------
     // check global options
     const char* valid[] = {
          "memory", "onetape", "optimize", "hes2jac", "subgraph",
# if CPPAD_HAS_COLPACK
          "boolsparsity", "revsparsity", "subsparsity", "colpack"
# else
          "boolsparsity", "revsparsity"
# endif
     };
     size_t n_valid = sizeof(valid) / sizeof(valid[0]);
     typedef std::map<std::string, bool>::iterator iterator;
     //
     for(iterator itr=global_option.begin(); itr!=global_option.end(); ++itr)
     {     if( itr->second )
          {     bool ok = false;
               for(size_t i = 0; i < n_valid; i++)
                    ok |= itr->first == valid[i];
               if( ! ok )
                    return false;
          }
     }
     if( global_option["subsparsity"] )
     {     if( global_option["boolsparsity"] || global_option["revsparsity"] )
               return false;
          if( ! global_option["hes2jac"] )
               return false;
     }
     if( global_option["subgraph"] )
     {     if( ! global_option["hes2jac"] )
               return false;
     }
     // -----------------------------------------------------------------------
     // setup
     size_t n = size;          // number of independent variables
     CppAD::ADFun<double> fun; // AD function object used to calculate Hessian
     //
     // declare sparsity pattern
     sparsity_pattern sparsity;
     //
     // declare subset where Hessian is evaluated
     sparsity_pattern subset_pattern;
     size_t nr  = n;
     size_t nc  = n;
     size_t nnz = row.size();
     subset_pattern.resize(nr, nc, nnz);
     for(size_t k = 0; k < nnz; k++)
          subset_pattern.set(k, row[k], col[k]);
     sparse_matrix subset( subset_pattern );
     //
     // structures that holds some of the work done by sparse_jac, sparse_hes
     CppAD::sparse_jac_work jac_work;
     CppAD::sparse_hes_work hes_work;

     // -----------------------------------------------------------------------
     if( ! global_option["onetape"] ) while(repeat--)
     {     // choose a value for x
          CppAD::uniform_01(n, x);
          //
          // create f(x)
          create_fun(x, row, col, fun);
          //
          // calculate the sparsity pattern for Hessian of f(x)
          calc_sparsity(sparsity, fun);
          //
          // calculate the Hessian at this x
           jac_work.clear(); // without work from previous calculation
          hes_work.clear();
          n_sweep = calc_hessian(
               hessian, x, subset, sparsity, jac_work, hes_work, fun
          );
     }
     else
     {     // choose a value for x
          CppAD::uniform_01(n, x);
          //
          // create f(x)
          create_fun(x, row, col, fun);
          //
          // calculate the sparsity pattern for Hessian of f(x)
          calc_sparsity(sparsity, fun);
          //
          while(repeat--)
          {     // choose a value for x
               CppAD::uniform_01(n, x);
               //
               // calculate this Hessian at this x
               n_sweep = calc_hessian(
                    hessian, x, subset, sparsity, jac_work, hes_work, fun
               );
          }
     }
     return true;
}

Input File: speed/cppad/sparse_hessian.cpp
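
When hes2jac is true, create_fun above uses two levels of AD: an inner ADFun<a1double> records f(x), reverse mode evaluates the gradient g(x) = f'(x) while the outer tape is recording, and the Hessian of f is then computed as the Jacobian of g. A minimal sketch of this technique for an illustrative function (the names below are not part of the test source):

# include <cppad/cppad.hpp>

// sketch only: Hessian of f(x) = x0 * x0 * x1 as the Jacobian of g = f'
bool hes2jac_sketch(void)
{     typedef CppAD::AD<double>   a1double;
      typedef CppAD::AD<a1double> a2double;
      size_t n = 2;

      // record f at the second level of AD
      CppAD::vector<a2double> a2x(n), a2y(1);
      a2x[0] = 2.0;
      a2x[1] = 3.0;
      CppAD::Independent(a2x);
      a2y[0] = a2x[0] * a2x[0] * a2x[1];
      CppAD::ADFun<a1double> a1f(a2x, a2y);

      // record g(x) = f'(x) at the first level of AD
      CppAD::vector<a1double> a1x(n), a1w(1), a1z(n);
      a1x[0] = 2.0;
      a1x[1] = 3.0;
      CppAD::Independent(a1x);
      a1f.Forward(0, a1x);
      a1w[0] = 1.0;
      a1z    = a1f.Reverse(1, a1w); // gradient of f, recorded on outer tape
      CppAD::ADFun<double> g(a1x, a1z);

      // the Jacobian of g is the Hessian of f
      CppAD::vector<double> x(n), hes(n * n);
      x[0] = 2.0;
      x[1] = 3.0;
      hes  = g.Jacobian(x);
      // expect [ 2 x1 , 2 x0 , 2 x0 , 0 ] = [ 6 , 4 , 4 , 0 ]
      return hes[0]==6.0 && hes[1]==4.0 && hes[2]==4.0 && hes[3]==0.0;
}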
11.5.7: CppAD Speed: Sparse Jacobian

11.5.7.a: Specifications
See 11.1.7: link_sparse_jacobian .

11.5.7.b: Implementation
# include <cppad/cppad.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/speed/sparse_jac_fun.hpp>

// Note that CppAD uses global_option["memory"] at the main program level
# include <map>
extern std::map<std::string, bool> global_option;

namespace {
     using CppAD::vector;
     typedef vector<size_t>  s_vector;
     typedef vector<bool>    b_vector;

     void calc_sparsity(
          CppAD::sparse_rc<s_vector>& sparsity ,
          CppAD::ADFun<double>&       f        )
     {     bool reverse       = global_option["revsparsity"];
          bool transpose     = false;
          bool internal_bool = global_option["boolsparsity"];
          bool dependency    = false;
          bool subgraph      = global_option["subsparsity"];
          size_t n = f.Domain();
          size_t m = f.Range();
          if( subgraph )
          {     b_vector select_domain(n), select_range(m);
               for(size_t j = 0; j < n; ++j)
                    select_domain[j] = true;
               for(size_t i = 0; i < m; ++i)
                    select_range[i] = true;
               f.subgraph_sparsity(
                    select_domain, select_range, transpose, sparsity
               );
          }
          else
          {     size_t q = n;
               if( reverse )
                    q = m;
               //
               CppAD::sparse_rc<s_vector> identity;
               identity.resize(q, q, q);
               for(size_t k = 0; k < q; k++)
                    identity.set(k, k, k);
               //
               if( reverse )
               {     f.rev_jac_sparsity(
                         identity, transpose, dependency, internal_bool, sparsity
                    );
               }
               else
               {     f.for_jac_sparsity(
                         identity, transpose, dependency, internal_bool, sparsity
                    );
               }
          }
     }
}

bool link_sparse_jacobian(
     size_t                           size     ,
     size_t                           repeat   ,
     size_t                           m        ,
     const CppAD::vector<size_t>&     row      ,
     const CppAD::vector<size_t>&     col      ,
           CppAD::vector<double>&     x        ,
           CppAD::vector<double>&     jacobian ,
           size_t&                    n_sweep  )
{
     // --------------------------------------------------------------------
     // check global options
     const char* valid[] = {
          "memory", "onetape", "optimize", "subgraph",
# if CPPAD_HAS_COLPACK
          "boolsparsity", "revsparsity", "subsparsity", "colpack"
# else
          "boolsparsity", "revsparsity", "subsparsity"
# endif
     };
     size_t n_valid = sizeof(valid) / sizeof(valid[0]);
     typedef std::map<std::string, bool>::iterator iterator;
     //
     for(iterator itr=global_option.begin(); itr!=global_option.end(); ++itr)
     {     if( itr->second )
          {     bool ok = false;
               for(size_t i = 0; i < n_valid; i++)
                    ok |= itr->first == valid[i];
               if( ! ok )
                    return false;
          }
     }
     if( global_option["subsparsity"] )
     {     if( global_option["boolsparsity"] || global_option["revsparsity"] )
               return false;
     }
     // ---------------------------------------------------------------------
     // optimization options: no conditional skips or compare operators
     std::string options="no_compare_op";
     // -----------------------------------------------------
     // setup
     typedef CppAD::AD<double>    a_double;
     typedef vector<double>       d_vector;
     typedef vector<a_double>     ad_vector;
     //
     size_t order = 0;         // derivative order corresponding to function
     size_t n     = size;      // number of independent variables
     ad_vector  a_x(n);        // AD domain space vector
     ad_vector  a_y(m);        // AD range space vector y = f(x)
     CppAD::ADFun<double> f;   // AD function object
     //
     // declare sparsity pattern
     CppAD::sparse_rc<s_vector>  sparsity;
     //
     // declare subset where Jacobian is evaluated
     CppAD::sparse_rc<s_vector> subset_pattern;
     size_t nr  = m;
     size_t nc  = n;
     size_t nnz = row.size();
     subset_pattern.resize(nr, nc, nnz);
     for(size_t k = 0; k < nnz; k++)
          subset_pattern.set(k, row[k], col[k]);
     CppAD::sparse_rcv<s_vector, d_vector> subset( subset_pattern );
     const d_vector& subset_val( subset.val() );
     //
     // coloring method
     std::string coloring = "cppad";
# if CPPAD_HAS_COLPACK
     if( global_option["colpack"] )
          coloring = "colpack";
# endif
     //
     // maximum number of colors at once
     size_t group_max = 25;
     // ------------------------------------------------------
     if( ! global_option["onetape"] ) while(repeat--)
     {     // choose a value for x
          CppAD::uniform_01(n, x);
          for(size_t j = 0; j < n; j++)
               a_x[j] = x[j];
          //
          // declare independent variables
          Independent(a_x);
          //
          // AD computation of f(x)
          CppAD::sparse_jac_fun<a_double>(m, n, a_x, row, col, order, a_y);
          //
          // create function object f : X -> Y
          f.Dependent(a_x, a_y);
          //
          if( global_option["optimize"] )
               f.optimize(options);
          //
          // skip comparison operators
          f.compare_change_count(0);
          //
          // calculate the Jacobian sparsity pattern for this function
          calc_sparsity(sparsity, f);
          //
          if( global_option["subgraph"] )
          {     // use reverse mode because subgraph forward mode is not yet implemented
               f.subgraph_jac_rev(x, subset);
               n_sweep = 0;
          }
          else
          {     // structure that holds some of the work done by sparse_jac_for
               CppAD::sparse_jac_work work;
               //
               // calculate the Jacobian at this x
               // (use forward mode because m > n ?)
               n_sweep = f.sparse_jac_for(
                    group_max, x, subset, sparsity, coloring, work
               );
          }
          for(size_t k = 0; k < nnz; k++)
               jacobian[k] = subset_val[k];
     }
     else
     {     // choose a value for x
          CppAD::uniform_01(n, x);
          for(size_t j = 0; j < n; j++)
               a_x[j] = x[j];
          //
          // declare independent variables
          Independent(a_x);
          //
          // AD computation of f(x)
          CppAD::sparse_jac_fun<a_double>(m, n, a_x, row, col, order, a_y);
          //
          // create function object f : X -> Y
          f.Dependent(a_x, a_y);
          //
          if( global_option["optimize"] )
               f.optimize(options);
          //
          // skip comparison operators
          f.compare_change_count(0);
          //
          // calculate the Jacobian sparsity pattern for this function
          calc_sparsity(sparsity, f);
          //
          // structure that holds some of the work done by sparse_jac_for
          CppAD::sparse_jac_work work;
          //
          while(repeat--)
          {     // choose a value for x
               CppAD::uniform_01(n, x);
               //
               // calculate the Jacobian at this x
               if( global_option["subgraph"] )
               {     // use reverse mode because subgraph forward mode is not yet implemented
                    f.subgraph_jac_rev(x, subset);
                    n_sweep = 0;
               }
               else
               {     // (use forward mode because m > n ?)
                    n_sweep = f.sparse_jac_for(
                         group_max, x, subset, sparsity, coloring, work
                    );
               }
               for(size_t k = 0; k < nnz; k++)
                    jacobian[k] = subset_val[k];
          }
     }
     return true;
}

Input File: speed/cppad/sparse_jacobian.cpp
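
The subset argument above determines which Jacobian entries are computed: a sparse_rcv matrix whose pattern lists the desired (row, column) pairs and whose values are overwritten by sparse_jac_for. A minimal sketch of this calling pattern on an illustrative function (not part of the test source):

# include <cppad/cppad.hpp>

// sketch only: one chosen entry of the sparse Jacobian of
// f(x) = [ x0 * x0 , x0 * x1 ] computed with sparse_jac_for
bool sparse_jac_sketch(void)
{     typedef CppAD::vector<size_t> s_vector;
      typedef CppAD::vector<double> d_vector;
      size_t n = 2, m = 2;

      // record f
      CppAD::vector< CppAD::AD<double> > ax(n), ay(m);
      ax[0] = 1.0;
      ax[1] = 1.0;
      CppAD::Independent(ax);
      ay[0] = ax[0] * ax[0];
      ay[1] = ax[0] * ax[1];
      CppAD::ADFun<double> f(ax, ay);

      // Jacobian sparsity pattern via forward mode with an identity seed
      CppAD::sparse_rc<s_vector> identity, pattern;
      identity.resize(n, n, n);
      for(size_t k = 0; k < n; k++)
           identity.set(k, k, k);
      f.for_jac_sparsity(identity, false, false, false, pattern);

      // subset of interest: the (1, 0) entry only
      CppAD::sparse_rc<s_vector> subset_pattern;
      subset_pattern.resize(m, n, 1);
      subset_pattern.set(0, 1, 0);
      CppAD::sparse_rcv<s_vector, d_vector> subset(subset_pattern);

      // evaluate at x = (2, 3); expect partial y1 / partial x0 = x1 = 3
      d_vector x(n);
      x[0] = 2.0;
      x[1] = 3.0;
      CppAD::sparse_jac_work work;
      size_t group_max = 1;
      f.sparse_jac_for(group_max, x, subset, pattern, "cppad", work);
      return subset.val()[0] == 3.0;
}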
11.6: Speed Test Derivatives Using Fadbad

11.6.a: Purpose
CppAD has a set of speed tests that are used to compare Fadbad with other AD packages. This section links to the source code for the Fadbad speed tests (any suggestions to make the Fadbad results faster are welcome).

11.6.b: fadbad_prefix
To run these tests, you must include the 2.2.4: fadbad_prefix in your 2.2.b: cmake command .

11.6.c: Running Tests
To build these speed tests, and run their correctness tests, execute the following commands starting in the 2.2.b.a: build directory :
     cd speed/fadbad
     make check_speed_fadbad VERBOSE=1
You can then run the corresponding speed tests with the following command
     ./speed_fadbad speed seed
where seed is a positive integer. See 11.1: speed_main for more options.
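For example, with a hypothetical seed value of 123:
     ./speed_fadbad speed 123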

11.6.d: Contents
11.6.1: Fadbad Speed: Gradient of Determinant by Minor Expansion
11.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization
11.6.3: Fadbad Speed: Matrix Multiplication
11.6.4: Fadbad Speed: Ode
11.6.5: Fadbad Speed: Second Derivative of a Polynomial
11.6.6: Fadbad Speed: Sparse Hessian
11.6.7: Fadbad Speed: Sparse Jacobian

Input File: omh/speed/speed_fadbad.omh
11.6.1: Fadbad Speed: Gradient of Determinant by Minor Expansion

11.6.1.a: Specifications
See 11.1.2: link_det_minor .

11.6.1.b: Implementation
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//
# include <FADBAD++/badiff.h>
# include <cppad/speed/det_by_minor.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/utility/vector.hpp>

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

bool link_det_minor(
     size_t                     size     ,
     size_t                     repeat   ,
     CppAD::vector<double>     &matrix   ,
     CppAD::vector<double>     &gradient )
{
     // speed test global option values
     if( global_option["atomic"] )
          return false;
     if( global_option["memory"] || global_option["onetape"] || global_option["optimize"] )
          return false;
     // -----------------------------------------------------
     // setup

     // object for computing determinant
     typedef fadbad::B<double>       ADScalar;
     typedef CppAD::vector<ADScalar> ADVector;
     CppAD::det_by_minor<ADScalar>   Det(size);

     size_t i;                // temporary index
     size_t m = 1;            // number of dependent variables
     size_t n = size * size;  // number of independent variables
     ADScalar   detA;         // AD value of the determinant
     ADVector   A(n);         // AD version of matrix

     // ------------------------------------------------------
     while(repeat--)
     {     // get the next matrix
          CppAD::uniform_01(n, matrix);

          // set independent variable values
          for(i = 0; i < n; i++)
               A[i] = matrix[i];

          // compute the determinant
          detA = Det(A);

          // create function object f : A -> detA
          detA.diff(0, m);  // index 0 of m dependent variables

          // evaluate and return gradient using reverse mode
          for(i = 0; i < n; i++)
               gradient[i] = A[i].d(0); // partial detA w.r.t A[i]
     }
     // ---------------------------------------------------------
     return true;
}

Input File: speed/fadbad/det_minor.cpp
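
fadbad::B records a computational graph as the calculation executes; diff marks a dependent variable and d returns the adjoints computed by the reverse sweep. A minimal sketch of this interface (illustrative, not part of the test source):

# include <FADBAD++/badiff.h>

// sketch only: reverse mode gradient of y = x0 * x1 with fadbad::B
bool fadbad_reverse_sketch(void)
{     fadbad::B<double> x0 = 2.0, x1 = 3.0;
      fadbad::B<double> y  = x0 * x1;
      y.diff(0, 1);              // y is dependent variable 0 of 1
      double dx0 = x0.d(0);      // partial y w.r.t x0
      double dx1 = x1.d(0);      // partial y w.r.t x1
      return dx0 == 3.0 && dx1 == 2.0;
}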
11.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization

11.6.2.a: Specifications
See 11.1.1: link_det_lu .

11.6.2.b: Implementation
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//
# include <FADBAD++/badiff.h>
# include <cppad/speed/det_by_lu.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/utility/vector.hpp>

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

bool link_det_lu(
     size_t                     size     ,
     size_t                     repeat   ,
     CppAD::vector<double>     &matrix   ,
     CppAD::vector<double>     &gradient )
{
     // speed test global option values
     if( global_option["onetape"] || global_option["atomic"] )
          return false;
     if( global_option["memory"] || global_option["optimize"] )
          return false;
     // -----------------------------------------------------
     // setup
     //
     // object for computing determinant
     typedef fadbad::B<double>       ADScalar;
     typedef CppAD::vector<ADScalar> ADVector;
     CppAD::det_by_lu<ADScalar>      Det(size);

     size_t i;                // temporary index
     size_t m = 1;            // number of dependent variables
     size_t n = size * size;  // number of independent variables
     ADScalar   detA;         // AD value of the determinant
     ADVector   A(n);         // AD version of matrix

     // ------------------------------------------------------
     while(repeat--)
     {     // get the next matrix
          CppAD::uniform_01(n, matrix);

          // set independent variable values
          for(i = 0; i < n; i++)
               A[i] = matrix[i];

          // compute the determinant
          detA = Det(A);

          // create function object f : A -> detA
          detA.diff(0, m);  // index 0 of m dependent variables

          // evaluate and return gradient using reverse mode
          for(i = 0; i < n; i++)
               gradient[i] = A[i].d(0); // partial detA w.r.t A[i]
     }
     // ---------------------------------------------------------
     return true;
}

Input File: speed/fadbad/det_lu.cpp
11.6.3: Fadbad Speed: Matrix Multiplication

11.6.3.a: Specifications
See 11.1.3: link_mat_mul .

11.6.3.b: Implementation
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//
# include <FADBAD++/badiff.h>
# include <cppad/speed/mat_sum_sq.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/utility/vector.hpp>

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

bool link_mat_mul(
     size_t                           size     ,
     size_t                           repeat   ,
     CppAD::vector<double>&           x        ,
     CppAD::vector<double>&           z        ,
     CppAD::vector<double>&           dz       )
{
     // speed test global option values
     if( global_option["memory"] || global_option["onetape"] || global_option["atomic"] || global_option["optimize"] )
          return false;
     // The correctness check for this test is failing, so abort (for now).
     return false;

     // -----------------------------------------------------
     // setup

     // object for computing determinant
     typedef fadbad::B<double>       ADScalar;
     typedef CppAD::vector<ADScalar> ADVector;

     size_t j;                // temporary index
     size_t m = 1;            // number of dependent variables
     size_t n = size * size;  // number of independent variables
     ADVector   X(n);         // AD domain space vector
     ADVector   Y(n);         // Store product matrix
     ADVector   Z(m);         // AD range space vector

     // ------------------------------------------------------
     while(repeat--)
     {     // get the next matrix
          CppAD::uniform_01(n, x);

          // set independent variable values
          for(j = 0; j < n; j++)
               X[j] = x[j];

          // do the computation
          mat_sum_sq(size, X, Y, Z);

          // create function object f : X -> Z
          Z[0].diff(0, m);  // index 0 of m dependent variables

          // evaluate and return gradient using reverse mode
          for(j = 0; j < n; j++)
               dz[j] = X[j].d(0); // partial Z[0] w.r.t X[j]
     }
     // return function value
     z[0] = Z[0].x();

     // ---------------------------------------------------------
     return true;
}

Input File: speed/fadbad/mat_mul.cpp
11.6.4: Fadbad Speed: Ode

11.6.4.a: Specifications
See 11.1.4: link_ode .

11.6.4.b: Implementation
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//
# include <FADBAD++/fadiff.h>
# include <algorithm>
# include <cassert>
# include <cppad/utility/vector.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/speed/ode_evaluate.hpp>

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

namespace fadbad {
     // define fabs for use by ode_evaluate
     fadbad::F<double> fabs(const fadbad::F<double>& x)
     {     return std::max(-x, x); }
}

bool link_ode(
     size_t                     size       ,
     size_t                     repeat     ,
     CppAD::vector<double>      &x         ,
     CppAD::vector<double>      &jacobian
)
{
     // speed test global option values
     if( global_option["atomic"] )
          return false;
     if( global_option["memory"] || global_option["onetape"] || global_option["optimize"] )
          return false;
     // -------------------------------------------------------------
     // setup
     assert( x.size() == size );
     assert( jacobian.size() == size * size );

     typedef fadbad::F<double>       ADScalar;
     typedef CppAD::vector<ADScalar> ADVector;

     size_t i, j;
     size_t p = 0;          // use ode to calculate function values
     size_t n = size;       // number of independent variables
     size_t m = n;          // number of dependent variables
     ADVector X(n), Y(m);   // independent and dependent variables

     // -------------------------------------------------------------
     while(repeat--)
     {     // choose next x value
          CppAD::uniform_01(n, x);
          for(j = 0; j < n; j++)
          {     // set value of x[j]
               X[j] = x[j];
               // set up for X as the independent variable vector
               X[j].diff(j, n);
          }

          // evaluate function
          CppAD::ode_evaluate(X, p, Y);

          // return values with Y as the dependent variable vector
          for(i = 0; i < m; i++)
          {     for(j = 0; j < n; j++)
                    jacobian[ i * n + j ] = Y[i].d(j);
          }
     }
     return true;
}

Input File: speed/fadbad/ode.cpp
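
fadbad::F implements forward mode: x.diff(j, n) seeds the j-th of n directions and y.d(j) reads the corresponding directional derivative. A minimal sketch (illustrative, not part of the test source):

# include <FADBAD++/fadiff.h>

// sketch only: forward mode partials of y = x0 * x1 with fadbad::F
bool fadbad_forward_sketch(void)
{     fadbad::F<double> x0 = 2.0, x1 = 3.0;
      x0.diff(0, 2);             // x0 is independent direction 0 of 2
      x1.diff(1, 2);             // x1 is independent direction 1 of 2
      fadbad::F<double> y = x0 * x1;
      return y.d(0) == 3.0 && y.d(1) == 2.0;
}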
11.6.5: Fadbad Speed: Second Derivative of a Polynomial

11.6.5.a: Specifications
See 11.1.5: link_poly .

11.6.5.b: Implementation
# include <cppad/utility/vector.hpp>
# include <cppad/utility/poly.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <FADBAD++/tadiff.h>

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

bool link_poly(
     size_t                     size     ,
     size_t                     repeat   ,
     CppAD::vector<double>     &a        ,  // coefficients of polynomial
     CppAD::vector<double>     &z        ,  // polynomial argument value
     CppAD::vector<double>     &ddp      )  // second derivative w.r.t z
{
     if( global_option["atomic"] )
          return false;
     if( global_option["memory"] || global_option["onetape"] || global_option["optimize"] )
          return false;
     // -----------------------------------------------------
     // setup
     size_t i;             // temporary index
     fadbad::T<double>  Z; // domain space AD value
     fadbad::T<double>  P; // range space AD value

     // choose the polynomial coefficients
     CppAD::uniform_01(size, a);

     // AD copy of the polynomial coefficients
     CppAD::vector< fadbad::T<double> > A(size);
     for(i = 0; i < size; i++)
          A[i] = a[i];

     // ------------------------------------------------------
     while(repeat--)
     {     // get the next argument value
          CppAD::uniform_01(1, z);

          // independent variable value
          Z    = z[0]; // argument value
          Z[1] = 1;    // argument first order Taylor coefficient

          // AD computation of the dependent variable
          P = CppAD::Poly(0, A, Z);

          // Taylor-expand P to degree one
          P.eval(2);

          // second derivative is twice second order Taylor coefficient
          ddp[0] = 2. * P[2];

          // Freeing the DAG corresponding to P does not seem to improve
          // speed, probably because it is freed the next time P is assigned.
          // P.reset();
     }
     // ------------------------------------------------------
     return true;
}

Input File: speed/fadbad/poly.cpp
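
fadbad::T propagates Taylor coefficients: assigning Z[1] = 1 seeds the first order coefficient of the argument and P.eval(2) propagates coefficients through order two, after which P[2] is the second order coefficient. A minimal sketch (illustrative, not part of the test source):

# include <FADBAD++/tadiff.h>

// sketch only: second derivative of y = x * x * x via Taylor coefficients
bool fadbad_taylor_sketch(void)
{     fadbad::T<double> x, y;
      x    = 2.0;                // expansion point
      x[1] = 1.0;                // first order coefficient of the argument
      y    = x * x * x;
      y.eval(2);                 // propagate coefficients through order two
      double ddy = 2. * y[2];    // f''(x) = 2 * second order coefficient
      return ddy == 12.0;        // f''(x) = 6 x = 12 at x = 2
}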
11.6.6: Fadbad Speed: Sparse Hessian
// A fadbad version of this test is not yet available
bool link_sparse_hessian(
        size_t                           size       ,
        size_t                           repeat     ,
        const CppAD::vector<size_t>&      row       ,
        const CppAD::vector<size_t>&      col       ,
        CppAD::vector<double>&            x         ,
        CppAD::vector<double>&            hessian   ,
        size_t&                           n_sweep
)
{
     return false;
}

Input File: speed/fadbad/sparse_hessian.cpp
11.6.7: Fadbad Speed: Sparse Jacobian
// A fadbad version of this test is not yet available
bool link_sparse_jacobian(
     size_t                           size     ,
     size_t                           repeat   ,
     size_t                           m        ,
     const CppAD::vector<size_t>&     row      ,
     const CppAD::vector<size_t>&     col      ,
           CppAD::vector<double>&     x        ,
           CppAD::vector<double>&     jacobian ,
           size_t&                    n_sweep  )
{
     return false;
}

Input File: speed/fadbad/sparse_jacobian.cpp
11.7: Speed Test Derivatives Using Sacado

11.7.a: Purpose
CppAD has a set of speed tests that are used to compare Sacado with other AD packages. This section links to the source code for the Sacado speed tests (any suggestions to make the Sacado results faster are welcome).

11.7.b: sacado_prefix
To run these tests, you must include the 2.2.6: sacado_prefix in your 2.2.b: cmake command .

11.7.c: Running Tests
To build these speed tests, and run their correctness tests, execute the following commands starting in the 2.2.b.a: build directory :
     cd speed/sacado
     make check_speed_sacado VERBOSE=1
You can then run the corresponding speed tests with the following command
     ./speed_sacado speed seed
where seed is a positive integer. See 11.1: speed_main for more options.
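For example, with a hypothetical seed value of 123:
     ./speed_sacado speed 123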

11.7.d: Contents
11.7.1: Sacado Speed: Gradient of Determinant by Minor Expansion
11.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization
11.7.3: Sacado Speed: Matrix Multiplication
11.7.4: Sacado Speed: Gradient of Ode Solution
11.7.5: Sacado Speed: Second Derivative of a Polynomial
11.7.6: Sacado Speed: Sparse Hessian
11.7.7: Sacado Speed: Sparse Jacobian

Input File: omh/speed/speed_sacado.omh
11.7.1: Sacado Speed: Gradient of Determinant by Minor Expansion

11.7.1.a: Specifications
See 11.1.2: link_det_minor .

11.7.1.b: Implementation
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//
# include <Sacado.hpp>
# include <cppad/speed/det_by_minor.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/utility/vector.hpp>

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

bool link_det_minor(
     size_t                     size     ,
     size_t                     repeat   ,
     CppAD::vector<double>     &matrix   ,
     CppAD::vector<double>     &gradient )
{
     // speed test global option values
     if( global_option["atomic"] )
          return false;
     if( global_option["memory"] || global_option["onetape"] || global_option["optimize"] )
          return false;
     // -----------------------------------------------------
     // setup

     // object for computing determinant
     typedef Sacado::Rad::ADvar<double>    ADScalar;
     typedef CppAD::vector<ADScalar>       ADVector;
     CppAD::det_by_minor<ADScalar>         Det(size);

     size_t i;                // temporary index
     size_t n = size * size;  // number of independent variables
     ADScalar   detA;         // AD value of the determinant
     ADVector   A(n);         // AD version of matrix

     // ------------------------------------------------------
     while(repeat--)
     {     // get the next matrix
          CppAD::uniform_01(n, matrix);

          // set independent variable values
          for(i = 0; i < n; i++)
               A[i] = matrix[i];

          // compute the determinant
          detA = Det(A);

          // reverse mode compute gradient of last computed value; i.e., detA
          ADScalar::Gradcomp();

          // return gradient
          for(i = 0; i < n; i++)
               gradient[i] = A[i].adj(); // partial detA w.r.t A[i]
     }
     // ---------------------------------------------------------
     return true;
}

Input File: speed/sacado/det_minor.cpp
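
Sacado's Rad (reverse) mode records operations on a global tape; Gradcomp runs a reverse sweep for the most recently computed value and adj reads the resulting adjoints. A minimal sketch of this interface (illustrative, not part of the test source):

# include <Sacado.hpp>

// sketch only: reverse mode gradient of y = x0 * x1 with Sacado::Rad
bool sacado_reverse_sketch(void)
{     typedef Sacado::Rad::ADvar<double> ADScalar;
      ADScalar x0 = 2.0, x1 = 3.0;
      ADScalar y  = x0 * x1;
      ADScalar::Gradcomp();      // reverse sweep for last computed value
      return x0.adj() == 3.0 && x1.adj() == 2.0;
}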
11.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization

11.7.2.a: Specifications
See 11.1.1: link_det_lu .

11.7.2.b: Implementation
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//

# include <Sacado.hpp>
# include <cppad/speed/det_by_lu.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/utility/vector.hpp>

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

bool link_det_lu(
     size_t                     size     ,
     size_t                     repeat   ,
     CppAD::vector<double>     &matrix   ,
     CppAD::vector<double>     &gradient )
{
     // speed test global option values
     if( global_option["onetape"] || global_option["atomic"] )
          return false;
     if( global_option["memory"] || global_option["optimize"] )
          return false;
     // -----------------------------------------------------
     // setup
     //
     // object for computing determinant
     typedef Sacado::Rad::ADvar<double>   ADScalar;
     typedef CppAD::vector<ADScalar>      ADVector;
     CppAD::det_by_lu<ADScalar>           Det(size);

     size_t i;                // temporary index
     size_t n = size * size;  // number of independent variables
     ADScalar   detA;         // AD value of the determinant
     ADVector   A(n);         // AD version of matrix

     // ------------------------------------------------------
     while(repeat--)
     {     // get the next matrix
          CppAD::uniform_01(n, matrix);

          // set independent variable values
          for(i = 0; i < n; i++)
               A[i] = matrix[i];

          // compute the determinant
          detA = Det(A);

          // compute the gradient of detA
          ADScalar::Gradcomp();

          // evaluate and return gradient using reverse mode
          for(i = 0; i < n; i++)
               gradient[i] = A[i].adj(); // partial detA w.r.t A[i]
     }
     // ---------------------------------------------------------
     return true;
}

Input File: speed/sacado/det_lu.cpp
11.7.3: Sacado Speed: Matrix Multiplication

11.7.3.a: Specifications
See 11.1.3: link_mat_mul .

11.7.3.b: Implementation
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//
# include <Sacado.hpp>
# include <cppad/utility/vector.hpp>
# include <cppad/speed/mat_sum_sq.hpp>
# include <cppad/speed/uniform_01.hpp>

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

bool link_mat_mul(
     size_t                           size     ,
     size_t                           repeat   ,
     CppAD::vector<double>&           x        ,
     CppAD::vector<double>&           z        ,
     CppAD::vector<double>&           dz       )
{
     // speed test global option values
     if( global_option["memory"] || global_option["onetape"] || global_option["atomic"] || global_option["optimize"] )
          return false;
     // -----------------------------------------------------
     // setup

     // object for computing determinant
     typedef Sacado::Rad::ADvar<double>    ADScalar;
     typedef CppAD::vector<ADScalar>       ADVector;

     size_t j;                // temporary index
     size_t m = 1;            // number of dependent variables
     size_t n = size * size;  // number of independent variables
     ADVector   X(n);         // AD domain space vector
     ADVector   Y(n);         // Store product matrix
     ADVector   Z(m);         // AD range space vector
     ADScalar   f;

     // ------------------------------------------------------
     while(repeat--)
     {     // get the next matrix
          CppAD::uniform_01(n, x);

          // set independent variable values
          for(j = 0; j < n; j++)
               X[j] = x[j];

          // do the computation
          mat_sum_sq(size, X, Y, Z);

          // create function object f : X -> Z
          f = Z[0];

          // reverse mode gradient of last ADvar computed value; i.e., f
          ADScalar::Gradcomp();

          // return gradient
          for(j = 0; j < n; j++)
               dz[j] = X[j].adj(); // partial f w.r.t X[j]
     }
     // return function value
     z[0] = f.val();

     // ---------------------------------------------------------
     return true;
}

Input File: speed/sacado/mat_mul.cpp
11.7.4: Sacado Speed: Gradient of Ode Solution

11.7.4.a: Specifications
See 11.1.4: link_ode .

11.7.4.b: Implementation
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//
# include <Sacado.hpp>
# include <cassert>
# include <cppad/utility/vector.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/speed/ode_evaluate.hpp>

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

bool link_ode(
     size_t                     size       ,
     size_t                     repeat     ,
     CppAD::vector<double>      &x         ,
     CppAD::vector<double>      &jacobian
)
{
     // speed test global option values
     if( global_option["atomic"] )
          return false;
     if( global_option["memory"] || global_option["onetape"] || global_option["optimize"] )
          return false;
     // -------------------------------------------------------------
     // setup
     assert( x.size() == size );
     assert( jacobian.size() == size * size );

     typedef Sacado::Fad::DFad<double>  ADScalar;
     typedef CppAD::vector<ADScalar>    ADVector;

     size_t i, j;
     size_t p = 0;          // use ode to calculate function values
     size_t n = size;       // number of independent variables
     size_t m = n;          // number of dependent variables
     ADVector X(n), Y(m);   // independent and dependent variables

     // -------------------------------------------------------------
     while(repeat--)
     {     // choose next x value
          CppAD::uniform_01(n, x);
          for(j = 0; j < n; j++)
          {     // set up for X as the independent variable vector
               X[j] = ADScalar(int(n), int(j), x[j]);
          }

          // evaluate function
          CppAD::ode_evaluate(X, p, Y);

          // return values with Y as the dependent variable vector
          for(i = 0; i < m; i++)
          {     for(j = 0; j < n; j++)
                    jacobian[ i * n + j ] = Y[i].dx(j);
          }
     }
     return true;
}
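
The following stand-alone sketch (not part of the CppAD distribution) isolates the Sacado::Fad forward mode pattern used above: the constructor DFad(n, j, x) creates the j-th of n independent variables with value x, and dx(j) returns the partial with respect to that independent variable:

# include <Sacado.hpp>
# include <cstdio>

int main(void)
{    typedef Sacado::Fad::DFad<double> ADScalar;
     ADScalar x0(2, 0, 0.5);        // 2 independents, index 0, value 0.5
     ADScalar x1(2, 1, 2.0);        // 2 independents, index 1, value 2.0
     ADScalar y = x0 * x1 + x1;
     std::printf("dy/dx0 = %g\n", y.dx(0));    // expect x1 = 2
     std::printf("dy/dx1 = %g\n", y.dx(1));    // expect x0 + 1 = 1.5
     return 0;
}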

Input File: speed/sacado/ode.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
11.7.5: Sacado Speed: Second Derivative of a Polynomial

11.7.5.a: Specifications
See 11.1.5: link_poly .

11.7.5.b: Implementation
// suppress conversion warnings before other includes
# include <cppad/wno_conversion.hpp>
//
# include <Sacado.hpp>
# include <cppad/utility/vector.hpp>
# include <cppad/utility/poly.hpp>
# include <cppad/speed/uniform_01.hpp>

// list of possible options
# include <map>
extern std::map<std::string, bool> global_option;

bool link_poly(
     size_t                     size     ,
     size_t                     repeat   ,
     CppAD::vector<double>     &a        ,  // coefficients of polynomial
     CppAD::vector<double>     &z        ,  // polynomial argument value
     CppAD::vector<double>     &ddp      )  // second derivative w.r.t z
{
     if( global_option["atomic"] )
          return false;
     if( global_option["memory"] || global_option["onetape"] || global_option["optimize"] )
          return false;
     // -----------------------------------------------------
     // setup
     typedef Sacado::Tay::Taylor<double>  ADScalar;
     CppAD::vector<ADScalar>              A(size);

     size_t i;               // temporary index
     ADScalar   Z;           // domain space AD value
     ADScalar   P;           // range space AD value
     unsigned int order = 2; // order of Taylor coefficients
     Z.resize(order+1, false);
     P.resize(order+1, false);

     // choose the polynomial coefficients
     CppAD::uniform_01(size, a);

     // AD copy of the polynomial coefficients
     for(i = 0; i < size; i++)
          A[i] = a[i];

     // ------------------------------------------------------
     while(repeat--)
     {     // get the next argument value
          CppAD::uniform_01(1, z);

          // independent variable value
          Z.fastAccessCoeff(0)   = z[0]; // argument value
          Z.fastAccessCoeff(1)   = 1.;   // first order coefficient
          Z.fastAccessCoeff(2)   = 0.;   // second order coefficient

          // AD computation of the dependent variable
          P = CppAD::Poly(0, A, Z);

          // second derivative is twice second order Taylor coefficient
          ddp[0] = 2. * P.fastAccessCoeff(2);
     }
     // ------------------------------------------------------
     return true;
}
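
To see why twice the second order Taylor coefficient is the second derivative, note that the loop above sets the coefficients of the argument so that @(@ Z(t) = z^{(0)} + t @)@. Hence @[@ P[ Z(t) ] = P ( z^{(0)} ) + P^{(1)} ( z^{(0)} ) * t + \frac{1}{2} P^{(2)} ( z^{(0)} ) * t^2 + o( t^2 ) @]@ and the coefficient returned by P.fastAccessCoeff(2) is @(@ P^{(2)} ( z^{(0)} ) / 2 @)@.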

Input File: speed/sacado/poly.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
11.7.6: Sacado Speed: Sparse Hessian
// A sacado version of this test is not yet implemented
extern bool link_sparse_hessian(
        size_t                           size       ,
        size_t                           repeat     ,
        const CppAD::vector<size_t>&      row       ,
        const CppAD::vector<size_t>&      col       ,
        CppAD::vector<double>&            x         ,
        CppAD::vector<double>&            hessian   ,
        size_t&                           n_sweep
)
{
     return false;
}

Input File: speed/sacado/sparse_hessian.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
11.7.7: sacado Speed: sparse_jacobian
// A sacado version of this test is not yet available
bool link_sparse_jacobian(
     size_t                           size     ,
     size_t                           repeat   ,
     size_t                           m        ,
     const CppAD::vector<size_t>&     row      ,
     const CppAD::vector<size_t>&     col      ,
           CppAD::vector<double>&     x        ,
           CppAD::vector<double>&     jacobian ,
           size_t&                    n_sweep  )
{
     return false;
}

Input File: speed/sacado/sparse_jacobian.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12: Appendix

12.a: Contents
Faq: 12.1 Frequently Asked Questions and Answers
directory: 12.2 Directory Structure
Theory: 12.3 The Theory of Derivative Calculations
glossary: 12.4 Glossary
Bib: 12.5 Bibliography
wish_list: 12.6 The CppAD Wish List
whats_new: 12.7 Changes and Additions to CppAD
deprecated: 12.8 CppAD Deprecated API Features
compare_c: 12.9 Compare Speed of C and C++
numeric_ad: 12.10 Some Numerical AD Utilities
addon: 12.11 CppAD Addons
License: 12.12 Your License for the CppAD Software

Input File: omh/appendix.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.1: Frequently Asked Questions and Answers

12.1.a: Assignment and Independent
Why does the code sequence
     Independent(u);
     v = u[0];
behave differently from the code sequence
     v = u[0];
     Independent(u);
Before the call to 5.1.1: Independent , u[0] is a 12.4.h: parameter and after the call it is a variable. Thus in the first case, v is a variable and in the second case it is a parameter.
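
The following stand-alone sketch demonstrates the first case; the CppAD Variable function reports whether a value is a variable:

# include <cppad/cppad.hpp>
# include <cassert>

int main(void)
{    using CppAD::AD;
     CppAD::vector< AD<double> > u(1);
     u[0] = 1.0;                    // here u[0] is a parameter
     CppAD::Independent(u);         // now u[0] is a variable
     AD<double> v = u[0];           // hence v is a variable
     assert( CppAD::Variable(v) );
     CppAD::ADFun<double> f(u, u);  // complete the recording
     return 0;
}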

12.1.b: Bugs
What should I do if I suspect that there is a bug in CppAD ?

  1. The first step is to check currently open issues (https://github.com/coin-or/CppAD/issues) on github. If it is an open issue, and you want to hurry it along, you can add a comment to the effect that it is holding you up.
  2. The next step is to search the 12.7: whats_new sections for mention of a related bug fix between the date of the version you are using and the current date. If the bug has been fixed, obtain a more recent release that has the fix and see if that works for you.
  3. The next step is to create a simple demonstration of the bug; see the file bug/template.sh for a template that you can edit for that purpose. The smaller the program, the better the bug report.
  4. The next step is to open a new issue on github and provide your simple example so that the problem can be reproduced.


12.1.c: CompareChange
If you attempt to use the 12.8.3: CompareChange function when NDEBUG is defined, you will get an error message stating that CompareChange is not a member of the 5: ADFun template class.

12.1.d: Complex Types
Which of the following complex types is better:
     AD< std::complex<Base> >
     std::complex< AD<Base> >
The 4.4.2.14.d: complex abs function is differentiable with respect to its real and imaginary parts, but it is not complex differentiable. Thus one would prefer to use
     std::complex< AD<Base> >
On the other hand, the C++ standard only specifies std::complex<Type> where Type is float, double, or long double. The effect of instantiating the template complex for any other type is unspecified.

12.1.e: Exceptions
Why, in all the examples, do you pass back a boolean variable instead of throwing an exception ?

The examples are also used to test the correctness of CppAD and to check your installation. For these two uses, it is helpful to run all the tests and to know which ones failed. The actual code in CppAD uses the 8.1: ErrorHandler utility to signal exceptions. Specifications for redefining this action are provided.

12.1.f: Independent Variables
Is it possible to evaluate the same tape recording with different values for the independent variables ?

Yes (see 5.3.1: forward_zero ).

12.1.g: Matrix Inverse
Is it possible to differentiate (with respect to the matrix elements) the computation of the inverse of a matrix where the computation of the inverse uses pivoting ?

12.1.g.a: LuSolve
# The example routine 8.14.1: LuSolve can be used to do this because the inverse is a special case of the solution of linear equations. The examples 10.2.9: jac_lu_det.cpp and 10.2.6: hes_lu_det.cpp use LuSolve to compute derivatives of the determinant with respect to the components of the matrix.

12.1.g.b: Atomic Operation
One can also do this by making the inversion of the matrix an atomic operation; e.g., see 4.4.7.2.17: atomic_eigen_mat_inv.cpp .

12.1.h: Mode: Forward or Reverse
When evaluating derivatives, one always has a choice between forward and reverse mode. How does one decide which mode to use ?

In general, the best mode depends on the number of domain and range components in the function that you are differentiating. Each call to 5.3: Forward computes the derivative of all the range directions with respect to one domain direction. Each call to 5.4: Reverse computes the derivative of one range direction with respect to all the domain directions. The time required for a call to Forward is about the same as for a call to Reverse. The 5.1.5.f: Parameter function can be used to quickly determine that some range directions have derivative zero.
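
As an illustration (a sketch, assuming f is an ADFun<double> object representing @(@ f : \B{R}^n \rightarrow \B{R}^m @)@ and a zero order forward sweep has already been computed), one column or one row of the Jacobian can be obtained as follows:

// column j of the Jacobian using one first order forward sweep
CppAD::vector<double> jac_column(CppAD::ADFun<double>& f, size_t j)
{    size_t n = f.Domain();
     CppAD::vector<double> dx(n);
     for(size_t k = 0; k < n; k++)
          dx[k] = 0.;
     dx[j] = 1.;
     // partials of every range component w.r.t. the j-th domain component
     return f.Forward(1, dx);
}

// row i of the Jacobian using one first order reverse sweep
CppAD::vector<double> jac_row(CppAD::ADFun<double>& f, size_t i)
{    size_t m = f.Range();
     CppAD::vector<double> w(m);
     for(size_t k = 0; k < m; k++)
          w[k] = 0.;
     w[i] = 1.;
     // partials of the i-th range component w.r.t. every domain component
     return f.Reverse(1, w);
}

Hence forward mode tends to be better when the domain dimension is small relative to the range dimension, and reverse mode tends to be better in the opposite case.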

12.1.i: Namespace

12.1.i.a: Test Vector Preprocessor Symbol
Why do you use CPPAD_TESTVECTOR instead of a namespace for the CppAD 10.5: testvector class ?

The preprocessor symbol 10.5: CPPAD_TESTVECTOR determines which 8.9: SimpleVector template class is used for extensive testing. The default definition for CPPAD_TESTVECTOR is the 8.22: CppAD::vector template class, but it can be changed. Note that all the preprocessor symbols that are defined or used by CppAD begin with CPPAD (some old deprecated symbols begin with CppAD).

12.1.j: Speed
How do I get the best speed performance out of CppAD ?

12.1.j.a: NDEBUG
You should compile your code with optimization, without debugging, and with the preprocessor symbol NDEBUG defined. (The 11.5: speed_cppad tests do this.) Note that defining NDEBUG will turn off all of the error checking and reporting that is done using 8.1: ErrorHandler .

12.1.j.b: Optimize
It is also possible that performing a tape 5.7: optimization will improve the speed of evaluation by more than the time required for the optimization.

12.1.j.c: Memory Allocation
You may also increase execution speed by calling hold_memory with 8.23.9.c: value equal to true.
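
For example (a sketch; hold_memory is a static function of the CppAD thread_alloc memory allocator):

# include <cppad/cppad.hpp>

int main(void)
{    // hold memory for future use instead of returning it to the system
     CppAD::thread_alloc::hold_memory(true);
     // ... computations that repeatedly allocate and free work space ...
     return 0;
}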

12.1.k: Tape Storage: Disk or Memory
Does CppAD store the tape on disk or in memory ?

CppAD stores, in memory, a separate tape of recorded operations for each AD<Base> type that is used. If a very large number of calculations are recorded on a tape, the tape will keep growing to hold the necessary information. Eventually, virtual memory may be used to store the tape and the calculations may slow down because of the necessary disk access.
Input File: omh/appendix/faq.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.2: Directory Structure
A brief description of each of the CppAD directories is provided below:

12.2.a: Distribution Directory
The following table lists the sub-directories of the 2.1.b: distribution directory :
bin Scripts used for CppAD development.
bug Used to create a simple CppAD bug report or test.
build Used to build the libraries, examples, and tests.
cppad The CppAD include directory.
cppad_ipopt Example and tests for the deprecated cppad_ipopt library.
cppad_lib Source code corresponding to the CppAD library.
doc The program bin/run_omhelp.sh puts the user documentation here.
doxydoc The program bin/run_doxygen.sh puts its developer documentation here.
example Source code for the CppAD examples.
introduction Source code for the CppAD introduction.
omh Contains files that are only used for documentation.
pkgconfig Contains the CppAD pkg-config information.
speed The CppAD speed tests.
test_more Tests that are not part of the documentation.

12.2.b: Example Directory
The following table lists the sub-directories of the example directory.
atomic 4.4.7: atomic function examples.
deprecated examples for functions that have been 12.8: deprecated .
general general purpose examples.
get_started a good place to get started using CppAD.
ipopt_solve examples using the 9: ipopt_solve interface to CppAD.
multi_thread CppAD 7: multi_threading examples.
optimize examples using the 5.7: optimize operation.
print_for examples that used the 4.3.6: PrintFor operation.
sparse examples using 5.5: sparsity_patterns and 5.6: sparse_derivatives .
utility example using the CppAD 8: utilities .

Input File: omh/appendix/directory.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3: The Theory of Derivative Calculations

12.3.a: Contents
12.3.1: The Theory of Forward Mode
12.3.2: The Theory of Reverse Mode
12.3.3: An Important Reverse Mode Identity

Input File: omh/appendix/theory/theory.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.1: The Theory of Forward Mode

12.3.1.a: Taylor Notation
In Taylor notation, each variable corresponds to a function of a single argument which we denote by t (see Section 10.2 of 12.5.c: Evaluating Derivatives ). Here and below @(@ X(t) @)@, @(@ Y(t) @)@, and Z(t) are scalar valued functions and the corresponding p-th order Taylor coefficients row vectors are @(@ x @)@, @(@ y @)@ and @(@ z @)@; i.e., @[@ \begin{array}{lcr} X(t) & = & x^{(0)} + x^{(1)} * t + \cdots + x^{(p)} * t^p + o( t^p ) \\ Y(t) & = & y^{(0)} + y^{(1)} * t + \cdots + y^{(p)} * t^p + o( t^p ) \\ Z(t) & = & z^{(0)} + z^{(1)} * t + \cdots + z^{(p)} * t^p + o( t^p ) \end{array} @]@ For the purposes of this section, we are given @(@ x @)@ and @(@ y @)@ and need to determine @(@ z @)@.

12.3.1.b: Binary Operators

12.3.1.b.a: Addition
@[@ \begin{array}{rcl} Z(t) & = & X(t) + Y(t) \\ \sum_{j=0}^p z^{(j)} * t^j & = & \sum_{j=0}^p x^{(j)} * t^j + \sum_{j=0}^p y^{(j)} * t^j + o( t^p ) \\ z^{(j)} & = & x^{(j)} + y^{(j)} \end{array} @]@
12.3.1.b.b: Subtraction
@[@ \begin{array}{rcl} Z(t) & = & X(t) - Y(t) \\ \sum_{j=0}^p z^{(j)} * t^j & = & \sum_{j=0}^p x^{(j)} * t^j - \sum_{j=0}^p y^{(j)} * t^j + o( t^p ) \\ z^{(j)} & = & x^{(j)} - y^{(j)} \end{array} @]@
12.3.1.b.c: Multiplication
@[@ \begin{array}{rcl} Z(t) & = & X(t) * Y(t) \\ \sum_{j=0}^p z^{(j)} * t^j & = & \left( \sum_{j=0}^p x^{(j)} * t^j \right) * \left( \sum_{j=0}^p y^{(j)} * t^j \right) + o( t^p ) \\ z^{(j)} & = & \sum_{k=0}^j x^{(j-k)} * y^{(k)} \end{array} @]@
12.3.1.b.d: Division
@[@ \begin{array}{rcl} Z(t) & = & X(t) / Y(t) \\ x & = & z * y \\ \sum_{j=0}^p x^{(j)} * t^j & = & \left( \sum_{j=0}^p z^{(j)} * t^j \right) * \left( \sum_{j=0}^p y^{(j)} * t^j \right) + o( t^p ) \\ x^{(j)} & = & \sum_{k=0}^j z^{(j-k)} y^{(k)} \\ z^{(j)} & = & \frac{1}{y^{(0)}} \left( x^{(j)} - \sum_{k=1}^j z^{(j-k)} y^{(k)} \right) \end{array} @]@
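
As an illustration (a sketch, not code from the CppAD sources), the multiplication and division recursions above can be coded directly as follows:

# include <vector>

// Taylor coefficients of Z(t) = X(t) * Y(t)
std::vector<double> forward_mul(
     const std::vector<double>& x, const std::vector<double>& y)
{    size_t p = x.size() - 1;
     std::vector<double> z(p + 1);
     for(size_t j = 0; j <= p; j++)
     {    z[j] = 0.;
          for(size_t k = 0; k <= j; k++)
               z[j] += x[j-k] * y[k];
     }
     return z;
}

// Taylor coefficients of Z(t) = X(t) / Y(t), requires y[0] != 0
std::vector<double> forward_div(
     const std::vector<double>& x, const std::vector<double>& y)
{    size_t p = x.size() - 1;
     std::vector<double> z(p + 1);
     for(size_t j = 0; j <= p; j++)
     {    double sum = 0.;
          for(size_t k = 1; k <= j; k++)
               sum += z[j-k] * y[k];
          z[j] = (x[j] - sum) / y[0];
     }
     return z;
}
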
12.3.1.c: Standard Math Functions
Suppose that @(@ F @)@ is a standard math function and @[@ Z(t) = F[ X(t) ] @]@

12.3.1.c.a: Differential Equation
All of the standard math functions satisfy a differential equation of the form @[@ B(u) * F^{(1)} (u) - A(u) * F (u) = D(u) @]@ We use @(@ a @)@, @(@ b @)@ and @(@ d @)@ to denote the p-th order Taylor coefficient row vectors for @(@ A [ X (t) ] @)@, @(@ B [ X (t) ] @)@ and @(@ D [ X (t) ] @)@ respectively. We assume that these coefficients are known functions of @(@ x @)@, the p-th order Taylor coefficients for @(@ X(t) @)@.

12.3.1.c.b: Taylor Coefficients Recursion Formula
Our problem here is to express @(@ z @)@, the p-th order Taylor coefficient row vector for @(@ Z(t) @)@, in terms of these other known coefficients. It follows from the formulas above that @[@ \begin{array}{rcl} Z^{(1)} (t) & = & F^{(1)} [ X(t) ] * X^{(1)} (t) \\ B[ X(t) ] * Z^{(1)} (t) & = & \{ D[ X(t) ] + A[ X(t) ] * Z(t) \} * X^{(1)} (t) \\ B[ X(t) ] * Z^{(1)} (t) & = & E(t) * X^{(1)} (t) \end{array} @]@ where we define @[@ E(t) = D[X(t)] + A[X(t)] * Z(t) @]@ We can compute the value of @(@ z^{(0)} @)@ using the formula @[@ z^{(0)} = F ( x^{(0)} ) @]@ Suppose by induction (on @(@ j @)@) that we are given the Taylor coefficients of @(@ E(t) @)@ up to order @(@ j-1 @)@; i.e., @(@ e^{(k)} @)@ for @(@ k = 0 , \ldots , j-1 @)@ and the coefficients @(@ z^{(k)} @)@ for @(@ k = 0 , \ldots , j @)@. We can compute @(@ e^{(j)} @)@ using the formula @[@ e^{(j)} = d^{(j)} + \sum_{k=0}^j a^{(j-k)} * z^{(k)} @]@ We need to complete the induction by finding formulas for @(@ z^{(j+1)} @)@. It follows from the formula for the 12.3.1.b.c: multiplication operator that @[@ \begin{array}{rcl} \left( \sum_{k=0}^j b^{(k)} t^k \right) * \left( \sum_{k=1}^{j+1} k z^{(k)} * t^{k-1} \right) & = & \left( \sum_{k=0}^j e^{(k)} * t^k \right) * \left( \sum_{k=1}^{j+1} k x^{(k)} * t^{k-1} \right) + o( t^p ) \\ z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } \left( \sum_{k=0}^j e^{(k)} (j+1-k) x^{(j+1-k)} - \sum_{k=1}^j b^{(k)} (j+1-k) z^{(j+1-k)} \right) \\ z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } \left( \sum_{k=1}^{j+1} k x^{(k)} e^{(j+1-k)} - \sum_{k=1}^j k z^{(k)} b^{(j+1-k)} \right) \end{array} @]@ This completes the induction that computes @(@ e^{(j)} @)@ and @(@ z^{(j+1)} @)@.

12.3.1.c.c: Cases that Apply Recursion Above
12.3.1.1: exp_forward Exponential Function Forward Mode Theory
12.3.1.2: log_forward Logarithm Function Forward Mode Theory
12.3.1.3: sqrt_forward Square Root Function Forward Mode Theory
12.3.1.4: sin_cos_forward Trigonometric and Hyperbolic Sine and Cosine Forward Theory
12.3.1.5: atan_forward Inverse Tangent and Hyperbolic Tangent Forward Mode Theory
12.3.1.6: asin_forward Inverse Sine and Hyperbolic Sine Forward Mode Theory
12.3.1.7: acos_forward Inverse Cosine and Hyperbolic Cosine Forward Mode Theory

12.3.1.c.d: Special Cases
12.3.1.8: tan_forward Tangent and Hyperbolic Tangent Forward Taylor Polynomial Theory

Input File: omh/appendix/theory/forward_theory.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.1.1: Exponential Function Forward Mode Theory

12.3.1.1.a: Derivatives
If @(@ F(x) @)@ is @(@ \R{exp} (x) @)@ or @(@ \R{expm1} (x) @)@ the corresponding derivative satisfies the equation @[@ 1 * F^{(1)} (x) - 1 * F (x) = d^{(0)} = \left\{ \begin{array}{ll} 0 & \R{if} \; F(x) = \R{exp}(x) \\ 1 & \R{if} \; F(x) = \R{expm1}(x) \end{array} \right. @]@ where the equation above defines @(@ d^{(0)} @)@. In the 12.3.1.c.a: standard math function differential equation , @(@ A(x) = 1 @)@, @(@ B(x) = 1 @)@, and @(@ D(x) = d^{(0)} @)@. We use @(@ a @)@, @(@ b @)@, @(@ d @)@, and @(@ z @)@ to denote the Taylor coefficients for @(@ A [ X (t) ] @)@, @(@ B [ X (t) ] @)@, @(@ D [ X (t) ] @)@, and @(@ F [ X(t) ] @)@ respectively.

12.3.1.1.b: Taylor Coefficients Recursion
For orders @(@ j = 0 , 1, \ldots @)@, @[@ \begin{array}{rcl} z^{(0)} & = & F ( x^{(0)} ) \\ e^{(0)} & = & d^{(0)} + z^{(0)} \\ e^{(j+1)} & = & d^{(j+1)} + \sum_{k=0}^{j+1} a^{(j+1-k)} * z^{(k)} \\ & = & z^{(j+1)} \\ z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } \left( \sum_{k=1}^{j+1} k x^{(k)} e^{(j+1-k)} - \sum_{k=1}^j k z^{(k)} b^{(j+1-k)} \right) \\ & = & x^{(j+1)} d^{(0)} + \frac{1}{j+1} \sum_{k=1}^{j+1} k x^{(k)} z^{(j+1-k)} \end{array} @]@
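
As an illustration (a sketch, not code from the CppAD sources), the recursion for the case @(@ F(x) = \R{exp}(x) @)@, where @(@ d^{(0)} = 0 @)@, can be coded as follows:

# include <cmath>
# include <vector>

// Taylor coefficients of exp[ X(t) ] given the coefficients x of X(t)
std::vector<double> forward_exp(const std::vector<double>& x)
{    size_t p = x.size() - 1;
     std::vector<double> z(p + 1);
     z[0] = std::exp(x[0]);
     for(size_t j = 0; j < p; j++)
     {    double sum = 0.;
          for(size_t k = 1; k <= j+1; k++)
               sum += double(k) * x[k] * z[j+1-k];
          z[j+1] = sum / double(j+1);
     }
     return z;
}
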
Input File: omh/appendix/theory/exp_forward.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.1.2: Logarithm Function Forward Mode Theory

12.3.1.2.a: Derivatives
If @(@ F(x) @)@ is @(@ \R{log} (x) @)@ or @(@ \R{log1p} (x) @)@ the corresponding derivative satisfies the equation @[@ ( \bar{b} + x ) * F^{(1)} (x) - 0 * F (x) = 1 @]@ where @[@ \bar{b} = \left\{ \begin{array}{ll} 0 & \R{if} \; F(x) = \R{log}(x) \\ 1 & \R{if} \; F(x) = \R{log1p}(x) \end{array} \right. @]@ In the 12.3.1.c.a: standard math function differential equation , @(@ A(x) = 0 @)@, @(@ B(x) = \bar{b} + x @)@, and @(@ D(x) = 1 @)@. We use @(@ a @)@, @(@ b @)@, @(@ d @)@, and @(@ z @)@ to denote the Taylor coefficients for @(@ A [ X (t) ] @)@, @(@ B [ X (t) ] @)@, @(@ D [ X (t) ] @)@, and @(@ F [ X(t) ] @)@ respectively.

12.3.1.2.b: Taylor Coefficients Recursion
For orders @(@ j = 0 , 1, \ldots @)@, @[@ \begin{array}{rcl} z^{(0)} & = & F ( x^{(0)} ) \\ e^{(j)} & = & d^{(j)} + \sum_{k=0}^{j} a^{(j-k)} * z^{(k)} \\ & = & \left\{ \begin{array}{ll} 1 & {\rm if} \; j = 0 \\ 0 & {\rm otherwise} \end{array} \right. \\ z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } \left( \sum_{k=1}^{j+1} k x^{(k)} e^{(j+1-k)} - \sum_{k=1}^j k z^{(k)} b^{(j+1-k)} \right) \\ & = & \frac{1}{j+1} \frac{1}{ \bar{b} + x^{(0)} } \left( (j+1) x^{(j+1) } - \sum_{k=1}^j k z^{(k)} x^{(j+1-k)} \right) \end{array} @]@
Input File: omh/appendix/theory/log_forward.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.1.3: Square Root Function Forward Mode Theory
If @(@ F(x) = \sqrt{x} @)@ @[@ F(x) * F^{(1)} (x) - 0 * F (x) = 1/2 @]@ and in the 12.3.1.c.a: standard math function differential equation , @(@ A(x) = 0 @)@, @(@ B(x) = F(x) @)@, and @(@ D(x) = 1/2 @)@. We use @(@ a @)@, @(@ b @)@, @(@ d @)@, and @(@ z @)@ to denote the Taylor coefficients for @(@ A [ X (t) ] @)@, @(@ B [ X (t) ] @)@, @(@ D [ X (t) ] @)@, and @(@ F [ X(t) ] @)@ respectively. It now follows from the general 12.3.1.c.b: Taylor coefficients recursion formula that for @(@ j = 0 , 1, \ldots @)@, @[@ \begin{array}{rcl} z^{(0)} & = & \sqrt { x^{(0)} } \\ e^{(j)} & = & d^{(j)} + \sum_{k=0}^{j} a^{(j-k)} * z^{(k)} \\ & = & \left\{ \begin{array}{ll} 1/2 & {\rm if} \; j = 0 \\ 0 & {\rm otherwise} \end{array} \right. \\ z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } \left( \sum_{k=1}^{j+1} k x^{(k)} e^{(j+1-k)} - \sum_{k=1}^j k z^{(k)} b^{(j+1-k)} \right) \\ & = & \frac{1}{j+1} \frac{1}{ z^{(0)} } \left( \frac{j+1}{2} x^{(j+1) } - \sum_{k=1}^j k z^{(k)} z^{(j+1-k)} \right) \end{array} @]@
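
As an illustration (a sketch, not code from the CppAD sources), the recursion above can be coded as follows:

# include <cmath>
# include <vector>

// Taylor coefficients of sqrt[ X(t) ] given the coefficients x of X(t)
std::vector<double> forward_sqrt(const std::vector<double>& x)
{    size_t p = x.size() - 1;
     std::vector<double> z(p + 1);
     z[0] = std::sqrt(x[0]);
     for(size_t j = 0; j < p; j++)
     {    double sum = 0.;
          for(size_t k = 1; k <= j; k++)
               sum += double(k) * z[k] * z[j+1-k];
          z[j+1] = (0.5 * double(j+1) * x[j+1] - sum)
                 / (double(j+1) * z[0]);
     }
     return z;
}
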
Input File: omh/appendix/theory/sqrt_forward.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory

12.3.1.4.a: Differential Equation
The 12.3.1.c.a: standard math function differential equation is @[@ B(u) * F^{(1)} (u) - A(u) * F (u) = D(u) @]@ In this section we consider forward mode for the following choices:
  @(@ F(u) @)@ @(@ \sin(u) @)@ @(@ \cos(u) @)@ @(@ \sinh(u) @)@ @(@ \cosh(u) @)@
@(@ A(u) @)@ @(@ 0 @)@ @(@ 0 @)@ @(@ 0 @)@ @(@ 0 @)@
@(@ B(u) @)@ @(@ 1 @)@ @(@ 1 @)@ @(@ 1 @)@ @(@ 1 @)@
@(@ D(u) @)@ @(@ \cos(u) @)@ @(@ - \sin(u) @)@ @(@ \cosh(u) @)@ @(@ \sinh(u) @)@
We use @(@ a @)@, @(@ b @)@, @(@ d @)@ and @(@ f @)@ for the Taylor coefficients of @(@ A [ X (t) ] @)@, @(@ B [ X (t) ] @)@, @(@ D [ X (t) ] @)@, and @(@ F [ X(t) ] @)@ respectively. It now follows from the general 12.3.1.c.b: Taylor coefficients recursion formula that for @(@ j = 0 , 1, \ldots @)@, @[@ \begin{array}{rcl} f^{(0)} & = & F ( x^{(0)} ) \\ e^{(j)} & = & d^{(j)} + \sum_{k=0}^{j} a^{(j-k)} * f^{(k)} \\ & = & d^{(j)} \\ f^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } \left( \sum_{k=1}^{j+1} k x^{(k)} e^{(j+1-k)} - \sum_{k=1}^j k f^{(k)} b^{(j+1-k)} \right) \\ & = & \frac{1}{j+1} \sum_{k=1}^{j+1} k x^{(k)} d^{(j+1-k)} \end{array} @]@ The formula above generates the order @(@ j+1 @)@ coefficient of @(@ F[ X(t) ] @)@ from the lower order coefficients for @(@ X(t) @)@ and @(@ D[ X(t) ] @)@.
Input File: omh/appendix/theory/sin_cos_forward.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.1.5: Inverse Tangent and Hyperbolic Tangent Forward Mode Theory

12.3.1.5.a: Derivatives
@[@ \begin{array}{rcl} \R{atan}^{(1)} (x) & = & 1 / ( 1 + x * x ) \\ \R{atanh}^{(1)} (x) & = & 1 / ( 1 - x * x ) \end{array} @]@If @(@ F(x) @)@ is @(@ \R{atan} (x) @)@ or @(@ \R{atanh} (x) @)@, the corresponding derivative satisfies the equation @[@ (1 \pm x * x ) * F^{(1)} (x) - 0 * F (x) = 1 @]@ and in the 12.3.1.c.a: standard math function differential equation , @(@ A(x) = 0 @)@, @(@ B(x) = 1 \pm x * x @)@, and @(@ D(x) = 1 @)@. We use @(@ a @)@, @(@ b @)@, @(@ d @)@ and @(@ z @)@ to denote the Taylor coefficients for @(@ A [ X (t) ] @)@, @(@ B [ X (t) ] @)@, @(@ D [ X (t) ] @)@, and @(@ F [ X(t) ] @)@ respectively.

12.3.1.5.b: Taylor Coefficients Recursion
For @(@ j = 0 , 1, \ldots @)@, @[@ \begin{array}{rcl} z^{(0)} & = & F( x^{(0)} ) \\ b^{(j)} & = & \left\{ \begin{array}{ll} 1 \pm x^{(0)} * x^{(0)} & {\rm if} \; j = 0 \\ \pm \sum_{k=0}^j x^{(k)} x^{(j-k)} & {\rm otherwise} \end{array} \right. \\ e^{(j)} & = & d^{(j)} + \sum_{k=0}^{j} a^{(j-k)} * z^{(k)} \\ & = & \left\{ \begin{array}{ll} 1 & {\rm if} \; j = 0 \\ 0 & {\rm otherwise} \end{array} \right. \\ z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } \left( \sum_{k=0}^j e^{(k)} (j+1-k) x^{(j+1-k)} - \sum_{k=1}^j b^{(k)} (j+1-k) z^{(j+1-k)} \right) \\ z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } \left( (j+1) x^{(j+1)} - \sum_{k=1}^j k z^{(k)} b^{(j+1-k)} \right) \end{array} @]@
Input File: omh/appendix/theory/atan_forward.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.1.6: Inverse Sine and Hyperbolic Sine Forward Mode Theory

12.3.1.6.a: Derivatives
@[@ \begin{array}{rcl} \R{asin}^{(1)} (x) & = & 1 / \sqrt{ 1 - x * x } \\ \R{asinh}^{(1)} (x) & = & 1 / \sqrt{ 1 + x * x } \end{array} @]@If @(@ F(x) @)@ is @(@ \R{asin} (x) @)@ or @(@ \R{asinh} (x) @)@ the corresponding derivative satisfies the equation @[@ \sqrt{ 1 \mp x * x } * F^{(1)} (x) - 0 * F (u) = 1 @]@ and in the 12.3.1.c.a: standard math function differential equation , @(@ A(x) = 0 @)@, @(@ B(x) = \sqrt{1 \mp x * x } @)@, and @(@ D(x) = 1 @)@. We use @(@ a @)@, @(@ b @)@, @(@ d @)@ and @(@ z @)@ to denote the Taylor coefficients for @(@ A [ X (t) ] @)@, @(@ B [ X (t) ] @)@, @(@ D [ X (t) ] @)@, and @(@ F [ X(t) ] @)@ respectively.

12.3.1.6.b: Taylor Coefficients Recursion
We define @(@ Q(x) = 1 \mp x * x @)@ and let @(@ q @)@ be the corresponding Taylor coefficients for @(@ Q[ X(t) ] @)@. It follows that @[@ q^{(j)} = \left\{ \begin{array}{ll} 1 \mp x^{(0)} * x^{(0)} & {\rm if} \; j = 0 \\ \mp \sum_{k=0}^j x^{(k)} x^{(j-k)} & {\rm otherwise} \end{array} \right. @]@ It follows that @(@ B[ X(t) ] = \sqrt{ Q[ X(t) ] } @)@ and from the equations for the 12.3.1.3: square root that for @(@ j = 0 , 1, \ldots @)@, @[@ \begin{array}{rcl} b^{(0)} & = & \sqrt{ q^{(0)} } \\ b^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } \left( \frac{j+1}{2} q^{(j+1) } - \sum_{k=1}^j k b^{(k)} b^{(j+1-k)} \right) \end{array} @]@ It now follows from the general 12.3.1.c.b: Taylor coefficients recursion formula that for @(@ j = 0 , 1, \ldots @)@, @[@ \begin{array}{rcl} z^{(0)} & = & F ( x^{(0)} ) \\ e^{(j)} & = & d^{(j)} + \sum_{k=0}^{j} a^{(j-k)} * z^{(k)} \\ & = & \left\{ \begin{array}{ll} 1 & {\rm if} \; j = 0 \\ 0 & {\rm otherwise} \end{array} \right. \\ z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } \left( \sum_{k=0}^j e^{(k)} (j+1-k) x^{(j+1-k)} - \sum_{k=1}^j b^{(k)} (j+1-k) z^{(j+1-k)} \right) \\ z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } \left( (j+1) x^{(j+1)} - \sum_{k=1}^j k z^{(k)} b^{(j+1-k)} \right) \end{array} @]@
Input File: omh/appendix/theory/asin_forward.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.1.7: Inverse Cosine and Hyperbolic Cosine Forward Mode Theory

12.3.1.7.a: Derivatives
@[@ \begin{array}{rcl} \R{acos}^{(1)} (x) & = & - 1 / \sqrt{ 1 - x * x } \\ \R{acosh}^{(1)} (x) & = & + 1 / \sqrt{ x * x - 1} \end{array} @]@If @(@ F(x) @)@ is @(@ \R{acos} (x) @)@ or @(@ \R{acosh} (x) @)@ the corresponding derivative satisfies the equation @[@ \sqrt{ \mp ( x * x - 1 ) } * F^{(1)} (x) - 0 * F (u) = \mp 1 @]@ and in the 12.3.1.c.a: standard math function differential equation , @(@ A(x) = 0 @)@, @(@ B(x) = \sqrt{ \mp( x * x - 1 ) } @)@, and @(@ D(x) = \mp 1 @)@. We use @(@ a @)@, @(@ b @)@, @(@ d @)@ and @(@ z @)@ to denote the Taylor coefficients for @(@ A [ X (t) ] @)@, @(@ B [ X (t) ] @)@, @(@ D [ X (t) ] @)@, and @(@ F [ X(t) ] @)@ respectively.

12.3.1.7.b: Taylor Coefficients Recursion
We define @(@ Q(x) = \mp ( x * x - 1 ) @)@ and let @(@ q @)@ be the corresponding Taylor coefficients for @(@ Q[ X(t) ] @)@. It follows that @[@ q^{(j)} = \left\{ \begin{array}{ll} \mp ( x^{(0)} * x^{(0)} - 1 ) & {\rm if} \; j = 0 \\ \mp \sum_{k=0}^j x^{(k)} x^{(j-k)} & {\rm otherwise} \end{array} \right. @]@ It follows that @(@ B[ X(t) ] = \sqrt{ Q[ X(t) ] } @)@ and from the equations for the 12.3.1.3: square root that for @(@ j = 0 , 1, \ldots @)@, @[@ \begin{array}{rcl} b^{(0)} & = & \sqrt{ q^{(0)} } \\ b^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } \left( \frac{j+1}{2} q^{(j+1) } - \sum_{k=1}^j k b^{(k)} b^{(j+1-k)} \right) \end{array} @]@ It now follows from the general 12.3.1.c.b: Taylor coefficients recursion formula that for @(@ j = 0 , 1, \ldots @)@, @[@ \begin{array}{rcl} z^{(0)} & = & F ( x^{(0)} ) \\ e^{(j)} & = & d^{(j)} + \sum_{k=0}^{j} a^{(j-k)} * z^{(k)} \\ & = & \left\{ \begin{array}{ll} \mp 1 & {\rm if} \; j = 0 \\ 0 & {\rm otherwise} \end{array} \right. \\ z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } \left( \sum_{k=0}^j e^{(k)} (j+1-k) x^{(j+1-k)} - \sum_{k=1}^j b^{(k)} (j+1-k) z^{(j+1-k)} \right) \\ z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } \left( \mp (j+1) x^{(j+1)} - \sum_{k=1}^j k z^{(k)} b^{(j+1-k)} \right) \end{array} @]@
Input File: omh/appendix/theory/acos_forward.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.1.8: Tangent and Hyperbolic Tangent Forward Taylor Polynomial Theory

12.3.1.8.a: Derivatives
@[@ \begin{array}{rcl} \tan^{(1)} ( u ) & = & [ \cos (u)^2 + \sin (u)^2 ] / \cos (u)^2 \\ & = & 1 + \tan (u)^2 \\ \tanh^{(1)} ( u ) & = & [ \cosh (u)^2 - \sinh (u)^2 ] / \cosh (u)^2 \\ & = & 1 - \tanh (u)^2 \end{array} @]@If @(@ F(u) @)@ is @(@ \tan (u) @)@ or @(@ \tanh (u) @)@ the corresponding derivative is given by @[@ F^{(1)} (u) = 1 \pm F(u)^2 @]@ Given @(@ X(t) @)@, we define the function @(@ Z(t) = F[ X(t) ] @)@. It follows that @[@ Z^{(1)} (t) = F^{(1)} [ X(t) ] X^{(1)} (t) = [ 1 \pm Y(t) ] X^{(1)} (t) @]@ where we define the function @(@ Y(t) = Z(t)^2 @)@.

12.3.1.8.b: Taylor Coefficients Recursion
Suppose that we are given the Taylor coefficients up to order @(@ j @)@ for the function @(@ X(t) @)@ and up to order @(@ j-1 @)@ for the functions @(@ Y(t) @)@ and @(@ Z(t) @)@. We need a formula that computes the coefficient of order @(@ j @)@ for @(@ Y(t) @)@ and @(@ Z(t) @)@. Using the equation above for @(@ Z^{(1)} (t) @)@, we have @[@ \begin{array}{rcl} \sum_{k=1}^j k z^{(k)} t^{k-1} & = & \sum_{k=1}^j k x^{(k)} t^{k-1} \pm \left[ \sum_{k=0}^{j-1} y^{(k)} t^k \right] \left[ \sum_{k=1}^j k x^{(k)} t^{k-1} \right] + o( t^{j-1} ) \end{array} @]@ Setting the coefficients of @(@ t^{j-1} @)@ equal, we have @[@ \begin{array}{rcl} j z^{(j)} = j x^{(j)} \pm \sum_{k=1}^j k x^{(k)} y^{(j-k)} \\ z^{(j)} = x^{(j)} \pm \frac{1}{j} \sum_{k=1}^j k x^{(k)} y^{(j-k)} \end{array} @]@ Once we have computed @(@ z^{(j)} @)@, we can compute @(@ y^{(j)} @)@ as follows: @[@ y^{(j)} = \sum_{k=0}^j z^{(k)} z^{(j-k)} @]@
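
As an illustration (a sketch, not code from the CppAD sources), the joint recursion for @(@ z @)@ and @(@ y @)@ in the tangent case can be coded as follows:

# include <cmath>
# include <vector>

// Taylor coefficients z of tan[ X(t) ] and y of Y(t) = Z(t)^2
void forward_tan(const std::vector<double>& x,
     std::vector<double>& z, std::vector<double>& y)
{    size_t p = x.size() - 1;
     z.resize(p + 1);
     y.resize(p + 1);
     z[0] = std::tan(x[0]);
     y[0] = z[0] * z[0];
     for(size_t j = 1; j <= p; j++)
     {    double sum = 0.;
          for(size_t k = 1; k <= j; k++)
               sum += double(k) * x[k] * y[j-k];
          z[j] = x[j] + sum / double(j);   // use minus for tanh
          y[j] = 0.;
          for(size_t k = 0; k <= j; k++)
               y[j] += z[k] * z[j-k];
     }
}
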
Input File: omh/appendix/theory/tan_forward.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.1.9: Error Function Forward Taylor Polynomial Theory

12.3.1.9.a: Derivatives
Given @(@ X(t) @)@, we define the function @[@ Z(t) = \R{erf}[ X(t) ] @]@ It follows that @[@ \begin{array}{rcl} \R{erf}^{(1)} ( u ) & = & ( 2 / \sqrt{\pi} ) \exp \left( - u^2 \right) \\ Z^{(1)} (t) & = & \R{erf}^{(1)} [ X(t) ] X^{(1)} (t) = Y(t) X^{(1)} (t) \end{array} @]@ where we define the function @[@ Y(t) = \frac{2}{ \sqrt{\pi} } \exp \left[ - X(t)^2 \right] @]@

12.3.1.9.b: Taylor Coefficients Recursion
Suppose that we are given the Taylor coefficients up to order @(@ j @)@ for the function @(@ X(t) @)@ and @(@ Y(t) @)@. We need a formula that computes the coefficient of order @(@ j @)@ for @(@ Z(t) @)@. Using the equation above for @(@ Z^{(1)} (t) @)@, we have @[@ \begin{array}{rcl} \sum_{k=1}^j k z^{(k)} t^{k-1} & = & \left[ \sum_{k=0}^j y^{(k)} t^k \right] \left[ \sum_{k=1}^j k x^{(k)} t^{k-1} \right] + o( t^{j-1} ) \end{array} @]@ Setting the coefficients of @(@ t^{j-1} @)@ equal, we have @[@ \begin{array}{rcl} j z^{(j)} = \sum_{k=1}^j k x^{(k)} y^{(j-k)} \\ z^{(j)} = \frac{1}{j} \sum_{k=1}^j k x^{(k)} y^{(j-k)} \end{array} @]@
Input File: omh/appendix/theory/erf_forward.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.2: The Theory of Reverse Mode

12.3.2.a: Taylor Notation
In Taylor notation, each variable corresponds to a function of a single argument which we denote by t (see Section 10.2 of 12.5.c: Evaluating Derivatives ). Here and below @(@ X(t) @)@, @(@ Y(t) @)@, and Z(t) are scalar valued functions and the corresponding p-th order Taylor coefficients row vectors are @(@ x @)@, @(@ y @)@ and @(@ z @)@; i.e., @[@ \begin{array}{lcr} X(t) & = & x^{(0)} + x^{(1)} * t + \cdots + x^{(p)} * t^p + O( t^{p+1} ) \\ Y(t) & = & y^{(0)} + y^{(1)} * t + \cdots + y^{(p)} * t^p + O( t^{p+1} ) \\ Z(t) & = & z^{(0)} + z^{(1)} * t + \cdots + z^{(p)} * t^p + O( t^{p+1} ) \end{array} @]@ For the purposes of this discussion, we are given the p-th order Taylor coefficient row vectors @(@ x @)@, @(@ y @)@, and @(@ z @)@. In addition, we are given the partial derivatives of a scalar valued function @[@ G ( z^{(j)} , \ldots , z^{(0)}, x, y) @]@ We need to compute the partial derivatives of the scalar valued function @[@ H ( z^{(j-1)} , \ldots , z^{(0)}, x, y) = G ( z^{(j)}, z^{(j-1)} , \ldots , z^{(0)}, x , y ) @]@ where @(@ z^{(j)} @)@ is expressed as a function of the j-1-th order Taylor coefficient row vector for @(@ Z @)@ and the vectors @(@ x @)@, @(@ y @)@; i.e., @(@ z^{(j)} @)@ above is a shorthand for @[@ z^{(j)} ( z^{(j-1)} , \ldots , z^{(0)}, x, y ) @]@ If we do not provide a formula for a partial derivative of @(@ H @)@, then that partial derivative has the same value as for the function @(@ G @)@.

12.3.2.b: Binary Operators

12.3.2.b.a: Addition
The forward mode formula for 12.3.1.b.a: addition is @[@ z^{(j)} = x^{(j)} + y^{(j)} @]@ It follows that for @(@ k = 0 , \ldots , j @)@ and @(@ l = 0 , \ldots , j-1 @)@ @[@ \begin{array}{rcl} \D{H}{ x^{(k)} } & = & \D{G}{ x^{(k)} } + \D{G}{ z^{(k)} } \\ \\ \D{H}{ y^{(k)} } & = & \D{G}{ y^{(k)} } + \D{G}{ z^{(k)} } \\ \D{H}{ z^{(l)} } & = & \D{G}{ z^{(l)} } \end{array} @]@

12.3.2.b.b: Subtraction
The forward mode formula for 12.3.1.b.b: subtraction is @[@ z^{(j)} = x^{(j)} - y^{(j)} @]@ It follows that for @(@ k = 0 , \ldots , j @)@ @[@ \begin{array}{rcl} \D{H}{ x^{(k)} } & = & \D{G}{ x^{(k)} } + \D{G}{ z^{(k)} } \\ \\ \D{H}{ y^{(k)} } & = & \D{G}{ y^{(k)} } - \D{G}{ z^{(k)} } \end{array} @]@

12.3.2.b.c: Multiplication
The forward mode formula for 12.3.1.b.c: multiplication is @[@ z^{(j)} = \sum_{k=0}^j x^{(j-k)} * y^{(k)} @]@ It follows that for @(@ k = 0 , \ldots , j @)@ @[@ \begin{array}{rcl} \D{H}{ x^{(j-k)} } & = & \D{G}{ x^{(j-k)} } + \D{G}{ z^{(j)} } y^{(k)} \\ \D{H}{ y^{(k)} } & = & \D{G}{ y^{(k)} } + \D{G}{ z^{(j)} } x^{(j-k)} \end{array} @]@

12.3.2.b.d: Division
The forward mode formula for 12.3.1.b.d: division is @[@ z^{(j)} = \frac{1}{y^{(0)}} \left( x^{(j)} - \sum_{k=1}^j z^{(j-k)} y^{(k)} \right) @]@ It follows that for @(@ k = 1 , \ldots , j @)@ @[@ \begin{array}{rcl} \D{H}{ x^{(j)} } & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \frac{1}{y^{(0)}} \\ \D{H}{ z^{(j-k)} } & = & \D{G}{ z^{(j-k)} } - \D{G}{ z^{(j)} } \frac{1}{y^{(0)}} y^{(k)} \\ \D{H}{ y^{(k)} } & = & \D{G}{ y^{(k)} } - \D{G}{ z^{(j)} } \frac{1}{y^{(0)}} z^{(j-k)} \\ \D{H}{ y^{(0)} } & = & \D{G}{ y^{(0)} } - \D{G}{ z^{(j)} } \frac{1}{y^{(0)}} \frac{1}{y^{(0)}} \left( x^{(j)} - \sum_{k=1}^j z^{(j-k)} y^{(k)} \right) \\ & = & \D{G}{ y^{(0)} } - \D{G}{ z^{(j)} } \frac{1}{y^{(0)}} z^{(j)} \end{array} @]@
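
As an illustration (a sketch, not code from the CppAD sources), the reverse mode accumulation for the multiplication operator can be coded as follows, where px, py and pz hold the partials of @(@ G @)@ with respect to the coefficients of @(@ X @)@, @(@ Y @)@ and @(@ Z @)@:

# include <vector>

// move the partials w.r.t. z^(j) back to partials w.r.t. x and y
void reverse_mul(size_t j,
     const std::vector<double>& x , const std::vector<double>& y ,
     const std::vector<double>& pz,
     std::vector<double>& px, std::vector<double>& py)
{    for(size_t k = 0; k <= j; k++)
     {    px[j-k] += pz[j] * y[k];
          py[k]   += pz[j] * x[j-k];
     }
}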

12.3.2.c: Standard Math Functions
The standard math functions have only one argument. Hence we are given the partial derivatives of a scalar valued function @[@ G ( z^{(j)} , \ldots , z^{(0)}, x) @]@ We need to compute the partial derivatives of the scalar valued function @[@ H ( z^{(j-1)} , \ldots , z^{(0)}, x) = G ( z^{(j)}, z^{(j-1)} , \ldots , z^{(0)}, x) @]@ where @(@ z^{(j)} @)@ is expressed as a function of the j-1-th order Taylor coefficient row vector for @(@ Z @)@ and the vector @(@ x @)@; i.e., @(@ z^{(j)} @)@ above is a shorthand for @[@ z^{(j)} ( z^{(j-1)} , \ldots , z^{(0)}, x ) @]@

12.3.2.d: Contents
exp_reverse: 12.3.2.1 Exponential Function Reverse Mode Theory
log_reverse: 12.3.2.2 Logarithm Function Reverse Mode Theory
sqrt_reverse: 12.3.2.3 Square Root Function Reverse Mode Theory
sin_cos_reverse: 12.3.2.4 Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
atan_reverse: 12.3.2.5 Inverse Tangent and Hyperbolic Tangent Reverse Mode Theory
asin_reverse: 12.3.2.6 Inverse Sine and Hyperbolic Sine Reverse Mode Theory
acos_reverse: 12.3.2.7 Inverse Cosine and Hyperbolic Cosine Reverse Mode Theory
tan_reverse: 12.3.2.8 Tangent and Hyperbolic Tangent Reverse Mode Theory
erf_reverse: 12.3.2.9 Error Function Reverse Mode Theory

Input File: omh/appendix/theory/reverse_theory.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.2.1: Exponential Function Reverse Mode Theory
We use the reverse theory 12.3.2.c: standard math function definition for the functions @(@ H @)@ and @(@ G @)@. The zero order forward mode formula for the 12.3.1.1: exponential is @[@ z^{(0)} = F ( x^{(0)} ) @]@ and for @(@ j > 0 @)@, @[@ z^{(j)} = x^{(j)} d^{(0)} + \frac{1}{j} \sum_{k=1}^{j} k x^{(k)} z^{(j-k)} @]@ where @[@ d^{(0)} = \left\{ \begin{array}{ll} 0 & \R{if} \; F(x) = \R{exp}(x) \\ 1 & \R{if} \; F(x) = \R{expm1}(x) \end{array} \right. @]@ For order @(@ j = 0, 1, \ldots @)@ we note that @[@ \begin{array}{rcl} \D{H}{ x^{(j)} } & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(j)} } \\ & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } ( d^{(0)} + z^{(0)} ) \end{array} @]@ If @(@ j > 0 @)@, then for @(@ k = 1 , \ldots , j @)@ @[@ \begin{array}{rcl} \D{H}{ x^{(k)} } & = & \D{G}{ x^{(k)} } + \D{G}{ z^{(j)} } \frac{1}{j} k z^{(j-k)} \\ \D{H}{ z^{(j-k)} } & = & \D{G}{ z^{(j-k)} } + \D{G}{ z^{(j)} } \frac{1}{j} k x^{(k)} \end{array} @]@
Input File: omh/appendix/theory/exp_reverse.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.2.2: Logarithm Function Reverse Mode Theory
We use the reverse theory 12.3.2.c: standard math function definition for the functions @(@ H @)@ and @(@ G @)@. The zero order forward mode formula for the 12.3.1.2: logarithm is @[@ z^{(0)} = F( x^{(0)} ) @]@ and for @(@ j > 0 @)@, @[@ z^{(j)} = \frac{1}{ \bar{b} + x^{(0)} } \frac{1}{j} \left( j x^{(j)} - \sum_{k=1}^{j-1} k z^{(k)} x^{(j-k)} \right) @]@ where @[@ \bar{b} = \left\{ \begin{array}{ll} 0 & \R{if} \; F(x) = \R{log}(x) \\ 1 & \R{if} \; F(x) = \R{log1p}(x) \end{array} \right. @]@ We note that for @(@ j > 0 @)@ @[@ \begin{array}{rcl} \D{ z^{(j)} } { x^{(0)} } & = & - \frac{1}{ \bar{b} + x^{(0)} } \frac{1}{ \bar{b} + x^{(0)} } \frac{1}{j} \left( j x^{(j)} - \sum_{k=1}^{j-1} k z^{(k)} x^{(j-k)} \right) \\ & = & - \frac{z^{(j)}}{ \bar{b} + x^{(0)} } \end{array} @]@ The zero order partials are given by @[@ \begin{array}{rcl} \D{H}{ x^{(0)} } & = & \D{G}{ x^{(0)} } + \D{G}{ z^{(0)} } \D{ z^{(0)} }{ x^{(0)} } \\ & = & \D{G}{ x^{(0)} } + \D{G}{ z^{(0)} } \frac{1}{ \bar{b} + x^{(0)} } \end{array} @]@ For orders @(@ j > 0 @)@ and for @(@ k = 1 , \ldots , j-1 @)@ @[@ \begin{array}{rcl} \D{H}{ x^{(0)} } & = & \D{G}{ x^{(0)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(0)} } \\ & = & \D{G}{ x^{(0)} } - \D{G}{ z^{(j)} } \frac{ z^{(j)} }{ \bar{b} + x^{(0)} } \\ \D{H}{ x^{(j)} } & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(j)} } \\ & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \frac{1}{ \bar{b} + x^{(0)} } \\ \D{H}{ x^{(j-k)} } & = & \D{G}{ x^{(j-k)} } - \D{G}{ z^{(j)} } \frac{1}{ \bar{b} + x^{(0)} } \frac{k}{j} z^{(k)} \\ \D{H}{ z^{(k)} } & = & \D{G}{ z^{(k)} } - \D{G}{ z^{(j)} } \frac{1}{ \bar{b} + x^{(0)} } \frac{k}{j} x^{(j-k)} \end{array} @]@
Input File: omh/appendix/theory/log_reverse.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.2.3: Square Root Function Reverse Mode Theory
We use the reverse theory 12.3.2.c: standard math function definition for the functions @(@ H @)@ and @(@ G @)@. The forward mode formulas for the 12.3.1.3: square root function are @[@ z^{(j)} = \sqrt { x^{(0)} } @]@ for the case @(@ j = 0 @)@, and for @(@ j > 0 @)@, @[@ z^{(j)} = \frac{1}{j} \frac{1}{ z^{(0)} } \left( \frac{j}{2} x^{(j) } - \sum_{\ell=1}^{j-1} \ell z^{(\ell)} z^{(j-\ell)} \right) @]@ If @(@ j = 0 @)@, we have the relation @[@ \begin{array}{rcl} \D{H}{ x^{(j)} } & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(0)} } \\ & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \frac{1}{2 z^{(0)} } \end{array} @]@ If @(@ j > 0 @)@, then for @(@ k = 1, \ldots , j-1 @)@ @[@ \begin{array}{rcl} \D{H}{ z^{(0)} } & = & \D{G}{ z^{(0)} } + \D{G} { z^{(j)} } \D{ z^{(j)} }{ z^{(0)} } \\ & = & \D{G}{ z^{(0)} } - \D{G}{ z^{(j)} } \frac{ z^{(j)} }{ z^{(0)} } \\ \D{H}{ x^{(j)} } & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(j)} } \\ & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \frac{1}{ 2 z^{(0)} } \\ \D{H}{ z^{(k)} } & = & \D{G}{ z^{(k)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ z^{(k)} } \\ & = & \D{G}{ z^{(k)} } - \D{G}{ z^{(j)} } \frac{ z^{(j-k)} }{ z^{(0)} } \end{array} @]@
Input File: omh/appendix/theory/sqrt_reverse.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
We use the reverse theory 12.3.2.c: standard math function definition for the functions @(@ H @)@ and @(@ G @)@. In addition, we use the following definitions for @(@ s @)@ and @(@ c @)@ and the integer @(@ \ell @)@
Coefficients @(@ s @)@ @(@ c @)@ @(@ \ell @)@
Trigonometric Case @(@ \sin [ X(t) ] @)@ @(@ \cos [ X(t) ] @)@ -1
Hyperbolic Case @(@ \sinh [ X(t) ] @)@ @(@ \cosh [ X(t) ] @)@ 1
We use the value @[@ z^{(j)} = ( s^{(j)} , c^{(j)} ) @]@ in the definition for @(@ G @)@ and @(@ H @)@. The forward mode formulas for the 12.3.1.4: sine and cosine functions are @[@ \begin{array}{rcl} s^{(j)} & = & \frac{1 - \ell}{2} \sin ( x^{(0)} ) + \frac{1 + \ell}{2} \sinh ( x^{(0)} ) \\ c^{(j)} & = & \frac{1 - \ell}{2} \cos ( x^{(0)} ) + \frac{1 + \ell}{2} \cosh ( x^{(0)} ) \end{array} @]@ for the case @(@ j = 0 @)@, and for @(@ j > 0 @)@, @[@ \begin{array}{rcl} s^{(j)} & = & \frac{1}{j} \sum_{k=1}^{j} k x^{(k)} c^{(j-k)} \\ c^{(j)} & = & \ell \frac{1}{j} \sum_{k=1}^{j} k x^{(k)} s^{(j-k)} \end{array} @]@ If @(@ j = 0 @)@, we have the relation @[@ \begin{array}{rcl} \D{H}{ x^{(j)} } & = & \D{G}{ x^{(j)} } + \D{G}{ s^{(j)} } c^{(0)} + \ell \D{G}{ c^{(j)} } s^{(0)} \end{array} @]@ If @(@ j > 0 @)@, then for @(@ k = 1, \ldots , j @)@ @[@ \begin{array}{rcl} \D{H}{ x^{(k)} } & = & \D{G}{ x^{(k)} } + \D{G}{ s^{(j)} } \frac{1}{j} k c^{(j-k)} + \ell \D{G}{ c^{(j)} } \frac{1}{j} k s^{(j-k)} \\ \D{H}{ s^{(j-k)} } & = & \D{G}{ s^{(j-k)} } + \ell \D{G}{ c^{(j)} } \frac{1}{j} k x^{(k)} \\ \D{H}{ c^{(j-k)} } & = & \D{G}{ c^{(j-k)} } + \D{G}{ s^{(j)} } \frac{1}{j} k x^{(k)} \end{array} @]@
Input File: omh/appendix/theory/sin_cos_reverse.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.2.5: Inverse Tangent and Hyperbolic Tangent Reverse Mode Theory
We use the reverse theory 12.3.2.c: standard math function definition for the functions @(@ H @)@ and @(@ G @)@. In addition, we use the forward mode notation in 12.3.1.5: atan_forward for @[@ B(t) = 1 \pm X(t) * X(t) @]@ We use @(@ b @)@ for the p-th order Taylor coefficient row vectors corresponding to @(@ B(t) @)@ and replace @(@ z^{(j)} @)@ by @[@ ( z^{(j)} , b^{(j)} ) @]@ in the definition for @(@ G @)@ and @(@ H @)@. The zero order forward mode formulas for the 12.3.1.5: atan function are @[@ \begin{array}{rcl} z^{(0)} & = & F ( x^{(0)} ) \\ b^{(0)} & = & 1 \pm x^{(0)} x^{(0)} \end{array} @]@ where @(@ F(x) = \R{atan} (x) @)@ for @(@ + @)@ and @(@ F(x) = \R{atanh} (x) @)@ for @(@ - @)@. For orders @(@ j @)@ greater than zero we have @[@ \begin{array}{rcl} b^{(j)} & = & \pm \sum_{k=0}^j x^{(k)} x^{(j-k)} \\ z^{(j)} & = & \frac{1}{j} \frac{1}{ b^{(0)} } \left( j x^{(j)} - \sum_{k=1}^{j-1} k z^{(k)} b^{(j-k)} \right) \end{array} @]@ If @(@ j = 0 @)@, we note that @(@ F^{(1)} ( x^{(0)} ) = 1 / b^{(0)} @)@ and hence @[@ \begin{array}{rcl} \D{H}{ x^{(j)} } & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(0)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ x^{(0)} } \\ & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \frac{1}{ b^{(0)} } \pm \D{G}{ b^{(j)} } 2 x^{(0)} \end{array} @]@ If @(@ j > 0 @)@, then for @(@ k = 1, \ldots , j-1 @)@ @[@ \begin{array}{rcl} \D{H}{ b^{(0)} } & = & \D{G}{ b^{(0)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ b^{(0)} } \\ & = & \D{G}{ b^{(0)} } - \D{G}{ z^{(j)} } \frac{ z^{(j)} }{ b^{(0)} } \\ \D{H}{ x^{(j)} } & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(j)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ x^{(j)} } \\ & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \frac{1}{ b^{(0)} } \pm \D{G}{ b^{(j)} } 2 x^{(0)} \\ \D{H}{ x^{(0)} } & = & \D{G}{ x^{(0)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(0)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ x^{(0)} } \\ & = & \D{G}{ x^{(0)} } \pm \D{G}{ b^{(j)} } 2 x^{(j)} \\ \D{H}{ x^{(k)} } & = & \D{G}{ x^{(k)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(k)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ x^{(k)} } \\ & = & \D{G}{ x^{(k)} } \pm \D{G}{ b^{(j)} } 2 x^{(j-k)} \\ \D{H}{ z^{(k)} } & = & \D{G}{ z^{(k)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ z^{(k)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ z^{(k)} } \\ & = & \D{G}{ z^{(k)} } - \D{G}{ z^{(j)} } \frac{k b^{(j-k)} }{ j b^{(0)} } \\ \D{H}{ b^{(j-k)} } & = & \D{G}{ b^{(j-k)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ b^{(j-k)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ b^{(j-k)} } \\ & = & \D{G}{ b^{(j-k)} } - \D{G}{ z^{(j)} } \frac{k z^{(k)} }{ j b^{(0)} } \end{array} @]@
Input File: omh/appendix/theory/atan_reverse.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.2.6: Inverse Sine and Hyperbolic Sine Reverse Mode Theory
We use the reverse theory 12.3.2.c: standard math function definition for the functions @(@ H @)@ and @(@ G @)@. In addition, we use the forward mode notation in 12.3.1.6: asin_forward for @[@ \begin{array}{rcl} Q(t) & = & 1 \mp X(t) * X(t) \\ B(t) & = & \sqrt{ Q(t) } \end{array} @]@ We use @(@ q @)@ and @(@ b @)@ for the p-th order Taylor coefficient row vectors corresponding to these functions and replace @(@ z^{(j)} @)@ by @[@ ( z^{(j)} , b^{(j)} ) @]@ in the definition for @(@ G @)@ and @(@ H @)@. The zero order forward mode formulas for the 12.3.1.6: asin function are @[@ \begin{array}{rcl} q^{(0)} & = & 1 \mp x^{(0)} x^{(0)} \\ b^{(0)} & = & \sqrt{ q^{(0)} } \\ z^{(0)} & = & F( x^{(0)} ) \end{array} @]@ where @(@ F(x) = \R{asin} (x) @)@ for @(@ - @)@ and @(@ F(x) = \R{asinh} (x) @)@ for @(@ + @)@. For the orders @(@ j @)@ greater than zero we have @[@ \begin{array}{rcl} q^{(j)} & = & \mp \sum_{k=0}^j x^{(k)} x^{(j-k)} \\ b^{(j)} & = & \frac{1}{j} \frac{1}{ b^{(0)} } \left( \frac{j}{2} q^{(j)} - \sum_{k=1}^{j-1} k b^{(k)} b^{(j-k)} \right) \\ z^{(j)} & = & \frac{1}{j} \frac{1}{ b^{(0)} } \left( j x^{(j)} - \sum_{k=1}^{j-1} k z^{(k)} b^{(j-k)} \right) \end{array} @]@ If @(@ j = 0 @)@, we note that @(@ F^{(1)} ( x^{(0)} ) = 1 / b^{(0)} @)@ and hence @[@ \begin{array}{rcl} \D{H}{ x^{(j)} } & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(0)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ q^{(0)} } \D{ q^{(0)} }{ x^{(0)} } \\ & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \frac{1}{ b^{(0)} } \mp \D{G}{ b^{(j)} } \frac{ x^{(0)} }{ b^{(0)} } \end{array} @]@ If @(@ j > 0 @)@, then for @(@ k = 1, \ldots , j-1 @)@ @[@ \begin{array}{rcl} \D{H}{ b^{(0)} } & = & \D{G}{ b^{(0)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ b^{(0)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ b^{(0)} } \\ & = & \D{G}{ b^{(0)} } - \D{G}{ z^{(j)} } \frac{ z^{(j)} }{ b^{(0)} } - \D{G}{ b^{(j)} } \frac{ b^{(j)} }{ b^{(0)} } \\ \D{H}{ x^{(0)} } & = & \D{G}{ x^{(0)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ q^{(j)} } \D{ q^{(j)} }{ x^{(0)} } \\ & = & \D{G}{ x^{(0)} } \mp \D{G}{ b^{(j)} } \frac{ x^{(j)} }{ b^{(0)} } \\ \D{H}{ x^{(j)} } & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(j)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ q^{(j)} } \D{ q^{(j)} }{ x^{(j)} } \\ & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \frac{1}{ b^{(0)} } \mp \D{G}{ b^{(j)} } \frac{ x^{(0)} }{ b^{(0)} } \\ \D{H}{ b^{(j - k)} } & = & \D{G}{ b^{(j - k)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ b^{(j - k)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ b^{(j - k)} } \\ & = & \D{G}{ b^{(j - k)} } - \D{G}{ z^{(j)} } \frac{k z^{(k)} }{j b^{(0)} } - \D{G}{ b^{(j)} } \frac{ b^{(k)} }{ b^{(0)} } \\ \D{H}{ x^{(k)} } & = & \D{G}{ x^{(k)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(k)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ q^{(j)} } \D{ q^{(j)} }{ x^{(k)} } \\ & = & \D{G}{ x^{(k)} } \mp \D{G}{ b^{(j)} } \frac{ x^{(j-k)} }{ b^{(0)} } \\ \D{H}{ z^{(k)} } & = & \D{G}{ z^{(k)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ z^{(k)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ z^{(k)} } \\ & = & \D{G}{ z^{(k)} } - \D{G}{ z^{(j)} } \frac{k b^{(j-k)} }{ j b^{(0)} } \end{array} @]@
Input File: omh/appendix/theory/asin_reverse.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.2.7: Inverse Cosine and Hyperbolic Cosine Reverse Mode Theory
We use the reverse theory 12.3.2.c: standard math function definition for the functions @(@ H @)@ and @(@ G @)@. In addition, we use the forward mode notation in 12.3.1.7: acos_forward for @[@ \begin{array}{rcl} Q(t) & = & \mp ( X(t) * X(t) - 1 ) \\ B(t) & = & \sqrt{ Q(t) } \end{array} @]@ We use @(@ q @)@ and @(@ b @)@ for the p-th order Taylor coefficient row vectors corresponding to these functions and replace @(@ z^{(j)} @)@ by @[@ ( z^{(j)} , b^{(j)} ) @]@ in the definition for @(@ G @)@ and @(@ H @)@. The zero order forward mode formulas for the 12.3.1.7: acos function are @[@ \begin{array}{rcl} q^{(0)} & = & \mp ( x^{(0)} x^{(0)} - 1) \\ b^{(0)} & = & \sqrt{ q^{(0)} } \\ z^{(0)} & = & F ( x^{(0)} ) \end{array} @]@ where @(@ F(x) = \R{acos} (x) @)@ for @(@ - @)@ and @(@ F(x) = \R{acosh} (x) @)@ for @(@ + @)@. For orders @(@ j @)@ greater than zero we have @[@ \begin{array}{rcl} q^{(j)} & = & \mp \sum_{k=0}^j x^{(k)} x^{(j-k)} \\ b^{(j)} & = & \frac{1}{j} \frac{1}{ b^{(0)} } \left( \frac{j}{2} q^{(j)} - \sum_{k=1}^{j-1} k b^{(k)} b^{(j-k)} \right) \\ z^{(j)} & = & \frac{1}{j} \frac{1}{ b^{(0)} } \left( \mp j x^{(j)} - \sum_{k=1}^{j-1} k z^{(k)} b^{(j-k)} \right) \end{array} @]@ If @(@ j = 0 @)@, we note that @(@ F^{(1)} ( x^{(0)} ) = \mp 1 / b^{(0)} @)@ and hence @[@ \begin{array}{rcl} \D{H}{ x^{(j)} } & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(0)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ q^{(0)} } \D{ q^{(0)} }{ x^{(0)} } \\ & = & \D{G}{ x^{(j)} } \mp \D{G}{ z^{(j)} } \frac{1}{ b^{(0)} } \mp \D{G}{ b^{(j)} } \frac{ x^{(0)} }{ b^{(0)} } \end{array} @]@ If @(@ j > 0 @)@, then for @(@ k = 1, \ldots , j-1 @)@ @[@ \begin{array}{rcl} \D{H}{ b^{(0)} } & = & \D{G}{ b^{(0)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ b^{(0)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ b^{(0)} } \\ & = & \D{G}{ b^{(0)} } - \D{G}{ z^{(j)} } \frac{ z^{(j)} }{ b^{(0)} } - \D{G}{ b^{(j)} } \frac{ b^{(j)} }{ b^{(0)} } \\ \D{H}{ x^{(0)} } & = & \D{G}{ x^{(0)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ q^{(j)} } \D{ q^{(j)} }{ x^{(0)} } \\ & = & \D{G}{ x^{(0)} } \mp \D{G}{ b^{(j)} } \frac{ x^{(j)} }{ b^{(0)} } \\ \D{H}{ x^{(j)} } & = & \D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(j)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ q^{(j)} } \D{ q^{(j)} }{ x^{(j)} } \\ & = & \D{G}{ x^{(j)} } \mp \D{G}{ z^{(j)} } \frac{1}{ b^{(0)} } \mp \D{G}{ b^{(j)} } \frac{ x^{(0)} }{ b^{(0)} } \\ \D{H}{ b^{(j - k)} } & = & \D{G}{ b^{(j - k)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ b^{(j - k)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ b^{(j - k)} } \\ & = & \D{G}{ b^{(j - k)} } - \D{G}{ z^{(j)} } \frac{k z^{(k)} }{j b^{(0)} } - \D{G}{ b^{(j)} } \frac{ b^{(k)} }{ b^{(0)} } \\ \D{H}{ x^{(k)} } & = & \D{G}{ x^{(k)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(k)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ q^{(j)} } \D{ q^{(j)} }{ x^{(k)} } \\ & = & \D{G}{ x^{(k)} } \mp \D{G}{ b^{(j)} } \frac{ x^{(j-k)} }{ b^{(0)} } \\ \D{H}{ z^{(k)} } & = & \D{G}{ z^{(k)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ z^{(k)} } + \D{G}{ b^{(j)} } \D{ b^{(j)} }{ z^{(k)} } \\ & = & \D{G}{ z^{(k)} } - \D{G}{ z^{(j)} } \frac{k b^{(j-k)} }{ j b^{(0)} } \end{array} @]@
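The hyperbolic case can be spot checked the same way; this minimal sketch (the argument value 2.0 is arbitrary, and it assumes compiler support for the C++11 acosh function) verifies that a first order reverse sweep for @(@ z = \R{acosh} (x) @)@ returns @(@ F^{(1)} ( x^{(0)} ) = 1 / b^{(0)} = 1 / \sqrt{ x^{(0)} x^{(0)} - 1 } @)@:
     # include <cppad/cppad.hpp>
     # include <cassert>
     # include <cmath>
     # include <vector>
     int main(void)
     {    using CppAD::AD;
          std::vector< AD<double> > ax(1), ay(1);
          ax[0] = 2.0;
          CppAD::Independent(ax);
          ay[0] = acosh( ax[0] );
          CppAD::ADFun<double> f(ax, ay);
          std::vector<double> x0(1), w(1), dw(1);
          x0[0] = 2.0;
          f.Forward(0, x0);
          w[0] = 1.0;
          dw   = f.Reverse(1, w);
          // F'(x) = 1 / sqrt( x * x - 1 )
          double check = 1.0 / std::sqrt( 2.0 * 2.0 - 1.0 );
          assert( std::fabs( dw[0] - check ) < 1e-10 );
          return 0;
     }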
Input File: omh/appendix/theory/acos_reverse.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.2.8: Tangent and Hyperbolic Tangent Reverse Mode Theory

12.3.2.8.a: Notation
We use the reverse theory 12.3.2.c: standard math function definition for the functions @(@ H @)@ and @(@ G @)@. In addition, we use the forward mode notation in 12.3.1.8: tan_forward for @(@ X(t) @)@, @(@ Y(t) @)@ and @(@ Z(t) @)@.

12.3.2.8.b: Eliminating Y(t)
For @(@ j > 0 @)@, the forward mode coefficients are given by @[@ y^{(j-1)} = \sum_{k=0}^{j-1} z^{(k)} z^{(j-k-1)} @]@ Fix @(@ j > 0 @)@ and suppose that @(@ H @)@ is the same as @(@ G @)@ except that @(@ y^{(j-1)} @)@ is replaced as a function of the Taylor coefficients for @(@ Z(t) @)@. To be specific, for @(@ k = 0 , \ldots , j-1 @)@, @[@ \begin{array}{rcl} \D{H}{ z^{(k)} } & = & \D{G}{ z^{(k)} } + \D{G}{ y^{(j-1)} } \D{ y^{(j-1)} }{ z^{(k)} } \\ & = & \D{G}{ z^{(k)} } + \D{G}{ y^{(j-1)} } 2 z^{(j-k-1)} \end{array} @]@

12.3.2.8.c: Positive Orders Z(t)
For order @(@ j > 0 @)@, suppose that @(@ H @)@ is the same as @(@ G @)@ except that @(@ z^{(j)} @)@ is expressed as a function of the coefficients for @(@ X(t) @)@, and the lower order Taylor coefficients for @(@ Y(t) @)@, @(@ Z(t) @)@. @[@ z^{(j)} = x^{(j)} \pm \frac{1}{j} \sum_{k=1}^j k x^{(k)} y^{(j-k)} @]@ For @(@ k = 1 , \ldots , j @)@, the partial of @(@ H @)@ with respect to @(@ x^{(k)} @)@ is given by @[@ \begin{array}{rcl} \D{H}{ x^{(k)} } & = & \D{G}{ x^{(k)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(k)} } \\ & = & \D{G}{ x^{(k)} } + \D{G}{ z^{(j)} } \left[ \delta ( j - k ) \pm \frac{k}{j} y^{(j-k)} \right] \end{array} @]@ where @(@ \delta ( j - k ) @)@ is one if @(@ j = k @)@ and zero otherwise. For @(@ k = 1 , \ldots , j @)@, the partial of @(@ H @)@ with respect to @(@ y^{(j-k)} @)@ is given by @[@ \begin{array}{rcl} \D{H}{ y^{(j-k)} } & = & \D{G}{ y^{(j-k)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ y^{(j-k)} } \\ & = & \D{G}{ y^{(j-k)} } \pm \D{G}{ z^{(j)} } \frac{k}{j} x^{(k)} \end{array} @]@

12.3.2.8.d: Order Zero Z(t)
The order zero coefficients for the tangent and hyperbolic tangent are @[@ \begin{array}{rcl} z^{(0)} & = & \left\{ \begin{array}{c} \tan ( x^{(0)} ) \\ \tanh ( x^{(0)} ) \end{array} \right. \end{array} @]@ Suppose that @(@ H @)@ is the same as @(@ G @)@ except that @(@ z^{(0)} @)@ is expressed as a function of the Taylor coefficients for @(@ X(t) @)@. In this case, @[@ \begin{array}{rcl} \D{H}{ x^{(0)} } & = & \D{G}{ x^{(0)} } + \D{G}{ z^{(0)} } \D{ z^{(0)} }{ x^{(0)} } \\ & = & \D{G}{ x^{(0)} } + \D{G}{ z^{(0)} } ( 1 \pm y^{(0)} ) \end{array} @]@
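As a spot check of this formula (a minimal sketch; the argument value 0.5 and the tolerance are arbitrary choices), a first order forward sweep with @(@ x^{(1)} = 1 @)@ should return @(@ z^{(1)} = 1 - y^{(0)} @)@ for the hyperbolic tangent:
     # include <cppad/cppad.hpp>
     # include <cassert>
     # include <cmath>
     # include <vector>
     int main(void)
     {    using CppAD::AD;
          std::vector< AD<double> > ax(1), ay(1);
          ax[0] = 0.5;
          CppAD::Independent(ax);
          ay[0] = tanh( ax[0] );
          CppAD::ADFun<double> f(ax, ay);
          std::vector<double> x0(1), x1(1), z1(1);
          x0[0] = 0.5;
          f.Forward(0, x0);
          x1[0] = 1.0;
          z1 = f.Forward(1, x1);
          // z^{(1)} = 1 - y^{(0)} = 1 - tanh(x^{(0)})^2
          double z0 = std::tanh(0.5);
          assert( std::fabs( z1[0] - (1.0 - z0 * z0) ) < 1e-10 );
          return 0;
     }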
Input File: omh/appendix/theory/tan_reverse.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.2.9: Error Function Reverse Mode Theory

12.3.2.9.a: Notation
We use the reverse theory 12.3.2.c: standard math function definition for the functions @(@ H @)@ and @(@ G @)@.

12.3.2.9.b: Positive Orders Z(t)
For order @(@ j > 0 @)@, suppose that @(@ H @)@ is the same as @(@ G @)@ except that @(@ z^{(j)} @)@ is expressed as a function of the coefficients for @(@ X(t) @)@ and @(@ Y(t) @)@; i.e., @[@ z^{(j)} = \frac{1}{j} \sum_{k=1}^j k x^{(k)} y^{(j-k)} @]@ For @(@ k = 1 , \ldots , j @)@, the partial of @(@ H @)@ with respect to @(@ x^{(k)} @)@ is given by @[@ \D{H}{ x^{(k)} } = \D{G}{ x^{(k)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(k)} } = \D{G}{ x^{(k)} } + \D{G}{ z^{(j)} } \frac{k}{j} y^{(j-k)} @]@ For @(@ k = 1 , \ldots , j @)@, the partial of @(@ H @)@ with respect to @(@ y^{(j-k)} @)@ is given by @[@ \D{H}{ y^{(j-k)} } = \D{G}{ y^{(j-k)} } + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ y^{(j-k)} } = \D{G}{ y^{(j-k)} } + \D{G}{ z^{(j)} } \frac{k}{j} x^{(k)} @]@

12.3.2.9.c: Order Zero Z(t)
Suppose that @(@ H @)@ is the same as @(@ G @)@ except that @(@ z^{(0)} @)@ is expressed as a function of the Taylor coefficients for @(@ X(t) @)@ and @(@ Y(t) @)@. In this case, @[@ \D{H}{ x^{(0)} } = \D{G}{ x^{(0)} } + \D{G}{ z^{(0)} } \D{ z^{(0)} }{ x^{(0)} } = \D{G}{ x^{(0)} } + \D{G}{ z^{(0)} } y^{(0)} @]@
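As a concrete check (a minimal sketch; the argument value 0.5 is arbitrary and the loose tolerance allows for the non C++11 erf approximation), a first order reverse sweep for @(@ z = \R{erf} (x) @)@ should return @(@ ( 2 / \sqrt{\pi} ) \exp ( - x^{(0)} x^{(0)} ) @)@:
     # include <cppad/cppad.hpp>
     # include <cassert>
     # include <cmath>
     # include <vector>
     int main(void)
     {    using CppAD::AD;
          std::vector< AD<double> > ax(1), ay(1);
          ax[0] = 0.5;
          CppAD::Independent(ax);
          ay[0] = erf( ax[0] );
          CppAD::ADFun<double> f(ax, ay);
          std::vector<double> x0(1), w(1), dw(1);
          x0[0] = 0.5;
          f.Forward(0, x0);
          w[0] = 1.0;
          dw   = f.Reverse(1, w);
          // erf'(x) = (2 / sqrt(pi)) * exp( - x * x )
          double pi    = 4.0 * std::atan(1.0);
          double check = 2.0 / std::sqrt(pi) * std::exp( - 0.25 );
          assert( std::fabs( dw[0] - check ) < 1e-3 );
          return 0;
     }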
Input File: omh/appendix/theory/erf_reverse.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.3.3: An Important Reverse Mode Identity
The theorem and the proof below are a restatement of the results on page 236 of 12.5.c: Evaluating Derivatives .

12.3.3.a: Notation
Given a function @(@ f(u, v) @)@ where @(@ u \in B^n @)@ we use the notation @[@ \D{f}{u} (u, v) = \left[ \D{f}{u_1} (u, v) , \cdots , \D{f}{u_n} (u, v) \right] @]@

12.3.3.b: Reverse Sweep
When using 5.4.3: reverse mode we are given a function @(@ F : B^n \rightarrow B^m @)@, a matrix of Taylor coefficients @(@ x \in B^{n \times p} @)@, and a weight vector @(@ w \in B^m @)@. We define the functions @(@ X : B \times B^{n \times p} \rightarrow B^n @)@, @(@ W : B \times B^{n \times p} \rightarrow B @)@, and @(@ W_j : B^{n \times p} \rightarrow B @)@ by @[@ \begin{array}{rcl} X(t , x) & = & x^{(0)} + x^{(1)} t + \cdots + x^{(p-1)} t^{p-1} \\ W(t, x) & = & w_0 F_0 [X(t, x)] + \cdots + w_{m-1} F_{m-1} [X(t, x)] \\ W_j (x) & = & \frac{1}{j!} \Dpow{j}{t} W(0, x) \end{array} @]@ where @(@ x^{(j)} @)@ is the j-th column of @(@ x \in B^{n \times p} @)@. The theorem below implies that @[@ \D{ W_j }{ x^{(i)} } (x) = \D{ W_{j-i} }{ x^{(0)} } (x) @]@ A 5.4.3: general reverse sweep calculates the values @[@ \D{ W_{p-1} }{ x^{(i)} } (x) \hspace{1cm} (i = 0 , \ldots , p-1) @]@ But the return values for a reverse sweep are specified in terms of the more useful values @[@ \D{ W_j }{ x^{(0)} } (x) \hspace{1cm} (j = 0 , \ldots , p-1) @]@
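The identity can be checked numerically. The following minimal sketch (the choices @(@ F(x) = x^3 @)@, @(@ x^{(0)} = 3 @)@, @(@ x^{(1)} = 1 @)@, @(@ w = 1 @)@, and @(@ p = 2 @)@ are for illustration only) runs forward mode for orders zero and one, then a @(@ p = 2 @)@ reverse sweep; component @(@ k @)@ of the result is @(@ \D{ W_1 }{ x^{(k)} } (x) @)@, and the @(@ k = 1 @)@ component equals @(@ \D{ W_0 }{ x^{(0)} } (x) = F^{(1)} ( x^{(0)} ) @)@ as the identity asserts:
     # include <cppad/cppad.hpp>
     # include <cassert>
     # include <cmath>
     # include <vector>
     int main(void)
     {    using CppAD::AD;
          std::vector< AD<double> > ax(1), ay(1);
          ax[0] = 3.0;
          CppAD::Independent(ax);
          ay[0] = ax[0] * ax[0] * ax[0];      // F(x) = x^3
          CppAD::ADFun<double> f(ax, ay);
          std::vector<double> x0(1), x1(1), w(1), dw(2);
          x0[0] = 3.0;  f.Forward(0, x0);     // X(t) = 3 + 1 * t
          x1[0] = 1.0;  f.Forward(1, x1);
          w[0]  = 1.0;
          dw    = f.Reverse(2, w);            // dw[k] = partial of W_1 w.r.t. x^{(k)}
          // W_1 = F'(x0) * x1 = 3 * x0 * x0 * x1
          assert( std::fabs( dw[0] - 6.0 * 3.0 * 1.0 ) < 1e-10 ); // 6 * x0 * x1
          assert( std::fabs( dw[1] - 3.0 * 3.0 * 3.0 ) < 1e-10 ); // F'(x0) = 3 * x0^2
          return 0;
     }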

12.3.3.c: Theorem
Suppose that @(@ F : B^n \rightarrow B^m @)@ is a @(@ p @)@ times continuously differentiable function. Define the functions @(@ Z : B \times B^{n \times p} \rightarrow B^n @)@, @(@ Y : B \times B^{n \times p }\rightarrow B^m @)@, and @(@ y^{(j)} : B^{n \times p }\rightarrow B^m @)@ by @[@ \begin{array}{rcl} Z(t, x) & = & x^{(0)} + x^{(1)} t + \cdots + x^{(p-1)} t^{p-1} \\ Y(t, x) & = & F [ Z(t, x) ] \\ y^{(j)} (x) & = & \frac{1}{j !} \Dpow{j}{t} Y(0, x) \end{array} @]@ where @(@ x^{(j)} @)@ denotes the j-th column of @(@ x \in B^{n \times p} @)@. It follows that for all @(@ i, j @)@ such that @(@ i \leq j < p @)@, @[@ \begin{array}{rcl} \D{ y^{(j)} }{ x^{(i)} } (x) & = & \D{ y^{(j-i)} }{ x^{(0)} } (x) \end{array} @]@

12.3.3.d: Proof
It follows from the definitions that @[@ \begin{array}{rclr} \D{ y^{(j)} }{ x^{(i)} } (x) & = & \frac{1}{j ! } \D{ }{ x^{(i)} } \left[ \Dpow{j}{t} (F \circ Z) (t, x) \right]_{t=0} \\ & = & \frac{1}{j ! } \left[ \Dpow{j}{t} \D{ }{ x^{(i)} } (F \circ Z) (t, x) \right]_{t=0} \\ & = & \frac{1}{j ! } \left\{ \Dpow{j}{t} \left[ t^i ( F^{(1)} \circ Z ) (t, x) \right] \right\}_{t=0} \end{array} @]@ For @(@ k > i @)@, the k-th partial of @(@ t^i @)@ with respect to @(@ t @)@ is zero. Thus, the j-th partial of the product above with respect to @(@ t @)@ is given by @[@ \begin{array}{rcl} \Dpow{j}{t} \left[ t^i ( F^{(1)} \circ Z ) (t, x) \right] & = & \sum_{k=0}^i \left( \begin{array}{c} j \\ k \end{array} \right) \frac{ i ! }{ (i - k) ! } t^{i-k} \; \Dpow{j-k}{t} ( F^{(1)} \circ Z ) (t, x) \\ \left\{ \Dpow{j}{t} \left[ t^i ( F^{(1)} \circ Z ) (t, x) \right] \right\}_{t=0} & = & \left( \begin{array}{c} j \\ i \end{array} \right) i ! \Dpow{j-i}{t} ( F^{(1)} \circ Z ) (t, x) \\ & = & \frac{ j ! }{ (j - i) ! } \Dpow{j-i}{t} ( F^{(1)} \circ Z ) (t, x) \\ \D{ y^{(j)} }{ x^{(i)} } (x) & = & \frac{ 1 }{ (j - i) ! } \Dpow{j-i}{t} ( F^{(1)} \circ Z ) (t, x) \end{array} @]@ Applying this formula to the case where @(@ j @)@ is replaced by @(@ j - i @)@ and @(@ i @)@ is replaced by zero, we obtain @[@ \D{ y^{(j-i)} }{ x^{(0)} } (x) = \frac{ 1 }{ (j - i) ! } \Dpow{j-i}{t} ( F^{(1)} \circ Z ) (t, x) = \D{ y^{(j)} }{ x^{(i)} } (x) @]@ which completes the proof.
Input File: omh/appendix/theory/reverse_identity.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.4: Glossary

12.4.a: AD Function
Given an 5: ADFun object f there is a corresponding AD of Base 12.4.g.b: operation sequence . This operation sequence defines a function @(@ F : B^n \rightarrow B^m @)@ where B is the space corresponding to objects of type Base , n is the size of the 5.1.5.d: domain space, and m is the size of the 5.1.5.e: range space. We refer to @(@ F @)@ as the AD function corresponding to the operation sequence stored in the object f . (See the 5.9.l: FunCheck discussion for possible differences between @(@ F(x) @)@ and the algorithm that defined the operation sequence.)

12.4.b: AD of Base
An object is called an AD of Base object if its type is either AD<Base> (see the default and copy 4.1: constructors ) or VecAD<Base>::reference (see 4.6: VecAD ) for some Base type.

12.4.c: AD Type Above Base
If Base is a type, an AD type above Base is the following sequence of types:
     AD<Base> , AD< AD<Base> > , AD< AD< AD<Base> > > , ...

12.4.d: Base Function
A function @(@ f : B \rightarrow B @)@ is referred to as a Base function, if Base is a C++ type that represents elements of the domain and range space of f ; i.e., elements of @(@ B @)@.

12.4.e: Base Type
If x is an AD<Base> object, Base is referred to as the base type for x .

12.4.f: Elementary Vector
The j-th elementary vector @(@ e^j \in B^m @)@ is defined by @[@ e_i^j = \left\{ \begin{array}{ll} 1 & {\rm if} \; i = j \\ 0 & {\rm otherwise} \end{array} \right. @]@

12.4.g: Operation

12.4.g.a: Atomic
An atomic Type operation is an operation that has a Type result and is not made up of other more basic operations.

12.4.g.b: Sequence
A sequence of atomic Type operations is called a Type operation sequence. A sequence of atomic 12.4.b: AD of Base operations is referred to as an AD of Base operation sequence. The abbreviated notation AD operation sequence is often used when it is not necessary to specify the base type.

12.4.g.c: Dependent
Suppose that x and y are Type objects and the result of
     x < y
has type bool (where Type is not the same as bool). If one executes the following code
     if( x < y )
          y = cos(x);
     else y = sin(x);
the choice above depends on the value of x and y and the two choices result in a different Type operation sequence. In this case, we say that the Type operation sequence depends on x and y .

12.4.g.d: Independent
Suppose that i and n are size_t objects, and x[i] , y are Type objects, where Type is different from size_t. The Type sequence of operations corresponding to
     y = Type(0);
     for(i = 0; i < n; i++)
          y += x[i];
does not depend on the value of x or y . In this case, we say that the Type operation sequence is independent of y and the elements of x .

12.4.h: Parameter
All Base objects are parameters. An AD<Base> object u is currently a parameter if its value does not depend on the value of an 5.1.1: Independent variable vector for an 12.4.k.a: active tape . If u is a parameter, the function 4.5.4: Parameter(u) returns true and 4.5.4: Variable(u) returns false.
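For example (a minimal sketch; the values are arbitrary):
     # include <cppad/cppad.hpp>
     # include <cassert>
     # include <vector>
     int main(void)
     {    using CppAD::AD;
          std::vector< AD<double> > ax(1);
          ax[0] = 1.0;
          AD<double> u = 2.0;            // no active tape, u is a parameter
          CppAD::Independent(ax);        // a tape becomes active
          assert( Parameter(u) && ! Variable(u) );
          u = ax[0] + 1.0;               // u now depends on an independent variable
          assert( Variable(u) && ! Parameter(u) );
          AD<double>::abort_recording(); // discard the tape
          return 0;
     }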

12.4.i: Row-major Representation
A 8.9: SimpleVector v is a row-major representation of a matrix @(@ M \in \B{R}^{m \times n} @)@ if v.size() == m * n and for @(@ i = 0 , \ldots , m-1 @)@, @(@ j = 0 , \ldots , n-1 @)@ @[@ M_{i,j} = v[ i \times n + j ] @]@
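For example, with @(@ m = 2 @)@ and @(@ n = 3 @)@, the matrix @[@ M = \left( \begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 6 \end{array} \right) @]@ has the row-major representation @(@ v = (1, 2, 3, 4, 5, 6) @)@; e.g., @(@ M_{1,0} = v[ 1 \times 3 + 0 ] = 4 @)@.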

12.4.j: Sparsity Pattern
Suppose that @(@ A \in B^{m \times n} @)@ is a sparse matrix. CppAD has several ways to specify the elements of @(@ A @)@ that are possible non-zero.

12.4.j.a: Row and Column Index Vectors
A pair of non-negative integer vectors @(@ r @)@, @(@ c @)@ are a sparsity pattern for @(@ A @)@ if for every non-zero element @(@ A_{i,j} @)@, there is a @(@ k @)@ such that @(@ i = r_k @)@ and @(@ j = c_k @)@. Furthermore, for every @(@ \ell \neq k @)@, either @(@ r_\ell \neq r_k @)@ or @(@ c_\ell \neq c_k @)@.

12.4.j.b: Boolean Vector
A boolean vector @(@ b @)@, of length @(@ m n @)@, is a sparsity pattern for @(@ A @)@ if for every non-zero element @(@ A_{i,j} @)@, @(@ b_{i n + j} @)@ is true.

12.4.j.c: Vector of Sets
A vector of sets @(@ s @)@ of non-negative integers, of length @(@ m @)@, is a sparsity pattern for @(@ A @)@ if for every non-zero element @(@ A_{i,j} @)@, @(@ j \in s_i @)@.
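For example, consider the @(@ 2 \times 3 @)@ matrix @(@ A @)@ whose only possibly non-zero elements are @(@ A_{0,0} @)@, @(@ A_{0,2} @)@, and @(@ A_{1,1} @)@. The three representations above are @(@ r = (0, 0, 1) @)@, @(@ c = (0, 2, 1) @)@ for the index vectors, @(@ b = ( \R{true}, \R{false}, \R{true}, \R{false}, \R{true}, \R{false} ) @)@ for the boolean vector, and @(@ s_0 = \{ 0, 2 \} @)@, @(@ s_1 = \{ 1 \} @)@ for the vector of sets.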

12.4.k: Tape

12.4.k.a: Active
A new tape is created and becomes active after each call of the form (see 5.1.1: Independent )
     Independent(x)
All operations that depend on the elements of x are recorded on this active tape.

12.4.k.b: Inactive
The 12.4.g.b: operation sequence stored in a tape must be transferred to a function object using the syntax (see 5.1.2: ADFun<Base> f(x, y) )
     ADFun<Base> f(x, y)
or using the syntax (see 5.1.3: f.Dependent(x, y) )
     f.Dependent(x, y)
After such a transfer, the tape becomes inactive.

12.4.k.c: Independent Variable
While the tape is active, we refer to the elements of x as the independent variables for the tape. When the tape becomes inactive, the corresponding objects become 12.4.h: parameters .

12.4.k.d: Dependent Variables
While the tape is active, we use the term dependent variables for the tape for any objects whose value depends on the independent variables for the tape. When the tape becomes inactive, the corresponding objects become 12.4.h: parameters .

12.4.l: Taylor Coefficient
Suppose @(@ X : B \rightarrow B^n @)@ is a @(@ p @)@ times continuously differentiable function in some neighborhood of zero. For @(@ k = 0 , \ldots , p @)@, we use the column vector @(@ x^{(k)} \in B^n @)@ for the k-th order Taylor coefficient corresponding to @(@ X @)@ which is defined by @[@ x^{(k)} = \frac{1}{k !} \Dpow{k}{t} X(0) @]@ It follows that @[@ X(t) = x^{(0)} + x^{(1)} t + \cdots + x^{(p)} t^p + R(t) @]@ where the remainder @(@ R(t) @)@ divided by @(@ t^p @)@ converges to zero as @(@ t @)@ goes to zero.
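For example, if @(@ n = 1 @)@ and @(@ X(t) = \exp (t) @)@, then @(@ x^{(k)} = 1 / k ! @)@ and the expansion above reduces to the usual Taylor polynomial for the exponential function.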

12.4.m: Variable
An AD<Base> object u is a variable if its value depends on an independent variable vector for a currently 12.4.k.a: active tape . If u is a variable, 4.5.4: Variable(u) returns true and 4.5.4: Parameter(u) returns false. For example, directly after the code sequence
     Independent(x);
     AD<double> u = x[0];
the AD<double> object u is currently a variable. Directly after the code sequence
     Independent(x);
     AD<double> u = x[0];
     u = 5;
u is currently a 12.4.h: parameter (not a variable).

Note that we often drop the word currently and just refer to an AD<Base> object as a variable or parameter.
Input File: omh/appendix/glossary.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.5: Bibliography

12.5.a: Abramowitz and Stegun
Handbook of Mathematical Functions, Dover, New York.

12.5.b: The C++ Programming Language
Bjarne Stroustrup, The C++ Programming Language, Special ed., AT&T, 2000

12.5.c: Evaluating Derivatives
Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, Andreas Griewank, SIAM, Philadelphia, 2000

12.5.d: Numerical Recipes
Numerical Recipes in Fortran: The Art of Scientific Computing, Second Edition, William H. Press, William T. Vetterling, Saul A. Teukolsky, Brian P. Flannery, Cambridge University Press, 1992

12.5.e: Shampine, L.F.
Implementation of Rosenbrock Methods, ACM Transactions on Mathematical Software, Vol. 8, No. 2, June 1982.
Input File: omh/appendix/bib.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.6: The CppAD Wish List

12.6.a: Multi-Threading
The TMB package has a special version of the 4.4.7.1: checkpoint class that enables one checkpoint function to be used by multiple OpenMP threads. Perhaps it would be useful to extend the CppAD multi-threading environment to allow for this. In addition, perhaps a multi-threading version of an ADFun object would be useful.

12.6.b: Atomic

12.6.b.a: Subgraph
The 5.5.11: subgraph_sparsity calculation treats each atomic function call as if all of its outputs depend on all of its inputs; see 5.5.11.d: atomic function . These sparsity patterns could be made more efficient (could have fewer possibly non-zero entries) by using the sparsity patterns for the atomic functions.

12.6.b.b: New API
A new API for atomic functions could be created that uses 8.27: sparse_rc for sparsity patterns and an interface like
     afun.jac_sparsity(select_domain, select_range, pattern_out)
     afun.hes_sparsity(select_domain, select_range, pattern_out)
see 5.5.11: subgraph_sparsity . This would be simpler for the user.

12.6.b.c: Sparsity
Add an 4.4.7.2.2: atomic_option that checks if the sparsity patterns calculated by user atomic functions have elements for arguments that are known to be parameters and could be more efficient. For example, the user's version of for_sparse_jac could check 4.4.7.2.6.d.d: x to see which components are variables; i.e., the components for which 8.11.d: isnan(x[j]) is true for a particular call. Note that 4.4.7.2.8.d.a: vx should be removed, because the method above can be used to determine this information.

12.6.b.d: Element-wise Operations
Add user atomic functions for element-wise addition, subtraction, multiplication, and division, where the operands are 8.9: simple vectors with elements of type AD<Base> .

12.6.c: check_finite
  1. Sometimes one only gets an infinite value during zero order forward and nan when computing the corresponding derivatives. Change 5.10: check_for_nan to check_finite (not infinite or nan) so that error detection happens during zero order forward instead of later.
  2. In addition, the current 5.10: check_for_nan writes the corresponding zero order values to a temporary file. It would be nice if the check_finite routine made writing the zero order values optional.


12.6.d: test_boolofvoid
For general purpose use, the 8.6: test_boolofvoid should be usable without including a memory check at the end.

12.6.e: Eigen
Use a wrapper class for 10.5.g: eigen vectors so that the size member function returns a size_t instead of an int. This would allow 10.5: TESTVECTOR to be a true template class; i.e., to use the syntax
     TESTVECTOR<Scalar>

12.6.f: Example
Split the 10.4: example list into separate groups by the corresponding example subdirectory.

12.6.g: Optimization

12.6.g.a: Taping
Perhaps some of the optimization done while taping forward mode should be delayed to the optimization step.

12.6.g.b: Special Operators
Add special operators that can be implemented more efficiently, e.g.,
     square(x) = x * x
and have the optimizer recognize when they should be used. (They could also be in the user API, but it would not be expected that the user would use them.)

12.6.g.c: Memory
The 5.7: optimize command seems to use a lot of memory when the tape is large. We should create a test case that demonstrates this and then work on reducing the amount of memory needed by this operation.

12.6.h: checkpoint

12.6.h.a: Retape
Perhaps there should be a version of the 4.4.7.1: checkpoint class that uses a tapeless AD package to compute the derivative values. This would allow for algorithms where the operation sequence depends on the independent variable values. There is a question as to how sparsity patterns would be determined in this case. Perhaps they would be passed into the constructor. If the pattern was known to be constant, the user could compute it using CppAD. Otherwise, the user could input a conservative estimate of the pattern that would be correct.

12.6.h.b: Testing
There should be some examples and tests for both speed and memory use that demonstrate that checkpointing is useful.

12.6.i: Compilation Speed
Create a library corresponding to AD<double> so that one does not need to re-compile all the header files every time.

12.6.j: Base Requirements
Change the 4.7: Base requirements to use template specialization instead of functions so that there is a default value for each function. The default would result in a 8.1.2.e: known assert when the operation is used and not defined by the base class. An example of this type of template specialization can be found in the implementation of 8.25: to_string .

12.6.k: Adolc
Create a documentation page that shows how to convert Adolc commands to CppAD commands.

12.6.l: Forward Mode Recomputation
If the results of 5.3.4: forward_order have already been computed and are still stored in the 5: ADFun object (see 5.3.6: size_order ), then they do not need to be recomputed and the results can just be returned.

12.6.m: Iterator Interface
All of the CppAD simple vector interfaces should also have an iterator version for the following reasons:
  1. It would not be necessary to copy information to simple vectors when it was originally stored in a different type of container.
  2. It would not be necessary to reallocate memory for a result that is repeatedly calculated (because an iterator for the result container would be passed in).


12.6.n: Operation Sequence
It is possible to detect if the AD of Base 12.4.g.b: operation sequence does not depend on any of the 12.4.k.c: independent variable values. This could be returned as an extra 5.1.5: seq_property .

12.6.o: Software Guidelines
The following is a list of some software guidelines taken from boost (http://www.boost.org/development/requirements.html#Guidelines) . These guidelines are not followed by the current CppAD source code, but perhaps they should be:
  1. Names (except as noted below) should be all lowercase, with words separated by underscores. For example, acronyms should be treated as ordinary names (xml_parser instead of XML_parser).
  2. Template parameter names should begin with an uppercase letter.
  3. Use spaces rather than tabs. Currently, CppAD uses tab stops at column multiples of 5. Five columns were chosen to avoid high levels of indenting and to allow for
     
         if( expression )
              statement
         else statement
    
    with a tab after the else. Automatic conversion to actual spaces should be easy.


12.6.p: Tracing
Add tracing the operation sequence to the user API and documentation. Tracing the operation sequence is currently done by changing the CppAD source code. Use the command
 
     grep '^# *define *CPPAD_.*_TRACE' cppad/local/*.hpp
to find all the possible tracing flags.

12.6.q: atan2
The 4.4.3.1: atan2 function could be made faster by adding a special operator for it.

12.6.r: BenderQuad
See the 12.10.1.c: problem with the current BenderQuad specifications.
Input File: omh/appendix/wish_list.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.7: Changes and Additions to CppAD

12.7.a: Introduction
The sections listed below contain a list of the changes to CppAD in reverse order by date. The purpose of these sections is to assist you in learning about changes between various versions of CppAD.

12.7.b: This Year
12.7.1: whats_new_17

12.7.c: Previous Years
12.7.2: whats_new_16 , 12.7.3: whats_new_15 , 12.7.4: whats_new_14 , 12.7.5: whats_new_13 , 12.7.6: whats_new_12 , 12.7.7: whats_new_11 , 12.7.8: whats_new_10 , 12.7.9: whats_new_09 , 12.7.10: whats_new_08 , 12.7.11: whats_new_07 , 12.7.12: whats_new_06 , 12.7.13: whats_new_05 , 12.7.14: whats_new_04 , 12.7.15: whats_new_03 .
Input File: omh/appendix/whats_new/whats_new.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.7.1: Changes and Additions to CppAD During 2017

12.7.1.a: API Changes
Speed tests no longer automatically compile in release mode; see 11.b: debug_which .

12.7.1.b: 12-14
Add the 5.6.5.2: subgraph_hes2jac.cpp example which computes sparse Hessians using subgraphs and Jacobians.

12.7.1.c: 12-08
  1. A wish list item was added for a 12.6.b.b: new API for user defined 4.4.7.2: atomic functions.
  2. A 12.6.a: multi-threading wish list item was added.


12.7.1.d: 12-06
A 12.6: wish_list item to enable one to iterate through a const 5: ADFun operation sequence was completed. In addition, the 5.8.1.b: f argument to the abs_normal operation was converted to be const.

12.7.1.e: 12-05
The internal data object used to represent sparsity patterns as vectors of integers was improved; see 5.5.1.i: internal_bool in for_jac_sparsity and other 5.5.a: preferred sparsity pattern routines.

12.7.1.f: 12-04
Back out the hold_reverse_memory option.

12.7.1.g: 12-01
The hold_reverse_memory option was added.

12.7.1.h: 11-30
Edit the 2.1: download instructions.

12.7.1.i: 11-23
The ADFun 5.7: optimizer was not handling hash code collisions properly. To be specific, only the arguments that were variables were checked for a complete match. The arguments that are constants needed to also be checked. This has been fixed.

12.7.1.j: 11-20
  1. Add the 5.6.5: subgraph_jac_rev method for computing sparse Jacobians.
  2. Add the 11.1.f.f: subgraph option to the CppAD speed tests.


12.7.1.k: 11-19
Add the 5.4.4: subgraph_reverse method for computing sparse derivatives. This was inspired by the TMB (https://cran.r-project.org/web/packages/TMB/index.html) package.

12.7.1.l: 11-15
  1. Add wish list item for 5.5.11: subgraph_sparsity when 12.6.b.a: atomic functions are present.
  2. Fix 2.3: cmake_check when 2.2.5: ipopt_prefix is not present on the 2.2: cmake command line (make was trying to build some of the ipopt tests).


12.7.1.m: 11-13
  1. Add the 11.1.f.e: hes2jac option to the CppAD speed tests.
  2. Implement the 11.1.g.c: subsparsity option for the CppAD 11.1.6: sparse_hessian test.
  3. Fix detection of invalid options in the speed test programs; see the 11.1.f: global and 11.1.g: sparsity options.


12.7.1.n: 11-12
Add the 11.1.g.c: subsparsity option to the CppAD speed tests.

12.7.1.o: 11-08
Add the 5.5.11: subgraph_sparsity method for computing dependency and sparsity. This was inspired by the TMB (https://cran.r-project.org/web/packages/TMB/index.html) package.

12.7.1.p: 11-06
More information has been added to the operation sequence. To be specific, the extra amount of
     f.size_op() * sizeof(tape_addr_type)
was added to the value returned by 5.1.5.m: size_op_seq .

12.7.1.q: 11-04
The method for iterating through the tape has been changed. It now includes an extra data structure that makes it faster, but also requires slightly more memory. To be specific, the term
     f.size_op() * sizeof(tape_addr_type) * 2
was added to the value returned by 5.1.5.m: size_op_seq . In addition, some minor corrections were made to the 2.2.r: tape_addr_type requirements.

12.7.1.r: 10-23
  1. Require cmake 3.1 or greater and fix a cmake warning by always using the new CMP0054 policy.
  2. Fix a g++ 7.2.1 warning about a possibly uninitialized value in the file cppad/local/optimize/hash_code.hpp.


12.7.1.s: 09-16
An 12.6.g.c: optimization memory entry was added to the wish list and the 12.6.c: check_finite entry was modified.

12.7.1.t: 08-30
  1. If 2.2.2: colpack_prefix was not specified, one would get the following warning during the 2.2: cmake command:
         Policy CMP0046 is not set: Error on non-existent dependency in
    This has been fixed by not adding the dependency when it is not needed.
  2. There was a problem running 2.3: make check when 2.2.m: cppad_cxx_flags was not specified. This has been fixed. This was probably introduced on 12.7.1.ah: 05-29 .


12.7.1.u: 08-29
There was a problem on some systems that created an error when specializing the is_pod template function in the CppAD::local namespace. This has been fixed by testing for compatibility during the 2.2: cmake command and creating the file cppad/local/is_pod.hpp.

12.7.1.v: 08-09
Add the 12.6.d: test_boolofvoid wish list item.

12.7.1.w: 08-08
  1. The 10.2.4.1: eigen_plugin.hpp was put back in the 10.2.4: cppad_eigen.hpp definitions. This makes CppAD incompatible with older versions of eigen; e.g., eigen-3.2.9. The plugin was removed on 12.7.1.ak: 05-12 .
  2. Fix some minor typos in the documentation. To be specific: the font, in the 8.27: sparse_rc and 8.28: sparse_rcv syntax, for the text
          target = pattern
     the font, in 5.3.8: capacity_order , for the text
          xq.size() == f.Domain()
     and remove a percent sign %, in 8.22: CppAD_vector , in the text
          # include <cppad/utility/vector.hpp>

12.7.1.x: 07-25
  1. Fix warnings related to type conversions that occurred when one used -Wconversion with g++ version 6.3.1.
  2. The warnings were not fixed for complex AD types; e.g., 4.7.9.6.1: complex_poly.cpp . The 10.6: wno_conversion include file was added to deal with cases like this.


12.7.1.y: 07-03
  1. The 5.8.7: min_nso_linear abs-normal example was added.
  2. Fix a bug in 5.8.1: abs_normal_fun ; to be specific, the multiplication of a variable on the left by a parameter was not handled.


12.7.1.z: 07-01
The 5.8: abs_normal examples were converted from using quadratic programming problems to using linear programming problems.

12.7.1.aa: 06-28
The 5.8.1: abs-normal representation of non-smooth functions has been added. Examples and utilities that use this representation have also been included, see 5.8: abs_normal .

12.7.1.ab: 06-11
The user atomic functions base class 4.4.7.2: atomic_base makes more of an effort to avoid false sharing cache misses. This may improve the speed of multi-threaded applications with user atomic functions; e.g., see 7.2.9: multi_atomic.cpp .

12.7.1.ac: 06-10
  1. Add the multi-threading user atomic function example 7.2.9: multi_atomic.cpp .
  2. The example/multi_thread/test_multi directory used to have an example using the deprecated 12.8.11: old_atomic functions in a multi-threading setting (that only built with the deprecated 12.8.13: autotools ). These have been removed.


12.7.1.ad: 06-07
The multi-threading examples 7.2.8: harmonic.cpp and 7.2.10: multi_newton.cpp were re-organized. To be specific, the source code for each example was moved to one file. In addition, for each example, the documentation for each of the routines has been separated and placed next to its source code.

12.7.1.ae: 06-04
Most all the 12.8: deprecated features have been removed from the examples with the exception of those in the example/deprecated directory.

12.7.1.af: 06-03
Add the fact that the pair ( row , 11.1.6.g: col ) is lower triangular to the speed test link_sparse_hessian routine.

12.7.1.ag: 06-01
  1. There was a bug in the 5.6.3: sparse_hes routine whereby it was using the general coloring algorithm when 5.6.3.j.a: cppad.symmetric was specified. This has been fixed and improves the efficiency in this case.
  2. Some bugs were fixed in the use of 2.2.2: colpack as the coloring algorithm for sparse Jacobian and Hessian calculations. This has improved the efficiency of Colpack colorings for computing Hessians (when colpack.symmetric is used).
  3. The colpack.star coloring method for sparse Hessians has been deprecated; see 5.6.3.j.e: sparse_hes and 5.6.4.i.b: sparse_hessian . Use the colpack.symmetric method instead; see 5.6.3.j.c: sparse_hes and 5.6.3.j.d: sparse_hes .


12.7.1.ah: 05-29
  1. Add the capability to compile so that CppAD debug and release mode can be mixed; see 6.b.a: CPPAD_DEBUG_AND_RELEASE .
  2. Add the 2.2.s: cppad_debug_which flags that determines which files are compiled for debugging versus release during the CppAD testing; see 2.3: cmake_check .
  3. There was a problem linking the proper libraries for using newer versions of 2.2.6: sacado . This has been fixed.


12.7.1.ai: 05-19
Most all the examples have been moved to example directory and grouped as sub-directories; e.g., the 9: ipopt_solve examples are in the example/ipopt_solve directory.

12.7.1.aj: 05-14
  1. The file build.sh was moved to bin/autotools.sh, and `auto tools' has been changed to 12.8.13: autotools .
  2. The README file was replaced by readme.md and AUTHORS was moved to authors.
  3. The NEWS, INSTALL, and ChangeLog files are no longer necessary for the autotools build and have been removed.
  4. The file test_more/sparse_jacobian.cpp generated a warning under some gcc compiler options. This has been fixed.
  5. Specifications were added so that 8.25: to_string yields exact results for integer types and machine precision for floating point types.
  6. Some editing was done to the 12.8.13: autotools instructions.


12.7.1.ak: 05-12
  1. The 12.1: Faq has been updated.
  2. Remove includes of cppad/cppad.hpp from the cppad/speed/*.hpp files. This avoids an incompatibility between sacado and newer versions of eigen, when eigen is used for the 10.5.g: test vector .
  3. The 2.2.3: eigen package changed its requirements for defining Scalar types (somewhere between eigen-3.2.9 and eigen-3.3.3). The member variable 4.4.6.i: digits10 was added to the numeric_limits to accommodate this change.
  4. Note that this fix required adding digits10 to the user defined Base type 4.7: requirements ; see 4.7.6: base_limits .
  5. In addition, it is no longer necessary to add the typedef
         typedef Scalar value_type;
    so the file cppad/example/eigen_plugin.hpp has been removed. (This type definition was previously necessary for eigen vectors to be 8.9: simple vectors .)


12.7.1.al: 04-08
The 5.7: optimization , with a large number of 4.4.4: conditional expressions , was performing many memory allocations and deallocations. This has been reduced.

12.7.1.am: 04-02
Fix a bug in the optimization of conditional expressions; see, 5.7.d.a: no_conditional_skip .

12.7.1.an: 03-31
Fix some valgrind errors that occurred while running the CppAD test set.

12.7.1.ao: 03-29
The following valgrind error might occur when the optimizer skipped setting values that did not affect the dependent variables:
     Conditional jump or move depends on uninitialised value(s)
This was not a bug; the code has been changed to avoid this error in order to make it easier to use valgrind with CppAD.

12.7.1.ap: 03-25
  1. The 5.6.3: sparse_hes function was more efficient if there were more entries in each row of the requested 5.6.3.h: subset . This has been changed to more entries in each column, and documentation to this effect was included.
  2. The 5.7: optimize routine was using too much memory when it was optimizing conditional skip operations; see 5.7.d.a: no_conditional_skip . This has been fixed.


12.7.1.aq: 03-20
There was a mistake in 5.6.1: sparse_jac that caused the following assert to mistakenly occur:
 
sparse_jac_rev: work is non-empty and conditions have changed
Error detected by false result for
    color.size() == 0 || color.size() == n
A test that uses a previously stored work vector has been added to 5.6.1.2: sparse_jac_rev.cpp and this bug has been fixed.

12.7.1.ar: 03-13
The documentation for the Hessian in 5.5.5: rev_hes_sparsity was transposed; i.e., the sense of 5.5.5.i: transpose was reversed.

12.7.1.as: 03-11
Add sparse assignment statements; see target for 8.27.e: sparse_rc and 8.28.g: sparse_rcv .

12.7.1.at: 03-10
Add a sizing constructor to the 8.27.a: sparse_rc syntax ; i.e., a constructor that sets the number of rows, the number of columns, and the number of possibly non-zero values in the sparsity pattern.

12.7.1.au: 03-06
Fix a bug in the sparsity computation using the internal representation for 12.4.j.c: vectors of sets ; i.e., when internal_bool was false in any of the 5.5: sparsity_pattern calculations; e.g., 5.5.1.i: for_jac_sparsity .

12.7.1.av: 03-04
Fix a bug in the optimization of conditional expressions; see 5.7.d.a: no_conditional_skip .

12.7.1.aw: 02-26
  1. Fix warning during 2.2: cmake command, on cygwin (https://www.cygwin.com/) systems, about WIN32 not being defined.
  2. Add 12.6.b.d: element-wise operations to the wish list.


12.7.1.ax: 02-21
  1. Minor improvements to syntax and documentation for 8.27: sparse_rc and 8.28: sparse_rcv .
  2. Separate preferred sparsity versions in 5.5: sparsity_pattern and 5.6: sparse_derivative .


12.7.1.ay: 02-19
  1. Remove the bool_sparsity.cpp example and add the 5.5.10: rc_sparsity.cpp example.
  2. Check for duplicate entries during 8.27.m: row_major and col_major in sparse_rc.


12.7.1.az: 02-15
Fix bug when using 5.5.8: ForSparseHes with atomic functions; i.e., 4.4.7.2.8: atomic_for_sparse_hes .

12.7.1.ba: 02-13
Improve 4.4.7.2.16.1.g.d: for_sparse_jac calculation in eigen_mat_mul.hpp example. It now checks for the parameter zero and does not propagate any sparsity in this case (because the result is always zero).

12.7.1.bb: 02-11
  1. Remove the 'Under Construction' heading from the 8.27: sparse_rc and 8.28: sparse_rcv documentation; i.e., they are ready for public use (part of the CppAD API).
  2. Fix some warnings that occur when using 2.2.7.e: eigen for the CppAD test vector. (The Eigen vector size() function returns an int instead of a size_t.)
  3. Fix a bug in 5.6.1: sparse_jac_rev .


12.7.1.bc: 02-10
  1. The subset of deprecated features corresponding to 2.2.t: cppad_deprecated=YES have been completely removed.
  2. Fix problems with 12.8.13: autotools build (started near 02-01 while working on sparsity branch).
  3. Reorder (better organize) the 5: ADFun documentation section.


12.7.1.bd: 02-09
  1. Remove the sparsity pattern wish list item. For sparsity patterns, this was completed by 8.27: sparse_rc and the sparsity pattern routines that used it; e.g., 5.5.1: for_jac_sparsity . For sparse matrices, it was completed by 8.28: sparse_rcv and the sparse matrix routines that use it; e.g., 5.6.1: sparse_jac .
  2. Add the Deprecated and 12.6.f: example items to the wish list. (The Deprecated item was partially completed and partially removed.)


12.7.1.be: 02-08
  1. Make coloring a separate argument to 5.6.1.l: sparse_jac and 5.6.3.j: sparse_hes .
  2. Add the 5.6.1.h: group_max argument to the sparse_jac_for function.


12.7.1.bf: 02-05
  1. Add the 5.6.1: sparse_jac_for routine which uses 8.27: sparse_rc sparsity patterns and 8.28: sparse_rcv matrix subsets.
  2. Order for 8.27: sparse_rc row-major and column-major was switched. This has been fixed.


12.7.1.bg: 02-03
Add the 5.5.3: rev_jac_sparsity , 5.5.5: rev_hes_sparsity , and 5.5.7: for_hes_sparsity interfaces to sparsity calculations. These use 8.27: sparse_rc sparsity patterns.

12.7.1.bh: 02-02
Change 5.5.1.e.a: size_forward_bool and 5.5.1.e.b: size_forward_set so that they are a better approximation of the number of bytes (unsigned characters) being used. The exact same sparsity pattern might use different memory in two different function objects (because memory is allocated in chunks). The 5.1.2.1: fun_assign.cpp example has been changed to reflect this fact.

12.7.1.bi: 02-01
Add the 5.5.1: for_jac_sparsity interface for the sparse Jacobian calculations. This is the first use of 8.27: sparse_rc , a sparsity pattern class that uses row and column 12.4.j.a: index vectors .

12.7.1.bj: 01-30
Move the 5.5: sparsity_pattern examples from example to example/sparse subdirectory. This included the sparse 5.2: driver examples.

12.7.1.bk: 01-29
Move the 8: utility examples from example to example/utility subdirectory.

12.7.1.bl: 01-27
Add a 12.11: addon link to cppad_swig (http://www.seanet.com/~bradbell/cppad_swig) , a C++ AD object library and swig interface to Perl, Octave, and Python.

12.7.1.bm: 01-19
Convert more examples / tests to use a multiple of machine epsilon instead of 1e-10.

12.7.1.bn: 01-18
  1. Fix developer doxydoc (https://www.coin-or.org/CppAD/Doc/doxydoc/html/) documentation so that it works with newer version of doxygen.
  2. Fix a Visual C++ 2015 compilation problem in friend declarations in the file cppad/local/ad_tape.hpp.


12.7.1.bo: 01-17
Change computed assignment to 4.4.1.4: compound assignment .
Input File: omh/appendix/whats_new/whats_new_17.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.7.2: Changes and Additions to CppAD During 2016

12.7.2.a: Introduction
The sections listed below contain a list of the changes to CppAD in reverse order by date. The purpose of these sections is to assist you in learning about changes between various versions of CppAD.

12.7.2.b: 12-23
Added a way for the user to determine what test options are available; see 2.2.c: make check .

12.7.2.c: 12-20
Change the optimize 5.7.e: examples to use 8.2: NearEqual for floating point tests (instead of exact equality). There were some other exact equality floating point tests that were failing on a mingw system. These have also been fixed.

12.7.2.d: 12-18
Add the 5.7.d.c: no_print_for_op option to the optimize routine.

12.7.2.e: 12-13
  1. Fix a bug in 5.5.8: ForSparseHes . To be more specific, there was a bug in handling the cumulative summation operator in this routine. This could only come up when using an 5.7: optimized 5.5.8.c: f .
  2. Add the 5.7.6: nest_conditional.cpp example.


12.7.2.f: 12-11
Improve the 5.7: optimize documentation. This includes making examples that demonstrate specific aspects of the optimization; see 5.7.1: forward_active.cpp , 5.7.2: reverse_active.cpp , 5.7.3: compare_op.cpp , 5.7.5: conditional_skip.cpp , 5.7.7: cumulative_sum.cpp .

12.7.2.g: 12-09
The 5.7.d: options argument was added to the optimize routine.

12.7.2.h: 11-18
Move classes and functions that are part of the user API from the cppad/local directory to the cppad/core directory. The remaining symbols, in the cppad/local directory, are now in the CppAD::local namespace. Note that a class in the CppAD namespace may have a member function that is not part of the user API.

12.7.2.i: 11-14
Increase the speed of the sparse_pack class. This improves the speed for 12.4.j.b: vector of boolean sparsity pattern calculations.

12.7.2.j: 11-13
Merged in the sparse branch which has const_iterator, instead of next_element for the sparse_list and sparse_pack classes. These classes are not part of the CppAD API and hence their specifications can change (as in this case). They can be used to get more efficient representations of 12.4.j: sparsity patterns .

12.7.2.k: 10-27
The 4.4.7.1.l: optimize option was added to the checkpoint functions.

12.7.2.l: 10-12
  1. Change 8.5.1: elapsed_seconds to use std::chrono::steady_clock instead of std::chrono::high_resolution_clock.
  2. The test for C++11 features was failing on a Mac system because the elapsed time was returning as zero (between two events). This test has been made more robust by adding a one millisecond sleep between the two clock accesses.


12.7.2.m: 09-29
The multiple directions version of 5.3.5: forward was missing the 4.4.2.18: erf function in the case where C++ 2011 was supported; see issue 16 (https://github.com/coin-or/CppAD/issues/16) . This has been fixed.

12.7.2.n: 09-27
Change the implementation of 4.4.7.2.18.2: atomic_eigen_cholesky.hpp so that the computation of @(@ M_k @)@ exactly agrees with the corresponding 4.4.7.2.18.1: theory .

12.7.2.o: 09-26
  1. A possible bug in the 5.7: optimize command was fixed. To be specific, a warning of the form
         warning: this 'if' clause does not guard... [-Wmisleading-indentation]
    generated by the gcc-6.2.1 compiler, was fixed and this may have fixed a bug.
  2. There was a problem with the 11.7: sacado speed tests where the symbol HAS_C99_TR1_CMATH was being defined twice. This has been fixed by leaving it up to the sacado install to determine if this symbol should be defined.


12.7.2.p: 09-16
Fix a problem using the 11.1.g.d: colpack option to the speed_cppad program. (There was a problem whereby the speed_cppad program did not properly detect when colpack was available.)

12.7.2.q: 09-13
Test third order and fix bug in 4.4.7.2.18.2: atomic_eigen_cholesky.hpp for orders greater than or equal to three.

12.7.2.r: 08-30
Add the 4.4.7.2.18: atomic_eigen_cholesky.cpp example.

12.7.2.s: 08-25
  1. Fix some missing include files in optimize.hpp and set_union.hpp (when compiling with MS Visual Studio 2015).
  2. Fix a warning in atanh.hpp (when compiling with MS Visual Studio 14).
  3. Fix a typo in the 4.4.7.2.17.1.c.c: Reverse section of the eigen_mat_inv.hpp example.


12.7.2.t: 07-17
Add documentation for only needing to compute a 5.6.4.f.c: column subset of the sparsity pattern when computing a subset of a sparse Hessian. In addition, improve the corresponding example 5.6.4.3: sparse_sub_hes.cpp .

12.7.2.u: 07-14
Correct title in 5.5.8: ForSparseHes (change Reverse to Forward).

12.7.2.v: 06-30
Change the 4.4.7.2.19: atomic_mat_mul.cpp example so that one atomic object works for matrices of any size.

12.7.2.w: 06-29
Change the 4.4.7.2: atomic_base examples so they no longer use the deprecated 12.8.c: atomic function interfaces to for_sparse_jac, rev_sparse_jac, for_sparse_hes, and rev_sparse_hes.

12.7.2.x: 06-27
  1. Improve the 4.4.7.2.16.1: atomic_eigen_mat_mul.hpp and 4.4.7.2.17.1: atomic_eigen_mat_inv.hpp examples. Most importantly, one atomic object now works for matrices of any size.
  2. Add the vector x , that contains the parameters in an atomic function call, to the following user atomic functions: 4.4.7.2.7.d.d: for_sparse_jac , 4.4.7.2.7.d.d: rev_sparse_jac , 4.4.7.2.7.d.d: for_sparse_hes , 4.4.7.2.7.d.d: rev_sparse_hes . This enables one to pass parameter information to these functions; e.g., the dimensions of matrices that the function operates on.


12.7.2.y: 06-25
Add more entries to the optimization 12.6.g: wish_list .

12.7.2.z: 06-10
Add a 12.6.c: check_finite wish list item.

12.7.2.aa: 05-05
  1. Add documentation for 4.3.6.h: redirecting output for the PrintFor function.
  2. Change the distributed version to build examples as debug instead of release versions. (This was changed to the release version while checking for compiler warnings; see 04-17 below.)


12.7.2.ab: 04-17
Fix some compiler warnings that occurred when compiling the 10: examples with 12.1.j.a: NDEBUG defined.

12.7.2.ac: 03-27
  1. Fix a bug in the calculation of the 4.4.7.2.17.1: atomic_eigen_mat_inv.hpp 4.4.7.2.17.1.f.c: reverse example.
  2. Use a very simple method (that over estimates variables) for calculating 4.4.7.2.4.g: vy in the 4.4.7.2.17.1: atomic_eigen_mat_inv.hpp 4.4.7.2.17.1.f.b: forward example.


12.7.2.ad: 03-26
  1. Implement and test 4.4.7.2.17.1.f.c: reverse for the 4.4.7.2.17: atomic_eigen_mat_inv.cpp example.
  2. Fix a bug in the calculation of 4.4.7.2.4.g: vy in the 4.4.7.2.17.1: atomic_eigen_mat_inv.hpp 4.4.7.2.17.1.f.b: forward example.


12.7.2.ae: 03-25
  1. Start construction of the 4.4.7.2.17: atomic_eigen_mat_inv.cpp example; currently only 4.4.7.2.17.1.f.b: forward is implemented and tested.
  2. More improvements to 4.4.7.2.16: atomic_eigen_mat_mul.cpp example.


12.7.2.af: 03-24
  1. Fix build of example/atomic.cpp when 2.2.3: eigen_prefix is not available (bug introduced when 4.4.7.2.16: atomic_eigen_mat_mul.cpp was added).
  2. Extend 4.4.7.2.16: atomic_eigen_mat_mul.cpp example to include 4.4.7.2.16.1.g.d: for_sparse_jac , 4.4.7.2.16.1.g.e: rev_sparse_jac , 4.4.7.2.16.1.g.f: for_sparse_hes , 4.4.7.2.16.1.g.g: rev_sparse_hes .
  3. Fix a bug in the 5.5.8: ForSparseHes routine.
  4. Edit 4.4.7.2.9: atomic_rev_sparse_hes documentation.


12.7.2.ag: 03-23
  1. Fix bug in autotools file example/atomic/makefile.am (introduced on 03-22).
  2. Improve the 4.4.7.2.16: atomic_eigen_mat_mul.cpp example and extend it to include reverse mode.


12.7.2.ah: 03-22
  1. Start construction of the 4.4.7.2.16: atomic_eigen_mat_mul.cpp example.
  2. change atomic_ode.cpp to 4.4.7.1.3: checkpoint_ode.cpp and atomic_extended_ode.cpp to 4.4.7.1.4: checkpoint_extended_ode.cpp .


12.7.2.ai: 03-21
Change the 4.4.7.2.19.1: atomic_mat_mul.hpp class name from mat_mul to atomic_mat_mul. This enables use of the name mat_mul in the 4.4.7.2.19: atomic_mat_mul.cpp example / test.

12.7.2.aj: 03-20
  1. Include the sub-directory name to the include guards in *.hpp files. For example,
     
         # ifndef CPPAD_UTILITY_VECTOR_HPP
         # define CPPAD_UTILITY_VECTOR_HPP
    
    appears in the file cppad/utility/vector.hpp. This makes it easier to avoid conflicts when choosing 12.11: addon names.
  2. Add the 8.26: set_union utility and use it to simplify the 4.4.7: atomic examples that use 12.4.j.c: vector of sets sparsity patterns.


12.7.2.ak: 03-19
  1. Move 4.4.7.2.19: atomic_mat_mul.cpp to atomic_mat_mul_xam.cpp (moved back on 12.7.2.ai: 03-21 ).
  2. Move atomic_matrix_mul.hpp to 4.4.7.2.19.1: atomic_mat_mul.hpp .


12.7.2.al: 03-17
Add the atomic_ode.cpp and atomic_extended_ode.cpp examples.

12.7.2.am: 03-12
  1. Move the example reverse_any.cpp to 5.4.3.2: reverse_checkpoint.cpp .
  2. Add the 4.4.7.1.2: atomic_mul_level.cpp example.


12.7.2.an: 03-05
The following atomic function examples were added. Each of these examples is for a specific atomic operation. In addition, the domain and range dimensions for these examples are not one and not equal to each other: 4.4.7.2.4.1: atomic_forward.cpp , 4.4.7.2.5.1: atomic_reverse.cpp , 4.4.7.2.6.1: atomic_for_sparse_jac.cpp , 4.4.7.2.7.1: atomic_rev_sparse_jac.cpp , 4.4.7.2.8.1: atomic_for_sparse_hes.cpp , 4.4.7.2.9.1: atomic_rev_sparse_hes.cpp .

12.7.2.ao: 03-01
  1. Improve documentation of implementation requirements for the atomic 4.4.7.2.7.d: rev_sparse_jac .
  2. Make some corrections to the 4.4.7.2.8: atomic_for_sparse_hes documentation, and fix a bug in how CppAD used these functions.


12.7.2.ap: 02-29
  1. Merged sparse into master branch. This makes the 5.5.8: ForSparseHes routine available for use.
  2. Changed the 11.1.f: global options in the speed test main program to use one global variable with prototype
    
         extern std::map<std::string, bool> global_option;
    

12.7.2.aq: 02-28
Fix a mistake in the old atomic example/sparsity/sparsity.cpp example. This example has since been changed to 4.4.7.2.14: atomic_set_sparsity.cpp .

12.7.2.ar: 02-27
The --with-sparse_set and --with-sparse_sizet options were removed from the 12.8.13: autotools install procedure.

12.7.2.as: 02-26
The condition that the operation sequence in f is 12.4.g.d: independent of the independent variables was added to the statement about the validity of the sparsity patterns; see x in 5.5.2.d: ForSparseJac , 5.5.4.d: RevSparseJac , and 5.5.6.d: RevSparseHes .

12.7.2.at: 02-25
The 2.2: cmake command line argument cppad_sparse_list has been removed (because it is so much better than the other option).

12.7.2.au: 02-23
A new version of the cppad_sparse_list class (not part of user API) uses reference counters to reduce the number of copies of sets that are equal. This improved the speed of sparsity pattern computations that use the 12.4.j.c: vector of sets representation. For example, the results for the 11.5.6: cppad_sparse_hessian.cpp test compare as follows:
 
     sparse_hessian_size     = [  100,    400,   900,  1600, 2500 ]
     sparse_hessian_rate_old = [ 1480, 265.21, 93.33, 41.93, 0.86 ]
     sparse_hessian_rate_new = [ 1328, 241.61, 92.99, 40.51, 3.80 ]
Note that the improvement is only for large problems. In fact, for large problems, preliminary testing indicates that the new vector of sets representation performs better than the 12.4.j.b: vector of boolean representation.

12.7.2.av: 01-21
Fix a valgrind warning about use of uninitialized memory in the test test_more/checkpoint.cpp (the problem was in the test).

12.7.2.aw: 01-20
  1. Fix a valgrind warning about use of uninitialized memory when using the 4.7.9.3: adouble base type. This required an optional 4.7.8: base_hash function and the special 4.7.9.3.r: adouble hash_code implementation.
  2. The adouble 8.25: to_string functions required a special implementation; see 4.7.9.3.q: adouble to_string .
  3. Add the 4.7.9.1.t: to_string and 4.7.9.1.u: hash_code examples to the base_alloc.hpp example.


12.7.2.ax: 01-18
  1. Fix ambiguity between CppAD::sin and std::sin, and other standard math functions, when using
     
         using namespace std;
         using namespace CppAD;
    
    This is OK for simple programs, but not generally recommended. See the double version of the base class definitions for 4.7.9.5.h: Unary Standard Math for more details; an illustrative sketch follows this list.
  2. Change the Eigen array example 10.2.4.2: eigen_array.cpp to use the member function version of the sin function (as per Eigen's array class documentation).
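A sketch of the ambiguity and one way to avoid it (the values are assumptions; only the using declarations above are from the original):

     # include <cppad/cppad.hpp>
     # include <cmath>

     using namespace std;
     using namespace CppAD;

     int main(void)
     {   AD<double> ax = 0.5;
         AD<double> ay = sin(ax);   // unambiguous: only CppAD::sin matches AD<double>
         double x = 0.5;
         double y = std::sin(x);    // qualifying avoids any double ambiguity
         return std::fabs( Value(ay) - y ) < 1e-10 ? 0 : 1;
     }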

Input File: omh/appendix/whats_new/whats_new_16.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.7.3: CppAD Changes and Additions During 2015

12.7.3.a: Introduction
This section contains a list of the changes to CppAD during 2015 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions of CppAD.

12.7.3.b: 12-31
The 2.1: download instructions were modified to have more mention of using 2.1.g.a: git and less mention of 2.1.g.b: subversion .

12.7.3.c: 12-29
Separate 8.25: to_string from 4.3.3: ad_to_string so that it can be used without the rest of CppAD; i.e., by including
     # include <cppad/utility/to_string.hpp>
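A minimal usage sketch (the value converted is an assumption):

     # include <cppad/utility/to_string.hpp>
     # include <iostream>
     # include <string>

     int main(void)
     {   double pi = 3.141592653589793;
         std::string s = CppAD::to_string(pi);
         std::cout << s << "\n";
         return 0;
     }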

12.7.3.d: 12-28
  1. Add the 8.25: to_string utility.
  2. Add 4.7.7: base_to_string to the Base type requirements.
  3. A 12.6.j: Base requirements item was added to the wish list.
  4. The 12.6: wish_list item to reorganize the include directory has been removed. It was completed when the utilities were moved to cppad/utility; see 12.7.3.g: 11-30 .


12.7.3.e: 12-08
  1. A convention was included for addon 12.11.c: library files .
  2. Change new 8: utility specifications to allow for individual file includes; e.g., <cppad/utility/vector.hpp>.


12.7.3.f: 12-01
Fix a problem with the 12.8.13: autotools install handling of the deprecated files. This included changing the autotools --with-implicit_ctor option. This option was removed on 12.7.1.bc: 2017-02-10 .

12.7.3.g: 11-30
  1. The library section has been moved to the 8: utilities section. In addition, the corresponding source code files in cppad have been moved to cppad/utility.
  2. The individual 8: utility include files have been deprecated; see 12.8.1: include_deprecated . For example,
     
         # include <cppad/runge_45.hpp>
    
    You should use the utility include instead; i.e.,
     
         # include <cppad/utility.hpp>
    
  3. The 12.10: numeric_ad routines were moved from the library to a separate documentation section.
  4. Change cmake_install_prefix to 2.2.f: cppad_prefix and change cmake_install_postfix to 2.2.g: cppad_postfix .
  5. Change cppad_implicit_ctor_from_any_type to 2.2.t: cppad_deprecated and change its specifications to refer to all deprecated features.


12.7.3.h: 11-25
  1. CppAD now installs the object library
     
         -lcppad_lib
    
    to be included when linking. Currently, it is only required when 2.2.2: colpack_prefix is specified on the 2.2.b: cmake command .
  2. It is no longer necessary to compile and link the file
     
         cppad_colpack.cpp
    
    when 2.2.2: colpack_prefix is specified during the install process; see 2.2.b: cmake command . (It is included in cppad_lib).


12.7.3.i: 11-24
  1. The check_for_nan output now includes the first dependent variable 5.10.f.c: index that is nan in its error message.
  2. Change the 12.8.1: deprecated include reference pow_int.h to pow_int.hpp in 8.12: pow_int .


12.7.3.j: 11-14
There was a bug in the new 5.10.g: get_check_for_nan feature that writes independent variable values to a temporary file; see 12.7.3.l: 11-06 below. This has been fixed.

12.7.3.k: 11-08
  1. Fixed a bug in the 5.5.4: RevSparseJac routine. To be specific, the argument 5.5.4.h: r was transposed from what the documentation said. (This has no effect in the usual case where r is the identity.)
  2. Added the bool_sparsity.cpp example which shows how to conserve memory when computing sparsity patterns. (This has since been replaced by 5.5.10: rc_sparsity.cpp .)
  3. Modified the 9: ipopt_solve procedure to take advantage of the memory conserving sparsity pattern calculations when 9.f.a: retape is false.
  4. Added the 8.22.m.b: bit_per_unit function to the vectorBool class. (This aids the memory conservation mentioned above.)


12.7.3.l: 11-06
It is often difficult to determine what causes a nan result during an operation with an 5: ADFun object. The new feature 5.10.g: get_check_for_nan was added to make this easier.
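A sketch of the related check_for_nan setting (the function and values are assumptions; see 5.10: check_for_nan for the actual interface):

     # include <cppad/cppad.hpp>

     int main(void)
     {   using CppAD::AD;
         CppAD::vector< AD<double> > ax(1), ay(1);
         ax[0] = 1.0;
         CppAD::Independent(ax);
         ay[0] = log( ax[0] );          // nan when ax[0] is negative
         CppAD::ADFun<double> f(ax, ay);
         // when true (and NDEBUG is not defined), forward results are
         // checked for nan and an error message identifies the source
         f.check_for_nan(true);
         CppAD::vector<double> x(1);
         x[0] = 2.0;                    // a value where f is well defined
         f.Forward(0, x);
         return 0;
     }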

12.7.3.m: 10-21
There was a mistake in the documentation for 8.24: index_sort : the argument 8.24.c: ind is not const.

12.7.3.n: 10-16
Add a 4.3.6: PrintFor optimization wish list item. This has been done, see 5.7.d.c: no_print_for_op .

12.7.3.o: 10-06
  1. Add 6.b.d: CPPAD_USE_CPLUSPLUS_2011 , CPPAD_NUMERIC_LIMITS, and CPPAD_STANDARD_MATH_UNARY to the 6: preprocessor section. In addition, remove the check that all user API preprocessor symbols are in this section from the 12.6: wish_list .
  2. Alphabetize and make some corrections to 10.4: list of examples .
  3. The documentation for some of the 12.8: deprecated features was missing the date when they were deprecated. This has been fixed; e.g., see 12.8.13.a: Deprecated 2012-12-26 .


12.7.3.p: 10-04
  1. 4.7: base_require : Add the macro 4.7.6.b: CPPAD_NUMERIC_LIMITS to aid in setting the numeric limits for a user defined Base class.
  2. 4.7: base_require : The 4.4.6.h: quiet_NaN function has been added to the CppAD numeric_limits. Note the reason for not using 4.4.6.c: std::numeric_limits .
  3. The 8.11.f: nan(zero) function computes a nan by dividing zero by zero which results in a warning when using some compilers. This function has been deprecated and the corresponding 12.6: wish_list item has been removed.
  4. Move documentation for 12.8.12: zdouble to 12.8: deprecated section and documentation for 4.4.6: numeric_limits to 4.4: ADValued .
  5. Remove all uses of, and references to, 12.8.12: zdouble from the 10: examples .


12.7.3.q: 10-03
4.7: base_require : It is no longer necessary to define the specialization for CppAD::epsilon<Base>() for each Base type.

12.7.3.r: 10-02
There was a bug in test_more/azmul.cpp whereby the vector z had the wrong dimension (in two places). This has been fixed.

12.7.3.s: 09-28
  1. Use the current 4.4.7.2.2: atomic_option setting to determine which type of sparsity patterns to use for 5.5.9: dependency calculations during 5.7: optimize procedure. It used to be that the 4.4.7.2.2.b.b: bool_sparsity_enum was used when 4.4.7.2.2.b.a: pack_sparsity_enum was specified.
  2. It is no longer an error to take the derivative of the square root function, because the result may be part of a 4.4.4: conditional expression that is not used.
  3. Update the 12.6: wish_list section.


12.7.3.t: 09-27
  1. It is no longer necessary to use the 12.8.12: zdouble class when combining 10.2.10: multiple levels of AD , 4.4.4: conditional expressions , and 5.4: reverse mode .
  2. The zdouble class has been deprecated. Use the 4.4.3.3: azmul function for absolute zero (when it is needed).


12.7.3.u: 09-25
4.7: base_require : 4.7.i: absolute zero multiplication is now required for user defined base types. This makes it possible to combine 4.4.4: conditional expression , 10.2.10: multiple levels , 5.4: reverse , and a base type that has standard IEEE multiplication; e.g., double. In other words, not all multiplications need to have an absolute zero (as is the case with the 12.8.12: zdouble base class).
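A sketch of absolute zero multiplication using 4.4.3.3: azmul (the values are assumptions):

     # include <cppad/cppad.hpp>
     # include <limits>

     int main(void)
     {   using CppAD::AD;
         AD<double> ax = 0.0;
         AD<double> ay = std::numeric_limits<double>::infinity();
         // ieee multiplication: 0 * inf = nan
         // absolute zero:       azmul(0, inf) = 0
         AD<double> az = CppAD::azmul(ax, ay);
         return az == 0.0 ? 0 : 1;
     }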

12.7.3.v: 09-24
Fix some Visual Studio 2013 C++ level four /W4 warnings (previous warnings were at level 3). In addition, disable warning 4100 unreferenced formal parameter, and warning 4127 conditional expression is constant.

12.7.3.w: 09-23
CppAD can optionally test its use with the external packages 2.2.3.1: eigen , 2.2.5.1: ipopt , and 2.2.2.5: colpack . In addition, it can compare its 11: speed with the external AD packages 2.2.1.1: adolc , 2.2.4.1: fadbad , and 2.2.6.1: sacado . The scripts that download and install a local copy of these external packages have been modified to automatically skip installation when it has already been done.

12.7.3.x: 09-21
Improve discussion of 2.1.i: windows download and testing .

12.7.3.y: 09-20
  1. Add the 2.2.n: cppad_profile_flag to the list of possible cmake command arguments.
  2. More of the warnings generated by Visual Studio 2013 have been fixed. One remaining warning is about asctime and gmtime not being thread safe.


12.7.3.z: 09-19
  1. There was a bug in the 4.7.9.1.s: numeric_limits section of the example user defined base type. This has been fixed.
  2. There were some compile and link errors when running the tests using Visual Studio 2013. These have been fixed.
  3. Many of the warnings generated by Visual Studio 2013 have been fixed.


12.7.3.aa: 09-16
The conditional expressions, 4.4.4: CondExp , were not working for the type < CppAD::AD<adouble> > where adouble is the ADOL-C AD type. This has been fixed by adding a call to 4.7.9.3.e: CPPAD_COND_EXP_REL in base_adolc.hpp.

12.7.3.ab: 09-03
  1. There was a bug in the 8.22.m: vectorBool 8.22.e: assignment . To be specific, it did not allow a size zero vector to be assigned using a vector of any other size. This has been fixed.
  2. The addition of the 4.4.7.2.2.b.a: pack option on 08-31 introduced a bug in the calculation of 5.5.6: RevSparseHes . The 4.4.7.1.1: checkpoint.cpp example was changed to demonstrate this problem and the bug was fixed.


12.7.3.ac: 09-02
The 5.5.9.b: dependency pattern was not being computed correctly for the 4.4.2.21: sign , 4.4.5: Discrete , and 4.6: VecAD operations. This has been fixed. This could have caused problems using 4.4.7.1: checkpoint functions that used any of these operations.

12.7.3.ad: 08-31
  1. Mention the fact that using checkpoint functions can make 4.4.7.1.c.b: recordings faster .
  2. Add the 4.4.7.2.2.b.a: pack sparsity option for 4.4.7.2: atomic_base operations.
  3. Add the pack sparsity option to 4.4.7.1.k: checkpoint functions.
  4. Added the example/atomic/sparsity.cpp example.
  5. Remove mention of OpenMP from 8.23.5: thread_alloc::thread_num (8.23: thread_alloc was never OpenMP specific).


12.7.3.ae: 08-30
  1. The 4.4.7.2.1.c.d: sparsity argument was added to the atomic_base constructor and the 4.4.7.1.k: checkpoint constructor.
  2. Make 4.4.7.2.12: atomic_norm_sq.cpp an example with no set sparsity and 4.4.7.2.13: atomic_reciprocal.cpp an example with no bool sparsity.
  3. Improve discussion of Independent and 5.1.1.h: parallel mode .


12.7.3.af: 08-29
Some asserts in the 4.4.7.1: checkpoint implementation were not using the CppAD 8.1: ErrorHandler . This has been fixed.

12.7.3.ag: 08-28
Free the 4.4.7.1: checkpoint function sparsity patterns during 5.3: forward operations that use its atomic operation. (They are kept between sparsity calculations because they do not change.)

12.7.3.ah: 08-26
Fix a bug in 5.5.4: RevSparseJac when used to compute sparsity pattern for a subset of the rows in a 4.4.7.1: checkpoint function.

12.7.3.ai: 08-25
Reduce the amount of memory required for 4.4.7.1: checkpoint functions (since sparsity patterns are now being held so they do not need to be recalculated).

12.7.3.aj: 08-20
Added an example that computes the sparsity pattern for a subset of the 5.5.6.2.b: Jacobian and a subset of the 5.5.6.2.c: Hessian .

12.7.3.ak: 08-17
  1. Do some optimization of the 4.4.7.1: checkpoint feature so that sparsity patterns are stored and not recalculated.
  2. Fix a warning (introduced on 08-11) where the CppAD::vector 8.22.l: data function was being shadowed by a local variable.
  3. The source code control for CppAD had a link to compile, instead of a real file. This sometimes caused problems with the deprecated 12.8.13: autotools install procedure and has been fixed.


12.7.3.al: 08-16
  1. Improve the documentation for checkpoint functions. To be specific, change the 4.4.7.1.a: syntax to use the name atom_fun . In addition, include the fact that atom_fun must not be destructed for as long as the corresponding atomic operations are used.
  2. Add the 4.4.7.1.m: size_var function to the checkpoint objects.


12.7.3.am: 08-09
Add the preservation of data to the specifications of a CppAD::vector during a 8.22.j: resize when the capacity of the vector does not change. In addition, an example of this was added to 8.22.1: cppad_vector.cpp .
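A sketch of the preservation property (the sizes are assumptions):

     # include <cppad/utility/vector.hpp>
     # include <cassert>

     int main(void)
     {   CppAD::vector<double> v(10);
         for(size_t i = 0; i < 10; ++i)
             v[i] = double(i);
         // the capacity does not change, so the data is preserved
         v.resize(5);
         assert( v[4] == 4.0 );
         return 0;
     }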

12.7.3.an: 08-06
The 12.8.12: zdouble 4.7.6: numeric_limits were not being computed properly. This has been fixed.

12.7.3.ao: 07-31
Added the 5.6.4.3: sparse_sub_hes.cpp example, a way to compute the sparsity for a subset of variables without using 10.2.10: multiple levels of AD .

12.7.3.ap: 06-16
  1. There were some 8.1.2.f: unknown asserts when the sparsity pattern p in 5.6.2.e: sparse_jacobian and 5.6.4.f: sparse_hessian was not properly dimensioned. These have been changed to 8.1.2.e: known asserts to give better error reporting.
  2. In the special case where the sparse Hessian 5.6.4.i: work or sparse Jacobian 5.6.4.i: work was specified and the set of elements to be computed was empty, the work vector was empty after the call (so it would need to be recalculated on subsequent calls). This resulted in a bug when the sparsity pattern was not provided on subsequent calls; this has been fixed.


12.7.3.aq: 06-11
  1. Some C++11 features were not being taken advantage of after the change on 12.7.3.av: 05-10 . To be specific, move semantics, the high resolution clock, and null pointers. This has been fixed.
  2. In the example 12.8.12.1: zdouble.cpp , the vector a1z was not properly dimensioned. This has been fixed and the dimensions of all the variables have been clarified.


12.7.3.ar: 06-09
Add an 5.1.1.f: abort_op_index item to the wish list. It has since been removed (domain errors may not affect the results due to 4.4.4: conditional expressions ).

12.7.3.as: 06-07
Add a 4.7.i: absolute zero item and a 4.4.6: numeric_limits item to the wish list. The absolute zero item has been completed and the numeric limits item was modified upon implementation. Remove the multiple directions wish list item.

12.7.3.at: 05-26

12.7.3.at.a: cond_exp_1
There was a problem using 4.4.4: conditional expressions with 10.2.10: multiple levels of AD where the result of the conditional expression might not be determined during forward mode. This would generate an assert of the form:
     Error detected by false result for
          IdenticalPar(side)
     at line number in the file
          .../cppad/local/cskip_op.hpp
where side was left or right and number was the line number of an assert in cskip_op.hpp. This has been fixed.

12.7.3.at.b: cond_exp_2
There was a problem with using 4.4.4: conditional expressions and 5.4: reverse mode with 10.2.10: multiple levels of AD . This problem was demonstrated by the file bug/cond_exp_2.sh.
  1. The problem above has been fixed by adding the base type zdouble, see 12.8.12.d.b: CppAD motivation for this new type. (It is no longer necessary to use zdouble to get an absolute zero because CppAD now uses 4.4.3.3: azmul where an absolute zero is required.)
  2. The sections 10.2.10: mul_level , 10.2.10.2: change_param.cpp , 10.2.10.1: mul_level.cpp , and 10.2.12: mul_level_ode.cpp were changed to use 12.8.12: zdouble .
  3. The 2.2.1: adolc multi-level examples 4.7.9.3.1: mul_level_adolc.cpp and 10.2.13: mul_level_adolc_ode.cpp were changed to mention the limitations because Adolc does not have an 12.8.12.b: absolute zero .
  4. The examples above were also changed so that the AD variable names indicate the level of AD for each variable.
  5. 4.7: base_require : The base type requirements were modified to include mention of 4.7.i: absolute zero . In addition, the base type requirements 4.7.c: API warning is now more informative.


12.7.3.au: 05-11
Reorganize the 4.4.2: unary_standard_math documentation.

12.7.3.av: 05-10
  1. Add the logarithm of one plus argument function 4.4.2.20: log1p .
  2. 4.7: base_require : If you are defining your own base type, note that 4.7.5.d: log1p was added to the base type requirements.
  3. Use the single preprocessor flag CPPAD_USE_CPLUSPLUS_2011 to signal that the functions 4.7.5.d: erf, asinh, acosh, atanh, expm1, log1p are part of the base type requirements.


12.7.3.aw: 05-09
  1. Add the exponential minus one function 4.4.2.19: expm1 . If you are defining your own base type, note that 4.7.5.d: expm1 was added to the base type requirements.
  2. Fix some warnings about comparing signed and unsigned integers when using 2.2.7.e: eigen for the CppAD test vector. (The eigen vector size() function returns an int instead of a size_t.)


12.7.3.ax: 05-08
  1. Add the inverse hyperbolic tangent function 4.4.2.17: atanh . If you are defining your own base type, note that 4.7.5.d: atanh was added to the base type requirements.
  2. Fix a bug in the implementation of the acosh multiple direction forward mode 5.3.5: forward_dir (when compiler has 4.4.2.15.d: acosh ).


12.7.3.ay: 05-07
Add the inverse hyperbolic cosine function 4.4.2.15: acosh . If you are defining your own base type, note that 4.7.5.d: acosh was added to the base type requirements.

12.7.3.az: 05-05
Add the inverse hyperbolic sine function 4.4.2.16: asinh . If you are defining your own base type, note that 4.7.5.d: asinh was added to the base type requirements.

12.7.3.ba: 04-18
In the sparse Jacobian and sparse Hessian calculations, if work is present and has already been computed, the sparsity pattern p is not used. This has been added to the documentation; see the 5.6.2.h.b: sparse jacobian and 5.6.4.i.c: sparse hessian documentation for work and p .
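A self-contained sketch of reusing work (the function, pattern, and values are assumptions; see the documentation referenced above for the actual specification):

     # include <cppad/cppad.hpp>
     # include <set>

     int main(void)
     {   using CppAD::AD;
         size_t n = 2;
         CppAD::vector< AD<double> > ax(n), ay(1);
         ax[0] = 1.0;  ax[1] = 2.0;
         CppAD::Independent(ax);
         ay[0] = ax[0] * ax[0] + ax[1] * ax[1];
         CppAD::ADFun<double> f(ax, ay);
         //
         CppAD::vector<double> x(n), w(1);
         x[0] = 1.0;  x[1] = 2.0;
         w[0] = 1.0;
         // Hessian sparsity pattern (diagonal for this function)
         CppAD::vector< std::set<size_t> > p(n);
         p[0].insert(0);  p[1].insert(1);
         CppAD::vector<size_t> row(2), col(2);
         row[0] = 0;  col[0] = 0;
         row[1] = 1;  col[1] = 1;
         CppAD::vector<double> hes(2);
         CppAD::sparse_hessian_work work;
         // the first call uses p and fills in work
         f.SparseHessian(x, w, p, row, col, hes, work);
         // a subsequent call with the same work does not use p
         f.SparseHessian(x, w, p, row, col, hes, work);
         return 0;
     }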

12.7.3.bb: 03-13
Remove the syntax
     AD<Base> y = x
from the 4.1: AD constructor documentation because it does not work when the constructor is 4.1.c.b: explicit . Also document the restriction that the constructor used in the 4.2: assignment must be implicit.

12.7.3.bc: 03-06
The developers of the TMB (https://github.com/kaskr/adcomp) package reported that for large 5: ADFun tapes, the 5.7: optimize routine uses a large amount of memory because it allocates a standard set for each variable on the tape. These sets are only necessary for variables in 4.4.4: conditional expressions that can be skipped once the independent variables have a set value. The problem has been reduced by using a NULL pointer for the empty set and similar changes. It still needs more work.

12.7.3.bd: 02-28
It used to be the case that the 5.4: Reverse mode would propagate 8.11: nan through the 4.4.4: conditional expression case that is not used. For example, if
 
     Independent(ax);
     AD<double> aeps = 1e-10;
     ay[0] = CondExpGt( ax[0], aeps, 1.0/ax[0], 1.0/aeps );
     ADFun<double> f(ax, ay);
The corresponding reverse mode calculation, at x[0] = 0.0, would result in
 
     Error detected by false result for
     ! ( hasnan(value) && check_for_nan_ )
This has been fixed so that only the conditional expression case that is used affects the reverse mode results. The example 4.4.4.1: cond_exp.cpp was changed to reflect this (a check for nan was changed to a check for zero). Note that this fix only works when 4.7.3.b.a: IdenticalPar is true for the base type of the result in the conditional expression; e.g., one can still get a nan effect from the case that is not selected when using AD< AD<double> > conditional expressions.
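A self-contained version of the example above (the surrounding declarations are assumptions added here):

     # include <cppad/cppad.hpp>

     int main(void)
     {   using CppAD::AD;
         CppAD::vector< AD<double> > ax(1), ay(1);
         ax[0] = 1.0;
         CppAD::Independent(ax);
         AD<double> aeps = 1e-10;
         ay[0] = CondExpGt( ax[0], aeps, 1.0/ax[0], 1.0/aeps );
         CppAD::ADFun<double> f(ax, ay);
         //
         CppAD::vector<double> x(1), w(1), dw(1);
         x[0] = 0.0;            // the 1.0/ax[0] case is not used here
         f.Forward(0, x);
         w[0] = 1.0;
         dw = f.Reverse(1, w);  // after this fix: no nan from the unused case
         return 0;
     }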

12.7.3.be: 02-18
If the compiler supports the c++11 feature std::chrono::high_resolution_clock then it is used for the 8.5.1: elapsed_seconds function.
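A usage sketch for 8.5.1: elapsed_seconds (the work being timed is an assumption):

     # include <cppad/utility/elapsed_seconds.hpp>
     # include <iostream>

     int main(void)
     {   double s0 = CppAD::elapsed_seconds();  // seconds since the first call
         double sum = 0.0;
         for(size_t i = 1; i < 1000000; ++i)
             sum += 1.0 / double(i);
         double s1 = CppAD::elapsed_seconds();
         std::cout << "elapsed = " << s1 - s0 << " seconds\n";
         return sum > 0.0 ? 0 : 1;
     }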

12.7.3.bf: 02-16
The new example 5.6.4.2: sub_sparse_hes.cpp shows one way to compute a Hessian for a subset of variables without having to compute the sparsity pattern for the entire function.

12.7.3.bg: 02-14
Fix another bug in the derivative calculations for the c++11 version of the error function; see 4.4.2.18.d: CPPAD_USE_CPLUSPLUS_2011 .

12.7.3.bh: 02-11
Fix a bug in the optimization of conditional expressions. To be specific, if 12.1.j.a: NDEBUG is not defined, one could get an assert with the message:
 
     Error detected by false result for
          var_index_ >= NumRes(op_)

12.7.3.bi: 02-10
The change on 12.7.4.h: 2014-12-23 introduced a bug when the c++11 version of the error function was used with an 5.7: optimized function; see 4.4.2.18.d: CPPAD_USE_CPLUSPLUS_2011 . There was also a bug in the sparsity calculations when this erf function was included. These bugs have been fixed.

12.7.3.bj: 02-09
The test test_more/optimize.cpp was failing on some systems because an exact equality check should have been a near equal check. This has been fixed.

12.7.3.bk: 02-07
On some systems, the library corresponding to speed/src could not be found. This library is only used for testing and so has been changed to always be static (hence does not need to be found at run time).

12.7.3.bl: 02-06
There was a bug in the coloring method change on 12.7.3.bw: 2015-01-07 . To be specific, work.color_method was not being set to "cppad.symmetric" after work.color_method.clear() . This has been fixed.

12.7.3.bm: 02-04
  1. Enable the same install of CppAD to be used both with and without C++11 features; e.g., with both g++ --std=c++11 and with g++ --std=c++98. Previously if the 2.2.m: cppad_cxx_flags specified C++11, then it could only be used in that way.
  2. The 2.2.b: cmake command now requires the version of cmake to be greater than or equal to 2.8 (due to a bug in cmake version 2.6).


12.7.3.bn: 02-03
Improved the searching for the boost multi-threading library, which is used by the 7.2.11.2: team_bthread.cpp case of the 7.2: thread_test.cpp example and test.

12.7.3.bo: 02-02
Improve the documentation for the 2.2.b: cmake command line options
     cmake_install_dir
for dir equal to prefix, postfix, includedirs, libdirs, datadir, and docdir.

12.7.3.bp: 01-30
Fix bug in 11.1.6: link_sparse_hessian speed test introduced on 12.7.3.bv: 01-09 below.

12.7.3.bq: 01-29
Fix some warnings generated by g++ 4.9.2.

12.7.3.br: 01-26
The change of global variables to local in cppad/local/op_code.hpp on 12.7.4.z: 2014-05-14 created a bug in 7.1: parallel_ad (some local statics needed to be initialized). This has been fixed.

12.7.3.bs: 01-23
There was a bug in the 2.2: cmake install detection of compiler features. One symptom of this bug was that on systems that had the gettimeofday function, the cmake install would sometimes report
     cppad_has_gettimeofday = 0
This has been fixed.

12.7.3.bt: 01-21
The deprecated 12.8.13: autotools procedure had a bug in the detection of when the size of an unsigned int was the same as the size of a size_t. This has been fixed.

12.7.3.bu: 01-20
  1. The new 5.3.7: compare_change interface has been created and the old 12.8.3: CompareChange function has been deprecated; see the 5.3.7.1: compare_change.cpp example. This enables one to determine the source code during taping that corresponds to changes in the comparisons during 5.3.1: zero order forward operations; see 5.1.1.f: abort_op_index . (A usage sketch follows this list.)
  2. This new 5.3.7: compare_change interface can detect comparison changes even if 12.1.j.a: NDEBUG is defined and even if 5.7: f.optimize() has been called. The deprecated function CompareChange used to always return zero after
         f.optimize()
     and was not even defined when NDEBUG was defined. There was a resulting speed effect for this; see 5.7.d.b: no_compare_op .
  3. The date when some features were deprecated has been added to the documentation. For example, see 12.8.1.b: Deprecated 2006-12-17 .
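A sketch of the new interface (the function and values are assumptions; see 5.3.7: compare_change for the specification):

     # include <cppad/cppad.hpp>
     # include <iostream>

     int main(void)
     {   using CppAD::AD;
         CppAD::vector< AD<double> > ax(1), ay(1);
         ax[0] = 1.0;
         CppAD::Independent(ax);
         if( ax[0] > 0.0 )            // comparison recorded on the tape
             ay[0] = ax[0];
         else
             ay[0] = - ax[0];
         CppAD::ADFun<double> f(ax, ay);
         //
         f.compare_change_count(1);   // record the operator for the first change
         CppAD::vector<double> x(1);
         x[0] = -1.0;                 // the comparison result changes here
         f.Forward(0, x);
         size_t number   = f.compare_change_number();   // expect 1
         size_t op_index = f.compare_change_op_index();
         std::cout << "number = " << number << ", op_index = " << op_index << "\n";
         return number == 1 ? 0 : 1;
     }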


12.7.3.bv: 01-09
  1. The change 01-07 below included (but did not mention) using a sparse, instead of full, structure for the Hessian in the test. This has also been done for the 11.1.7: sparse Jacobian test.
  2. For both the 11.1.7: sparse_jacobian and 11.1.6: sparse_hessian tests, the sparse function is only chosen once (it used to be different for every repeat). This reduced the amount of computation not connected to what is being tested. It also makes 11.1.f.a: onetape a valid option for these tests.
  3. There was a bug in the 5.3.5: multiple direction forward routine. Results for function values that are 4.5.4: parameter were not being computed properly (all the derivatives are zero in this case). This has been fixed.


12.7.3.bw: 01-07
The following changes were merged in from the color_hes branch:
  1. Specify the type of 5.6.4.i.a: coloring for the sparse hessian calculations. To be specific, instead of "cppad" and "colpack", the choices are "cppad.symmetric", "cppad.general", and "colpack.star". This is not compatible with the change on 12.7.3.bx: 01-02 , which was so recent that this should not be a problem.
  2. The 11.1.6.i: n_sweep values were not being returned properly by 11.5.6: cppad_sparse_hessian.cpp and 11.4.6: adolc_sparse_hessian.cpp . The CppAD version has been fixed and the ADOL-C version has been set to zero.
  3. The 11.1.6: link_sparse_hessian example case was too sparse for good testing (by mistake). This has been fixed.
  4. Add n_sweep to 11.1.6.i: link_sparse_hessian and 11.1.i.a: speed_main .
  5. Change the cppad sparse Hessian 5.6.4.i.a: color_method to take advantage of the symmetry of the Hessian (in a similar fashion to the colpack coloring method).


12.7.3.bx: 01-02
Added the option to use 2.2.2: colpack for the sparse Hessian 5.6.4.i.a: coloring method ; see the example 2.2.2.3: colpack_hes.cpp .
Input File: omh/appendix/whats_new/whats_new_15.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.7.4: CppAD Changes and Additions During 2014

12.7.4.a: Introduction
This section contains a list of the changes to CppAD during 2014 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions of CppAD.

12.7.4.b: 12-30
There was a bug in the 2.2: cmake whereby it would sometimes mistakenly exit with the error message
 
     cppad_max_num_threads is not an integer greater than or equal 4
This has been fixed.

12.7.4.c: 12-29
The example not_complex_ad.cpp was using the type
     std::complex< CppAD::AD<double> >
and was failing to compile with the clang compiler. This example has been removed because it is not consistent with the C++ standard; see 12.1.d: complex FAQ .

12.7.4.d: 12-28
  1. Fix some warnings generated by clang 3.5 about local functions that were not being used; e.g., sub-tests that were not being executed.
  2. Fix the cmake setting cppad_implicit_ctor_from_any_type. Note that this cmake option has since been replaced by 2.2.t: cppad_deprecated .
  3. The clang++ compiler was optimizing out the calculations in the 8.5.2: time_test.cpp and 8.3.1: speed_test.cpp examples. This caused these tests to hang while trying to determine how many times to repeat the test. This has been fixed.


12.7.4.e: 12-27
More work on the bug in 4.4.4.j: optimizing conditional expressions.

12.7.4.f: 12-26
A minimal example for computing cross terms in atomic operation Hessian sparsity patterns 4.4.7.2.9.1: atomic_rev_sparse_hes.cpp was added.

12.7.4.g: 12-25
More work on the bug in 4.4.4.j: optimizing conditional expressions.

12.7.4.h: 12-23
The c++11 standard includes the error function 4.4.2.18: erf in cmath. If the c++ compiler has the error function defined in cmath, the compiler version of the error function is used and it corresponds to an atomic operation.

Fix typo in tangent reverse mode theory for 12.3.2.8.c: Positive Orders .

12.7.4.i: 12-22
There was a bug related to 4.4.4.j: optimizing conditional expressions. This has been fixed.

12.7.4.j: 12-17
Fix some compiler warnings and 11: speed program names when using the deprecated 12.8.13: autotools install procedure.

12.7.4.k: 12-16
If the c++11 include file <cstdint> defines all the standard types, they can be used to specify 2.2.r.a: cppad_tape_addr_type and 2.2.q.a: cppad_tape_id_type .

12.7.4.l: 12-15
Correct the title and 14: _index entries for 5.3.3: forward_two from first to second order.

12.7.4.m: 11-28
Improve the 14: index and search using a new version of the omhelp documentation tool.

12.7.4.n: 11-27
  1. Add alignment to the 8.23.6.g: get_memory and 8.23.12.h: create_array specifications and 8.23.1: thread_alloc example .
  2. Advance the deprecated 12.8.13: unix install utilities to autoconf-2.69 and automake-1.13.4.


12.7.4.o: 09-28
Fix more bugs related to optimizing conditional expressions.
  1. Using old instead of new operator indices.
  2. Not properly following dependence back through atomic operations.
  3. Aborting during forward order zero, when skipping computation for a variable that was already completed (the skip is still useful for higher orders and for reverse mode).
  4. Reverse mode not properly handling the variable number of arguments in the conditional skip operation.
  5. Reverse mode tracing not properly handling the variable number of argument operations; i.e., conditional skip and cumulative summation.


12.7.4.p: 09-27
Fix a bug that occurred when 5.7: f.optimize was used with a function f that contained calls to user defined 4.4.7: atomic operations and 4.4.4: conditional expressions .

12.7.4.q: 09-25
Fix a bug that occurred when 5.7: f.optimize was used with a function f that contained 4.4.5: discrete functions.

12.7.4.r: 09-21
Fix a typo in documentation for 5.4.3: any order reverse . To be specific, @(@ x^{(k)} @)@ was changed to be @(@ u^{(k)} @)@.

12.7.4.s: 05-28
  1. Change the 11.1.g.a: boolsparsity so that it only affects the sparsity speed tests 11.1.7: sparse_jacobian and 11.1.6: sparse_hessian ; i.e., it is now ignored by the other tests.
  2. Improve the 11: speed documentation page.


12.7.4.t: 05-27
  1. The cppad_colpack.cpp file was not being copied to the specified directory. In addition, the specified directory was changed from an include directory to data directory because cppad_colpack.cpp is not an include file.
  2. If 2.2.2: colpack_prefix was specified, the CppAD 2.4: pkgconfig file was missing some information. This has been fixed.


12.7.4.u: 05-23
The 11: speed test instructions were converted from using the old autotools 12.8.13: unix install instructions to use the 2.2: cmake install instructions. These instructions should work on any system, not just unix.

12.7.4.v: 05-22
  1. Add multiple direction forward mode 5.3.5: forward_dir and use it to speed up the forward version of 5.6.2: sparse_jacobian (a usage sketch follows this list). Below is an example run of 11.5.7: cppad_sparse_jacobian.cpp results before this change:
     
         cppad_sparse_jacobian_size = [ 100, 400, 900, 1600, 2500 ]
         cppad_sparse_jacobian_rate = [ 2973, 431.94, 142.25, 78.64, 26.87 ]
    
    and after this change:
     
         cppad_sparse_jacobian_size = [ 100, 400, 900, 1600, 2500 ]
         cppad_sparse_jacobian_rate = [ 6389, 954.26, 314.04, 180.06, 56.95 ]
    
    Due to the success of this change, multiple direction items were added to the wish list (they were later removed).
  2. Improve the forward mode tracing of arguments to, and results from, user defined 4.4.7: atomic operations.
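A sketch of multiple direction forward mode (the function and values are assumptions; the xq ordering follows 5.3.5: forward_dir ):

     # include <cppad/cppad.hpp>

     int main(void)
     {   using CppAD::AD;
         size_t n = 2, r = 2;
         CppAD::vector< AD<double> > ax(n), ay(1);
         ax[0] = 3.0;  ax[1] = 4.0;
         CppAD::Independent(ax);
         ay[0] = ax[0] * ax[1];
         CppAD::ADFun<double> f(ax, ay);
         //
         CppAD::vector<double> x0(n);
         x0[0] = 3.0;  x0[1] = 4.0;
         f.Forward(0, x0);
         // r first order directions: xq[ j * r + ell ] is the j-th
         // component of the ell-th direction
         CppAD::vector<double> x1(n * r);
         x1[0 * r + 0] = 1.0;  x1[0 * r + 1] = 0.0;  // direction e_0
         x1[1 * r + 0] = 0.0;  x1[1 * r + 1] = 1.0;  // direction e_1
         CppAD::vector<double> y1 = f.Forward(1, r, x1);
         // y1[0 * r + 0] = dF/dx_0 = 4, y1[0 * r + 1] = dF/dx_1 = 3
         return (y1[0] == 4.0 && y1[1] == 3.0) ? 0 : 1;
     }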


12.7.4.w: 05-20
  1. Move speed/adolc/alloc_mat.hpp to speed/adolc/adolc_alloc_mat.hpp so it has the same name as its # ifndef command.
  2. Fix # ifndef command in cppad/ipopt/solve_callback.hpp.
  3. Add # ifndef command to test_more/extern_value.hpp.


12.7.4.x: 05-19
In the files cppad/local/asin_op.hpp and cppad/local/acos_op.hpp there were assignments of the form uj = 0. where uj has type 12.4.e: Base . These may not be defined operations in certain cases, so they have been converted to uj = Base(0).

12.7.4.y: 05-16
There was a mistake in printing the arguments for CSumOp and CSkipOp when using the undocumented TRACE option during forward mode (available in files of the form cppad/local/*sweep.hpp ). This has been fixed.

12.7.4.z: 05-14
  1. There were some global variables in the file cppad/local/op_code.hpp that might have caused multiple definitions during link time for CppAD applications. These variables have been changed to be local so that this cannot happen.
  2. There was a mistaken assert that the number of arguments for the BeginOp was zero; it caused an abort when using the undocumented TRACE option available in files of the form cppad/local/*sweep.hpp . This has been fixed.


12.7.4.aa: 03-18
  1. The 12.8.2.i: size_taylor and 12.8.2.j: capacity_taylor functions were deprecated; use 5.3.6: size_order and 5.3.8: capacity_order instead.
  2. The documentation for 5.3: forward and the examples 5.3.4.1: forward.cpp , 5.3.4.2: forward_order.cpp , have been improved. To be more specific, 5.3.4: forward_order now references the special cases 5.3.1: forward_zero , 5.3.2: forward_one and the new case 5.3.3: forward_two .


12.7.4.ab: 03-17
The 8.22.e.c: move semantics version of the CppAD::vector assignment statement was not checking vector sizes. This has been fixed so that things work the same with compilers that do not have move semantics. (Note that with move semantics, no extra memory allocation is done even if the target vector has a different size.)

12.7.4.ac: 03-09
The documentation links forwardzero, forwardone, and forwardany have been changed to 5.3.1: forward_zero , 5.3.2: forward_one , and 5.3.4: forward_order respectively. This may affect links from other web pages to the CppAD documentation.

12.7.4.ad: 03-05
The names p and q in the 5.3.4: forward , 5.4.3: reverse , 4.4.7.2.4: atomic_forward , and 4.4.7.2.5: atomic_reverse functions were reversed so that p <= q . This is only a notational change to make the arguments easier to remember.

12.7.4.ae: 03-02
  1. In the output for the speed 11.1.d.a: correct test, mention which tests are not available. Note that the set of available tests can depend on the 11.1.f: list of options .
  2. In the documentation for 5.6.2.i: n_sweep , mention that it is equal to the number of colors determined by the 5.6.2.h.a: color_method .
  3. The 11.5: speed_cppad tests were simplified by removing the printing of auxiliary information related to the 11.1.f.c: optimize option. Future auxiliary information will be passed back in a manner similar to 11.1.7.j: n_sweep for the sparse jacobian test.
  4. 8.22.e.c: Move semantics were added to the CppAD::vector assignment operator.


12.7.4.af: 03-01
  1. Change the prototype for row and col in the 11.1.7: link_sparse_jacobian speed test to be const; i.e., they are not changed.
  2. Move x near end of 11.1.6: link_sparse_hessian speed test parameter list, (as is the case for 11.1.7: link_sparse_jacobian ).


12.7.4.ag: 02-28
The 8.22.l: data function was added to the CppAD::vector template class.

12.7.4.ah: 02-27
The CppAD developer documentation for the subdirectory cppad/ipopt was not being built by the command bin/run_doxygen.sh. This has been fixed.

12.7.4.ai: 02-26
  1. The 11.4: adolc and 11.5: cppad sparse jacobian speed tests now print out 5.6.2.i: n_sweep .
  2. The size of some of the 11: speed test cases has been increased to test behavior for larger problems.
  3. A link to 11.2.7: ode_evaluate was missing in the 11.2.b: Speed Utility Routines table. This has been fixed.


12.7.4.aj: 02-23
The 5.6.2.h.a: color_method option was added to the sparse Jacobian calculations. This enables one to use 2.2.2: ColPack to color the rows or columns. The speed test 11.1.g.d: colpack option was also added (but not yet implemented for 11.1.6: sparse_hessian speed tests).

12.7.4.ak: 02-22
The program names in 7.2: thread_test.cpp were changed from threading_test to multi_thread_threading where threading is openmp, pthread or bthread.

12.7.4.al: 02-17
Fix ambiguous call to 8.11.d: isnan during MS Visual Studio 2012 compilation.

12.7.4.am: 02-15
  1. The 11.1.g.a: boolsparsity option was added to the 11.1: speed_main program.
  2. The retape option was changed to 11.1.f.a: onetape so that the default is to retape (when the option is not present). This was done because 2.2.4: fadbad and 2.2.6: sacado always retape.
  3. The documentation, and example source code, for all the speed 11.1.f: options was improved (made clearer).
  4. Improve the success rate for 8.3.1: speed_test.cpp and 8.5.2: time_test.cpp .


12.7.4.an: 01-26
The destination directory for the 2.2.k: cppad documentation is now set separately from the data directory using the cmake option
     -D cmake_install_docdir=cmake_install_docdir
This has increased the flexibility of the documentation installation and removed the need for the option
     -D cppad_documentation=yes_or_no
which has been removed.

12.7.4.ao: 01-21
The destination directory for the cppad documentation used to be one of the following:
     prefix/datadir/doc/cppad-version
     prefix/datadir/doc/postfix/cppad-version
This has been changed by dropping the version number at the end of the directory name.

12.7.4.ap: 01-10
The change on 12.7.5.c: 2013-12-27 caused a conversion error in 4.4.3.1: atan2 when it was used with AD< AD<double> > arguments (and other similar cases). This has been fixed.
Input File: omh/appendix/whats_new/whats_new_14.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.7.5: CppAD Changes and Additions During 2013

12.7.5.a: Introduction
This section contains a list of the changes to CppAD during 2013 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions of CppAD.

12.7.5.b: 12-29
  1. The include file 10.2.4: cppad_eigen.hpp now automatically includes cppad.hpp.
  2. There was a problem with this automation when eigen was used for the cppad 10.5: testvector . This has been fixed.
  3. There was a problem with the deprecated 12.8.13: autotools install (created when the optional implicit constructor from any type was added). This has been fixed by adding the --with-implicit_ctor option (later removed on 12.7.1.bc: 2017-02-10 ).


12.7.5.c: 12-27
The constructor from an arbitrary type to AD<Base> was implicit, but there was no specification to this effect. This caused problems when using CppAD with 2.2.3: eigen 3.2 (scheduled to be fixed in 3.3). The default for this constructor has been changed to be 4.1.c.b: explicit . In addition, other 4.1.c.a: implicit constructors are now listed in the documentation.

If you get a compiler error on a constructor / assignment of the form
     AD<Base> x = y
(that used to work) try changing the constructor call to
     AD<Base>( y )
A deprecated alternative is to make this constructor implicit using the 2.2.t: cppad_deprecated option during the install procedure.
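An illustration of the difference (a two-level AD type is used here because its Base is not double; this sketch is an assumption, not from the original manual):

     # include <cppad/cppad.hpp>

     int main(void)
     {   typedef CppAD::AD<double>   a1double;
         typedef CppAD::AD<a1double> a2double;
         // implicit construction from the Base type still works
         a2double ax = a1double(1.0);
         // a2double ay = 1.0;          // may fail: this constructor is explicit
         a2double ay  = a2double(1.0);  // explicit construction works
         return Value( Value(ax) ) == Value( Value(ay) ) ? 0 : 1;
     }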

12.7.5.d: 12-26
Document fact that 2.1.h: monthly versions of the CppAD compressed tar file last till the end of the year.

12.7.5.e: 12-24
The interface to 10.2.4: eigen defined a function
     NumTraits< CppAD::AD<Base> >::dummy_epsilon()
that should have been named dummy_precision(). This has been fixed.

12.7.5.f: 11-27
  1. Fix bug when using 5.7: optimize with an 5: ADFun object containing the 4.4.2.21: sign function.
  2. Add 4.4.7.2.12: atomic_norm_sq.cpp , an atomic function example with domain dimension two and range dimension one.


12.7.5.g: 11-13
It used to be that one had to define the std::set version of 4.4.7.2.7: atomic_rev_sparse_jac for each atomic function that was part of an 5: ADFun object that was 5.7: optimized . Now the current 4.4.7.2.2.b: atomic_sparsity setting is used to determine if the bool or std::set version of rev_sparse_jac is used by the optimization process.

12.7.5.h: 11-12
Error detection and reporting (when NDEBUG is not defined) has been added for the following case: Using 5.7: optimize with 4.4.7.2: atomic_base functions that have not defined 5.7.h.a: rev_sparse_jac .

12.7.5.i: 10-29
The 4.4.4.j: optimization now handles nested conditional expressions. For example, given the code
 
     x = CondExpLt(left_x, right_x, true_x, false_x)
     y = CondExpGt(left_y, right_y, true_y, false_y)
     z = CondExpEq(left_z, right_z, x, y)
only two of the conditional expressions will be evaluated (one will be skipped depending on the result of left_z == right_z). For more details, see 5.7.6: optimize_nest_conditional.cpp .

12.7.5.j: 10-23
  1. Fix a bug in the optimization of calls to 4.4.7: atomic functions. This bug existed before recent change to optimizing conditional expressions. This required adding the 5.5.4.g: dependency argument to the reverse Jacobian sparsity pattern calculation.
  2. Fix the deprecated autotools install (see 12.8.13: autotools ) which was broken by the changes on 10-22. To be specific, the example for 5.3.9: number_skip was not being included.


12.7.5.k: 10-22
  1. Add 5.7: optimization of conditional expressions; see 4.4.4.j: CondExp .
  2. Add a phantom argument at the beginning of the operations sequence; 5.1.5.j: size_op_arg and 5.1.5.1: seq_property.cpp . (This helps with the optimization mentioned above.)
  3. Add the function 5.3.9: number_skip to measure how much optimization of the conditional expressions there is.


12.7.5.l: 10-16
Fix bug in 12.6.p: Tracing 4.4.7: atomic functions.

12.7.5.m: 10-15
The documentation for the class 8.22.m: vectorBool was improved.

12.7.5.n: 10-14
The script 2.2.1.1: get_adolc.sh was added (for downloading and installing ADOL-C (https://projects.coin-or.org/ADOL-C) ) in the build directory. Note that this local install of Adolc requires ColPack; see 2.2.2.5: get_colpack.sh . In addition, the requirement that ColPack and Adolc are installed with the same prefix was added.

12.7.5.o: 10-13
Make sure that all of the preprocessor symbols that are not part of the CppAD API are undefined when the <cppad/cppad.hpp> file concludes.

12.7.5.p: 10-12
  1. Change 2.2.3.1: get_eigen.sh so that it will reuse install information when it is present. In addition document reuse for 2.2.3.1.f: get_eigen.sh , 2.2.5.1.f: get_ipopt.sh , and 2.2.6.1.f: get_sacado.sh .
  2. Fix following g++ error on OSX system:
     
    error: no match for 'operator|=' (operand types are
    'std::vector<bool>::reference {aka std::_Bit_reference}' and 'bool')
        Check[i * n + j] |= F2[i * n + k] & r[ k * n + j];
                         ^
    

12.7.5.q: 09-20
  1. Add lines for 4.4.7.2: atomic_base function documentation to both the definition and use of each operation. This required adding sub-headings in the example usages corresponding to the function documentation sections. For example; see 4.4.7.2.4.l: atomic forward examples .
  2. Improve the documentation for 4.4.7.2.10: atomic_base_clear and remove its use from the 4.4.7.2.e: atomic_base examples (because it is not needed).


12.7.5.r: 09-19
Add links from the 4.4.7.2: atomic_base functions documentation to the corresponding examples. This required adding headings in the examples that correspond to the function documentation sections. For example; see 4.4.7.2.4.l: atomic forward examples .

12.7.5.s: 09-18
  1. A segmentation fault would occur if an 5: ADFun object used an 4.4.7: atomic function that had been deleted. This has been fixed so that when NDEBUG is not defined, an error message is generated.
  2. A mistake in the documentation for 8.22.n: Memory and Parallel Mode has been fixed. This corresponds to the change in the specifications for 8.22.j: CppAD::vector::resize made on 12.7.6.ai: 2012-07-30 .
  3. There was a bug during the 5.10: checking for nan during 5.4: reverse mode. This has been fixed.
  4. It appears, from inspecting the Ipopt source file Ipopt/src/Algorithm/IpIpoptAlg.cpp, that setting the option sb to yes suppresses the printing of the Ipopt banner. The Ipopt examples and tests have been changed to use this option (although it is not in the ipopt documentation).
  5. Fix a typo in the documentation for ipopt_solve 9.f.e: Integer options (Numeric was changed to Integer).


12.7.5.t: 09-07
There was a bug in the cumulative sum operator (which is used by 5.7: optimize ) for 5.3: Forward orders greater than zero. This was detected by the 4.4.7.1: checkpoint tests when optimize was used to make the checkpoint functions faster. The bug has been fixed and the checkpoint functions now use optimize (and hence should be faster).

12.7.5.u: 08-12
  1. The ability to turn on and off checking for 8.11: nan in 5.3: Forward mode results has been added; see 5.10: check_for_nan .
  2. Use this option to remove the need to handle nan as a special case when 4.4.7.1: checkpoint functions, used as 5.7.h: atomic functions within another function, are optimized.
  3. Check 5.4.3: reverse mode results when 5.10: check_for_nan is true. (It used to be the case that only 5.3.4: forward results were checked for nan.)


12.7.5.v: 08-11
If an 4.4.7: atomic function had arguments that did not affect the final dependent variables in f , 5.7: f.optimize() would fail. This has been fixed. In addition, documentation about using optimize with 5.7.h: atomic functions has been added.

12.7.5.w: 08-06
Fix a case where the test test_more/num_limits.cpp failed because
 
     double inf   = std::numeric_limits<double>::infinity();
     double check = std::complex<double>(inf) / std::complex<float>(1.);
can result in the imaginary part of check being -nan.

12.7.5.x: 07-26
Allow for use of const std::string& as a possible type for 4.4.7.2.1.c.c: name in the atomic_base constructor.

12.7.5.y: 05-28
Remove ok return flag from 4.4.7.1.o: checkpoint algo and 4.4.7.1.p: checkpoint atom_fun .

12.7.5.z: 05-21
  1. Deprecate the 12.8.11: old_atomic interface and replace it by the 4.4.7.2: atomic_base and 4.4.7.1: checkpoint interfaces.
  2. There was a problem with the 2.2: cmake command if the 2.2.m: cppad_cxx_flags was not specified. This has been fixed.


12.7.5.aa: 05-17
  1. Add the 5.5.2.f: transpose option to 5.5.2: ForSparseJac .
  2. Add the 5.5.6.f: transpose option to 5.5.6: RevSparseHes .


12.7.5.ab: 05-15
Change 5.5.4: RevSparseJac parameter names to be closer to the 5.5.2: ForSparseJac names so the difference is clearer.

12.7.5.ac: 05-14
  1. The 4.4.7.1: checkpoint class has been added. This is a much easier way to do checkpointing than the old checkpoint example. The old checkpointing example is now the 5.4.3.2: reverse_checkpoint.cpp example.
  2. Fix bug in 5.5.4: RevSparseJac for case when 5.5.4.e: q was not equal to m (range dimension) and sparsity pattern was a vector of bool.
  3. Add the 5.5.4.f: transpose option to 5.5.4: RevSparseJac .


12.7.5.ad: 05-12
The sparse hessian example in 12.8.11.1: old_reciprocal.cpp was not being run. This has been fixed.

12.7.5.ae: 05-11
The 12.8.11.t: old_atomic example names were all changed to begin with user.

12.7.5.af: 05-04
The option to compute 5.3.4.g.b: multiple orders was added. The 12.8.11.3: old_usead_2.cpp example shows the need for this. The problem is that a new atomic function interface needs to be designed with checkpointing as a particular application. Multiple order forward mode is the first step in this direction.

12.7.5.ag: 04-28
  1. The scripts 2.2.3.1: get_eigen.sh and 2.2.6.1: get_sacado.sh were added. If you are using Unix, and you do not have Eigen (http://eigen.tuxfamily.org) or Sacado (http://trilinos.sandia.gov/packages/sacado) installed on your system, you can use the corresponding script to download and install a local copy for use when testing CppAD.
  2. The code std::cout << X , would generate a compile error when X was an Eigen matrix with CppAD::AD<Base> elements. This has been fixed.


12.7.5.ah: 04-27
There was a problem using the output operator << with an 10.2.4: eigen matrix of AD elements. This has been fixed using a template partial specialization of
 
     template<typename Scalar, bool IsInteger>
     struct significant_decimals_default_impl
because the original template requires the definition of an implicit conversion from the scalar type to an int and this is dangerous for AD types (note that 4.3.2: Integer is used for explicit conversions).

12.7.5.ai: 04-26
  1. The example 12.8.11.3: old_usead_2.cpp was completed. This is a more realistic, but also more complicated, example of using AD to compute derivatives inside an atomic function.
  2. The script 2.2.4.1: get_fadbad.sh has been added. If you are using Unix, and you do not have FADBAD (http://www.fadbad.com) installed on your system, you can use this script to download and install a local copy for use when testing CppAD.

Input File: omh/appendix/whats_new/whats_new_13.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.7.6: CppAD Changes and Additions During 2012

12.7.6.a: Introduction
This section contains a list of the changes to CppAD during 2012 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions of CppAD.

12.7.6.b: 12-30
  1. Merge changes in branches/ipopt_solve to trunk, delete that branch, and advance version number to cppad-20121230.
  2. Remove cppad/configure.hpp from repository because it is built by the configuration process (even for MS Visual Studio, now that we are using 2.2: cmake ).
  3. Add the AD<Base> input stream operator 4.3.4: >> .


12.7.6.c: 12-29
In branches/ipopt_solve:
  1. Complete implementation of sparse Jacobian and Hessian calculations and add options that allow the user to choose between forward and reverse sparse Jacobians.
  2. The 9: ipopt_solve routine seems to be faster and simpler than 12.8.10: cppad_ipopt_nlp . More speed comparisons would be good to have.
  3. All of the 5.2: ADFun Drivers have added specifications for the zero order Taylor coefficients after the routine is called. For example, see 5.2.2.i: Hessian uses forward .


12.7.6.d: 12-28
In branches/ipopt_solve:
  1. Add the 9.2: ipopt_solve_retape.cpp and 9.3: ipopt_solve_ode_inverse.cpp examples.
  2. Use ipopt::solve 9.f: options argument (and not a file) for all the Ipopt options. As well as allowing for adding ipopt::solve special options; e.g., 9.f.a: retape .


12.7.6.e: 12-27
In branches/ipopt_solve: Change documentation section names that begin with cppad_ipopt to begin with 12.8.10: ipopt_nlp to distinguish them from 9: CppAD::ipopt::solve .

12.7.6.f: 12-26
In branches/ipopt_solve:
  1. Convert almost all documentation references from the deprecated 12.8.13: autotools instructions to the new 2.2: cmake instructions.
  2. Include the 3: Introduction programs in the 2.3: cmake_check built using 2.2: cmake .
  3. Deprecate 12.8.10: cppad_ipopt_nlp and replace it by 9: ipopt_solve which is easier to use. This is a first version of ipopt_solve and its speed and memory use needs to be improved.


12.7.6.g: 12-23
Copy development trunk to branches/ipopt_solve.

12.7.6.h: 12-22
Define a doxygen module (group) for each file that has doxygen documentation.

12.7.6.i: 12-20
  1. The 2.a: install instructions were installing cppad/CMakeLists.txt and cppad/configure.hpp.in in the cppad include directory. This has been fixed so that only *.h and *.hpp files get installed in the cppad include directory.
  2. Advance the version number to cppad-20121220.


12.7.6.j: 12-19
The files <stdbool.h> and <sys/time.h> do not exist for all C compilers, and this caused a problem when using the Windows compiler. This has been fixed by defining the type bool inside the compare_c/det_by_minor.c source code.

12.7.6.k: 12-17
There was a mistake in a check for a valid op code in the file hash_code.hpp. This mistake could generate a C++ assertion with an unknown error source. It has been fixed.

12.7.6.l: 12-15
  1. Advance version number from 20121120 to 20121215. Note that the CppAD version number no longer automatically advances with the date and is rather chosen to advance to the current date.
  2. The 2.2: cmake installation was putting the cppad.pc 2.4: pkgconfig file in
         cppad_prefix/cmake_install_datadir/cppad.pc
     This has been fixed and is now
         cppad_prefix/cmake_install_datadir/pkgconfig/cppad.pc
  3. The 2.4: pkgconfig documentation has been improved.
  4. The command for running the 2.2.1.c: adolc examples and 2.2.3.c: eigen examples was fixed (changed from make check to make check_example).


12.7.6.m: 12-14
  1. Fix the old 12.8.13: autotools so that it works with the new cppad.pc.
  2. Fix the old installation 12.8.13.h: --with-Documentation option (it was attempting to copy from the wrong directory).


12.7.6.n: 12-13
  1. Include documentation for 2.2.5: ipopt_prefix .
  2. Fix the cppad.pc 2.4: pkgconfig file so that it includes the necessary libraries and include commands when 2.2.5: ipopt_prefix is specified; see 2.4.b: pkgconfig usage .


12.7.6.o: 11-28
Update the 12.6: wish_list :
  1. Remove Microsoft compiler warning item that has been fixed.
  2. Remove faster sparse set operations item that was completed using cppad_sparse_list (not part of user API).
  3. Remove 2.2: cmake items that have been completed.
  4. Remove 4.4.4: CondExp items related to using AD< std::complex<double> > types because it is better to use std::complex< AD<double> >.
  5. Remove 8.23: thread_alloc memory chunk item that has been completed.
  6. Remove 4.6: VecAD item about slicing from floating point type to int (not important).
  7. Change an Ipopt item to a 12.8.10: cppad_ipopt_nlp (which was removed because cppad_ipopt_nlp is now deprecated). Add new cppad_ipopt_sum item to the wish list. (This has been removed because 4.4.7.1: checkpointing can now be used for this purpose.)
  8. Add new old_atomic 12.6: wish_list item (since removed).


12.7.6.p: 11-21
  1. Fix the 2.1.c: version number in link to the current download files.
  2. Change the subversion download instructions to use the export instead of checkout command. This avoids downloading the source code control files.


12.7.6.q: 11-20
  1. The cmake variables cmake_install_includedir and cmake_install_libdir were changed to 2.2.h: cmake_install_includedirs and 2.2.i: cmake_install_libdirs to signify the fact that they can now be a list of directories.
  2. Advance version number to cppad-20121120.


12.7.6.r: 11-17
  1. Finish documenting the new 2.2: cmake configuration instructions and deprecate the old 12.8.13: unix instructions.
  2. Change the specifications for 7.b: CPPAD_MAX_NUM_THREADS to allow for a value of one. This enables one to have more tapes during a program execution.
  3. Include the 12.9: C versus C++ speed comparison in the 2.2: cmake build.


12.7.6.s: 11-16
Fix a warning that occurred in 8.18: Rosen34 when it was compiled with the preprocessor symbol NDEBUG defined.

12.7.6.t: 11-14
Advanced the CppAD version to cppad-20121114.
  1. Started documenting the 2.2: cmake configuration procedure during installation. This included factoring out the 2.1: download procedure as a separate section so that the same download instruction also apply to the 12.8.13: unix install procedure.
  2. Changed 5.3.7.1: example/compare_change.cpp to just return true when NDEBUG is defined. This enabled all the tests in the example directory to be compiled with NDEBUG defined and to pass.
  3. In the case where NDEBUG is defined, removed detection of nan during forward mode from test_more/forward.cpp. This enables all the tests in the test_more directory to be compiled with NDEBUG defined and to pass.
  4. Started a wish list for CppAD's use of 2.2: cmake . The wish list items were completed and removed.


12.7.6.u: 11-09
The 7.2.11.3: team_pthread.cpp was failing to link on Ubuntu 12.04 because the libraries were not at the end of the link line. This has been fixed.

12.7.6.v: 11-06
  1. Remove some remaining references to the old licenses CPL-1.0 and GPL-2.0; see 12.7.6.aa: 10-24 .
  2. Remove out of date Microsoft project files from the distribution. The build system is being converted to use cmake (http://www.cmake.org) which builds these files automatically and thereby keeps them up to date. This feature is not yet documented, but one can inspect the file bin/run_cmake.sh to see how to use cmake with CppAD.


12.7.6.w: 11-04
The example base_alloc 4.7.9.1.g: CondExpOp function was missing its return value. This has been fixed and the comments for this example have been improved.

12.7.6.x: 10-31
The CppAD 12.8.13.f: profiling was not compiling the speed/src/*.cpp files with the profiling flag. This has been changed (only for the profiling speed test).

12.7.6.y: 10-30
The 12.8.13.q: fadbad_dir directory install instructions were changed. To be specific, FADBAD++ was changed to include/FADBAD++. This makes it more like the other optional packages.

12.7.6.z: 10-25
The test 8.17.1: runge45_1.cpp was failing when using gcc-4.5.2. This has been fixed by properly defining fabs(x) where x is a double (without the std in front).

12.7.6.aa: 10-24
Change the CppAD licenses from CPL-1.0 and GPL-2.0 to EPL-1.0 and GPL-3.0.

12.7.6.ab: 10-12
  1. Change all the multiple levels of AD examples to start with 10.2.10: mul_level . To be specific, move ode_taylor.cpp to 10.2.12: mul_level_ode.cpp and ode_taylor_adolc.cpp to 10.2.13: mul_level_adolc_ode.cpp .
  2. Add 10.2.14: ode_taylor.cpp as an example of Taylor's method for solving ODEs. (10.2.12: mul_level_ode.cpp is an application of this method to multi-level AD.)


12.7.6.ac: 10-04
  1. Change 11.1: speed_main so that it outputs small rates (less than 1000) with two decimal places of precision (instead of as integers). In addition, flush the result for each size when it finishes, to give the user more feedback about how things are progressing.
  2. Add the optional 8.5.g: test_size argument to the time_test routine.


12.7.6.ad: 10-03
Change the hold_memory speed test option to just 11.1.f.b: memory . In addition, in the speed test output, include all of the options that are present in the output variable name; see 11.1.i: speed results .

12.7.6.ae: 10-02
Fix another problem with Debian's /bin/sh shell executing example/multi_thread/test.sh; see 12.7.6.bt: 03-17 .

12.7.6.af: 09-24
Improve documentation for the 12.8.11: old_atomic 12.8.11.r: rev_hes_sparse argument 12.8.11.r.g: v . In addition, add sparsity calculations to the 12.8.11.1: old_reciprocal.cpp example.

12.7.6.ag: 09-11
Add user_simple.cpp, a simpler 12.8.11: old_atomic example.

12.7.6.ah: 08-05
  1. A new type was added for the internal representation of 12.4.j.c: vector of sets sparsity patterns; see the configure --with-sparse_option (since removed).
  2. A new speed test, 12.9: compare_c , compares the speed of the same source code compiled with C and C++.


12.7.6.ai: 07-30
  1. The 8.22.k: clear function was added to CppAD::vector.
  2. Warning !!: The CppAD::vector 8.22.j: resize specifications were changed so that x.resize(0) no longer frees the corresponding memory (use x.clear() instead); see the sketch after this list.
  3. Fix a bug in error checking during 5.7: optimize procedure had the following valgrind symptom during the optimize.cpp example:
     
         ==6344== Conditional jump or move depends on uninitialised value(s)
    
  4. Fix mistake in 12.8.11.4: old_tan.cpp where w[2] = 0 was missing before the call
     
              dw    = F.Reverse(1, w);
    
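
The following is a minimal sketch of the resize versus clear distinction in item 2 above; it is our own example, and the capacity values asserted are what the new specifications imply:

     # include <cppad/cppad.hpp>
     # include <cassert>

     int main(void)
     {   CppAD::vector<double> x(10);
         x.resize(0);  // size becomes zero, memory stays attached to x
         assert( x.size() == 0 && x.capacity() >= 10 );
         x.clear();    // size and capacity both become zero
         assert( x.size() == 0 && x.capacity() == 0 );
         return 0;
     }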

12.7.6.aj: 07-08
  1. Improve the documentation for 4.4.3.2: pow and 8.12: pow_int .
  2. Change all the example section names to be the same as the corresponding file names; e.g., change vectorBool.cpp to 8.22.2: vector_bool.cpp for the example example/utility/vector_bool.cpp.


12.7.6.ak: 07-07
Add the CPPAD_TAPE_ID_TYPE argument to the 12.8.13.d: configure command line.

12.7.6.al: 07-05
Deprecate 12.8.9: CPPAD_TEST_VECTOR and use 10.5: CPPAD_TESTVECTOR in its place. This fixes a problem introduced by changes on 07-03 whereby code that used CPPAD_TEST_VECTOR would no longer work.
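
As a minimal sketch (our own, not from the manual), the replacement macro is used like a template class name:

     # include <cppad/cppad.hpp>

     int main(void)
     {   // CPPAD_TESTVECTOR expands to the test vector template class
         // selected during the install
         CPPAD_TESTVECTOR(double) x(2);
         x[0] = 1.0;
         x[1] = 2.0;
         return 0;
     }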

12.7.6.am: 07-04
  1. Replace the requirement that the 8.9: SimpleVector 8.9.h: size function return a size_t value with the requirement that it can be converted to a size_t value.
  2. The 12.8.13.d: --with-eigenvector option was added to the configure command line.


12.7.6.an: 07-03
Fix bug in 12.8.11: old_atomic functions identification of variables that caused 12.8.11.4: old_tan.cpp to fail with error message
 
Error detected by false result for
    y_taddr > 0
at line 262 in the file cppad/local/dependent.hpp

12.7.6.ao: 07-02
Add eigen_plugin.hpp so that an Eigen vector can be used as a 8.9: SimpleVector . This has since been removed; see 12.7.1.ak: 2017-05-11 .

12.7.6.ap: 07-01
  1. Change 10.2.4: cppad_eigen.hpp to match new specifications and example in eigen help files on customizing and extending eigen. (http://eigen.tuxfamily.org/dox/TopicCustomizingEigen.html)
  2. Fix bug whereby a newly constructed 4.6: VecAD object was a 4.5.4: variable (instead of a parameter) directly after construction (when no previous 5.1.2: ADFun object had been created).
  3. Change ok != a == 0. to ok &= a == 0. in the example 4.1.1: ad_ctor.cpp .
  4. Add the 10.2.4.2: eigen_array.cpp example.


12.7.6.aq: 06-17
  1. Move 12.8.8: epsilon to 4.4.6: numeric_limits and add the functions min and max to CppAD::numeric_limits<Type> ; a sketch follows this list.
  2. Convert use of the deprecated 12.8.8: epsilon in examples to use of numeric_limits 4.4.6.e: epsilon .
  3. Complete 10.2.4: cppad_eigen.hpp interface to lowest and highest functions for all non-complex AD types.
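
Here is a minimal sketch (our own example) of the CppAD::numeric_limits interface referred to in the list above:

     # include <cppad/cppad.hpp>
     # include <iostream>

     int main(void)
     {   typedef CppAD::AD<double> ad_double;
         // machine epsilon, smallest and largest positive finite values
         ad_double eps = CppAD::numeric_limits<ad_double>::epsilon();
         ad_double min = CppAD::numeric_limits<ad_double>::min();
         ad_double max = CppAD::numeric_limits<ad_double>::max();
         std::cout << "eps = " << eps << ", min = " << min
                   << ", max = " << max << "\n";
         return 0;
     }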


12.7.6.ar: 06-16
Add the example 10.2.4.3: eigen_det.cpp that uses the Eigen (http://eigen.tuxfamily.org) linear algebra package.

12.7.6.as: 06-15
Include the 4.7.9.3: base_adolc.hpp as <cppad/example/base_adolc.hpp> under the 12.8.13.g: prefix_dir directory.

12.7.6.at: 06-12
Increase the size of the 11.1.7: sparse Jacobian speed tests .

12.7.6.au: 06-10
  1. Add the 11.1.f.b: hold_memory option to the speed test main program. This was changed to just memory; see 12.7.6.ad: 10-03 .
  2. In 11.5.7: cppad_sparse_jacobian.cpp , change USE_BOOL_SPARSITY from true to false. In addition, change the number of non-zeros per row from approximately three to approximately ten.


12.7.6.av: 06-09
Change 11.4.7: adolc_sparse_jacobian.cpp to use the sparse adolc Jacobian (instead of the full Jacobian) driver. This was also done for 11.4.6: adolc_sparse_hessian.cpp , but there is a problem with the test that is being investigated.

12.7.6.aw: 06-08
Implement the matrix multiply speed test 11.1.3: link_mat_mul for all packages (there is a problem with the 11.6.3: fadbad_mat_mul.cpp implementation and it is being looked into).

12.7.6.ax: 06-07
Make all the speed tests implementations (for the specific packages) uniform by having a Specification and Implementation heading and similar indexing. For example, see 11.4.1: adolc_det_minor.cpp , 11.5.1: cppad_det_minor.cpp , 11.3.1: double_det_minor.cpp , 11.6.1: fadbad_det_minor.cpp , and 11.7.1: sacado_det_minor.cpp .

12.7.6.ay: 06-05
Add the 11.7.4: sacado_ode.cpp speed test.

12.7.6.az: 06-04
  1. The specifications for 8.17: Runge45 were changed so that it uses the fabs function instead of the < operation. This enables a more precise statement about its 8.17.c: operation sequence .
  2. The fabs function was added to the CppAD standard math library (see 4.4.2.14: abs ) and the 4.7.5: base type requirements . This enables one to write code that works with AD<double> as well as double without having to define abs for double arguments (and similarly for float); see the sketch after this list.
  3. Add the 11.4.4: adolc_ode.cpp and 11.6.4: fadbad_ode.cpp speed tests (and edit the 11.5.4: cppad_ode.cpp test).
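
The following sketch (our own, under the assumptions in item 2) shows the kind of template code this change enables:

     # include <cppad/cppad.hpp>
     # include <cmath>

     // works for Float equal to double and to CppAD::AD<double>
     template <class Float>
     Float distance(const Float& x, const Float& y)
     {   using std::fabs;     // used when Float is double
         return fabs(x - y);  // CppAD's fabs is found for AD types
     }

     int main(void)
     {   double            d = distance(3.0, 5.0);
         CppAD::AD<double> a = distance(
             CppAD::AD<double>(3.0), CppAD::AD<double>(5.0)
         );
         return (d == 2.0 && Value(a) == 2.0) ? 0 : 1;
     }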


12.7.6.ba: 06-03
  1. The CppAD::vector class was extended to allow assignment with the target of size zero and the source of non-zero size; see 8.22.e.a: check size .
  2. A memory leak and a bug in cppad_mat_mul.cpp were fixed (the bug was related to the change to CppAD::vector above).


12.7.6.bb: 06-02
  1. Remove the deprecated symbol 12.8.9.a: CppADvector from the 11.2.1: det_by_lu speed test source code 11.2.1.2: det_by_lu.hpp .
  2. Include 12.8.7: memory_leak in the list of 12.8: deprecated features.
  3. Change the 11.2.7: ode_evaluate speed test utility so that its 12.4.g.b: operation sequence does not depend on the repetition; see 11.2.7.f.a: p == 0 in its documentation.
  4. Use the same argument for taping and derivative evaluation when the retape speed test option is true.
  5. Implement the retape == false option in 11.5.4: cppad_ode.cpp .
  6. Have 11.5.2: cppad_det_lu.cpp , 11.5.1: cppad_det_minor.cpp , and 11.5.5: cppad_poly.cpp , return false when one of the specified options is not supported. Do the same for package_test.cpp for package equal to adolc, fadbad, and sacado and for test equal to det_lu, det_minor, poly.


12.7.6.bc: 06-01
Change 11.5.6: cppad_sparse_hessian.cpp and 11.5.7: cppad_sparse_jacobian.cpp to use the row , col interface to 5.6.4: sparse_hessian . In addition, implement the speed test retape speed test option for these tests.

12.7.6.bd: 05-31
Add the cppad_print_optimize routine so that the corresponding code does not need to be reproduced for all the 11.5: speed_cppad tests. In addition, during CppAD speed tests, print out the optimization results for each test size.

12.7.6.be: 05-30
Change specifications for 11.1.6: link_sparse_hessian so that the row and column indices are inputs (instead of being chosen randomly by the test for each repetition). This enables use of the retape speed test option during sparse Hessian speed tests.

12.7.6.bf: 05-29
Add 8.24: index_sort to the general purpose template 8: utilities so that it can be used by the implementations of 11.1.7: link_sparse_jacobian and 11.1.6: link_sparse_hessian .

12.7.6.bg: 05-27
Split the sparse Jacobian and Hessian test functions into the separate functions 11.2.8: sparse_jac_fun and 11.2.9: sparse_hes_fun (do not use the sparse Hessian function for both). In addition, change the row and column indices from i and j to row and col .

12.7.6.bh: 05-24
Merged in changes from branches/sparse:
  1. A new interface was added to 5.6.2: sparse_jacobian and 5.6.4: sparse_hessian . This interface returns a sparse representation of the corresponding matrices using row and column index vectors.
  2. The examples 5.6.2.1: sparse_jacobian.cpp and 5.6.4.1: sparse_hessian.cpp were improved and extended to include the new interface.
  3. The definition of an 12.4.a: AD function was improved to include definition of the corresponding n and m .


12.7.6.bi: 04-19
The 12.8.13.o: BOOST_DIR configure command line value has been changed to be the corresponding prefix during the installation of boost. To be specific, it used to be that boost_dir/boost was the boost include directory; now boost_dir/include is the boost include directory. This makes it the same as the other directory arguments on the configure command line. In addition, it fixes some bugs in the detection of the boost multi-threading library.

12.7.6.bj: 04-18
Add documentation and testing for not using 8.23.14: free_all and 12.8.11.s: old_atomic clear while in 8.23.4: parallel mode.

12.7.6.bk: 04-17
Fix bug when using 12.8.11: old_atomic functions with 7: multi_threading .

12.7.6.bl: 04-10
Add control of the 12.8.13.j: max_num_threads argument to the unix 12.8.13.d: configure command.

12.7.6.bm: 04-06
  1. A change was made to the way that the tapes were managed to reduce false sharing during 7: multi-threading . Because of this change, it is now suggested that the user call 7.1: parallel_ad after the multi-threading section of the program.
  2. The routine 8.23.14: ta_free_all was created to make it easier to manage memory and the routine 12.8.7: memory_leak was deprecated.
  3. Add the -lteuchos flag to the link line for the 11.7: speed_sacado tests. (This was not necessary for trilinos-10.8.3 but is necessary for trilinos-10.10.1)


12.7.6.bn: 04-05
The restriction was added that 7.1: parallel_ad cannot be called while a tape is being recorded. This was necessary in order to initialize some new statics in the tape.

12.7.6.bo: 04-01
There was a race condition when using CppAD with 7: multi-threading . This has been fixed and the error message below no longer occurs. Suppose that you ran the CppAD 12.8.13.d: configure command in the work directory. If you then edited the file work/multi_thread/makefile and changed
 
     # AM_CXXFLAGS     = -g $(CXX_FLAGS)
     AM_CXXFLAGS = -DNDEBUG -O2 $(CXX_FLAGS)
to
 
     AM_CXXFLAGS     = -g $(CXX_FLAGS)
     # AM_CXXFLAGS = -DNDEBUG -O2 $(CXX_FLAGS)
and then executed the commands
 
     make clean
     make pthread_test
     valgrind --tool=helgrind ./pthread_test simple_ad
The following error message would result:
     ... snip ...

==7041== Possible data race during write of size 4 at 0x8077460 by thread #1
==7041==    at 0x804FE23: CppAD::AD<double>::tape_new() (tape_link.hpp:221)
     ... snip ...

12.7.6.bp: 03-27
Reduce the amount of memory allocation and copying of information during a 5.1.3: Dependent operation or an ADFun 5.1.2.g: sequence constructor .

12.7.6.bq: 03-26
Calling taylor_capacity with capacity equal to zero was not 5.3.8.d.b: freeing memory . This has been fixed.

12.7.6.br: 03-23
  1. Improve the multi-threading examples 7.2.4: simple_ad_openmp.cpp , 7.2.5: simple_ad_bthread.cpp , and 7.2.6: simple_ad_pthread.cpp . This includes separating generic code that can be used for all applications from problem specific code.
  2. Add initialization of statics in 7.1.d: CheckSimpleVector during parallel_ad call. These statics are required to use 8.22: CppAD::vector .
  3. Add a debugging check to make sure 8.10: CheckSimpleVector is initialized in sequential mode.


12.7.6.bs: 03-21
Fix an incorrect error check in thread_alloc that did not allow 8.23.7: ta_return_memory to return memory in sequential execution mode that was allocated by a different thread during parallel execution.

12.7.6.bt: 03-17
Debian recently converted the default shell corresponding to /bin/sh to dash (which caused example/multi_thread/test.sh to fail). This has been fixed. In general, Debian's policy is that /bin/sh will be a Posix Shell (http://pubs.opengroup.org/onlinepubs/009695399/utilities/xcu_chap02.html) .

12.7.6.bu: 03-11
There was a bug in 8.23: thread_alloc where extra memory was held onto even if 8.23.9: hold_memory was never called and only one thread was used by the program. This caused
valgrind --leak-check=full --show-reachable=yes
to generate an error message. If 7: multiple threads are used, one should free this 8.23.8.b.a: extra memory for threads other than thread zero. If hold_memory is used, one should call 8.23.8: free_available for all threads.
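
A minimal sketch of the suggested cleanup (the placement inside main is our assumption):

     # include <cppad/cppad.hpp>

     int main(void)
     {   using CppAD::thread_alloc;
         // ... calculations that allocate through thread_alloc ...

         // return the memory being held for quick re-allocation by this
         // thread (thread zero in a single threaded program)
         thread_alloc::free_available( thread_alloc::thread_num() );
         return 0;
     }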

12.7.6.bv: 03-03
  1. Add the examples 7.2.4: simple_ad_openmp.cpp , 7.2.5: simple_ad_bthread.cpp and 7.2.6: simple_ad_pthread.cpp .
  2. Fix bug in finding boost multi-threading library (due to fact that 12.8.13.o: boost_dir is not the prefix during the boost installation).


12.7.6.bw: 03-02
  1. Change the name simple_ad.cpp to 7.2.7: team_example.cpp .
  2. The multi-threading team_example.cpp example was changed to use @(@ f(x) = \sqrt{ x^2 } @)@ instead of the function @(@ {\rm atan2} [ \sin(x) , \cos (x) ] @)@ (both functions should behave like the identity function @(@ f(x) = x @)@). This enabled the removal of example/multi_thread/arc_tan.cpp.
  3. In 7.2.7: team_example.cpp check that all of the threads pass their individual test; i.e. work_all_[thread_num].ok is true for all thread_num .


12.7.6.bx: 02-11
  1. The requirements in 4.7.1: base_member were missing from the 4.7: base_require documentation. In addition, the 4.7.9.2: base_require.cpp example has been added.
  2. The specifications for 12.8.7: memory_leak were changed so that the calling routine specifies the amount of static memory to add. In addition, it is now possible to call memory_leak when 8.23.3: num_threads is greater than one (though one still cannot be in parallel mode).

12.7.6.by: 02-10
  1. Add the missing Base class requirements in the entire 4.7.1: base_member section and under the 4.7.g: Output Operator in the 4.7: base_require section.
  2. Add the 4.7.9.1: base_alloc.hpp example.


12.7.6.bz: 02-09
  1. Add set_static to 12.8.7: memory_leak . This is necessary for testing base types that allocate memory for each element.
  2. Fix memory allocation bug in cppad/local/pod_vector.hpp when each element of the 4.7: Base type allocated memory.


12.7.6.ca: 01-30
Make another attempt to fix linking with boost threads where the wrong version of the library is in the system include directory; i.e., to have 12.8.13.o: boost_dir override the default library.

12.7.6.cb: 01-27
There were some problems with 12.8.13.d: configure's automatic detection of the boost multi-threading library. These have been fixed.

12.7.6.cc: 01-24
It used to be that 8.23: thread_alloc did not hold onto memory when num_threads was one in the previous call to 8.23.2: parallel_setup . Holding onto memory is now controlled by the separate routine 8.23.9: hold_memory . This gives the user more control over the memory allocator and the ability to obtain a speed up even when there is only one thread. To convert old code to the new interface, after each call to
     thread_alloc::parallel_setup(num_threads, in_parallel, thread_num);
put the following call
     thread_alloc::hold_memory(num_threads > 1);

12.7.6.cd: 01-23
Change variable notation and use 5.7: optimize in 10.2.10.1: mul_level.cpp .

12.7.6.ce: 01-20
  1. Add the example 10.2.10.2: change_param.cpp which shows how to compute derivatives of functions that have parameters that change, but derivatives are not computed with respect to these parameters.
  2. The documentation for machine 12.8.8: epsilon has been improved. (The fact that it can be used for Base types was missing.)


12.7.6.cf: 01-19
  1. In cases where test.sh is trivial, put its operations in the corresponding makefile.
  2. Fix problem compiling cppad/speed/sparse_evaluate.hpp under gcc on Fedora 17.
  3. Run example/multi_thread/test.sh from source directory (no need to copy to build directory).


12.7.6.cg: 01-16
The test program example/multi_thread/test.sh failed if the 12.8.13.l: openmp_flags were not present in the configure command. This has been fixed. In addition, this test.sh has been made faster by cycling through the available threading systems instead of doing every system for every test.

12.7.6.ch: 01-15
Fix make test so it works when 12.8.13.d: configure is run in the distribution directory cppad-yyyymmdd (not just when it is run in a different directory).

12.7.6.ci: 01-12
The -lpthread library was missing from the 7: multi_thread test program linker command. This has been fixed.

12.7.6.cj: 01-07
  1. A duplicated code block beginning with
     
    if( fabs( fcur ) <= epsilon_ )
    
    was removed from the routine multi_newton_worker.
  2. The distance between solutions that are joined to one solution has been corrected from @(@ (b - a) / (2 n ) @)@ to @(@ (b - a) / n @)@; see 7.2.10.5.f: xout . The correction was in the file 7.2.10: multi_newton.cpp where sub_length_ / 2 was changed to sub_length_.


12.7.6.ck: 01-02
  1. The 8.23: thread_alloc memory allocator was changed to avoid certain false sharing situations (cases where two different threads were changing and using memory that is on the same page of cache). On one test machine, the execution time for the 32 thread case for the test
     
    ./openmp_test multi_newton 1 32 1000 4800 10 true
    
    improved from 0.0302 seconds to 0.0135 seconds.
  2. There was a problem with the correctness test section of the 7.2.10.6: multi_newton_time test. The convergence criteria, and correctness criteria, needed to be scaled by the largest argument values. This was a problem when over a hundred zeros were included in the test (and the largest argument value was @(@ 100 \pi @)@ or more).
  3. There was a problem with the way that 7.2.10.4: multi_newton_takedown joined two solutions into one. It is possible that one of the solutions that needs to be joined is on the boundary and very close to a solution in the next (or previous) interval that is not on the boundary. In this case, the one with the smaller function value is chosen.
Input File: omh/appendix/whats_new/whats_new_12.omh
12.7.7: Changes and Additions to CppAD During 2011

12.7.7.a: Introduction
This section contains a list of the changes to CppAD during 2011 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions of CppAD.

12.7.7.b: 12-30
  1. There was a bug when using 4.4.2.14: abs with an AD< AD<double> > argument, whereby the corresponding AD<double> operation sequence depended on the value of the argument to the abs function.
  2. Change the specifications for the derivative of the 4.4.2.14: abs function to be the 4.4.2.21: sign function instead of a directional derivative.
  3. Add the 4.4.2.21: sign function to the AD<Base> list of available functions. In addition, add the 4.7.5.e: sign function to the list of 4.7: base type requirements .
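
A minimal sketch (our own example) of the sign function in item 3:

     # include <cppad/cppad.hpp>
     # include <cassert>

     int main(void)
     {   using CppAD::AD;
         AD<double> x = -2.0;
         // sign(x) is -1, 0, or +1 for x negative, zero, or positive
         AD<double> s = sign(x);
         assert( Value(s) == -1.0 );
         return 0;
     }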


12.7.7.c: 12-28
The file 8.5.d: time_test.hpp was not being included by cppad/cppad.hpp. This has been fixed.

12.7.7.d: 12-21
The types SizeVector, NumberVector, ADNumber, and ADVector, were in the global namespace and this was causing warnings about the shadowing of these declarations. The 12.8.10.d: cppad_ipopt namespace was created to avoid these problems. The simplest way to make old 12.8.10: cppad_ipopt_nlp code work with this change is to use the command
 
     using namespace cppad_ipopt;

12.7.7.e: 12-20
  1. Change team_start to 7.2.11.d: team_create and team_stop to 7.2.11.f: team_destroy .
  2. Change NDEBUG mentions to include link to 12.1.j.a: NDEBUG .
  3. Improve 12.8.7: memory_leak documentation.


12.7.7.f: 11-29
The 8.5: time_test routine was still executing the test at least twice, even if that was not necessary for the specified minimum time. This has been fixed.

12.7.7.g: 11-27
Move multi_thread.cpp to 7.2: thread_test.cpp and fix its 7.2.e: running instructions.

12.7.7.h: 11-24
Create 6: preprocessor section with pointers to all the preprocessor symbols that are in the CppAD API.

12.7.7.i: 11-21
Separate the 12.8.13.i: --with-boostvector option from 12.8.13.o: boost_dir . This enables one to specify boost_dir for 7.2.11.2: team_bthread.cpp without using boost vectors.

12.7.7.j: 11-20
  1. Move sum_i_inv.cpp to 7.2.8: harmonic.cpp .
  2. Include the date, time, CppAD version, and 7.2.11.g: team_name in the 7.2: thread_test.cpp output.


12.7.7.k: 11-18
  1. The 7.2: thread_test.cpp program was truncating test_time to the nearest integer. This has been fixed.
  2. The 8.5: time_test routine has been made more efficient and now checks for the case where only one execution of the test is necessary to achieve the desired test_time (it used to always run at least two).
  3. The sum_i_inv_time.cpp and 7.2.10: multi_newton.cpp routines were calling the test an extra time at the end to check for correctness. The results of the last test are now cached and used for the correctness test so that an extra call is not necessary (to make the tests run faster when only a few repetitions are necessary).


12.7.7.l: 11-17
  1. Create another speed testing routine 8.5: time_test which is like 8.3: speed_test , except that it returns the time as a double instead of the rate as a size_t. Then use it for the timing tests in sum_i_inv_time.cpp and 7.2.10.6: multi_newton_time ; see the sketch after this list.
  2. Add test_time as a command line argument to the multi-threading sum_i_inv and 7.2.j: multi_newton timing tests.
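
A minimal sketch of the new time_test interface in item 1 (the work done by the test routine is hypothetical):

     # include <cppad/cppad.hpp>
     # include <iostream>

     namespace { // a routine that executes repeat units of work
         double sum = 0.0;
         void test(size_t repeat)
         {   for(size_t i = 0; i < repeat; ++i)
                 sum += 1.0 / double(i + 1);
         }
     }
     int main(void)
     {   double time_min = 1.0; // minimum total testing time in seconds
         // returns the average seconds per repetition, as a double
         double seconds = CppAD::time_test(test, time_min);
         std::cout << "seconds = " << seconds << "\n";
         return 0;
     }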


12.7.7.m: 11-09
Change thread_team.hpp to 7.2.11: team_thread.hpp and do the same for all other names that ended in _team; e.g., 7.2.11.1: team_openmp.cpp .

12.7.7.n: 11-07
The user's choice for 12.8.9: test_vector was not actually changing the tests that the user ran. This has been fixed.

12.7.7.o: 11-06
Make all the output generated by 7.2.10: multi_newton.cpp valid matlab and octave input so it is easy to plot the results.

12.7.7.p: 11-04
Use thread specific data to simplify 7.2.11.1: team_openmp.cpp .

12.7.7.q: 11-01
Make 7.2.11.2: team_bthread.cpp more similar to 7.2.11.3: team_pthread.cpp .

12.7.7.r: 10-30
  1. Reorganize and improve the 7: multi_thread section and its subsections.
  2. There was a bug in 7.2.10: multi_newton.cpp that only showed up when the number of threads was greater than or equal to four. This has been fixed. In addition, 7.b: CPPAD_MAX_NUM_THREADS was increased from 2 to 4 (to enable testing for this bug).
  3. The accuracy check for the sum_i_inv.cpp results was failing when mega_sum was large. This check has been modified to include a correction for mega_sum .


12.7.7.s: 10-29
The following changes were merged in from branches/thread:
  1. Move openmp to example/multi_thread/openmp and create example/multi_thread/bthread and example/multi_thread/pthread with similar tests.
  2. Put multi-threading common code in multi_thread directory and threading system specific code in example/multi_thread/threading for threading equal to openmp, bthread, and pthread.
  3. Update the README file.
  4. Remove the bug/optimize.sh file (no longer a bug).
  5. Make arc_tan.cpp utility that can be used by multiple multi-threading tests.
  6. Create 7.2.11: team_thread.hpp specifications, move OpenMP management to 7.2.11.1: team_openmp.cpp , Boost thread management to 7.2.11.2: team_bthread.cpp , and pthread management to 7.2.11.3: team_pthread.cpp .
  7. All of the make files were modified so that the command
     
         make test
    
    would run the tests for the current directory.
  8. Extend the multi-threading speed tests sum_i_inv.cpp and 7.2.10: multi_newton.cpp so they run using Boost threads and pthreads (as well as OpenMP threads).


12.7.7.t: 10-14
Fix some compiler warnings about shadowed variables that were detected by g++ version 4.6.1 20110908.

12.7.7.u: 10-12
  1. The MAC version of the pthread library does not include the pthread_barrier_wait function; i.e., it is not compliant with the IEEE Std 1003.1, 2004 Edition for pthread. This caused pthread_simple_ad.cpp to fail to compile on the MAC. This has been fixed by not compiling the pthread examples unless pthread_barrier_wait is present.
  2. The 12.8.10: cppad_ipopt_nlp routine has been changed to 5.7: optimize the functions @(@ r_k (u) @)@ such that retape(k) is false.


12.7.7.v: 09-06
  1. Add the boost multi-threading (http://www.boost.org/doc/libs/1_47_0/doc/html/thread.html) examples 7.2.2: a11c_bthread.cpp and bthread_simple_ad.cpp.
  2. Improve documentation for 8.23.2.f: thread_num argument to parallel_setup.
  3. More simplification of bthread_simple_ad.cpp example.


12.7.7.w: 09-05
Simplify and fix some problems with pthread_simple_ad.cpp, including avoiding a 7.2.11.3.a: Bug in Cygwin .

12.7.7.x: 09-02
  1. The OpenMP speed test program openmp/run.cpp was not setting the number of threads for the one thread case (so dynamic thread adjustment was used). This has been fixed.
  2. The 8.23.1: thread_alloc.cpp example was missing from the Microsoft example/example.vcproj file and an attempt was made to link to missing OpenMP routines (this has been fixed). In addition, some Microsoft compiler warnings have been fixed; see the examples and tests in the Windows install instructions.
  3. There was an oversight, and CPPAD_MAX_NUM_THREAD was being set to 2 when _OPENMP was not defined. This has been fixed and 7.b: CPPAD_MAX_NUM_THREADS has been documented and is now part of the CppAD API.
  4. The pthread_simple_ad.cpp test failed under cygwin. This was because the previous test, openmp_ad.cpp, had set up calls to OpenMP routines that were still in effect when pthread/simple_ad ran. This has been fixed by making num_threads == 1 a special case in 8.23.2: parallel_setup .


12.7.7.y: 09-01
  1. Modify the CppAD trunk using the changes from svn revision 2060 to revision 2081 in the branch
     
         https://projects.coin-or.org/svn/CppAD/branches/pthread
    
    These changes are described below under the headings 12.7.7.y.e: 08-21 through 12.7.7.y.a: 08-31 .
  2. There was a bug in the 12.8.11: old_atomic functions in the case where none of the elements of the argument to the function was a 12.4.m: variable . This has been fixed. In addition, 12.8.11.4: old_tan.cpp generated an assert for this case and this has also been fixed (in addition to including an example for this case).


12.7.7.y.a: 08-31
  1. Move the sum_i_inv_time.cpp test from openmp/run.sh to openmp/run.cpp.
  2. Change --with-openmp to 12.8.13.l: OPENMP_FLAGS=openmp_flags configure command line argument.


12.7.7.y.b: 08-30
  1. Create the openmp/run.cpp program and move the openmp_multi_newton.cpp test from openmp/run.sh to openmp/run.cpp. This uses 12.8.13.d: configure information for building the tests.
  2. Document the --with-openmp configure command line argument.
  3. Move openmp/multi_newton.hpp to openmp/newton_method.hpp and openmp/multi_newton.cpp to openmp/newton_example.cpp.


12.7.7.y.c: 08-25
  1. Replace 12.8.6: omp_alloc by 8.23: thread_alloc in 7: multi_thread , the section on how to use CppAD in parallel.
  2. Implement 12.8.6: omp_alloc as links to corresponding 8.23: thread_alloc sections.
  3. Create the pthread_simple_ad.cpp example that does AD using the pthread library. In addition, fix some problems in openmp_simple_ad.cpp
  4. Move openmp/example_a11c.cpp to 7.2.1: example/a11c_openmp.cpp .
  5. Move openmp/parallel_ad.cpp to openmp_simple_ad.cpp.


12.7.7.y.d: 08-23
Beginning steps in replacing 12.8.6: omp_alloc by 8.23: thread_alloc :
  1. Replace 12.8.6: omp_alloc by 8.23: thread_alloc in the 8: utilities .
  2. Move 12.8.6: omp_alloc to the deprecated section of the documentation.
  3. Change all 12.8.6: omp_alloc section names to begin with omp_, and change all 8.23: thread_alloc section names to begin with new_.
  4. Convert 8.22: CppAD_vector from using 12.8.6: omp_alloc to using 8.23: thread_alloc for memory allocation.
  5. Extend the 12.8.7: memory_leak routine to also check the 8.23: thread_alloc allocator.


12.7.7.y.e: 08-21
Create the OpenMP and pthread examples 7.2.1: a11c_openmp.cpp , 7.2.3: a11c_pthread.cpp , and openmp_simple_ad.cpp. These OpenMP examples were originally in the openmp directory, and have been moved, and modified to conform, to the normal example directory.

12.7.7.z: 08-11
Modify the CppAD trunk using the changes from svn revision 2044 to revision 2056 in the branch
 
     https://projects.coin-or.org/svn/CppAD/branches/base_require
These changes are described below under the headings 12.7.7.z.f: 08-04 through 12.7.7.z.a: 08-10 .

12.7.7.z.a: 08-10
  1. Add the optional output stream argument s in
     
         f.Forward(0, x, s)
     
     See 5.3.1: zero order forward mode and 4.3.6: PrintFor .
  2. Improve 12.8.6.13: omp_alloc.cpp example.


12.7.7.z.b: 08-09
  1. 4.7: base_require : Add 4.4.6.e: epsilon to the Base type requirements.
  2. Extend epsilon to AD types.


12.7.7.z.c: 08-08
  1. Improve the 4.7: base_require documentation for 4.7.5: standard math functions .
  2. 4.7: base_require : Add abs_geq to the 4.7: requirements for a user defined Base type.
  3. Check that zero order forward mode results are approximately equal, instead of exactly equal, after an 5.7: optimize operation. This fixes a bug in the optimize correctness check (The order of operations can be changed by optimize and hence the zero order forward mode results may not be exactly the same.)


12.7.7.z.d: 08-07
Improve the 4.7: base_require documentation for the 4.7.3.a: EqualOpSeq , 4.7.3.b: Identical , 4.7.h: Integer , and 4.7.4: Ordered operations.

12.7.7.z.e: 08-06
Add the 4.7.2.d: CondExpRel paragraph to the base requirements documentation. This was missing and is required for 4.4.4: CondExp to work with AD<Base> arguments and a non-standard Base type.

12.7.7.z.f: 08-04
  1. 4.7: base_require : Change the 4.7.e: include file name to 4.7: base_require.hpp .
  2. Use 4.7.9.4: base_float.hpp and 4.7.9.5: base_double.hpp as additional examples for the 4.7.2: CondExp Base requirements .


12.7.7.aa: 08-03
Change the 4.3.6: PrintFor condition from less than or equal zero to not greater than zero; i.e., not positive. This makes nan print, because nan results in false for all comparisons.

12.7.7.ab: 08-02
  1. Change 4.3.6: PrintFor so it no longer aborts execution when there is no operation sequence being recorded; see 5.1.1.c: start recording .
  2. Improve the 4.3.6.1: print_for_cout.cpp example.


12.7.7.ac: 07-31
Add a conditional version of the 4.3.6: PrintFor command
     PrintFor(text, y, z)
which only prints when z <= 0 . This is useful for error reporting during forward mode; i.e., reporting when the argument to the log function is not valid.

12.7.7.ad: 07-29
  1. The routines 12.8.6.1: set_max_num_threads and get_max_num_threads were created. Users will need to replace calls to 12.8.6.12: max_num_threads by calls to set_max_num_threads.
  2. The function 12.8.6.11: omp_efficient was deprecated because it has not been shown to be useful.


12.7.7.ae: 07-28
  1. Change 12.8.6.5: omp_return_memory so that if 12.8.6.1: omp_max_num_threads is one (the default), 12.8.6: omp_alloc does not hold onto memory (keep it available for the corresponding thread).
  2. Add files that were missing from the Microsoft Visual Studio example and test_more subdirectory project files.
  3. Fix some warnings generated by Microsoft Visual Studio 2010 build.


12.7.7.af: 07-27
Make tan and tanh 12.4.g.a: atomic operations; see 12.3.1.8: tan_forward and 12.3.2.8: tan_reverse .

12.7.7.ag: 07-25
Finish the 12.8.11: old_atomic example 12.8.11.4: old_tan.cpp . This is also a design and implementation of the routines necessary to make tan and tanh CppAD atomic operations.

12.7.7.ah: 07-18
The reverse mode formulas for @(@ Z(t) @)@ need to involve the lower order Taylor coefficients for @(@ Y(t) @)@. This has been fixed in 12.3.2.8: tan_reverse .

12.7.7.ai: 07-17
  1. Fix bug in 12.8.11: old_atomic functions. To be specific, the Taylor coefficients for @(@ y @)@, of order less than k , were not passed into the old_atomic 12.8.11.n: forward callback function.
  2. Derive the theory for including the tangent and hyperbolic tangent as CppAD atomic operations 12.3.1.8: tan_forward and 12.3.2.8: tan_reverse ; see the wish list item Tan and Tanh.
  3. Implement and test forward mode calculation of derivative for the tangent and hyperbolic tangent functions; see the new 12.8.11: old_atomic example 12.8.11.4: old_tan.cpp .


12.7.7.aj: 07-14
  1. The 12.8.13: autotools instructions for running the individual correctness and speed tests were out of date. This has been fixed; see 12.8.13.e.a: example and tests .
  2. Move parallel_ad.cpp from example directory to openmp directory (and convert it from a function to a program).
  3. Simplify example_a11c.cpp by making it just a correctness test.
  4. Change openmp/run.sh so that it runs correctness tests with the compiler debugging flags.


12.7.7.ak: 07-13
  1. Static hash code data was being used by multiple threads when recording AD<Base> operations in 12.8.6.2: omp_in_parallel execution mode. This has been fixed.
  2. Make the sparse calculations safe for use during 12.8.6.2: omp_in_parallel execution mode.
  3. Add the parallel_ad.cpp example.
  4. Change the example_a11c.cpp example so that it is just a correctness (not speed) test.


12.7.7.al: 07-11
  1. Change the upper limit for 12.8.6.1: omp_max_num_threads from 32 to 48.
  2. Add 8.23.4: parallel documentation for nan, 8.18.m: Rosen34 , and 8.17.n: Runge45 .
  3. Fix 8.8: CheckNumericType and 8.10: CheckSimpleVector so they work properly when used in parallel mode.


12.7.7.al.a: openmp/run.sh
The following changes were made to openmp/run.sh:
  1. Change openmp/run.sh to specify the maximum number of threads instead of the entire set of values to be tested.
  2. Change settings for newton_example so that n_grid is a multiple of the maximum number of threads.
  3. Report dynamic number of thread results as a separate result in the summary output line.
  4. Fix automatic removal of executables from openmp directory (was commented out).
  5. The documentation for openmp/run.sh was moved to the multi_thread section.


12.7.7.am: 07-10
  1. Add link to 4.4.5: Discrete AD Functions in 7: multi_thread .
  2. Make use of the 12.8.5: TrackNewDel routines in 12.8.6.2: omp_in_parallel execution mode an error (it never worked properly); see 12.8.5.o: TrackNewDel multi-threading .
  3. Change 12.8.7: memory_leak so that it checks for a leak in all threads. This is what openmp_newton_example.cpp and sum_i_inv_time.cpp assumed was being done.


12.7.7.an: 07-09
All the OpenMP parallel execution requirements have been grouped in the section 7: multi_thread .

12.7.7.ao: 07-07
Add the routine 7.1: parallel_ad to fix bug when using AD<Base> in 12.8.6.2: parallel execution mode.

12.7.7.ap: 06-23
  1. Fix a bug whereby the assert
     
         Error detected by false result for
             ! omp_in_parallel()
         at line n in the file
             prefix/include/cppad/omp_alloc.hpp
     
     sometimes occurred.
  2. The routine 12.8.4: omp_max_thread was deprecated, use the routine 12.8.6.1: omp_max_num_threads instead.
  3. The deprecated routines have been grouped together in the 12.8: deprecated section of the CppAD manual.


12.7.7.aq: 06-21
  1. The openmp/run.sh script was changed to use zero, instead of automatic, to indicate automatic choice of the number of repeats and the maximum number of threads.
  2. The output of each of the OpenMP examples / speed tests (run by openmp/run.sh) was changed to be valid matlab / octave assignment statements.
  3. In the case where OpenMP is enabled during compilation, a summary for the different numbers of threads was added at the end of the openmp/run.sh output.


12.7.7.ar: 06-18
  1. The 12.8.13.t: tape_addr_type option was added to the 12.8.13.d: configure command line.
  2. The function 5.1.5.m: size_op_seq result uses sizeof(CPPAD_TAPE_ADDR_TYPE) where it used to use sizeof(size_t).
  3. Remove cppad/config.h from CppAD distribution, (put the information in cppad/configure.hpp.) This removes the need to undefine symbols that were defined by cppad/config.h and that did not begin with CPPAD_.
  4. Change 12.8.13.n: adolc library linkage so it works with version ADOL-C-2.2.0.


12.7.7.as: 05-29
Fix a bug (introduced on 12.7.7.av: 05-22 ) whereby a constructor might not be called (though required) when the 4.7: base type is not plain old data.

12.7.7.at: 05-28
  1. Add the 12.8.6.11: omp_efficient routine to the 12.8.6: omp_alloc system.
  2. Improve the omp_alloc tracing so it prints the same pointer as returned to the user (not an offset version of that pointer).


12.7.7.au: 05-26
Fix Visual Studio project files that were broken during the change on 05-22. In addition, in the file cppad/omp_alloc.hpp, suppress the following Microsoft Visual Studio warning
 
     warning C4345: behavior change: an object of POD type constructed with
     an initializer of the form () will be default-initialized

12.7.7.av: 05-22
  1. The old memory tracking routines 12.8.5: TrackNewDel have been deprecated. Their use should be replaced by the 12.8.6: omp_alloc memory allocator, which is designed to work well in a multi-threading OpenMP environment; see 12.8.6.b: purpose .
  2. The replacement of TrackNewDel by omp_alloc has been made throughout the CppAD source code, including the examples that used TrackNewDel; namely, 4.7.9.3.1: mul_level_adolc.cpp and 10.2.13: mul_level_adolc_ode.cpp .
  3. The CppAD vector template class and the 8.22.m: vectorBool class were modified to use the omp_alloc 8.22.n: memory manager. This should improve their speed of memory allocation in 12.8.6.2: omp_in_parallel sections of a program.
  4. The 8.3: speed_test argument 8.3.g: size_vec was passed by value, instead of by reference (as documented). This has been fixed and the call is now by reference.
  5. The 8.22.d: capacity function has been added to the CppAD vector class.
  6. The simple vector 8.9.f: element constructor and destructor description has been changed to explicitly specify that the default constructor is used to initialize elements of the array.
  7. The 5.1.5.m: size_op_seq documentation has been improved to mention that the allocated memory may be larger.


12.7.7.aw: 05-11
  1. Avoid ambiguity in the definition of the 4.7.9.6.j: complex isnan function.
  2. Errors during make test were not being detected. This has been fixed.


12.7.7.ax: 05-03
  1. If NDEBUG is not defined, the 8.11: hasnan function is used to make sure that the results of any 5.3: Forward operation do not contain a nan (not a number). If they do, an error message is generated and the program terminates. This error message and termination can be caught; see 8.1: ErrorHandler and the sketch after this list.
  2. In the event that the 12.8.10: cppad_ipopt_nlp objective function, the constraints, or their derivatives are infinite, an error message is generated and the program terminates (provided that NDEBUG is not defined and the default error handler has not been replaced).
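
A minimal sketch of catching the error (the handler body is our own; the argument list is the one 8.1: ErrorHandler documents):

     # include <cppad/cppad.hpp>
     # include <iostream>

     void my_handler(
         bool known, int line, const char* file,
         const char* exp, const char* msg )
     {   std::cerr << "CppAD error at " << file << ":" << line
                   << ": " << msg << "\n";
         throw 1;  // convert the error into a C++ exception
     }

     int main(void)
     {   // while this object is in scope, my_handler replaces the default
         CppAD::ErrorHandler info(my_handler);
         // ... forward mode calculations that might generate nan ...
         return 0;
     }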


12.7.7.ay: 04-29
  1. The Microsoft Visual Studio 2003 project files for the Windows examples and tests no longer worked because the current version of CppAD uses local types in template instantiation; see Compiler Error C2918 (http://msdn.microsoft.com/en-us/library/bh44f2cb(v=vs.71).aspx) . These project files were converted to Visual Studio 2008 where they do work (if you use a later version, Visual Studio should automatically convert them for you).
  2. The old speed test directory was moved to speed_cppad before the new 11: speed test organization was created on 2006-12-11 (revision 715 of the repository). The old speed tests have not been used for years and so have been deleted.


12.7.7.az: 04-20
The openmp/run.sh script was changed to take an argument that specifies which test is run (it no longer runs all the tests). Also improve the openmp test program output formatting.

12.7.7.ba: 04-19
The use_ad option was added to the openmp_newton_example.cpp test case.

12.7.7.bb: 03-19
The subversion write protected directory bin/.svn was mistakenly part of the 2.1.f: compressed tar file . It has been removed.

12.7.7.bc: 03-11
The vector of sets argument 12.8.11.r.c: r to the old_atomic function rev_hes_sparse must have size greater than or equal to n . There was a check that its size was greater than or equal to q . This was incorrect and has been fixed.

12.7.7.bd: 03-05
Add the 10.2.3: conjugate gradient example.

12.7.7.be: 02-22
Add the 11.1.f.d: atomic option to the speed test program and use 12.8.11.5.1: old_mat_mul.hpp during the 11.5.3: cppad_mat_mul.cpp speed test when the atomic option is specified.

12.7.7.bf: 02-19
There was a bug when 12.8.4: omp_max_thread was set to one and NDEBUG was not defined: the thread number corresponding to parameters was one, but the only valid thread number was zero (only one thread), so CppAD stopped with an assertion error. This has been fixed.

12.7.7.bg: 02-17
There was a mistake in openmp/run.sh where it attempted to remove a non-existent file in the case where openmp/run.sh openmp_flag was not "". This has been fixed.

12.7.7.bh: 02-15
A matrix multiply speed test has been added. So far, this has only been implemented for the 11.5.3: cppad and 11.3.3: double cases. (For the time being, this test is not available for the other speed comparison cases.)

12.7.7.bi: 02-09
A variable in old_atomic.hpp was declared of type Base when it should have been declared of type size_t. This caused the 12.8.11: old_atomic feature to fail with some base types. This has been fixed.

The 12.8.11.5.1: old_mat_mul.hpp example has been improved by caching the @(@ x @)@ variable information and using it during 12.8.11.r: reverse Hessian sparsity calculations.

Some of the 12.8.11: old_atomic documentation was extended to include more explanation.

12.7.7.bj: 02-06
The user can now define complex 12.8.11: atomic operations and store them in a CppAD 5: ADFun object. This item has been removed from the 12.6: wish list .

The documentation for 5.5.6: RevSparseHes had a dimension error. This has been fixed.

A faster set operations item was added to the wish list. This has since been satisfied by the cppad_sparse_list choice during the install process (since removed).

12.7.7.bk: 02-02
The documentation for 5.5.2: ForSparseJac had some formatting errors. The errors have been fixed and the documentation has been improved.

12.7.7.bl: 02-01
The subversion install instructions were brought up to date. They have since been replaced by just separate subversion instructions.

12.7.7.bm: 01-19
The directory where the 2.4: pkgconfig file cppad.pc is stored has been moved from prefixdir/lib/pkgconfig/cppad.pc to prefixdir/share/pkgconfig/cppad.pc ; see devel@lists.fedoraproject.org (http://lists.fedoraproject.org/pipermail/devel/2011-January/147915.html) .

12.7.7.bn: 01-16
The following have been fixed:
  1. The install of the documentation failed when it was done from a directory other than the top source directory.
  2. The GPL distribution had the output of the 12.8.13.d: configure command in it.
  3. Since the change on 01-09, the file omh/appendix/whats_new_11.omh has been required to build the documentation (and it has been missing from the distribution).
  4. Fadbad was generating warnings due to the -Wshadow flag with the g++ compiler. The Fadbad 11.6: speed tests now use a special flag set with this warning removed from the 12.8.13.k: cxx_flags .


12.7.7.bo: 01-09
There were some problems running make test in the releases
     http://www.coin-or.org/download/source/CppAD/cppad-20110101.0.license.tgz
where license is gpl or cpl.
  1. The version of automake used to build the corresponding makefile.in files did not define abs_top_builddir.
  2. The include file cppad_ipopt_nlp.hpp was always installed, even if 12.8.13.r: ipopt_dir was not defined on the configure command line.
  3. The speed test library libspeed.a was being installed (it is only intended for testing).
These problems are fixed in the trunk and these fixes will be copied to the corresponding stable and release versions; i.e.,
     http://www.coin-or.org/download/source/CppAD/cppad-20110101.1.license.tgz
will not have this problem.
Input File: omh/appendix/whats_new/whats_new_11.omh
12.7.8: Changes and Additions to CppAD During 2010

12.7.8.a: Introduction
This section contains a list of the changes to CppAD during 2010 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions of CppAD.

12.7.8.b: 12-31
  1. Add specifications for the CppAD 2.4: pkgconfig files.
  2. Update the CppAD README file.
  3. Move most all CppAD development shell scripts to the bin subdirectory of the distribution directory.
  4. Fix warnings generated by the g++ compiler option -Wshadow; for example, sparse_pack.hpp:101:2: warning: declaration of 'end' shadows a member of 'this'


12.7.8.c: 11-27
If NDEBUG was defined, the default CppAD 8.1: error handler would return, because its assert had no effect in this case. This has been fixed by placing a call to std::exit(1) after its assert.

12.7.8.d: 09-26
There was a bug (introduced on 09-22) in make test when the configure command was executed from a directory other than the distribution directory (the 12.8.10: cppad_ipopt_nlp did not build). This has been fixed.

12.7.8.e: 09-22
Promote 12.8.10: cppad_ipopt_nlp from an example to a library that gets installed (provided that the 12.8.13.r: ipopt_dir is specified on the configure command line).

12.7.8.f: 08-21
Fix problems with linking of 12.8.10: cppad_ipopt_nlp test with both older and newer versions of ipopt.

12.7.8.g: 07-14
The new versions of ipopt use pkg-config to record the location where its necessary libraries are stored on the system. The cppad 12.8.13.d: configure command has been improved so that it can work both with versions of ipopt that use pkg-config and ones that don't.

12.7.8.h: 07-11
Old versions of the ipopt library were located in ipopt_dir/lib/libipopt.* (see 12.8.13.r: ipopt_dir ), but newer versions will be located in ipopt_dir/lib/coin/libipopt.* . The directory ipopt_dir/lib/coin has been added to the library search path so that the 12.8.10.u: cppad_ipopt_nlp examples work with both old and new versions of ipopt.

12.7.8.i: 06-01
In the case where the preprocessor symbol NDEBUG was defined (see 12.1.j: speed ), the function
     CheckSimpleVector(const Scalar &x, const Scalar &y)
was not defined; see bug report (http://list.coin-or.org/pipermail/cppad/2010q2/000166.html) . This has been fixed.

12.7.8.j: 04-28
Change the multi-level taping examples 10.2.10.1: mul_level.cpp and 4.7.9.3.1: mul_level_adolc.cpp to be more efficient.

12.7.8.k: 04-26
Include Blas and Linpack libraries in makefile links for 12.8.10: cppad_ipopt_nlp examples. This removes the need to use get.Blas when building Ipopt.

The speed test in cppad_ipopt/speed was missing a link to the library ../src/libcppad_ipopt.a. This has been fixed.

12.7.8.l: 04-24
There was a bug in the error checking done by cppad/local/sub_op.hpp that caused the following improper abort:
Error detected by false result for
    arg[1] < i_z
at line 337 in the file
    
.../include/cppad/local/sub_op.hpp
This was fixed in the trunk. It was also fixed in the release with version number 20100101.3 which can be downloaded from the CppAD download directory (http://www.coin-or.org/download/source/CppAD/) .

12.7.8.m: 04-01
Parts of the 11.2: speed_utility library (in speed/src) were being compiled for debugging. This has been changed and they are now compiled with debugging off and optimization on.

12.7.8.n: 03-11
The old 5.4.3: reverse_any example was moved to 5.4.3.1: reverse_three.cpp , the old checkpoint example is now the general case reverse example, and a better checkpoint.cpp example was created.

12.7.8.o: 03-10
The 5.7: optimize routine would mistakenly remove some expressions that depended on the independent variables and that affected the result of certain 4.4.4: CondExp operations. This has been fixed.

12.7.8.p: 03-09
Extend 5.4.3: reverse_any so that it can be used for 4.4.7.1: checkpointing ; i.e., splitting reverse mode calculations at function composition points.

12.7.8.q: 03-03
Fixed a bug in handling 12.4.j.b: vector of boolean sparsity patterns. (when the number of elements per set was a multiple of sizeof(size_t)).

12.7.8.r: 02-11
The speed directory has been reorganized and the common part of the 11.1.j: link functions , as well as the 11.1.8: microsoft_timer , have been moved to the subdirectory speed/src where a library is built.

12.7.8.s: 02-08
A bug was introduced in the 12.7.8.u: 02-05 change whereby the make command tried to build the libcppad_ipopt.a library even if IPOPT_DIR was not specified on the 12.8.13.d: configure command line. This has been fixed.

12.7.8.t: 02-06
The Microsoft project files for the speed tests were extended so that they worked properly for the Release (as well as the Debug) configuration. (This required conversion from Visual Studio Version 7 to Visual Studio 2008 format.)

Add an automated check for 5.7: optimize bug fixed on 02-05. (Automatic checking for 4.3.6: PrintFor bug was added on 02-05.)

12.7.8.u: 02-05
  1. Simplify running all the tests by adding the make test command.
  2. Simplify the 12.8.13.d: configure command by removing need for: --with-Speed, --with-Introduction, --with-Example, --with-TestMore, and --with-PrintFor.
  3. Add files that were missing in the Microsoft Visual Studio projects.
  4. Fix two significant bugs. One in the 5.7: optimize command and the other in the 4.3.6: PrintFor command.


12.7.8.v: 02-03
Fix a mistake in the test 12.10.1.1: bender_quad.cpp . In addition, the 5.7: optimize routine was used to reduce the tape before doing the calculations.

The routine 12.10.2: opt_val_hes was added as an alternative to 12.10.1: BenderQuad .

12.7.8.w: 01-26
Another speed improvement was made to 12.8.10: cppad_ipopt_nlp . To be specific, the Lagrangian multipliers were checked and components that were zero were excluded from the Hessian evaluation.

12.7.8.x: 01-24
It appears that in 12.8.10: cppad_ipopt_nlp , when retape[k] was true, and L[k] > 1, it was possible to use the wrong operation sequence in the calculations (though a test case that demonstrated this could not be produced). This is because the argument value to @(@ r_k (u) @)@ depends on the value of @(@ \ell @)@ in the expression @[@ r_k \circ [ J_{k, \ell} \otimes n ] (x) @]@ (even when the value of @(@ x @)@ does not change).

There was a bug in the 12.8.10.2.5: ipopt_nlp_ode_check.cpp program, for a long time, that did not show up until now. Basically, the check code was using an undefined value. This has been fixed.

12.7.8.y: 01-23
Improve the sparsity patterns and reduce the amount of memory required for large sparse problems using 12.8.10: cppad_ipopt_nlp . The speed test cppad_ipopt/speed showed significant improvement.

12.7.8.z: 01-20
We plan to split up the ipopt_cppad/src/ipopt_cppad_nlp.hpp include file. In preparation, the example ipopt_cppad has been changed to cppad_ipopt. This will facilitate using CPPAD_IPOPT_* for the # ifdef commands in the new include files (note that they must begin with CPPAD).

12.7.8.aa: 01-18
The ipopt_cppad subdirectory of the distribution has been split into an src, example, and speed subdirectories. The example (speed) subdirectory is where one builds the 12.8.10: ipopt_cppad_nlp examples (12.8.10.3: speed tests ).

12.7.8.ab: 01-04
The following items have been fulfilled and hence were removed from the 12.6: wish_list :
  1. If an exception occurs before the call to the corresponding 5: ADFun constructor or 5.1.3: Dependent , the tape recording can be stopped using 5.1.4: abort_recording ; see the sketch after this list.
  2. A speed test for 12.8.10: ipopt_cppad_nlp was added; see 12.8.10.3: ipopt_ode_speed.cpp .
  3. The 5.7: optimize command uses hash coding to check when an expression is already in the tape and can be reused.
  4. The 5.7: optimize command removes operations that are not used by any of the dependent variables.
  5. The 10.2.2: ad_in_c.cpp example demonstrates how to connect CppAD to an arbitrary scripting language.
  6. The vector of sets option has been added to sparsity calculations; see 12.4.j: sparsity pattern .
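
A minimal sketch of item 1 (the exception and its handling are our own):

     # include <cppad/cppad.hpp>
     # include <stdexcept>

     int main(void)
     {   using CppAD::AD;
         CPPAD_TESTVECTOR( AD<double> ) ax(1);
         ax[0] = 1.0;
         try
         {   CppAD::Independent(ax);  // start recording a tape
             throw std::runtime_error("failure while recording");
         }
         catch( const std::exception& )
         {   AD<double>::abort_recording();  // stop the recording
         }
         return 0;  // a new recording can now be started
     }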

Input File: omh/appendix/whats_new/whats_new_10.omh
12.7.9: Changes and Additions to CppAD During 2009

12.7.9.a: Introduction
This section contains a list of the changes to CppAD during 2009 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions of CppAD. (Comments about developer documentation are only important if you are trying to read and understand the CppAD source code.)

12.7.9.b: 12-23
The ADFun 5.1.2.i: assignment operator was changed so that it now copies forward mode Taylor coefficients and sparsity pattern information. (This assignment operator was added on 12.7.9.o: 10-24 .) You can use 5.3.8: capacity_order to delete the Taylor coefficients before copying them. Two new functions were added so that you can query and delete the forward mode sparsity information; see 5.5.2.c.a: size_forward_bool and 5.5.2.c.b: size_forward_set .

12.7.9.c: 12-22
Convert the optimization of a sequence of additions from multiple operators to one operator with a varying number of arguments. This improved the speed for forward and reverse mode computations of an optimized tape.

12.7.9.d: 12-18
It turns out that detection of a sequence of additions makes the optimization process longer. This detection was simplified, and made slightly faster, by converting two jointly recursive routines to one non-recursive routine that uses a stack for the necessary information. More work is planned to make this optimization faster.

12.7.9.e: 12-12
Add detection of a sequence of additions that can be converted to one variable during the 5.7: optimize process. This leads to a significant improvement in tape size and speed.

12.7.9.f: 12-04
Change hash coding of parameter values as part of operators during the 5.7: optimize process. This should lead to more detection and removal of duplicate operations.

12.7.9.g: 12-02
Fix minor grammatical error in the Purpose heading for 4.4.4.b: conditional expressions .

Add the following functions: 5.1.5.i: size_op , 5.1.5.j: size_op_arg , and 5.1.5.m: size_op_seq . In addition, improve and extend the 5.1.5.1: seq_property.cpp example.
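A minimal sketch of querying these sequence properties (the recorded function below is arbitrary, chosen only for illustration):

     # include <cppad/cppad.hpp>

     int main(void)
     {    using CppAD::AD;

          CppAD::vector< AD<double> > ax(1), ay(1);
          ax[0] = 1.0;
          CppAD::Independent(ax);
          ay[0] = ax[0] * ax[0] + 1.0;
          CppAD::ADFun<double> f(ax, ay);

          size_t n_op    = f.size_op();       // number of operators
          size_t n_arg   = f.size_op_arg();   // number of operator arguments
          size_t n_bytes = f.size_op_seq();   // memory used by the sequence
          return (n_op > 0 && n_arg > 0 && n_bytes > 0) ? 0 : 1;
     }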

12.7.9.h: 11-28
Fix bug in tape optimization with 4.6: VecAD objects.

12.7.9.i: 11-27
Remove duplicate expressions for the commutative binary operators; i.e., addition and multiplication.

12.7.9.j: 11-26
Improve 5.7: optimize command so that it removes some duplicate expressions from the tape (more of this is planned).

12.7.9.k: 10-30
Make the program that checks the Ipopt ODE example correctness a separate file, 12.8.10.2.5: ipopt_nlp_ode_check.cpp . Split out the Ipopt driver for the ODE example into 12.8.10.2.4: ipopt_nlp_ode_run.hpp . Add the speed testing problem ipopt_cppad/ipopt_ode_speed.cpp.

12.7.9.l: 10-29
Split out the 12.8.10.2.1: ode inverse problem , 12.8.10.2.2: its simple representation , and 12.8.10.2.3: its fast representation , as separate files; to be specific, 12.8.10.2.1.1: ipopt_nlp_ode_problem.hpp , 12.8.10.2.2.1: ipopt_nlp_ode_simple.hpp , 12.8.10.2.3.1: ipopt_nlp_ode_fast.hpp , and 12.8.10.2.5: ipopt_nlp_ode_check.cpp .

12.7.9.m: 10-28
Improve the documentation for 12.8.10.2.2: ipopt_nlp_ode_simple and 12.8.10.2.3: ipopt_nlp_ode_fast .

12.7.9.n: 10-27
Moved old ipopt_cppad_simple.cpp to 12.8.10.1: ipopt_nlp_get_started.cpp , created the example 12.8.10.2.2.1: ipopt_nlp_ode_simple.hpp , and split ipopt_cppad_ode.cpp into 12.8.10.2.3.1: ipopt_nlp_ode_fast.hpp and 12.8.10.2.5: ipopt_nlp_ode_check.cpp .

12.7.9.o: 10-24
Added the 5.1.2.i: assignment operator to the ADFun object class. This makes a copy of the entire operation sequence in another function object. The intention is that the two function objects can do calculations in parallel. In addition, CppAD now checks for use of the ADFun 5.1.2.h: copy constructor and generates an error message if it is used.

12.7.9.p: 10-23
The 5.6.4: sparse_hessian routine was extended so the user can now choose between vectors of sets and boolean vectors for representing 12.4.j: sparsity patterns .

12.7.9.q: 10-21
The 8.10: CheckSimpleVector function was extended so that it can check simple vectors where the elements of the vector can not be assigned to integer values. This was done by adding the 8.10.c: x, y arguments to CheckSimpleVector.
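A minimal sketch of the extended usage, assuming the cppad-20171217 utility include paths; the element values below are the new x, y arguments:

     # include <cppad/utility/check_simple_vector.hpp>
     # include <cppad/utility/vector.hpp>

     int main(void)
     {    // the caller now supplies the element values used by the check,
          // so Scalar need not be constructible from the integers 0 and 1
          double x = 0.0, y = 1.0;
          CppAD::CheckSimpleVector<double, CppAD::vector<double> >(x, y);
          return 0;
     }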

12.7.9.r: 10-16
The 5.6.2: sparse_jacobian routine was extended so the user can now choose between vectors of sets and boolean vectors for representing 12.4.j: sparsity patterns .

12.7.9.s: 10-14
The packed parameter for the sparsity routines 5.5.2: ForSparseJac , 5.5.4: RevSparseJac , and 5.5.6: RevSparseHes (introduced on 12.7.9.x: 09-26 ) has been removed. It has been replaced by changing the argument and return values to be more versatile. To be specific, they can now represent sparsity using vectors of std::set<size_t> instead of just as vectors of bool (see 12.4.j: sparsity patterns ).
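A minimal sketch computing the same Jacobian sparsity pattern both ways; the function and seed pattern are arbitrary choices for illustration:

     # include <cppad/cppad.hpp>
     # include <set>

     int main(void)
     {    using CppAD::AD;
          size_t n = 2;

          CppAD::vector< AD<double> > ax(n), ay(n);
          ax[0] = 0.0;   ax[1] = 1.0;
          CppAD::Independent(ax);
          ay[0] = ax[0] * ax[1];
          ay[1] = ax[1];
          CppAD::ADFun<double> f(ax, ay);

          // identity seed pattern as a boolean vector
          CppAD::vector<bool> r_bool(n * n);
          for(size_t i = 0; i < n; i++)
               for(size_t j = 0; j < n; j++)
                    r_bool[i * n + j] = (i == j);
          CppAD::vector<bool> s_bool = f.ForSparseJac(n, r_bool);

          // identity seed pattern as a vector of sets
          CppAD::vector< std::set<size_t> > r_set(n);
          for(size_t i = 0; i < n; i++)
               r_set[i].insert(i);
          CppAD::vector< std::set<size_t> > s_set = f.ForSparseJac(n, r_set);
          return 0;
     }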

12.7.9.t: 10-03
The Microsoft Visual Studio project files for the examples and for the correctness tests were not including some new tests in their builds. This has been fixed.

12.7.9.u: 09-30
Added the 11.5.7: cppad_sparse_jacobian.cpp speed test and increased the sizes used by 11.1.6: link_sparse_hessian . Some mistakes were fixed in the documentation for speed tests 11.1.6: link_sparse_hessian and 11.2.9: sparse_hes_fun .

12.7.9.v: 09-29
The documentation definition of the function @(@ H(x) @)@ in 5.5.6: RevSparseHes was missing a factor of @(@ R @)@. This has been fixed.

12.7.9.w: 09-28
Changed 5.5.6: RevSparseHes so that it uses a sparse representation when the corresponding call to 5.5.2: ForSparseJac used a sparse representation. This should have been included with the change on 09-26 because Hessian sparsity patterns after ForSparseJac with packed did not work. Thus, this could be considered a bug fix.

12.7.9.x: 09-26
Added the packed parameter to 5.5.2: ForSparseJac and 5.5.4: RevSparseJac . If packed is false, a sparse instead of packed representation is used during the calculations of sparsity patterns. The sparse representation should be faster, and use less memory, for very large sparse Jacobians. The functions ForSparseJac and RevSparseJac return packed representations. The plan is to eventually provide new member functions that return sparse representations.

12.7.9.y: 09-20
Fixed a bug in the 5.5.6: Hessian Sparsity calculations that included use of 4.6: VecAD objects.

12.7.9.z: 09-19
Some more memory allocation improvements (related to those on 09-18) were made.

12.7.9.aa: 09-18
A bug was found in all the 5.5: sparsity_pattern calculations. The result was that eight times the necessary memory was being used during these calculations. This has been fixed.

12.7.9.ab: 08-25
Add 10.2.1: ad_fun.cpp , an example of how to create your own interface to an 5: ADFun object.

12.7.9.ac: 08-14
Add 10.2.2: ad_in_c.cpp , an example of how to link CppAD to other languages.

12.7.9.ad: 08-13
Add an option to 5.7: optimize an operation sequence.
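A minimal sketch of the new option; the dead calculation below is a hypothetical example of what gets removed:

     # include <cppad/cppad.hpp>

     int main(void)
     {    using CppAD::AD;

          CppAD::vector< AD<double> > ax(1), ay(1);
          ax[0] = 0.5;
          CppAD::Independent(ax);
          AD<double> dead = CppAD::exp(ax[0]);   // not used by the range ay
          ay[0] = ax[0] + ax[0];
          CppAD::ADFun<double> f(ax, ay);

          size_t before = f.size_var();
          f.optimize();                 // remove operations not needed by ay
          size_t after  = f.size_var();
          return (after <= before) ? 0 : 1;
     }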

Begin Merge
of changes from the directory branches/optimize in the CppAD subversion repository. The subheading dates below represent when the corresponding change was made in branches/optimize.

12.7.9.ad.a: 08-13
An automatic check of the 5.3.1: forward_zero results was added after each call to 5.7: f.optimize() (this 5.7.i: check is skipped when NDEBUG is defined). In addition, all of the speed/cppad/*.cpp tests now check and use the speed test 11.1.f.c: optimize flag.

12.7.9.ad.b: 08-11
Change the speed test 11.1: main program so that it uses a list of options instead of a boolean flag for each option. This will make it possible to add options in the future without having to change all the existing tests, because the options are now global variables instead of arguments to the speed test routines; for example, see the retape speed test option.

12.7.9.ad.c: 08-10
The routine for 5.7: optimizing the operation sequence has been further tested using test_more/optimize.cpp. Some bugs have been fixed and the routine can now be trusted to work correctly.

The 5.1.5.l: size_VecAD function was added so that the user can see the VecAD vectors and elements corresponding to an operation sequence.

12.7.9.ad.d: 08-09
A routine for 5.7: optimizing the operation sequence has been added. This is a preliminary version and needs more testing before it can be trusted to work correctly.
End Merge

12.7.9.ae: 08-06
Add hash table coding to reduce the number of copies of the same parameter value necessary in a tape recording. In addition, the function 5.1.5.h: size_par was added so that the user can see the number of parameters corresponding to an operation sequence.

12.7.9.af: 08-02
Fix bug in new version of how 5.5.2: ForSparseJac handles 4.6: VecAD objects.

Fix bug in overnight build where HTML version and entire documentation as one page versions of documentation were not being built.

Fix missing new line under 8.9.k.a: Using Value heading for simple vector documentation.

12.7.9.ag: 08-01
Fix bug in reverse mode Jacobian 5.5.4: sparsity for 4.6: VecAD objects.

12.7.9.ah: 07-31
The 5.5.2: forward and 5.5.4: reverse sparse Jacobian routines have been improved so the resulting sparsity patterns are valid for all values of the independent variables (even if you use 4.4.4: CondExp or 4.6: VecAD ).

12.7.9.ai: 07-26
Convert the developer documentation for the forward and reverse mode sweep routines from OMhelp to doxygen.

12.7.9.aj: 07-25
Add developer documentation for 4.3.6: PrintFor operations.

12.7.9.ak: 07-24
Add developer documentation for 4.4.5: Discrete operations.

12.7.9.al: 07-23
Add developer documentation for tape evaluation of 4.6: VecAD store operations. (a store operation changes the value of a VecAD element).

Improve the 4.6.1: vec_ad.cpp user example.

12.7.9.al.a: 07-06
Fixed a bug in second or higher order reverse mode calculations that used 4.6: VecAD . This bug was demonstrated by the test case SecondOrderReverse in the file test_more/vec_ad.cpp.

Add developer documentation for tape evaluation of the VecAD load operations (a load operation accesses an element of the vector but does not change it.)

Fix an isnan undefined error in example/cond_exp.cpp introduced by the 07-04 change.

12.7.9.am: 07-04
Add developer documentation for the 12.8.3: CompareChange operations during tape evaluation.

Begin Merge
of changes from the directory branches/sweep in the CppAD subversion repository. The subheading dates below represent when the corresponding change was made in branches/sweep.

12.7.9.am.a: 07-04
Fixed a bug in second or higher order reverse mode calculations that included 4.4.4: conditional expressions . This bug was demonstrated by the test case SecondOrderReverse in the file test_more/cond_exp.cpp.

A simpler and useful example was provided for 4.4.4: conditional expressions ; see 4.4.4.1: cond_exp.cpp .

12.7.9.am.b: 07-03
Some minor improvements were made to the documentation for 4.4.4: CondExp . To be specific, a newer OMhelp option was used to change the formatting of the syntax, and some of the argument names were changed to be more descriptive.

12.7.9.am.c: 07-02
Add developer doxygen documentation of tape evaluation for power (exponentiation) operators.

12.7.9.am.d: 07-01
Fix an example indexing error in introduction/exp_apx/exp_eps_for2.cpp (found by valgrind).

Add developer doxygen documentation of tape evaluation for multiplication and division operators.

12.7.9.am.e: 06-30
Add developer doxygen documentation of tape evaluation for addition and subtraction operators.

12.7.9.am.f: 06-29
Add developer doxygen documentation of tape evaluation for sin, sinh, cos, and cosh.

12.7.9.am.g: 06-28
Add developer doxygen documentation of tape evaluation for atan, asin, acos, sqrt, log.
End Merge

12.7.9.an: 06-25
The tarball for most recent release (of the subversion trunk for CppAD) was not being placed in the download (http://www.coin-or.org/download/source/CppAD/) directory. This has been fixed.

12.7.9.ao: 06-22
Fix compiler warnings during the openmp/run.sh test.

Changed 10.3.2: speed_example.cpp to omit the speed_test from the correctness result. Instead, a message is printed explaining that timing tests need to be run without a lot of other demands on the system.

12.7.9.ap: 06-21
The configure instructions for 12.8.13.r: ipopt_dir had the wrong path for IpIpoptApplication.hpp. This has been fixed.

12.7.9.aq: 06-20
Upgrade from autoconf 2.61 to 2.63, and from automake 1.10.1 to 1.11.

Fix a conflict between CppAD's use of config.h preprocessor symbols and other packages' use of the same symbol names.

12.7.9.ar: 06-06
  1. Using complex of an AD type (instead of AD of complex) was not working correctly in not_complex_ad.cpp because the 4.1: default constructor for an AD object has an unspecified value. This has been fixed for the complex type by changing the default constructor to use value zero. (The not_complex_ad.cpp example has been removed; see 12.1.d: complex FAQ .)
  2. Fixing the not_complex_ad.cpp problem above also fixed a warning generated by valgrind (http://valgrind.org/) . Now valgrind runs the CppAD example/example program without any warning or error messages. In addition, a minor initialization error was fixed in the test_more/jacobian.cpp routine, so now valgrind also runs the CppAD test_more/test_more program without any warnings or error messages.


12.7.9.as: 05-20
A change was made to the trunk on 05-19 (svn revision 1361) that broke the 12.8.13: Unix install procedure. This has been fixed (revision 1362).

12.7.9.at: 03-24
Added cross references in the 10.4: examples to occurrences of the following tokens: 4: AD , 5.1.2: ADFun , 12.8.9: CPPAD_TEST_VECTOR , 5.3: Forward , 5.1.1: Independent , 5.2.1: Jacobian , 8.2: NearEqual , 5.4: Reverse .

12.7.9.au: 02-20
Demonstrate using AD to compute the derivative of the solution of an ODE with respect to a parameter (in the 8.17.2: runge45_2.cpp example).

12.7.9.av: 02-15
Change the distribution 2.1.f: compressed tar file to only contain one copy of the documentation. Link to the current Internet documentation for the other three copies.

12.7.9.aw: 02-01
Move the Prev and Next buttons at the top of the documentation to the beginning so that their position does not change between sections. This makes it easier to repeatedly select these links.

12.7.9.ax: 01-31
Modify cppad/local/op_code.hpp to avoid incorrect warning by g++ version 4.3.2 when building pycppad (a python interface to CppAD).

12.7.9.ay: 01-18
Sometimes an error occurs while taping AD operations. The 5.1.4: abort_recording function has been added to make it easier to recover in such cases.

Previously, CppAD speed and comparison tests used Adolc-1.10.2. The version used in the tests has been upgraded to Adolc-2.0.0. (https://projects.coin-or.org/ADOL-C)

A discussion has been added to the documentation for 5.2.1: Jacobian about its use of 5.2.1.g: forward or reverse mode depending on which it estimates is more efficient.

A minor typo has been fixed in the description of W(t, u) in 5.4.3: reverse_any . To be specific, @(@ o ( t^{p-1} ) * t^{1-p} \rightarrow 0 @)@ has been replaced by @(@ o ( t^{p-1} ) / t^{p-1} \rightarrow 0 @)@.

12.7.9.az: 01-06
Made some minor improvements to the documentation in 5.1.2: FunConstruct .
Input File: omh/appendix/whats_new/whats_new_09.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.7.10: Changes and Additions to CppAD During 2008

12.7.10.a: Introduction
This section contains a list of the changes to CppAD during 2008 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions of CppAD.

12.7.10.b: 12-19
In the documentation for 8.12: pow_int change the integer exponent from int y to const int &y . In the implementation for 4.4.3.2: pow make the integer base case agree with the documentation; i.e., change from int x to const int &x .

12.7.10.c: 12-14
Added another test of 10.2.10: mul_level calculations (in the test_more directory).

12.7.10.d: 12-04
Extensive explanation for the ipopt_cppad/ipopt_cppad_ode example was provided in the section 12.8.10.2: ipopt_cppad_ode .

12.7.10.e: 11-22
The CppAD interface to the Ipopt nonlinear programming solver 12.8.10: cppad_ipopt_nlp has been moved from example/ipopt_cppad_nlp to ipopt_cppad/ipopt_cppad_nlp.

12.7.10.f: 11-21
Microsoft's Visual C++ Version 9.0 generates a warning of the form warning C4396: ... for every template function that is declared as both a friend and inline (it thinks it is only doing this for specializations of template functions). The warnings are no longer generated because these inline directives are converted to empty code when a Microsoft Visual C++ compiler is used.

12.7.10.g: 11-20
The function tanh(x) was added to the 4.4.2: unary_standard_math functions. The abs and erf functions were removed from the 4.7: Base requirements . The restrictions about the Base class were removed from 4.4.2.14: abs , 4.4.3.1: atan2 , 12.10.3: LuRatio , 4.4.2.18: erf .

Visual Studio Version 9.0 could not handle the large number of static constants in the CppAD 4.4.2.18: erf function. This function was changed to a simpler representation that is much faster and that is differentiable at all points (not defined differently on subregions). The downside is that the new version is not as accurate.

12.7.10.h: 10-27
Change prototypes for ipopt_cppad/ipopt_cppad_ode helper routines to use const (where appropriate).

12.7.10.i: 10-17
Major improvements to the ipopt_cppad/ipopt_cppad_ode example.

12.7.10.j: 10-16
Minor improvement to description of optimization argument in ipopt_cppad/ipopt_cppad_ode.

12.7.10.k: 09-30
Add or modify some wish list entries; see cppad_ipopt_nlp (since removed), multiple argument forward (completed with 5.3.5: forward_dir ), and sparsity patterns (12.4.j: sparsity patterns has been fulfilled).

12.7.10.l: 09-26
Use parentheses and brackets to group terms of the form @(@ m \times I @)@ to make the documentation of 12.8.10: ipopt_cppad_nlp easier to read. Changed ipopt_cppad/ipopt_cppad_ode to use @(@ y(t) @)@ for the solution of the ODE to distinguish it from @(@ x @)@, the vector we are optimizing with respect to.

12.7.10.m: 09-18
Changed ipopt_cppad/ipopt_cppad_ode to a case where @(@ x(t) @)@ is a pair of exponential functions instead of a linear and quadratic. Fixed some of the comments in this example and included the source code in the documentation (which was missing by mistake).

12.7.10.n: 09-17
Changed ipopt_cppad/ipopt_cppad_ode to a case where there are two components in the ODE (instead of one). Also removed an initialization section that was only intended for tests with a specific initial value.

12.7.10.o: 09-16
Add ipopt_cppad/ipopt_cppad_ode, an example and test that optimizes the solution of an ODE. Change r_eval to eval_r in 12.8.10: ipopt_cppad_nlp . Fix a dimension of u_ad error in ipopt_cppad_nlp.

12.7.10.p: 09-12
Converted from storing full Hessian and Jacobian to a sparse data structure in 12.8.10: ipopt_cppad_nlp . This greatly reduced the memory requirements (and increased the speed) for sparse problems.

12.7.10.q: 09-10
Fixed more indexing bugs in 12.8.10: ipopt_cppad_nlp that affected cases where the domain index vector @(@ J_{k, \ell} @)@ was different for different values of @(@ k @)@ and @(@ \ell @)@.

In 12.8.10: ipopt_cppad_nlp , combined fg_info->domain_index() and fg_info->range_index() into a single function called fg_info->index() . Also added more error checking (if NDEBUG is not defined).

12.7.10.r: 09-09
Fixed an indexing bug in 12.8.10: ipopt_cppad_nlp . (This affected cases where the domain index vector @(@ J_{k, \ell} @)@ was different for different values of @(@ k @)@ and @(@ \ell @)@.)

12.7.10.s: 09-07
Change 12.8.10: ipopt_cppad_nlp so that the objective and constraints are expressed as a double summation of simpler functions. This is more versatile than the single summation representation.

12.7.10.t: 09-06
Checked in a major change to 12.8.10: ipopt_cppad_nlp whereby the objective and constraints can be expressed as the sum of simpler functions. This is the first step in what will eventually be a more versatile representation.

12.7.10.u: 09-05
Fix bug in 12.8.10: ipopt_cppad_nlp (it was not recording the function at the proper location). Here is the difference that occurred in multiple places in the ipopt_cppad/ipopt_cppad_nlp.cpp source:
 
     for(j = 0; j < n_; j++)
-         x_ad_vec[0] = x[j];
+         x_ad_vec[j] = x[j];
This did not show up in testing because there currently is no test of ipopt_cppad_nlp where the operation sequence depends on the value of @(@ x @)@.

Changed eval_grad_f in ipopt_cppad_nlp.cpp to be more efficient.

12.7.10.v: 09-04
The 12.8.10: ipopt_cppad_nlp interface has been changed to use a derived class object instead of a pointer to a function.

12.7.10.w: 09-03
The 12.8.10: ipopt_cppad_nlp interface has been changed to use size_t instead of Ipopt::Index.

12.7.10.x: 09-01
Back out the changes made to 12.8.10: ipopt_cppad_nlp on 08-29 (because testing proved the change to be less efficient in the case that motivated the change).

12.7.10.y: 08-29
The push_vector member function was missing from the 8.22.m: vectorBool class. This has been fixed. In addition, it seems that for some cases (or compilers) the assignment
     
x[i] = y[j]
did not work properly when both x and y had type vectorBool. This has been fixed.

The 12.8.10: ipopt_cppad_nlp example has been extended so that it allows for both scalar and vector evaluation of the objective and constraints; see the argument fg_vector in 12.8.10: ipopt_cppad_nlp . In the case where there are not a lot of common terms between the functions, the scalar evaluation may be more efficient.

12.7.10.z: 08-19
Add 8.22.h: push of a vector to the CppAD::vector template class. This makes it easy to accumulate multiple scalars and 8.9: simple vectors into one large CppAD::vector.
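A minimal sketch of accumulating scalars and vectors this way (the values are arbitrary):

     # include <cppad/utility/vector.hpp>

     int main(void)
     {    CppAD::vector<double> v, w(2);
          w[0] = 1.0;   w[1] = 2.0;

          v.push_back(0.5);    // append a single scalar
          v.push_vector(w);    // append all the elements of w
          return (v.size() == 3) ? 0 : 1;
     }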

12.7.10.aa: 08-08
There was an indexing bug in the 12.8.10: ipopt_cppad_nlp example that affected the retape equal to false case. This has been fixed. In addition, the missing retape documentation was added.

12.7.10.ab: 07-02
Extend the 12.8.13.d: configure command to check for the extra libraries that are necessary for linking the ipopt example.

12.7.10.ac: 06-18
Add specifications for the Ipopt class 12.8.10: ipopt_cppad_nlp . This is only an example class; it may change with future versions of CppAD.

12.7.10.ad: 06-15
The nonlinear programming example 12.8.10.1: ipopt_nlp_get_started.cpp was added. This is a preliminary version of this example.

12.7.10.ae: 06-11
The sparsity pattern for the Hessian was being calculated each time by 5.6.4: SparseHessian . This is not efficient when the pattern does not change between calls to SparseHessian. An optional sparsity pattern argument was added to SparseHessian so that it need not be recalculated each time.

12.7.10.af: 06-10
The sparsity pattern for the Jacobian was being calculated each time by 5.6.2: SparseJacobian . This is not efficient when the pattern does not change between calls to SparseJacobian. An optional sparsity pattern argument was added to SparseJacobian so that it need not be recalculated each time.
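A minimal sketch of reusing one sparsity pattern across calls; the function is an arbitrary illustration:

     # include <cppad/cppad.hpp>
     # include <set>

     int main(void)
     {    using CppAD::AD;
          size_t n = 2;

          CppAD::vector< AD<double> > ax(n), ay(n);
          ax[0] = 1.0;   ax[1] = 2.0;
          CppAD::Independent(ax);
          ay[0] = ax[0] * ax[0];
          ay[1] = ax[0] * ax[1];
          CppAD::ADFun<double> f(ax, ay);

          // compute the Jacobian sparsity pattern once
          CppAD::vector< std::set<size_t> > r(n), pattern(n);
          for(size_t j = 0; j < n; j++)
               r[j].insert(j);
          pattern = f.ForSparseJac(n, r);

          // pass the pattern so it is not recalculated internally
          CppAD::vector<double> x(n), jac(n * n);
          x[0] = 1.0;   x[1] = 2.0;
          jac = f.SparseJacobian(x, pattern);
          return 0;
     }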

12.7.10.ag: 05-08
The 5.6.2: sparse_jacobian routine has been added.

The example in 5.6.4: sparse_hessian pointed to 5.2.2.1: hessian.cpp instead of 5.6.4.1: sparse_hessian.cpp . This has been fixed.

12.7.10.ah: 05-03
The retape flag has been added to 11.1: speed_main . In addition the routines 11.1.2: link_det_minor , 11.1.5: link_poly , and 11.1.4: link_ode pass this flag along to the speed test implementations (because the corresponding tests have a fixed operation sequence). If this flag is false, a test implementation is allowed to just tape the operation sequence once and reuse it. The following tests use this flag: 11.4.1: adolc_det_minor.cpp , 11.5.1: cppad_det_minor.cpp , 11.5.4: cppad_ode.cpp , 11.4.5: adolc_poly.cpp , 11.5.5: cppad_poly.cpp .

Create a specialized zero order forward mode routine that should be faster, but does not test out as faster under cygwin g++ (GCC) 3.4.4.

12.7.10.ai: 04-20
Added the 11.2.7: ode_evaluate speed test utility in preparation for having ode speed tests. Created ode speed test for the cppad and double cases; see 11.1: speed_main . In addition, added the examples 11.2.7.1: ode_evaluate.cpp and 5.6.4.1: sparse_hessian.cpp .

Changed the 11.1: speed_main routines defined for each package from compute_name to link_name . For example, in speed/cppad/det_minor.cpp, the function name compute_det_minor was changed to link_det_minor.

12.7.10.aj: 04-18
Fix a problem in the 11.1.5: link_poly correctness test. Also add 11.3.6: double_sparse_hessian.cpp to the set of speed and correctness tests (now available).

12.7.10.ak: 04-10
Change all the 11.4: Adolc speed examples to use 12.8.5: TrackNewDel instead of using new and delete directly. This makes it easy to check for memory allocation errors and leaks (when NDEBUG is not defined). Also include, in the documentation, sub-functions that indicate the sparse_hessian speed test is not available for 11.3.6: double_sparse_hessian.cpp , 11.6.6: fadbad_sparse_hessian.cpp , and 11.7.6: sacado_sparse_hessian.cpp .

12.7.10.al: 04-06
The following 12.6: wish list entry has been completed and removed from the list: "Change private member variables names (not part of the user interface) so that they all end with an underscore."

12.7.10.am: 04-04
Fix a problem compiling the speed test 11.1: main program with gcc 4.3.

12.7.10.an: 03-27
Corrected 11.5.6: cppad_sparse_hessian.cpp so that it uses the sparse case when USE_CPPAD_SPARSE_HESSIAN is 1. Also added a wish list sparsity pattern entry (the 12.4.j: sparsity pattern entry has been fulfilled).

Change the name of speedtest.cpp to 8.4.1: speed_program.cpp .

12.7.10.ao: 02-05
Change windows install instructions to use Unix formatted files (so only two instead of four tarballs are necessary for each version). The Microsoft project files for speed/cppad, speed/double, and speed/example were missing. This has also been fixed.

12.7.10.ap: 02-03
There was an ambiguity problem (detected by g++ 4.3) with the following operations
     
x op y
where x and y were AD<double> and op was a member operator of that class. This has been fixed by making all such member functions friends instead of members of AD<double>.

Remove compound assignment entry from wish list (it was fixed on 12.7.11.as: 2007-05-26 ). Add an expression hashing entry to the 12.6: wish_list (it has since been removed). Add Library and Scripting Languages to the wish list (this has since been fulfilled by the example 10.2.2: ad_in_c.cpp ).

12.7.10.aq: 01-26
The 8.14.2: LuFactor routine gave a misleading error message when the input matrix contained nan (not a number) or infinity. This has been fixed.

12.7.10.ar: 01-24
The 12.8.13.m: postfix_dir has been added to the configure command line options.

12.7.10.as: 01-21
A sparse Hessian case was added to the 11: speed tests; see 11.1.6: sparse_hessian .

12.7.10.at: 01-20
CppAD can now be installed using yum on Fedora operating systems.

12.7.10.au: 01-11
The CppAD correctness tests assume that machine epsilon is less than 1e-13. A test for this has been added to the test_more/test_more program.

12.7.10.av: 01-08
Added a 5.6.4: sparse_hessian routine and extended 5.2.2: Hessian to allow for a weight vector w instead of just one component l .
Input File: omh/appendix/whats_new/whats_new_08.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.7.11: Changes and Additions to CppAD During 2007

12.7.11.a: Introduction
This section contains a list of the changes to CppAD during 2007 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions of CppAD.

12.7.11.b: 12-29
License conversions missed the copyright message at the top in the following special cases: makefile.am, makefile.in, and omh/appendix/license.omh.

12.7.11.c: 12-25
The 2: install instructions have been improved.

12.7.11.d: 12-21
The 12.8.13.h: --with-Documentation option on the configure command line caused an error on some systems because it attempted to copy too many files. This has been fixed by copying the directory instead of the individual files.

12.7.11.e: 12-08
By mistake, the documentation 12.12: License statement for the GPL distribution was the same as for the CPL distribution. This has been fixed.

12.7.11.f: 12-05
Change the name of the spec file from cppad-yyyymmdd.spec to cppad.spec.

12.7.11.g: 12-04
Add the capability for the RPM spec file to use a different prefix directory.

12.7.11.h: 12-03
This is the first version with the rpm spec file cppad.spec.

12.7.11.i: 12-02
Add the DESTDIR=directory option on the 12.8.13.v: make install command line.

12.7.11.j: 11-29
The 4.4.2: unary_standard_math function sqrt did not link properly when Base was AD<double>. This has been fixed.

12.7.11.k: 11-23
The routines nan and isnan were failing on some systems because those systems use nan and/or isnan as preprocessor symbols. This has been fixed; see 8.11.c.a: macros . In addition, the example and test 8.11.1: nan.cpp has been added.

12.7.11.l: 11-18
Speed tests for tape_values branch were not better than trunk, so the good parts of that branch (but not all) were merged into the trunk.

The interface specifications for 4.7: base type requirements have been changed so that CppAD would compile with gcc 4.1.1 (which requires more definitions before use in template functions). This change of requirements is demonstrated by the 4.7.9.6: base_complex.hpp and 4.7.9.3: base_adolc.hpp examples.

The problem with newer C++ compilers requiring more definitions before use also required the user to know about float and double definitions for the standard math functions in the CppAD namespace; see 4.7.5: base_std_math .

The example/test_one.sh and test_more/test_one.sh scripts were modified so that one need only specify the test file name (the test routine name is no longer also needed). Some of the test routine declarations were changed from name() to name(void) to make this possible.

The program test_more/test_more was changed to always report the memory leak test results (same as example/example).

The 4.3.6: PrintFor function was putting an unused variable in the tape. This has been fixed.

12.7.11.m: 11-06
Added the -DRAD_EQ_ALIAS compiler flag to the 11.7: Sacado speed tests . In addition, compiler flag documentation was included for Sacado and all the other speed tests.

12.7.11.n: 11-05
MS project files have been added for running the 11.5: cppad and 11.3: double speed tests.

12.7.11.o: 11-04
The cppad/config.h file was not compatible with the Windows install procedure and the Windows projects could not find a certain include file. This has been fixed.

The 12.8.13: unix install procedure has been modified so that the one configure flag --with-Speed builds all the possible executables related to the speed testing.

12.7.11.p: 11-03
Improve the 11.1: speed_main documentation and output (as well as the title for other sections under 11: speed ).

The subversion copy of the 12.8.13.d: configure script was not executable. This has been fixed.

12.7.11.q: 11-02
The instructions for downloading the current version using subversion have changed. The user should now directly edit the file
 
     trunk/configure
in order to set the correct date for the installation and to build the corresponding documentation.

The 11: speed section has been slightly reorganized (the main program and utilities have been separated).

Add 11.3: speed_double for testing the speed of evaluating functions in double as opposed to gradients using AD types.

12.7.11.r: 11-01
The instructions for downloading the current version using subversion have changed. The user must now execute the command
 
     ./build.sh version
in order to set the correct version number for her (or his) installation.

Add the return status for all the correctness tests to the documentation; see make test.

12.7.11.s: 10-30
The download instructions did not update the current version number and this broke the links to the current tarballs. This has been fixed.

The documentation for 11.2.3: det_by_minor and 11.2.1: det_by_lu has been improved. The order of the elements in 11.2.2: det_of_minor has been corrected (they were transposed but this did not really matter because determinants of transposes are equal).

The makefiles in the distribution have been changed so that one can run configure from a directory other than the distribution directory.

12.7.11.t: 10-27
A subversion method for downloading CppAD has been added.

The installation was broken on some systems because the 12.8.13.d: configure command tried to run the autoconf and automake programs. This has been fixed by adding AM_MAINTAINER_MODE to the autoconf input file.

Extend the subversion methods to include a full installation and old versions.

12.7.11.u: 10-23
The 12.8.13.k: cxx_flags environment variable has been changed from CPP_ERROR_WARN to CXX_FLAGS.

The command configure --help now prints a description of the environment variables ADOLC_DIR, FADBAD_DIR, SACADO_DIR, BOOST_DIR, and CXX_FLAGS. In addition, if the environment variables POSTFIX_DIR or CPP_ERROR_WARN are used, a message is printed saying that they are no longer valid.

12.7.11.v: 10-22
The correctness checks and speed test wrappers were moved from the individual package directories to 11.1: speed_main . This way they do not have to be reproduced for each package. This makes it easier to add a new package, but it requires the prototype for compute_test_name to be the same for all packages.

The Sacado (http://trilinos.sandia.gov/packages/sacado/) package was added to the list of 11: speed tests. In addition, the discussion about how to run each of the speed tests was corrected to include the seed argument.

The postfix_dir option was removed on 12.7.12.p: 2006-12-05 but it was not removed from the 12.8.13.d: configure documentation. This has been fixed.

The routine 8.10: CheckSimpleVector was changed. It used to require conversion of the form
     
Scalar(i)
where i was 0 or 1. This does not work when Scalar is Sacado::Tay::Taylor<double>. This requirement has been changed (see 8.10.d: restrictions ) to require support of
     
x = i
where x has type Scalar and i has type int.

Fix the include directives in the 11.6: speed_fadbad programs det_lu, det_minor, and poly to use the FADBAD++ directory instead of Fadbad++.

Add ADOLC_DIR, FADBAD_DIR, SACADO_DIR, and BOOST_DIR to the 12.8.13.d: configure help string.

12.7.11.w: 10-16
Add seed argument and improve 11.1: speed_main documentation.

12.7.11.x: 10-13
Fix the title in 11.4.2: adolc_det_lu.cpp . Add the package name to each test case result printed by 11.1: speed_main .

12.7.11.y: 10-05
Added an example, not_complex_ad.cpp, using complex calculations for a function that is not complex differentiable. (This example has since been removed; see 12.1.d: complex FAQ .)

12.7.11.z: 10-02
Extend the 4.4.3.2: pow function to work for any case where one argument is AD<Base> and the other is double (as do the binary operators).
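A minimal sketch of the mixed argument cases (the values are arbitrary):

     # include <cppad/cppad.hpp>

     int main(void)
     {    using CppAD::AD;

          AD<double> x = 2.0;
          AD<double> a = CppAD::pow(x, 0.5);   // AD<Base> base, double exponent
          AD<double> b = CppAD::pow(0.5, x);   // double base, AD<Base> exponent
          return (a > 0. && b > 0.) ? 0 : 1;
     }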

12.7.11.aa: 09-06
If the 8.19.f.a: method.step function returned nan (not a number), it was possible for 8.19: OdeErrControl to drop into an infinite loop. This has been fixed.

12.7.11.ab: 08-09
Let the user detect and handle the case where an ODE initial vector xi contains nan (not a number); see 8.17: Runge45 , 8.18: Rosen34 , and 8.19: OdeErrControl .

Use the || operator instead of the | operator in the nan function (the Ginac library seems to use an alias for the type bool and does not have | defined for this alias).

The file test_more/ode_err_control.cpp was using the wrong include file name since the change on 08-07. This has been fixed.

12.7.11.ac: 08-07
Sometimes an ODE solver takes too large a step and this results in invalid values for the variables being integrated. The ODE solvers 8.17: Runge45 and 8.18: Rosen34 have been modified to abort and return 8.11: nan when it is returned by the differential equation evaluation. The solver 8.19: OdeErrControl has been modified to try smaller steps when this happens.
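A minimal sketch with a hypothetical right hand side chosen so that a trial step can produce nan; whether nan actually occurs depends on the solver's internal step sizes:

     # include <cppad/utility/runge_45.hpp>
     # include <cppad/utility/nan.hpp>
     # include <cppad/utility/vector.hpp>
     # include <cmath>

     class Fun {
     public:
          void Ode(const double& t, const CppAD::vector<double>& x,
                    CppAD::vector<double>& f)
          {    // t is not used by this right hand side;
               // std::sqrt returns nan once a trial step makes x[0] negative
               f[0] = - std::sqrt(x[0]);
          }
     };

     int main(void)
     {    Fun F;
          CppAD::vector<double> xi(1), xf(1);
          xi[0] = 1.0;
          xf = CppAD::Runge45(F, 10, 0.0, 3.0, xi);
          // instead of looping on invalid values, the solver returns nan
          return CppAD::isnan(xf[0]) ? 0 : 1;
     }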

Fix a 5.1.2.g: Sequence Constructor reference to Dependent in the documentation (it was using the 12.8.2: FunDeprecated one argument syntax).

Add comment about mixing debug and non-debug versions of CppAD in 12.8.5.l: TrackDelVec error message.

12.7.11.ad: 07-30
CppADCreateBinaryBool and CppADCreateUnaryBool have been replaced by CPPAD_BOOL_BINARY and CPPAD_BOOL_UNARY respectively. In addition, the 12.6: wish_list item for conversion of all preprocessor macros to upper case has been completed and removed.

12.7.11.ae: 07-29
The preprocessor macros CppADUsageError and CppADUnknownError have been replaced by CPPAD_ASSERT_KNOWN and CPPAD_ASSERT_UNKNOWN respectively. The meaning for these macros has been included in the 8.1.2: cppad_assert section. In addition, the known argument to 8.1: ErrorHandler was wrong for the unknown case.

The 12.6: wish_list item for conversion of all preprocessor macros to upper case has been changed (to an item that was previously missing).

12.7.11.af: 07-28
The preprocessor macro CPPAD_DISCRETE_FUNCTION was defined as a replacement for CppADCreateDiscrete, which has been deprecated.

12.7.11.ag: 07-26
Merge in changes made in branches/test_vector.

12.7.11.ag.a: 07-26
All occurrences of CppADvector in the files test_more/*.cpp and speed/*/*.cpp were changed to CPPAD_TEST_VECTOR. All occurrences of CppADvector in the documentation were edited to reflect the fact that it has been deprecated. The documentation index and search for deprecated items has been improved.

12.7.11.ag.b: 07-25
Deprecate the preprocessor symbol CppADvector and start changing it to 12.8.9: CPPAD_TEST_VECTOR .

Change all occurrences of CppADvector, in the example/*.cpp files, to CPPAD_TEST_VECTOR.

12.7.11.ah: 07-23
The 12.8.5: TrackNewDel macros CppADTrackNewVec, CppADTrackDelVec, and CppADTrackExtend have been deprecated. The new macros names to use are CPPAD_TRACK_NEW_VEC, CPPAD_TRACK_DEL_VEC, and CPPAD_TRACK_EXTEND respectively. This item has been removed from the 12.6.o: software guidelines section of the wish list.

The member variable 12.6.o: software guideline wish list item has been brought up to date.

12.7.11.ai: 07-22
Minor improvements to the 10.2.13: mul_level_adolc_ode.cpp example.

12.7.11.aj: 07-21
  1. The openmp/run.sh example programs example_a11c.cpp, openmp_newton_example.cpp, and sum_i_inv.cpp have been changed so that they run on more systems (are C++ standard compliant).
  2. 4.7: base_require : The IdenticalEqual function, in the 4.7: base_require specification, was changed to IdenticalEqualPar (note the 4.7.c: API warning in the Base requirement specifications).
  3. Implementation of the 4.7: base requirements for complex types were moved into the 4.7.9.6: base_complex.hpp example.


12.7.11.ak: 07-20
The download for CppAD was still broken. It turned out that the copyright message was missing from the file 4.7.9.3: base_adolc.hpp and this stopped the creation of the download files. This has been fixed. In addition, the automated testing procedure has been modified so that missing copyright messages and test program failures will be more obvious in the test log.

12.7.11.al: 07-19
The download for CppAD has been broken since the example mul_level_adolc_ode.cpp was added because the example/example program was failing. This has been fixed.

12.7.11.am: 07-18
A realistic example using Adolc with CppAD 10.2.13: mul_level_adolc_ode.cpp was added. The documentation for 12.8.5: TrackNewDel was improved.

12.7.11.an: 07-14
Add a discussion at the beginning of 10.2.12: mul_level_ode.cpp example (and improve the notation used in the example).

12.7.11.ao: 07-13
Separate the include file 4.7.9.3: base_adolc.hpp from the 4.7.9.3.1: mul_level_adolc.cpp example so that it can be used by other examples.

12.7.11.ap: 06-22
Add 4.7.9.3.1: mul_level_adolc.cpp , an example that demonstrates using adouble for the 4.7: Base type.

The 10.1: get_started.cpp example did not build when the --with-Introduction and BOOST_DIR options were included on the 12.8.13.d: configure command line. In fact, some of the 11: speed tests also had compilation errors when BOOST_DIR was include in the configure command. This has been fixed.

A namespace reference was missing that could have caused compilation errors in the files speed/cppad/det_minor.cpp and speed/cppad/det_lu.cpp. This has been fixed.

12.7.11.aq: 06-20
The MS project test_more/test_more.vcproj would not build because the file test_more/fun_check.cpp was missing; this has been fixed. In addition, fix warnings generated by the MS compiler when compiling the test_more/test_more.cpp file.

Add a section defining the 4.7: Base type requirements . Remove the Base type restrictions from the 12.1: Faq . Make all the prototypes for the default Base types agree with the specifications in the Base type requirements.

Fix the description of the tan function in 4.4.2: unary_standard_math .

12.7.11.ar: 06-14
The routine 8.18: Rosen34 ( 8.17: Runge45 ) had a division of a size_t ( int ) by a Scalar , where Scalar was any 8.7: NumericType . Such an operation may not be valid for a particular numeric type. This has been fixed by explicitly converting the size_t to an int, then converting the int to a Scalar , and then performing the division. (The conversion of an int to any numeric type must be valid.)
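A minimal sketch of the conversion idiom just described; the function name is hypothetical:

     # include <cstddef>

     // Scalar must support conversion from int and division (NumericType)
     template <class Scalar>
     Scalar average_step(const Scalar& ti, const Scalar& tf, size_t M)
     {    // size_t -> int, then int -> Scalar, then divide
          return (tf - ti) / Scalar( int(M) );
     }

     int main(void)
     {    double len = average_step(0.0, 1.0, 4);   // 0.25
          return (len == 0.25) ? 0 : 1;
     }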

12.7.11.as: 05-26
If the Base type is not double, the 4.4.1.4: compound assignment operators did not always allow for double operands. For example, if x had type AD< AD<double> >
     
x += .5;
would slice the value .5 to an int and then convert it to an AD< AD<double> >. This has been fixed.

This slicing has also been fixed in the 4.2: assignment operation. In addition, the assignment and copy operations have been grouped together in the documentation; see 4.1: ad_ctor and 4.2: ad_assign .
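A minimal sketch of the case that was being sliced (the values are arbitrary):

     # include <cppad/cppad.hpp>

     int main(void)
     {    using CppAD::AD;

          AD< AD<double> > x = 2.0;
          x += .5;                  // .5 now handled as a double, not an int
          AD< AD<double> > y = .5;  // assignment from double fixed similarly
          return (x == 2.5 && y == .5) ? 0 : 1;
     }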

12.7.11.at: 05-25
Document usage of double with binary arithmetic operators, and combine all those operators into one section (4.4.1.3: ad_binary ).

The documentation for all the 4.4.1.4: compound assignment operators has been grouped together. In addition, a compound assignment wish list item has been added (it was completed and removed with the 12.7.11.as: 05-26 update.)

12.7.11.au: 05-24
Suppose that op is a binary operation and we have
     
left op right
where one of the operands was AD< AD<double> > and the other operand was double. There was a bug in this case that caused the double operand to be converted to int before being converted to AD< AD<double> >. This has been fixed.

12.7.11.av: 05-22
The Microsoft examples and testing project file example/example.vcproj was missing a reference to the source code file example/reverse_two.cpp. This has been fixed.

12.7.11.aw: 05-08
Reverse mode does not work with the 4.4.3.2: pow function when the base is less than or equal to zero and the exponent is an integer. For this reason, the 8.12: pow_int function is no longer deprecated (and is used by CppAD when the exponent has type int).

12.7.11.ax: 05-05
Third and fourth order derivatives were included in the routine test_more/sqrt.cpp that tests square roots.

The return value descriptions were improved for the introduction examples: 3.1.4.e: exp_2_for1 , 3.1.6.e: exp_2_for2 , 3.2.4.d: exp_eps_for1 , and 3.2.6.e: exp_eps_for2 .

The summation index in 12.3.2.3: sqrt_reverse was changed from @(@ k @)@ to @(@ \ell @)@ to make partial differentiation with respect to @(@ z^{(k)} @)@ easier to understand. In addition, a sign error was corrected near the end of 12.3.2.3: sqrt_reverse .

The dimension for the notation @(@ X @)@ in 12.3.3: reverse_identity was corrected.

The word mega was added to the spelling exception list for openmp/run.sh.

12.7.11.ay: 04-19
Improve connection from 12.3.3: reverse_identity theorem to 5.4.3: reverse_any calculations.

Improve the openmp/run.sh script. It now runs all the test cases at once in addition to including multiple number of thread cases for each test.

Add the sum_i_inv_time.cpp OpenMP example case.

There was a typo in the 5.3.4.o: second order discussion (found by Kipp Martin). It has been fixed.

12.7.11.az: 04-17
Add a paragraph to 12.3.3: reverse_identity explaining how it relates to 5.4.3: reverse_any calculations. Add description of 5.4.3.h: first and 5.4.3.i: second order results in 5.4.3: reverse_any .

12.7.11.ba: 04-14
Simplify the 5.4: Reverse mode documentation by creating a separate 5.4.2: reverse_two section for second order reverse, making major changes to the description in 5.4.3: reverse_any , and creating a third order example 5.4.3.2: reverse_checkpoint.cpp for reverse mode calculations.

Improve the 12.3.3: reverse_identity proof.

12.7.11.bb: 04-11
Merge in changes made in branches/intro.

12.7.11.bb.a: 04-11
Add 3.2.7: exp_eps_rev2 and its verification routine 3.2.7.1: exp_eps_rev2.cpp .

12.7.11.bb.b: 04-10
Finished off 3.1.7: exp_2_rev2 and added 3.1.7.1: exp_2_rev2.cpp which verifies its calculations. Added second order calculations to 3.1.8: exp_2_cppad . Added 3.2.6: exp_eps_for2 and its verification routine.

12.7.11.bb.c: 04-07
Added a preliminary version of 3.1.7: exp_2_rev2 (does not yet have verification or exercises).

12.7.11.bb.d: 04-06
Fixed a problem with the Microsoft Visual Studio project file introduction/exp_apx/exp_apx.vcproj (it did not track the file name changes of the form exp_apx/exp_2_for to exp_apx/exp_2_for1 on 04-05).

Added 3.1.6: exp_2_for2 to introduction.

12.7.11.bb.e: 04-05
Use order expansions in introduction; e.g., the 3.1.6.a: second order expansion for the 3.1: exp_2 example.

12.7.11.bc: 03-31
Merge in changes made in branches/intro and remove the corresponding Introduction item from the wish list:

12.7.11.bc.a: 03-31
Create a simpler exponential approximation in the 3: introduction called 3.1: exp_2 which has a different program variable for each variable in the operation sequence.

Simplify the 3.2: exp_eps approximation using the @(@ v_1 , \ldots , v_7 @)@ notation so that variables directly correspond to index in operation sequence (as with the 3.1: exp_2 example).

12.7.11.bc.b: 03-30
The Microsoft project file introduction/exp_apx/exp_apx.vcproj was referencing exp_apx_ad.cpp which no longer exists. It has been changed to reference exp_apx_cppad.cpp which is the new name for that file.

12.7.11.bd: 03-29
Fixed entries in this file where the year was mistakenly used for the month. To be more specific, 07-dd was changed to 03-dd for some of the entries directly below.

Corrected some places where CppAD was used instead of Adolc in the 11.4.5: adolc_poly.cpp documentation.

Added an Introduction and 12.6.p: Tracing entry to the wish list. (The Introduction item was completed on 12.7.11.bc: 03-31 .)

12.7.11.be: 03-20
Example A.1.1c, example_a11c.cpp, from the OpenMP 2.5 standards document, was added to the tests that can be run using openmp/run.sh.

12.7.11.bf: 03-15
Included the changes from the openmp branch so that CppAD does not use the OpenMP threadprivate command (some systems do not support this command).

12.7.11.bf.a: 03-15
Add command line arguments to openmp_newton_example.cpp, and modified openmp/run.sh to allow for more flexible testing.

12.7.11.bf.b: 03-14
Fixed some Microsoft compiler warnings by explicitly converting from size_t to int.

In the Microsoft compiler case, the cppad/config.h file had the wrong setting of GETTIMEOFDAY. The setting is now overridden (and always false) when the _MSC_VER preprocessor symbol is defined.

Some minor changes were made in an effort to speed up the multi-threading case.

12.7.11.bf.c: 03-13
Started a new openmp branch and created a version of CppAD that does not use the OpenMP threadprivate command (not supported on some systems).

12.7.11.bg: 03-09
Included the changes from openmp branch so that OpenMP can be used with CppAD, see 12.8.4: omp_max_thread . The changes dated between 12.7.11.bg.f: 02-15 and 03-28 below were made in the openmp branch and transferred to the trunk on 03-09.

12.7.11.bg.a: 03-28
The conditional include commands were missing on some include files; for example
 
     # ifndef CPPAD_BENDER_QUAD_HPP
     # define CPPAD_BENDER_QUAD_HPP
was missing at the beginning of the 12.10.1: BenderQuad include file. This has been fixed.

The 8.3.j: timing for the speed_test routines was changed to use gettimeofday if it is available. (gettimeofday measures wall clock time, which is better in a multi-threading environment.)

Added the user multi-threading interface 12.8.4: omp_max_thread along with its examples which are distributed in the directory openmp.

The speed/*.hpp files have been moved to cppad/speed/*.hpp and the corresponding wish list item has been removed.

The multiple tapes with the same base type wish list item has been removed (its purpose was multi-threading, which has been implemented).

12.7.11.bg.b: 02-27
The 11: speed include files are currently being distributed above the cppad include directory. A wish list item to fix this has been added.

Multiple active tapes required a lot of multi-threading access management for the tapes. This was made simpler (and faster) by having at most one tape per thread.

12.7.11.bg.c: 02-22
The include command in the 8.3: speed_test documentation was
 
     # include <speed/speed_test.hpp>
but it should have been
 
     # include <cppad/speed_test.hpp>
This has been fixed.

12.7.11.bg.d: 02-17
An entry about optimizing the operation sequence in an 5.1.2: ADFun object was added to the 12.6: wish_list .

Change the argument syntax for 5.1.3: Dependent and deprecate the 12.8.2.c: old Dependent syntax .

12.7.11.bg.e: 02-16
Added VecAD<Base> as a valid argument type for the 4.5.4: Parameter and Variable functions. In addition, 4.6.h: size_t indexing was extended to be allowed during taping so long as the VecAD object is a parameter.

12.7.11.bg.f: 02-15
Fixed the example/test_one.sh script (it was using its old name one_test).

12.7.11.bh: 02-06
The 12.10.1: BenderQuad documentation was improved by adding the fact that the x and y arguments to the f.dy member function are equal to the x and y arguments to BenderQuad. Hence values depending on them can be stored as private objects in f and need not be recalculated.

12.7.11.bi: 02-04
The method for distributing the documentation needed to be changed in the top level makefile.am in order to be compatible with automake version 1.10.

12.7.11.bj: 02-03
The change on 12.7.11.bl: 02-01 used new to create an object, saved as a static pointer, with no corresponding delete. This was not a bug, but it has been changed to avoid an error message when using CppAD with valgrind (http://valgrind.org/) .

The change to the pow function on 12.7.12.m: 06-12-10 did not include the necessary changes to the sparsity calculations. This has been fixed.

12.7.11.bk: 02-02
Fix minor errors and improve 12.8.13.f: profiling documentation. Also change the problem sizes used for the 11: speed tests.

12.7.11.bl: 02-01
There seems to be a bug in the cygwin version of g++ version 3.4.4 with the -O2 flag whereby some static variables in static member functions sometimes do not get constructed before being used. This has been avoided by using a static pointer and the new operator in cppad/local/ad.hpp.

12.7.11.bm: 01-29
The copyright message was missing from some of the distribution files for some new files added on 12.7.12.i: 06-12-15 . This resulted in the tarballs *.tgz and *.zip not existing for a period of time. The automated tests have been extended so that this should not happen again.
Input File: omh/appendix/whats_new/whats_new_07.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.7.12: Changes and Additions to CppAD During 2006

12.7.12.a: Introduction
This section contains a list of the changes to CppAD during 2006 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions of CppAD.

12.7.12.b: 12-24
Move exp_eps_ad to exp_eps_cppad and add exercises to the following sections: 3.2.5: exp_eps_rev1 , 3.2.8: exp_eps_cppad .

Add operation sequence indices to help track operations in 3.2.3: exp_eps_for0 , 3.2.4: exp_eps_for1 , 3.2.5: exp_eps_rev1 .

12.7.12.c: 12-23
Add exercises to the following sections: 10.1: get_started.cpp , 3.2: exp_eps , 3.2.3: exp_eps_for0 , and 3.2.4: exp_eps_for1 .

12.7.12.d: 12-22
Move 10.1: get_started.cpp below the 3: introduction directory.

Move the exponential example to the subdirectory introduction/exp_apx and change the --with-Introduction unix configure option to build both the 10.1: get_started.cpp and 3.3: exp_apx.cpp example programs. (The --with-GetStarted configure command line option has been removed.)

12.7.12.e: 12-21
Add the 8.13.2: source code for Poly to the documentation and include 8.13: Poly in the 11.2: speed_utility section.

The 10.1: get_started.cpp section has been moved into the 3: Introduction and 10.1.f: exercises were added to that section. In addition, some sections have switched position between the top level : CppAD and the 12: Appendix .

12.7.12.f: 12-19
Reorganize so that the source code is below the corresponding routine in the documentation tree (instead of at the same level) for the following routines: 11.2.3: det_by_minor , 11.2.5: det_grad_33 , 11.2.10: uniform_01 , 11.2.2: det_of_minor , 11.2.1: det_by_lu , 8.14.3: LuInvert , 8.14.2: LuFactor , 8.14.1: LuSolve .

Separate the specifications for the source in 11.2: speed_utility and add cross reference to the following routine specification and implementations: 11.2.3: det_by_minor , 11.2.5: det_grad_33 , 11.2.10: uniform_01 , 11.2.2: det_of_minor , 11.2.1: det_by_lu , 8.14.3: LuInvert , 8.14.2: LuFactor , 8.14.1: LuSolve .

12.7.12.g: 12-18
Make the 11: speed source code easier to read.

Change the speed test output name det_poly to poly (as it should have been).

12.7.12.h: 12-17
The speed test 11.2.5: det_grad_33 was missing its documentation (this has been fixed). In addition, the titles and indexing for the speed test documentation has been improved.

Add to the specifications that each repeated test corresponds to a different matrix in 11.1.1: det_lu and 11.1.2: det_minor . In addition, modify all the speed tests so that they abide by this rule.

Change some references from the deprecated name CppAD.h to new name cppad.hpp.

Change 11.4.1: adolc_det_minor.cpp and 11.5.1: cppad_det_minor.cpp to tape once and reuse operation sequence for each repeated matrix in the test.

Add the 11.1.5: poly speed test for all three packages. In addition, correct a missing include in 8.13: poly routine.

12.7.12.i: 12-15
The wish list item to simplify and better organize the speed tests has been completed. The new layout is:
     11: speed/                 template functions that are speed tested
     speed/example              example usage of the speed template functions
     11.4: speed/adolc          Adolc drivers for the template functions
     11.5: speed/cppad          CppAD drivers for the template functions
     11.6: speed/fadbad         Fadbad drivers for the template functions
     12.8.13.f: speed/profile   profiling version of the CppAD drivers

12.7.12.j: 12-13
Next step for the speed wish list item: remove speed_cppad from the documentation and replace it by speed/cppad, see 11.5: speed_cppad for the new CppAD speed test routines.

12.7.12.k: 12-12
Started the speed wish list item by moving the adolc directory to speed/adolc and fadbad to speed/fadbad.

12.7.12.l: 12-11
Started the speed wish list item by creating the speed/example directory and moving the relevant examples from example/*.cpp and speed_example/*.cpp to speed/example/*.cpp . In addition, the relevant include files have been moved from example/*.hpp to speed/*.hpp .

A new 8.3: speed_test routine was added to the library.

12.7.12.m: 12-10
The 4.4.3.2: pow function was changed to be an AD<Base> 12.4.g.a: atomic operation. This function used to return nan when x is negative because it was implemented as
     pow(x, y) = exp( log(x) * y )
This has been fixed so that the function and its derivatives are now calculated properly when x is less than zero. The 4.4.3.2: pow documentation was improved and the 4.4.3.2.1: pow.cpp example was changed to test more cases and to use the same variable names as in the documentation.

12.7.12.n: 12-09
A speed wish list item was added to the wish list.

The prototype for int arguments in binary operations (for example 4.4.1.3: addition ) was documented as const int & but was actually just plain int. This has been fixed. (Later changed to double.)

12.7.12.o: 12-07
Fix bug in the subversion installation instructions; see bug report (http://list.coin-or.org/pipermail/cppad/2006q4/000076.html) .

Some of the automatically generated makefile.in files had an improper license statement in the GPL license version. This has been fixed.

12.7.12.p: 12-05
Add the unix installation 12.8.13.h: --with-Documentation option and remove the postfix_dir option.

Create a fixed 12.7: whats_new section above the section for each particular year. Also improve the CppAD distribution README file.

12.7.12.q: 12-03
The include file directory CppAD was changed to be all lower case; i.e., cppad. If you are using a Unix system, see 12.8.1: include_deprecated . This completes the following 12.6: wish_list items (which were removed):
  1. File and directory names should only contain lowercase letters, numbers, underscores, and possibly one period. The leading character must be alphabetic.
  2. C++ header files should have the .hpp extension.


12.7.12.r: 12-02
Put explanation of version numbering in the download instructions.

Correct some file name references under the Windows heading in 11.5: speed_cppad .

12.7.12.s: 12-01
All of the Makefile.am and Makefile files were changed to lower case; i.e., makefile.am and makefile.

Fix compiler warning while compiling cppad/RombergOne/ (mistake occurred during 12.7.12.u: 11-20 change).

12.7.12.t: 11-30
Cygwin packages, and other system packages, should not have a dash in the version number. See cygwin package file naming (http://cygwin.com/setup.html#naming) or, to quote the rpm file naming convention (http://www.rpm.org/max-rpm/ch-rpm-file-format.html) : "The only restriction placed on the version is that it cannot contain a dash "-"." In keeping with the acceptable package naming conventions for cygwin, CppAD version numbering has been changed from yy-mm-dd format to yyyymmdd; i.e., cppad-06-11-30 was changed to cppad-20061130.

12.7.12.u: 11-29
There was a problem using 8.15: RombergOne with floating point types other than double. This has been fixed.

12.7.12.v: 11-28
The 2: installation download files were not being built because Makefile.am referenced Doc when it should have referenced doc. This has been fixed.

12.7.12.w: 11-23
A Version Numbering entry was added to the 12.6: wish_list (this was completed on 12.7.12.t: 11-30 ).

12.7.12.x: 11-18
The example routine that computes determinants using expansion by minors, DetOfMinor, was changed to 11.2.2: det_of_minor in preparation for more formal speed comparisons with other packages. To be specific, its documentation was improved and its dependence on the rest of CppAD was removed (it no longer includes CppAD.h).

12.7.12.y: 11-12
The 10.3.1: general.cpp and test_more/test_more.cpp programs were changed to print out the number of tests that passed or failed instead of just "All the tests passed" or "At least one of the tests failed".

The windows project files for examples and testing should have been changed to use lower case file names as part of the 11-08 change below. This has been fixed.

12.7.12.z: 11-08
Move the Example directory to example and change all its files to use lower case names.

12.7.12.aa: 11-06
Move the TestMore directory to test_more and change all its files to use lower case names.

12.7.12.ab: 11-05
Remove references in the 11.5: speed_cppad tests to the Memory and Size functions because they have been 12.8.2: deprecated .

Correct some references to var_size that should have been 5.1.5.g: size_var .

12.7.12.ac: 11-04
Put the text written to standard output in the documentation for the 10.1.h: get_started.cpp and print_for.cpp examples. (Now documentation can be built from a subversion checkout without needing to execute automake.) The PrintFor.cpp and speedtest.cpp examples were missing in 10.4: ListAllExamples (which has been fixed).

Move the Speed directory to speed and change all its files to use lower case names.

12.7.12.ad: 11-02
The print_for directory was referenced as PrintFor in the root CppAD Makefile.am. This has been fixed.

The documentation for the Adolc helper routines AllocVec and AllocMat was not being included. This has been fixed.

Move the GetStarted directory to get_started and change all its files to use lower case names.

12.7.12.ae: 11-01
Move the PrintFor directory to print_for and change all its files to use lower case names.

12.7.12.af: 10-31
Move the SpeedExample directory to speed_cppad_example and change all its files to use lower case names.

12.7.12.ag: 10-29
Move the Adolc directory to adolc and change all its files to use lower case names.

Change all the files in the omh directory to use lower case names.

The file Makefile.am in the distribution directory had the CPL copyright message in the GPL version. This has been fixed.

12.7.12.ah: 10-28
The copyright message in the script files example/OneTest and TestMore/OneTest were GPL (in the CPL distribution). This has been fixed by moving them to example/OneTest.sh and TestMore/OneTest.sh so that the distribution automatically edits the copyright message.

12.7.12.ai: 10-27
Change the 5.2.2.2: hes_lagrangian.cpp example so that it computes the Lagrangian two ways. One is simpler and the other can be used to avoid re-taping the operation sequence.

12.7.12.aj: 10-26
Change 5.2.2.2: hes_lagrangian.cpp example so that it modifies the independent variable vector between the call to 5.1.1: Independent and the ADFun<Base> 5.1.2: constructor .

12.7.12.ak: 10-25
A subversion install procedure was added to the documentation.

Fix definition of preprocessor symbol PACKAGE_STRING in Speed/Speed.cpp (broken by change on 10-18).

Added the example 5.2.2.2: hes_lagrangian.cpp which computes the Hessian of a Lagrangian.

12.7.12.al: 10-18
Document and fix possible conflicts for 6: preprocessor symbols that do not begin with CppAD or CPPAD_.

Include a default value for the file cppad/config.h in the subversion repository.

12.7.12.am: 10-16
Fix bug when using 8.19: OdeErrControl with the type AD< AD<double> >.

12.7.12.an: 10-10
Add the 4.3.7: Var2Par function so it is possible to obtain the 4.3.1: Value of a variable. Move the Discrete.cpp example to 4.4.5.1: tape_index.cpp . Fix the Microsoft project file so that the Windows install examples and testing work properly (it was missing the 10.2.15: stack_machine.cpp example).
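A minimal sketch of the Var2Par use case (the values here are made up):
 
     # include <cppad/cppad.hpp>
     # include <vector>
     int main(void)
     {    using CppAD::AD;
          std::vector< AD<double> > ax(1), ay(1);
          ax[0] = 3.0;
          CppAD::Independent(ax);
          AD<double> v = 2.0 * ax[0];  // v is a variable while taping
          // Var2Par(v) is a parameter with the same value as v, so Value
          // can be applied to it even though the tape is still recording
          double val = CppAD::Value( CppAD::Var2Par(v) );
          ay[0] = v;
          CppAD::ADFun<double> f(ax, ay);
          return val == 6.0 ? 0 : 1;
     }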

12.7.12.ao: 09-30
These changes were grouped together because it took a while for Coin-Or to review the dual licensing version and because it was not possible to get the nightly build changed:
  1. Change shell scripts to use *.sh extension.
  2. Two versions, one with CPL and other with GPL license.
  3. Change subversion version of CppAD from GPL to CPL copyright.
  4. Change all files in cppad/local to use lower case and *.hpp extension.
  5. CppAD_vector.h was generating a warning on version 4 of gcc. This has been fixed.
  6. Change the preprocessor # define commands in cppad/local/*.hpp to use upper case names.
  7. Add the 10.2.15: stack_machine.cpp example.


12.7.12.ap: 08-17
Some error messages occurred while executing
 
     valgrind --tool=memcheck example/example
     valgrind --tool=memcheck TestMore/TestMore

These were not really bugs, but they have been fixed to avoid this conflict between CppAD and valgrind (http://valgrind.org/) .

12.7.12.aq: 07-14
Some improvements were made to the 3: Introduction , 3.2.1: exp_eps.hpp and 3.2.5: exp_eps_rev1 sections.

12.7.12.ar: 07-12
Use a drop down menu for the navigation links, instead of a separate frame for the navigation links, for each section in the documentation.

12.7.12.as: 06-29
Newer versions of the gcc compiler generated an error because 4.4.2.18: erf was using 4.4.4: CondExp before it was defined. This was found by Kasper Kristensen and his fix has been included in the CppAD distribution.

12.7.12.at: 06-22
The 5: ADFun operation f(x, y) no longer executes a zero order 5.3: Forward operation when a new operation sequence is stored in f . In addition, the syntax for this operation was changed to f.Dependent(y) (see 5.1.3: Dependent ).

12.7.12.au: 06-19
The changes listed under 06-17 and 06-18 were made in the branches/ADFun branch of the CppAD subversion repository. They did not get merged into the trunk and become part of the distribution until 06-19. This accomplished the following goal, which was removed from the 12.6: wish_list :

"We would like to be able to erase the function values so that 5: ADFun objects use less memory. We may even want to erase the AD operation sequence so that 5: ADFun objects use even less memory and can be used for a subsequent AD operation sequence."

12.7.12.au.a: 06-17
Added 5.3.8: capacity_order which can be used to control the amount of memory used to store 5.3: Forward results. Also 12.8.2: deprecated taylor_size, and defined 5.3.6: size_order in its place.

12.7.12.au.b: 06-18
Added the 5.1.2: ADFun default constructor and the ability to 5.1.3: store a new operation sequence in an ADFun object without having to use ADFun pointers together with new and delete.
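A minimal sketch of the new pattern, written with the two-argument form of Dependent used by later versions (an assumption; the entry above shows the one-argument form):
 
     # include <cppad/cppad.hpp>
     # include <vector>
     int main(void)
     {    using CppAD::AD;
          CppAD::ADFun<double> f;      // default constructor, no sequence yet
          std::vector< AD<double> > ax(1), ay(1);
          ax[0] = 0.5;
          CppAD::Independent(ax);
          ay[0] = ax[0] * ax[0];
          f.Dependent(ax, ay);         // store the new operation sequence in f
          return 0;
     }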

12.7.12.av: 06-17
The location where the distribution files are stored has changed and this broke the Download Current Version links for the unix and windows installation. This has been fixed.

The compiling instructions for the 11.5: speed_cppad routines have been improved.

The 4.3.1: Value function has been extended to allow for 12.4.h: parameter arguments even if the corresponding tape is in the Recording state.

The 12.10.1: BenderQuad documentation and example have been improved by changing Vector to BAvector to emphasize that it corresponds to a vector of Base objects.

12.7.12.aw: 06-15
Change 12.10.1: BenderQuad to use Base instead of AD<Base> wherever possible. This allows more calculations to be done in the base type; i.e., it is more efficient.

12.7.12.ax: 06-09
Add a size check (size one) for the 12.10.1.h: function value argument g in BenderQuad.

12.7.12.ay: 06-07
Some major changes were made to the notation in 10.1: get_started.cpp (to make it easier to start using CppAD).

In the 3: Introduction example, @(@ exp_eps @)@ was changed to @(@ {\rm exp\_eps} @)@.

12.7.12.az: 06-05
Change 12.10.1: BenderQuad @(@ F_y (x, y) @)@ to @(@ H(x,y) @)@ so it applies in a more general setting. This was another change to the BenderQuad interface: fun.fy was changed to fun.h .

12.7.12.ba: 06-02
Newer versions of the gcc compiler generated a warning for possible use of an uninitialized pointer. This was found by Michael Tautschnig and his fix has been included in the CppAD distribution.

12.7.12.bb: 05-31
The interface to 12.10.1: BenderQuad has been changed. Now all the function evaluation routines are member functions of one class object. This makes it easy for them to share common data.

12.7.12.bc: 05-29
Change the statement of command syntax to be in the same browser frame as the command documentation (for all the commands with a syntax statement). Now when a user links to a specific heading in a command's documentation, the syntax for that command is automatically included. Before, the user needed to follow another link to see the command syntax.

12.7.12.bd: 05-27
Added 12.10.1: BenderQuad for computing the Hessian of Bender's reduced objective function.

Added special specifications for resize(0) to 8.22: CppAD_vector .

12.7.12.be: 05-03
The g++ (GCC) 4.1.0 (Red Hat 4.1.0-3) compiler reported an error because certain functions were used before being defined (version 3.4.4 did not complain about this). This has been fixed.

12.7.12.bf: 04-29
Change all of the example and test driver programs so that they return error codes; i.e., zero for no error and one for an error.

Add more discussion and a reference for a gcc 3.4.4 -O2 bug (since been removed).

12.7.12.bg: 04-28
Improve the 10.1: get_started.cpp example and move it so that it is visible at the top level of the documentation.

12.7.12.bh: 04-26
The programs in 3: Introduction have been converted to automated tests that return true or false, with the driver program 3.3: Introduction .

12.7.12.bi: 04-25
Add an 3: Introduction section to the documentation (replaces old example that was part of the 12.3: Theory section).

12.7.12.bj: 04-19
A discussion was added near the end of the 5.9: FunCheck documentation. And the cross references to the 12.8.3: CompareChange discussion were changed to the FunCheck discussion.

An operation sequence entry was added to the 12.6: wish_list .

12.7.12.bk: 04-18
The new definitions for 12.4.b: AD of Base and 12.4.g.b: operation sequence have been used throughout the documentation.

Add the 5.9: FunCheck section for checking that a sequence of operations is as intended.

12.7.12.bl: 04-17
The documentation for 8.4: SpeedTest and 8.13: Poly was improved.

Definitions were added for an atomic 12.4.g: operation and for an operation sequence being dependent and independent of the values of specific operands.

The definition of AD sequence of operations was made abstract and moved to the glossary as 12.4.g.b: Type operation sequence .

12.7.12.bm: 04-15
The 10.2.10: mul_level example was moved from 5: ADFun to 10.2: General . The documentation for 8.4: SpeedTest was improved.

12.7.12.bn: 04-14
Documentation and examples were improved for the following routines: 5.2.5: ForTwo , 5.2.6: RevTwo . In addition, the computation in RevTwo was made more efficient (it used to possibly calculate some first order partials that were not used).

12.7.12.bo: 04-13
Documentation and examples were improved for the following routines: 5.2.1: Jacobian , 5.2.3: ForOne , 5.2.4: RevOne , and 5.2.2: Hessian .

12.7.12.bp: 04-08
In the case where 12.8.2.h: use_VecAD is true, the 5.5.2: ForSparseJac calculation is only valid for the current independent variable values. In this case, the sparsity pattern can be (and has been) made more efficient; i.e., it has fewer true values (because it only applies to the current 5.3.1: forward_zero ).

The conversion from 4.6.d: VecAD<Base>::reference to 4: AD gave a compile error (this has been fixed). Code example for this fix:
 
     VecAD<double> V(1);                    // VecAD vector with one element
     AD<double> zero = 0;                   // AD object used as vector index
     V[zero] = 1.;                          // assign to the VecAD element
     static_cast< AD<double> > ( V[zero] ); // conversion that used to fail

12.7.12.bq: 04-06
The 5.5.2: ForSparseJac , 5.5.4: RevSparseJac , and 5.5.6: RevSparseHes sparsity results are now valid for all independent variable values (if the AD operation sequence does not use any VecAD<Base> operands). In addition, the ForSparseJac, 5.5.4: RevSparseJac and 5.5.6: RevSparseHes documentation and examples were improved.

The 12.8.2.h: useVecAD member function was added to 5: ADFun objects.

The var_size member function was changed to 5.1.5.g: size_var (this is not backward compatible, but var_size was just added on 12.7.12.bt: 04-03 ).

12.7.12.br: 04-05
The documentation and example for 12.8.3: CompareChange were improved and moved to be part of the 5.3: Forward section.

12.7.12.bs: 04-04
The documentation and examples for 5.4: Reverse were improved and split into 5.4.1: reverse_one and 5.4.3: reverse_any .

12.7.12.bt: 04-03
Create separate sections for the 5.3.1: zero order and 5.3.2: forward_one first order cases of 5.3: Forward mode.

The ADFun 12.8.2.f: Size member function has been deprecated (use 5.3.6: size_order instead).

The 5.4: Reverse member function is now declared, and documented as, const; i.e., it does not affect the state of the ADFun object.

Change the examples that use 5.4: Reverse to use the same return value notation as the documentation; i.e., dw.

12.7.12.bu: 04-02
The member functions of 5: ADFun that return properties of AD of Base 12.4.g.b: operation sequence have been grouped into the 5.1.5: seq_property section. In addition, the 5.1.5.1: seq_property.cpp example has been added.

The 12.8.3: CompareChange function documentation was improved and moved to a separate section.

Group the documentation for the 5: ADFun member functions that evaluate functions and derivative values. This organization has since been changed.

Remove the old Fun.cpp example and extend 5.1.1.1: independent.cpp so that it demonstrates using different choices for the 8.9: SimpleVector type.

12.7.12.bv: 04-01
Move the 5.1.2: ADFun Constructor to its own separate section, improve its documentation, and use 5.1.1.1: independent.cpp for its example.

The following member functions of 5: ADFun have been 12.8.2: deprecated : Order, Memory.

The wish list entry for Memory usage was updated on 04-01. The request was implemented on 12.7.12.au: 06-19 and the entry was removed from the wish list.

12.7.12.bw: 03-31
Add examples for the 4.5.4: Parameter, Variable and 5.1.1: Independent functions.

Move the 4.5.4: Parameter and Variable functions from the 5: ADFun section to the 4: AD section.

In the examples for the 4: AD sections, refer to the range space vector instead of the dependent variable vector because some of the components may not be 12.4.m: variables .

12.7.12.bx: 03-30
Move the 12.10.3: LuRatio section below 8.14: LuDetAndSolve .

Move the definition of an AD of Base 12.4.g.b: operation sequence from the glossary to the 4: AD section.

Improve the definition of tape state.

Add mention of taping to 4.4.2.18: Erf , 4.5.3: BoolFun , 4.5.2: NearEqualExt ,and 4.4.3.2: Pow .

Change the definition for 4.6.d: VecAD<Base>::reference so that it stands out from the text better.

12.7.12.by: 03-29
Mention the 4.6.d: VecAD<Base>::reference case in documentation and examples for 4.4.2.14: abs , 4.4.3.1: atan2 , 4.4.2.18: erf , and 4.4.3.2: pow .

Fix a bug in the derivative computation for abs(x) when x had type AD< AD<double> > and x had value zero.

Fix a bug using non-zero AD indices for 4.6: VecAD vectors while the tape is in the empty state.

Extend 4.4.2.18: erf to include float, double, and VecAD<Base>::reference .

12.7.12.bz: 03-28
Mention the 4.6.d: VecAD<Base>::reference case in documentation and examples for 4.4.1.1: UnaryPlus , 4.4.1.2: UnaryMinus , 4.4.1.3: ad_binary , 4.4.1.4: compound_assign , and 4.4.2: unary_standard_math .

12.7.12.ca: 03-27
Extend and improve the 4.6.d.a: VecAD exceptions .

Mention the 4.6.d: VecAD<Base>::reference case and generally improve 4.4.1.3: addition documentation and examples.

12.7.12.cb: 03-26
Improve documentation and examples for 4.6: VecAD and change its element type from VecADelem<Base> to VecAD_reference<Base> (so that it looks more like 4.6.d: VecAD<Base>::reference ).

Mention the 4.6.d: VecAD<Base>::reference case and generally improve 4.3.1: Value , 4.3.5: ad_output and 4.2: assignment documentation and examples.

Extend 4.3.2: Integer and 4.3.6: PrintFor to include the 4.6.d: VecAD<Base>::reference case (and mention in documentation and examples).

12.7.12.cc: 03-24
Move 4.6: VecAD and 12.10.3: LuRatio from the old ExtendDomain section to 4: AD .

12.7.12.cd: 03-23
Improve documentation and examples for 4.4.4: CondExp and 4.4.5: Discrete . Move both of these sections from ExtendDomain to 4.4: ADValued .

12.7.12.ce: 03-22
The documentation sections under 4: AD have been organized into a new set of sub-groups.

12.7.12.cf: 03-18
The documentation and example for 4.3.6: PrintFor have been improved. The sections below 4: AD in the documentation have been organized into subgroups.

12.7.12.cg: 03-17
The documentation and examples have been improved for the following functions: 4.5.3: BoolFun , and 4.5.2: NearEqualExt .

12.7.12.ch: 03-16
Improve the documentation and example for the 4.4.3.2: pow function. This includes splitting out and generalizing the integer case 8.12: pow_int .

Copies of the atan2 function were included in the CppAD namespace for the float and double types.

12.7.12.ci: 03-15
Improve the b: introduction to CppAD.

12.7.12.cj: 03-11
The file cppad/local/MathOther.h had a file name case error that prevented the documentation from building and tests from running (except under Cygwin which is not really case sensitive). This has been fixed.

The term AD of Base 12.4.g.b: operation sequence has been defined. It will be used to improve the user's understanding of exactly how an 5: ADFun object is related to the C++ algorithm.

12.7.12.ck: 03-10
The math functions that are not under 4.4.2: unary_standard_math have been grouped under MathOther.

The documentation and examples have been improved for the following functions: 4.4.2.14: abs , 4.4.3.1: atan2 .

12.7.12.cl: 03-09
The examples 4.4.2.4.1: cos.cpp , 4.4.2.5.1: cosh.cpp , 4.4.2.6.1: exp.cpp , 4.4.2.7.1: log.cpp , 4.4.2.8.1: log10.cpp , 4.4.2.9.1: sin.cpp , 4.4.2.10.1: sinh.cpp , 4.4.2.11.1: sqrt.cpp have been improved.

12.7.12.cm: 03-07
The tan function has been added to CppAD.

The examples 4.4.2.1.1: Acos.cpp , 4.4.2.2.1: Asin.cpp and 4.4.2.3.1: atan.cpp have been improved.

12.7.12.cn: 03-05
The AD standard math unary functions documentation has been grouped together with improved documentation in 4.4.2: unary_standard_math .

12.7.12.co: 02-28
The 4.3.5: ad_output and 4.4.2.14: Abs documentation and example have been improved. Minor improvements were also made to the 10.3.3: lu_vec_ad.cpp documentation.

12.7.12.cp: 02-25
The 4.5.1: Compare documentation and example have been improved.

12.7.12.cq: 02-24
The documentation and examples have been improved for the following sections: 4.4.1.3: division , 4.4.1.4: -= , 4.4.1.4: *= , and 4.4.1.4: /= .

12.7.12.cr: 02-23
The 4.4.1.3: multiplication documentation and example have been improved.

12.7.12.cs: 02-21
The 4.4.1.3: subtraction documentation and example have been improved.

There was a bug in 5.2.6: RevTwo that was not detected by the 5.2.6.1: rev_two.cpp test. This bug was reported by Kasper Kristensen (http://list.coin-or.org/pipermail/cppad/2006-February/000020.html) . A test that detects this problem was added as TestMore/rev_two.cpp and the problem has been fixed.

12.7.12.ct: 02-15
The 4.4.1.4: += documentation and example have been improved.

12.7.12.cu: 02-14
The 4.4.1.3: addition documentation and example have been improved.

12.7.12.cv: 02-13
Combine the old binary operator and compound assignment documentation into 4.4.1: Arithmetic documentation.

The documentation and examples have been improved for the following sections: 4.2: assignment , 4.4.1.1: UnaryPlus , 4.4.1.2: UnaryMinus .

12.7.12.cw: 02-11
The documentation and examples have been improved for the following sections: 4.1: ad_ctor , 4.2: ad_assign , and 4.3.1: Value .

12.7.12.cx: 02-10
This is the beginning of a pass to improve the documentation: the CopyBase documentation (formerly FromBase, now part of 4.1: ad_ctor and 4.2: ad_assign ) and the 4.1: AD copy constructor documentation (formerly Copy) have been modified.

Some of the error messaging during 5: ADFun construction has been improved.

12.7.12.cy: 02-04
There was a read memory access past the end of an array in 8.22.g: CppAD::vector::push_back . This has been fixed and, in addition, 12.8.5: TrackNewDel is now used to perform and check the allocation in CppAD::vector.

The routines 8.17: Runge45 and 8.18: Rosen34 had static vectors to avoid recalculation on each call. These have been changed to be plain vectors to avoid memory leak detection by 12.8.5.n: TrackCount .

12.7.12.cz: 01-20
Add 12.6.o: software guidelines to the wish list.

12.7.12.da: 01-18
Improve the definition for 12.4.h: parameters and 12.4.m: variables . Remove unnecessary reference to parameter and variable in documentation for 5.1.1: Independent .

12.7.12.db: 01-08
The aclocal program is part of the automake and autoconf system. It often generates warnings of the form:
     /usr/share/aclocal/...: warning: underquoted definition of ...
The shell script file FixAclocal, which attempts to fix these warnings, was added to the distribution.

12.7.12.dc: 01-07
Change CppAD error handler from using the macros defined in cppad/CppADError.h to using a class defined in 8.1: cppad/utility/error_handler.hpp . The macros CppADUnknownError and CppADUsageError have been deprecated (they are temporarily still available in the file cppad/local/CppADError.h).

12.7.12.dd: 01-02
Add the sed script Speed/gprof.sed to aid in the display of the 12.8.13.f: profiling output.

Make the following source code files easier to understand: Add.h, Sub.h, Mul.h, Div.h (in the directory cppad/local).

12.7.12.de: 01-05
Make the following source code files easier to understand: RevSparseHes.h, Reverse.h, Fun.h, Forward.h, ForSparseJac.h, RevSparseJac.h (in the directory cppad/local).
Input File: omh/appendix/whats_new/whats_new_06.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.7.13: Changes and Additions to CppAD During 2005

12.7.13.a: 12-24
Fix a memory leak that could occur during the 5.5.2: ForSparseJac calculations.

12.7.13.b: 12-23
The buffers that are used to do 5.5.4: RevSparseJac and 5.5.6: RevSparseHes calculations are now freed directly after use.

The 12.8.5.1: TrackNewDel.cpp example was missing from the Windows install examples and testing project file. This has been fixed.

12.7.13.c: 12-22
The buffer that is used to do 5.4: Reverse mode calculations is now freed directly after use. This reduces the memory requirements attached to an 5: ADFun object.

12.7.13.d: 12-20
Buffers that are used to store the tape information corresponding to the AD<Base> type are now freed when the corresponding 5: ADFun object is constructed. This reduces memory requirements and actually gave better results on the 11.5: speed_cppad tests.

The 11.5: speed_cppad test program now outputs the version of CppAD at the top (to help when comparing output between different versions).

12.7.13.e: 12-19
The 12.8.5: TrackNewDel routines were added for tracking memory allocation and deletion with new[] and delete[]. This is in preparation for making CppAD more efficient in its use of memory. The bug mentioned on 12.7.13.p: 12-01 resurfaced and the corresponding routine was changed as follows:
 
     static local::ADTape<Base> *Tape(void)
     {    // If we return &tape, instead of creating and returning ptr,
          // there seems to be a bug in g++ with -O2 option.
          static local::ADTape<Base> tape;
          static local::ADTape<Base> *ptr = &tape;
          return ptr;
     }

12.7.13.f: 12-16
The 8.2: NearEqual function documentation for the relative error case was changed to
     |x - y| <= r * ( |x| + |y| )
so that there is no problem with division by zero when x and y are zero (the code was changed to that form also). The std::abs function replaced the direct computation of the complex norms (for the complex case in NearEqual). In addition, more extensive testing was done in 8.2.1: near_equal.cpp .
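A minimal sketch of the new criterion (the tolerance values are made up):
 
     # include <cppad/cppad.hpp>
     int main(void)
     {    double x = 0.0, y = 0.0;
          // with relative tolerance 1e-10 and absolute tolerance 0, the test
          // |x - y| <= 1e-10 * ( |x| + |y| ) succeeds, with no division,
          // even though x and y are both zero
          bool ok = CppAD::NearEqual(x, y, 1e-10, 0.);
          return ok ? 0 : 1;
     }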

12.7.13.g: 12-15
Extend 8.2: NearEqual and 4.5.2: NearEqualExt to cover more cases while converting them from a library function in lib/CppADlib.a and a utility in example/NearEqualExt.h to template functions in cppad/near_equal.hpp and cppad/local/NearEqualExt.h. This is another step along the way of removing the entire CppADlib.a library.

The change on 12.7.13.h: 12-14 broke the Microsoft project files example/Example.sln and TestMore/TestMore.sln used during CppAD installation on Windows. This has been fixed.

Move lib/SpeedTest.cpp to cppad/speed_test.hpp. This was the last change necessary in order to remove the CppAD library, so remove all commands related to building and linking CppADlib.a. The corresponding entry has been removed from the 12.6: wish_list .

One of the entries in the 12.6: wish_list corresponded to the 4.3.2: Integer function. It has also been removed (because it is already implemented).

12.7.13.h: 12-14
Extend 4.4.2.18: erf to cover more cases while converting it from a function in lib/CppADlib.a to a template function in cppad/local/Erf.h. This is one step along the way of removing the entire CppADlib.a library.

12.7.13.i: 12-11
Group routines that extend the domain for which an 5: ADFun object is useful into the ExtendDomain section.

Add an example of a C callable routine that computes derivatives using CppAD (see 10.2.7: interface2c.cpp ).

12.7.13.j: 12-08
Split out 8.14.2: LuFactor with the ratio argument to a separate function called 12.10.3: LuRatio . This needed to be done because 12.10.3: LuRatio is more restrictive and should not be part of the general template 8: utilities .

12.7.13.k: 12-07
Improve 8.10: CheckSimpleVector so that it tests element assignment. Change 8.10.1: check_simple_vector.cpp so that it provides an example and test of a case where a simple vector returns a type different from the element type and the element assignment returns void.

12.7.13.l: 12-06
The specifications for a 8.9: SimpleVector template class were extended so that the return type of an element access is not necessarily the same as the type of the elements. This enables us to include std::vector<bool> which packs multiple elements into a single storage location and returns a special type on element access (not the same as bool). To be more specific, if x is a std::vector<bool> object and i has type size_t, x[i] does not have type bool.
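The point can be checked directly; a small sketch in modern C++ (the static_assert below postdates this entry and is only for illustration):
 
     # include <vector>
     # include <type_traits>
     int main(void)
     {    std::vector<bool> x(2);
          // element access returns a proxy reference class, not bool
          static_assert(
               ! std::is_same< decltype( x[0] ), bool >::value,
               "std::vector<bool> element access would have type bool"
          );
          return 0;
     }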

Add a Home icon, that links to the CppAD home page (http://www.coin-or.org/CppAD/) , to the top left of the navigation frame (left frame) for each documentation section.

12.7.13.m: 12-05
The 5.5.6: RevSparseHes reverse mode Hessian sparsity calculation has been added.

The definition of a 12.4.j: sparsity pattern has been corrected to properly correspond to the more efficient form mentioned under 12.7.13.s: whats_new_05 below.

The dates in this file used to correspond to local time for when the change was checked into the subversion repository (http://projects.coin-or.org/CppAD/browser) . From now on the dates in this file will correspond to the first version of CppAD where the change appears; i.e., the date in the unix and windows download file names CppAD-yy-mm-dd .

12.7.13.n: 12-03
There was a bug in the 5.5.4: RevSparseJac reverse mode sparsity patterns when used with 4.6: VecAD calculations. This bug was fixed and the calculations were made more efficient (fewer true entries).

12.7.13.o: 12-02
There was a bug in the 5.5.2: ForSparseJac forward mode sparsity patterns when used with 4.6: VecAD calculations. This bug was fixed and the calculations were made more efficient (fewer true entries).

12.7.13.p: 12-01
The speed test of 10.3.3: lu_vec_ad.cpp has been reinstated. It appears that there is some sort of bug in the gcc compiler with the -O2 option whereby the following member function
 
     static local::ADTape<Base> *Tape(void)
     {    static local::ADTape<Base> tape;
          return &tape;
     }
(in cppad/local/AD.h) would sometimes return a null value (during 4.6: VecAD operations). A speed improvement in cppad/local/ExtendBuffer.h seems to prevent this problem. This fix is not well understood; i.e., we should watch to see if this problem reoccurs.

The source code for 10.3.3.1: lu_vec_ad_ok.cpp was mistakenly used for speed_cppad/LuSolveSpeed.cpp. This has been fixed.

12.7.13.q: 11-23
The speed test of 10.3.3: lu_vec_ad.cpp has been commented out because it sometimes generates a segmentation fault. Here is an explanation:

If X is a VecAD<Base> object and y is a Base object, X[y] uses a pointer from the element back to the original vector. Optimizing compilers might reorder operations so that the vector is destroyed before the object is used. This can be avoided by changing the syntax for 4.6: VecAD objects to use set and get member functions.

12.7.13.r: 11-22
A much better 4.6.1: example for using 4.6: VecAD vectors has been provided. In addition, a bug in the computation of derivatives using VecAD vectors has been fixed.

CppAD now checks the domain dimension during 5.1.1: Independent and the range dimension during the 5: ADFun constructor (provided that -DNDEBUG is not defined). If either of these dimensions is zero, the CppADUsageError macro is invoked.

12.7.13.s: 11-20
The sparsity pattern routines 5.5.2: ForSparseJac and 5.5.4: RevSparseJac have been modified so that they are relative to the Jacobian at a single argument value. This enables us to return more efficient 12.4.j: sparsity patterns .

An extra 4.6.d.a: exception has been added to the use of 4.6: VecAD elements. This makes VecAD somewhat more efficient.

12.7.13.t: 11-19
Improve the output messages generated during execution of the 12.8.13.d: configure command.

Put a try and catch block around all of the uses of new so that if a memory allocation error occurs, it will generate a CppADUsageError message.

The 10.1: get_started.cpp example has been simplified so that it is easier to understand.

12.7.13.u: 11-15
Fix a memory leak in both the 5.5.2: ForSparseJac and 5.5.4: RevSparseJac calculations.

12.7.13.v: 11-12
Add reverse mode 5.5.4: Jacobian sparsity calculation.

12.7.13.w: 11-09
Add prototype documentation for 8.14.1.l: logdet in the 8.14.1: LuSolve function.

Add the optional ratio argument to the 8.14.2: LuFactor routine. (This has since been moved to a separate routine called 12.10.3: LuRatio .)

12.7.13.x: 11-07
Remove some blank lines from the example files listed directly below (under 11-06). A comment about computing the entire Jacobian ( 5.5.2.k: entire sparsity pattern ) was added.

12.7.13.y: 11-06
The cases of std::vector, std::valarray, and CppAD::vector were folded into the standard example and tests format for the following cases: 5.2.6.1: rev_two.cpp , 5.2.4.1: rev_one.cpp , Reverse.cpp, 5.2.2.1: hessian.cpp , 5.2.1.1: jacobian.cpp , 5.3.4.1: forward.cpp , 5.2.5.1: for_two.cpp , 5.2.3.1: for_one.cpp , Fun.cpp (Fun.cpp has since been replaced by 5.1.1.1: independent.cpp , Reverse.cpp has since been replaced by 5.4.1.1: reverse_one.cpp and reverse_checkpoint.cpp).

12.7.13.z: 11-01
Add forward mode 5.5.2: Jacobian sparsity calculation.

12.7.13.aa: 10-20
Add 12.4.j: sparsity patterns to the wish list.

12.7.13.ab: 10-18
The Unix install 12.8.13.d: configure command was missing the -- before the prefix command line argument.

12.7.13.ac: 10-14
The template class 8.22: CppAD_vector uses a try/catch block during the allocation of memory (for error reporting). This may slow down memory allocation, so it is now replaced by simple memory allocation when the preprocessor variable NDEBUG is defined.

The specialization of CppAD::vector<bool> was moved to 8.22.m: vectorBool so that CppAD::vector<bool> does not pack one bit per value (which can be slow to access).

12.7.13.ad: 10-12
Change the 12.8.13.d: configure script so that compilation of the 10.1: get_started.cpp and 4.3.6.1: print_for_cout.cpp examples are optional.

One of the dates in the Unix installation extraction discussion was out of date. This has been fixed.

12.7.13.ae: 10-06
Change the Unix install configure script so that it reports information using the same order and notation as its 12.8.13.d: documentation .

Some compiler errors in the 8.21.1: ode_gear_control.cpp and 10.2.11: ode_stiff.cpp examples were fixed.

12.7.13.af: 09-29
Add a specialization to 8.22: CppAD_vector for the CppAD::vector<bool> case. A test for the push_back member function as well as a 8.10: CheckSimpleVector test has been added to 8.22.1: cppad_vector.cpp . The source code for this template vector class, cppad/vector.hpp, has been removed from the documentation.

12.7.13.ag: 09-27
Add the 12.8.13.g: prefix_dir and postfix_dir ( postfix_dir has since been removed) options to the configure command line. This gives the user more control over the location where CppAD is installed.

12.7.13.ah: 09-24
The stiff Ode routines, 8.20: OdeGear and 8.21: OdeGearControl , were added to the 8: utilities . A comparison of various Ode solvers on a stiff problem, 10.2.11: ode_stiff.cpp , was added. In addition, the library was reorganized.

12.7.13.ai: 09-20
The Microsoft compiler project files example/Example.vcproj and TestMore/TestMore.vcproj were not up to date. This has been fixed. In addition, the example 8.7.1: numeric_type.cpp has been added.

Make the building of the Example, TestMore, and Speed, directories optional during the 12.8.13.d: configure command. The 12.8.13: Unix installation instructions were overhauled to make the larger set of options easy to understand.

12.7.13.aj: 09-14
Added the 8.7: NumericType concept and made the following library routines require this concept for their floating point template parameter type: 8.14.1: LuSolve , 8.14.2: LuFactor , 8.15: RombergOne , 8.16: RombergMul , 8.17: Runge45 , 8.18: Rosen34 , and 8.19: OdeErrControl . This is more restrictive than the previous requirements for these routines but it enables future changes to the implementation of these routines (for optimization purposes) without affecting their specifications.
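A minimal sketch using the companion CheckNumericType utility (its use here is an assumption based on the current utilities section, not part of this entry):
 
     # include <cppad/cppad.hpp>
     int main(void)
     {    // generates an error message if double did not satisfy the
          // NumericType requirements (construction from int, arithmetic, ...)
          CppAD::CheckNumericType<double>();
          return 0;
     }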

12.7.13.ak: 09-09
Add the 4.4.1.1: UnaryPlus operator and move the Neg examples and tests to 4.4.1.2: UnaryMinus .

12.7.13.al: 09-07
Change name of distribution files from CppAD.unix.tar.gz and CppAD.dos.tar.gz to CppAD-yy-mm-dd.tar.gz and CppAD-yy-mm-dd.zip (the *.zip file uses pkzip compression).

12.7.13.am: 08-30
The maxabs argument has been added to the 8.19: OdeErrControl function so that it can be used with relative errors where components of the ODE solution may be zero (some of the time). In addition, some of the rest of the OdeErrControl documentation has been improved.

The documentation for replacing defaults in CppAD error macros has been improved.

12.7.13.an: 08-24
Changed Romberg to 8.15: RombergOne and added 8.16: RombergMul . In addition, added missing entries to 10.4: ListAllExamples and reorganized 8: utilities .

12.7.13.ao: 08-20
Backed out the addition of the Romberg integration routine (at this point, the interface that is most useful in the context of AD is uncertain).

12.7.13.ap: 08-19
Added a Romberg integration routine where the argument types are template parameters (for use with AD types).

12.7.13.aq: 08-15
The Microsoft project files example/Example.vcproj and TestMore/TestMore.vcproj were missing some necessary routines. In addition, Speed/Speed.vcproj was generating a warning. This has been fixed.

12.7.13.ar: 08-14
An 4.3.2: Integer conversion function has been added.

The 4.3.1.1: value.cpp example has been improved and the old example has been moved into the TestMore directory.

12.7.13.as: 08-13
The 4.4.2: unary_standard_math functions sinh, and cosh have been added. In addition, more correctness testing has been added for the sin and cos functions.

The 8.19: OdeErrControl routine could lock in an infinite loop. This has been fixed and a test case has been added to check for this problem.

12.7.13.at: 08-07
The 4.4.4: conditional expression function has been changed from just CondExp to CondExpLt, CondExpLe, CondExpEq, CondExpGe, CondExpGt. This should make code with conditional expressions easier to understand. In addition, it should reduce the number of tape operations because one need not create as many temporaries to do comparisons with. The old CondExp function has been deprecated.
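A minimal sketch of the new spelling (the variable names and values are made up):
 
     # include <cppad/cppad.hpp>
     # include <vector>
     int main(void)
     {    using CppAD::AD;
          std::vector< AD<double> > ax(2), ay(1);
          ax[0] = 1.0;
          ax[1] = 2.0;
          CppAD::Independent(ax);
          // ay[0] = (ax[0] < ax[1]) ? ax[0] : ax[1], recorded on the tape
          // so the choice is re-evaluated at new argument values
          ay[0] = CppAD::CondExpLt(ax[0], ax[1], ax[0], ax[1]);
          CppAD::ADFun<double> f(ax, ay);
          return 0;
     }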

12.7.13.au: 07-21
Remove unnecessary no-op that was left in tape for the 4.4.2: unary_standard_math functions acos, asin, atan, cos.

Improve the index entries in the documentation that corresponds to the cppad/local directory source code.

12.7.13.av: 07-19
The 12.6: wish_list and Bugs information were moved out of this section and into their own separate sections (the Bugs section has been removed; see the bug subdirectory instead).

A discussion of 4.6.k: VecAD speed and memory was added as well as an entry in the 12.6: wish_list to make it more efficient.

12.7.13.aw: 07-15
The BOOST_DIR and CPP_ERROR_WARN 12.8.13.d: configure options were not properly implemented for compiling the lib sub-directory. This has been fixed.

Some compiler warnings in the file lib/ErrFun.cpp, which computes the 4.4.2.18: erf function, have been fixed.

12.7.13.ax: 07-11
The 8.22.g: push_back function has been added to the CppAD::vector template class.
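A minimal usage sketch (the values are made up):
 
     # include <cppad/cppad.hpp>
     int main(void)
     {    CppAD::vector<double> v;     // starts with size zero
          v.push_back(1.0);            // size is now one
          v.push_back(2.0);            // size is now two
          return (v.size() == 2 && v[1] == 2.0) ? 0 : 1;
     }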

It appears that the TestMore/Runge45.cpp file was missing an include of example/NearEqualExt.h. This has been fixed.

12.7.13.ay: 07-08
The documentation for 5.3: Forward and 5.4: Reverse has been improved.

12.7.13.az: 07-05
The 8.18.1: rosen_34.cpp example mixed the 8.22: CppAD::vector and CppADvector vector types. This caused the compilation of the examples to fail when CppADvector was defined as something other than CppAD::vector (found by Jon Pearce). This has been fixed.

The 8.10: CheckSimpleVector run time code has been improved so that it is only run once per case that is being checked.

Simple Vector concept checking (8.10: CheckSimpleVector ) was added to the routines: 5.2.3: ForOne , 5.2.5: ForTwo , 5.3: Forward , 5: ADFun , 5.2.2: Hessian , 5.1.1: Independent , 5.2.1: Jacobian , 5.2.4: RevOne , 5.2.6: RevTwo , and 5.4: Reverse .

12.7.13.ba: 07-04
Simple Vector concept checking (8.10: CheckSimpleVector ) was added to the routines: 8.14.2: LuFactor , 8.14.1: LuSolve , 8.14.3: LuInvert , 8.19: OdeErrControl , 8.17: Runge45 , and 8.18: Rosen34 .

The previous version of the routine 8.19: OdeErrControl was mistakenly in the global namespace. It has been moved to the CppAD namespace (where all the other 8: utilities routines are).

The previous distribution (version 05-07-02) was missing the file cppad/local/Default.h. This has been fixed.

12.7.13.bb: 07-03
Added 8.10: CheckSimpleVector , a C++ concept checking utility that checks whether a vector type satisfies all the conditions necessary to be a 8.9: SimpleVector class with a specific element type.
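A minimal sketch of the intended use (std::vector<double> here is just an example choice):
 
     # include <cppad/cppad.hpp>
     # include <vector>
     int main(void)
     {    // generates an error message if std::vector<double> were not a
          // SimpleVector class with element type double
          CppAD::CheckSimpleVector< double, std::vector<double> >();
          return 0;
     }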

12.7.13.bc: 07-02
Version 7 of Microsoft's C++ compiler supports the standard declaration for a friend template function. Version 6 did not and CppAD used macros to substitute the empty string for <Base>, < AD<Base> >, and < VecAD<Base> > in these declarations. These macro substitutions have been removed because Version 6 of Microsoft's C++ compiler is no longer supported by CppAD.

The copy base section was split into the default constructor and the constructor from the base type. The constructor from the base type has been extended to include any type that is convertible to the base type. As a special case, this provides the previous wish list item of a constructor from an arbitrary Base to an AD< AD<Base> > , AD< AD< AD<Base> > > , etc.

12.7.13.bd: 07-01
The permissions were set as executable for many of the non-executable files in the distribution; for example, the README file. This has been fixed.

12.7.13.be: 06-25
Some improvements were made to the README, AUTHORS, COPYING, and INSTALL files. In addition, the file UWCopy040507.html which contains the University of Washington's copyright policy (see Section 2) was added to the distribution.

12.7.13.bf: 06-24
The List2Vector 10.3: example utility is no longer used and has been removed.

12.7.13.bg: 06-18
CppAD is now supported by Microsoft Visual C++ version 7 or higher. The version 6 project files *.dsw and *.dsp have been replaced by the version 7 project files *.sln and *.vcproj .

12.7.13.bh: 06-14
A new 4.4.4.1: CondExp example has been added and the old 4.4.4: CondExp example has been moved to the TestMore directory (it is now only a test).

12.7.13.bi: 06-13
The changes made on 06-06 do not run under Microsoft Visual C++ version 6.0 (even though they are within the C++ standard). Preliminary testing under version 7 indicates that Microsoft has fixed this problem in later versions of their C++ compiler.

12.7.13.bj: 06-06
Converted the routines 5.3: Forward and 5.4: Reverse to allow for any 8.9: SimpleVector instead of just CppADvector. In addition, separated the syntax of the function call from the prototype for each of the arguments. This was also done for all the easy to use 5.2: Drivers as well as the 5.1.1: Independent function and the 5: ADFun constructor.

Add a section containing a list of 10.4: all the examples .

12.7.13.bk: 05-19
A significant improvement in speed was obtained by moving the buffer extension to a separate function and then inlining the rest of the code that puts operators in the tape. For example, here is part of the speed test output before this change:
 
     Tape of Expansion by Minors Determinant: Length = 350, Memory = 6792
     size = 5 rate = 230
     size = 4 rate = 1,055
     size = 3 rate = 3,408
     size = 2 rate = 7,571
     size = 1 rate = 13,642
and here is the same output after this change:
 
     Tape of Expansion by Minors Determinant: Length = 350, Memory = 6792
     size = 5 rate = 448
     size = 4 rate = 2,004
     size = 3 rate = 5,761
     size = 2 rate = 10,221
     size = 1 rate = 14,734
Note that your results will vary depending on operating system and machine.

12.7.13.bl: 05-18
Change name of OdeControl to 8.19: OdeErrControl and improve its documentation.

Correct the syntax for the 4.5.4: Parameter and Variable functions.

12.7.13.bm: 05-16
Change 8.19: OdeErrControl so that the method returns its order instead of the order being a separate argument to OdeErrControl.

Add the argument scur to OdeErrControl, improve OdeErrControl choice of step size and documentation.

12.7.13.bn: 05-12
Using profiling, the 4.4.1.3: multiplication operator was shown to take a significant amount of time. It was reorganized in order to make it faster. The profiling indicated an improvement, so the same change was made to the 4.4.1.3: ad_binary and 4.4.1.4: compound_assign operators.

12.7.13.bo: 05-06
The documentation for 8.9: SimpleVector and 8.2: NearEqual were changed to use more syntax (what the user enters) and simpler prototypes (the compiler oriented description of the arguments). In addition, exercises were added at the end of the 8.9: SimpleVector , 8.22: CppAD_vector , and 8.2: NearEqual documentation.

There was an undesired divide by zero case in the file TestMore/VecUnary.cpp that just happened to work in the corresponding 8.2: NearEqual check. The NearEqual routine has been changed to return false if either of the values being compared is infinite or not a number. In addition, the divide by zero has been removed from the TestMore/VecUnary.cpp test.

12.7.13.bp: 05-01
The doubly linked list was also removed from the 4.6: VecAD internal data structure because this method of coding is simpler and makes it more like the rest of CppAD.

12.7.13.bq: 04-21
The profiling indicated that the destructor for an AD object was using a significant amount of time. The internal data structure of an AD object had a doubly linked list that pointed to the current variables and this was modified when an AD object was destroyed. In order to speed up AD operations in general, the internal data structure of an AD object has been changed so that this list is no longer necessary (a tape id number is used in its place).

During the process above, the function 4.5.4: Variable was added.

12.7.13.br: 04-20
Add 12.8.13.f: profiling to the speed tests.

12.7.13.bs: 04-19
Remove an extra (not necessary) semi-colon from the file cppad/local/Operator.h.

12.7.13.bt: 03-26
The new routine 8.19: OdeErrControl does automatic step size control for the ODE solvers.

12.7.13.bu: 03-23
The routine 8.18: Rosen34 is an improved stiff integration method that has an optional error estimate in the calling sequence. You must change all your calls to OdeImplicit to use Rosen34 (but you do not need to change other arguments because the error estimate is optional).

12.7.13.bv: 03-22
The routine 8.17: Runge45 is an improved Runge-Kutta method that has an optional error estimate in the calling sequence. You must change all your calls to OdeRunge to use Runge45 (but you do not need to change other arguments because the error estimate is optional).

12.7.13.bw: 03-09
Some extra semi-colons (empty statements) were generating warnings on some compilers. The ones that occurred after the macros CppADStandardMathBinaryFun, CppADCompareMember, CppADBinaryMember, and CppADFoldBinaryOperator have been removed.

12.7.13.bx: 03-04
A new multiple level of AD example, 10.2.10: mul_level , was added.

12.7.13.by: 03-01
An option that specifies error and warning 12.8.13.k: flags for all the C++ compile commands, was added to the 12.8.13: Unix installation instructions .

12.7.13.bz: 02-24
The routine 8.14.1: LuSolve was split into 8.14.2: LuFactor and 8.14.3: LuInvert . This enables one to efficiently solve equations where the matrix does not change and the right hand side for one equation depends on the left hand side for a previous equation.
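A hedged sketch of the split interface (the matrix values are made up; LU is overwritten by the factorization and b by the solution):
 
     # include <cppad/cppad.hpp>
     int main(void)
     {    // factor the 2 x 2 matrix A = [ 2 , 0 ; 0 , 4 ] once
          CppAD::vector<size_t> ip(2), jp(2);     // row and column permutations
          CppAD::vector<double> LU(4);            // row-major copy of A
          LU[0] = 2.; LU[1] = 0.; LU[2] = 0.; LU[3] = 4.;
          int sign = CppAD::LuFactor(ip, jp, LU); // sign of the determinant
          // solve A * x = b; b is overwritten with the solution (1, 1)
          CppAD::vector<double> b(2);
          b[0] = 2.; b[1] = 4.;
          CppAD::LuInvert(ip, jp, LU, b);
          return (sign != 0 && b[0] == 1. && b[1] == 1.) ? 0 : 1;
     }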

An extra requirement was added to the 8.9: SimpleVector template class: there must be a typedef for value_type which is the type of the elements in the vector.

Under Mandrake Linux 10.1, some template friend declarations were failing because the corresponding operations were not declared before being indicated as friends (found by Jean-Pierre Dussault (mailto:Jean-Pierre.Dussault@Usherbrooke.ca) ). This has been fixed.

12.7.13.ca: 01-08
The 4.4.2.18: erf function was added. The implementation of this function used conditional expressions ( 4.4.4: CondExp ) and sometimes the expression that was not valid in a region caused division by zero. For this reason, the check and abort on division by zero has been removed.
Input File: omh/appendix/whats_new/whats_new_05.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.7.14: Changes and Additions to CppAD During 2004

12.7.14.a: Introduction
This section contains a list of the changes plus future plans for CppAD during 2004 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions.

12.7.14.b: 12-11
The documentation for the CppAD error macros was improved. The package title in : cppad was changed. The documentation for 8.22: CppAD::vector was improved and the corresponding source code cppad/vector.hpp was included.

12.7.14.c: 12-09
The 8.14.1: LuSolve and OdeRunge source code was modified to make it more in line with the introduction to C++ AD book (OdeRunge has been replaced by 8.17: Runge45 ). In addition, the examples OdeRunge.cpp and 8.14.1.1: lu_solve.cpp were modified to make them simpler. (The more complex version of OdeRunge.cpp was moved to the TestMore directory.)

12.7.14.d: 12-03
The 8.13: Poly documentation and source code were modified to make them more in line with the introduction to C++ AD book.

12.7.14.e: 11-17
Changing to Autoconf and Automake on 12.7.14.ah: 08-24 mistakenly forgot the -Wall compiler switch (all warnings). This has been added and the corresponding warnings have been fixed.

12.7.14.f: 11-16
The 11-15 Debug version would not compile under Visual C++ version 7.0 because a declaration of LessThanOrZero was missing. This has been fixed.

12.7.14.g: 11-15
The 5.2.3: ForOne and 5.2.4: RevOne easy to use 5.2: drivers were added.

12.7.14.h: 11-14
The notation in the 5: ADFun sections was changed to make the 5.3: Forward and 5.4: Reverse routines easier to use.

12.7.14.i: 11-13
The Taylor coefficient vector and matrix notation was folded into just 12.4.l: Taylor coefficients .

12.7.14.j: 11-12
If NDEBUG is not defined during compile time, all AD<Base> 4.5.1: comparison operations are checked during 5.3.1: zero order forward mode calculations. The 12.8.3: CompareChange function returns the number of comparison operations that have changed.

12.7.14.k: 11-10
The 10.1: get_started.cpp example was changed to use the 5.2.1: Jacobian driver. In addition, more 14: index entries that point to the 5.2: easy to use drivers were added.

12.7.14.l: 11-04
The Microsoft Visual Studio project file example/Example.dsp was missing some new examples that needed to be linked in by the windows install procedure. This has been fixed.

12.7.14.m: 11-02
The 12.8.13: unix installation required the user to touch the files to get the dates in proper order. This is no longer necessary.

12.7.14.n: 11-01
Some of the dependency directories and files, for example PrintFor/.deps and PrintFor/.deps/PrintFor.Po had an extra ? at the end of their names. This seems to have been fixed by using a newer version of the autoconf and automake tools.

12.7.14.o: 10-29
Add the example and test 8.9.1: simple_vector.cpp to the 8.9: SimpleVector documentation.

The specifications for 6: preprocessor symbols state that all the CppAD preprocessor symbols begin with CppAD (so they do not conflict with other packages). Some preprocessor symbols in the file cppad/config.h began with WITH_. This has been fixed.

12.7.14.p: 10-28
The examples 10.2.6: hes_lu_det.cpp , 10.2.5: hes_minor_det.cpp , 10.2.9: jac_lu_det.cpp , and 10.2.8: jac_minor_det.cpp used the negative of a size_t value. The value has been changed to an int.

The 8.22: CppAD::vector template class was converted into a library routine so it can be used separately from the rest of CppAD.

12.7.14.q: 10-27
The 4.3.6: PrintFor example was moved to its own directory because the conversion from VC 6.0 to VC 7.0 projects did not work when there were multiple executables in one project file. The 2: install instructions were modified to reflect this change.

12.7.14.r: 10-21
One declaration (for the 4.3.1: Value function) was missing from the file cppad/local/Declare.h. This has been added and CppAD should now compile and run under both Microsoft VC 6.0 and 7.0.

12.7.14.s: 10-19
The current version of CppAD has a problem compiling under Microsoft Visual C++ version 7.0 (it compiles and works under version 6.0). The problem appears to be due to a closer agreement between VC 7.0 and the C++ standard for declaring template functions as friends. Some friend declarations were removed and others were made more specific in order to migrate to a version that will compile and run using VC 7.0.

12.7.14.t: 10-16
The example 4.5.1.1: compare.cpp displayed the text from 4.5.3.1: bool_fun.cpp by mistake. This has been fixed.

The 4.5.1: Compare operators have been extended to work with int operands.

12.7.14.u: 10-06
The test TapeDetLu was added to speed_cppad/DetLuSpeed.cpp and TapeDetMinor was added to speed_cppad/DetMinorSpeed.cpp. These tests just tape the calculations without computing any derivatives. Using this, and the other tests, one can separate the taping time from the derivative calculation time.

The windows installation steps do not build a config.h file. Hence a default config.h file was added to the distribution for use with Microsoft Visual Studio.

The Distribute section of the developer documentation was brought up to date.

Links to the ADOLC and FADBAD download pages were added to the 12.8.13: unix installation instructions.

12.7.14.v: 09-29
The include files for the 8: utilities are now included by the root file cppad/cppad.hpp. They can still be included individually without the rest of the CppAD package.

12.7.14.w: 09-26
The routine OdeRunge was modified so that it will now integrate functions of a complex argument. This was done by removing all uses of greater than and less than comparisons. (OdeRunge has been replaced by 8.17: Runge45 ).

The changes on 12.7.14.y: 09-21 did not fix all the file date and time problems; i.e., automake was still running in response to the 12.8.13: unix installation make command.

12.7.14.x: 09-23
There was a reference to B that should have been X in the description of the 8.14.1.k: X argument of LuSolve. This has been fixed.

12.7.14.y: 09-21
The 4.4.4: CondExp function has been modified so that it works properly for AD< AD<Base> > types; i.e., it now works for multiple levels of taping.

The dates of the files aclocal.m4 and config.h.in were later than the date of the top level Makefile.am. This caused the make command during the 12.8.13: unix installation to try to run autoconf, and this did not work on systems with very old versions of autoconf. This has been fixed.

12.7.14.z: 09-13
The examples that are specific to an operation were moved to be below that operation in the documentation tree. For example 4.4.1.3.1: add.cpp is below 4.4.1.3: ad_binary in the documentation tree.

12.7.14.aa: 09-10
The version released on 04-09-09 did not have the new file PrintFor.h in cppad/local. This has been fixed.

The Base type requirements were simplified.

The 12.8.13: Unix installation instructions were modified so that just one make command is executed at the top level. This was necessary because the order of the makes is now important (when run in the previously suggested order, the makes did not work properly).

12.7.14.ab: 09-09
The 4.3.6: PrintFor function was added so that users can debug the computation of function values at arguments that are different from those used when taping.

12.7.14.ac: 09-07
In the 12.8.13: Unix installation instructions place ./ in front of current directory program names; for example, ./GetStarted instead of GetStarted (because some unix systems do not have the current directory in the default executable path).

12.7.14.ad: 09-04
A library containing the 8.4: SpeedTest and 8.2: NearEqual object files was added to the distribution.

All of the include files of the form <cppad/library/name.h> were moved to <cppad/name.h> .

12.7.14.ae: 09-02
Some more messages were added to the output of configure during the 12.8.13: Unix installation .

The suggested compression program during Windows installation was changed from 7-zip (http://www.7-zip.org) to WinZip (http://www.winzip.com) .

12.7.14.af: 08-27
The error messages printed by the default version of the CppAD error macros had YY-MM-DD in place of the date for the current version. This has been fixed.

All the correctness tests are now compiled with the -g command line option (the speed tests are still compiled with -O2 -DNDEBUG).

The 2: installation instructions for Unix and Windows were split into separate pages.

12.7.14.ag: 08-25
The 2: installation now automates the replacement of 8.22: CppAD::vector by either the std::vector or boost::numeric::ublas::vector.

12.7.14.ah: 08-24
This date marks the first release that uses the Gnu tools Autoconf and Automake. This automates the building of the make files for the 2: installation and is the standard way to distribute open source software. This caused some organizational changes; for example, the 10.1: GetStarted example now has its own directory and the distribution directory is named
     cppad-yy-mm-dd
where yy-mm-dd is the year, month and date of the distribution. (Note the distribution directory is different from the directory where CppAD is finally installed.)

12.7.14.ai: 08-12
Move OdeExplicit into the cppad/library/ directory. In addition, change it so that the vector type is a template argument; i.e., it works for any type of vector (not just CppADvector).

12.7.14.aj: 07-31
Move 8.14.1: LuSolve into the cppad/library/ directory. In addition, change it so that the vector type is a template argument; i.e., it works for any type of vector (not just CppADvector).

12.7.14.ak: 07-08
The file cppad/example/NearEqual.h has been moved to cppad/example/NearEqualExt.h because it contains extensions of the 8.2: NearEqual routine to AD types.

12.7.14.al: 07-07
The double and std::complex<double> cases for the 8.2: NearEqual routine arguments have been moved to the general purpose 8: utilities .

12.7.14.am: 07-03
The CppAD error macro names CppADExternalAssert and CppADInternalAssert were changed to CppADUsageError and CppADUnknownError. The 8.4: SpeedTest routine was changed to use CppADUsageError instead of a C assert.

12.7.14.an: 07-02
The 8.4: SpeedTest output was improved so that the columns of values line up. Previously, this was not the case when the number of digits in the size changed.

12.7.14.ao: 06-29
Added code to trap and report memory allocation errors during new operations.

12.7.14.ap: 06-25
A discussion of the order dependence of the 4.2: assignment operator and the 5.1.1: independent function was added to the 12.1.a: Faq . In addition, a similar discussion was added to the documentation for the 5.1.1: Independent function.

The definition of a 12.4.h: parameter and 12.4.m: variable were changed to reflect the fact that these are time dependent (current) properties of an AD<Base> object.

12.7.14.aq: 06-12
All of the 4.4.1: arithmetic operators (except for the unary operators) can now accept int arguments. The documentation for these arguments has been changed to reflect this. In addition, the corresponding test cases have been changed to test this and to test high order derivative cases. The old versions of these tests were moved into the cppad/Test directory.

12.7.14.ar: 06-04
The 4.4.3.1: atan2 function was added.

12.7.14.as: 06-03
The asin and acos 4.4.2: unary_standard_math functions were added.

There was a bug in the reverse mode theory and calculation of derivatives of 4.4.2.11: sqrt for fourth and higher orders. This has been fixed. In addition, the following examples have been changed so that they test derivatives up to fifth order: 4.4.2.2.1: asin , 4.4.2.3.1: atan , 4.4.2.4.1: cos , 4.4.2.6.1: exp , 4.4.2.7.1: log , 4.4.2.9.1: sin , 4.4.2.11.1: sqrt .

12.7.14.at: 06-01
There was a bug in the 4.4.2.3: atan function 5.3: forward mode calculations for Taylor coefficient orders greater than two. This has been fixed.

12.7.14.au: 05-30
The 4.4.2.9.1: sin and 4.4.2.4.1: cos examples were changed so that they tested higher order derivatives.

12.7.14.av: 05-29
The forward mode recursion formulas for each of the 12.3.1.c.c: standard math functions have been split into separate sections.

A roman (instead of italic) font was used for the name of each of the standard math functions in the assumption statements below the section for the standard math functions. For example, @(@ \sin(x) @)@ instead of @(@ sin(x) @)@.

12.7.14.aw: 05-26
In the documentation for 8.13: Poly , the reference to example/Poly.h was corrected to cppad/library/Poly.h.

In the documentation for 8.4: SpeedTest , the reference to Lib/SpeedTest.h was corrected to cppad/library/SpeedTest.h. In addition, the example case was corrected.

In 5.4: Reverse , the definition for @(@ U(t, u) @)@ had @(@ t^p-1 @)@ where it should have had @(@ t^{p-1} @)@. This has been fixed.

12.7.14.ax: 05-25
The special case where the second argument to the 4.4.3.2: pow function is an int has been added.
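
A brief sketch of this special case (not from the original entry; it assumes the pow_int overload and the CPPAD_TESTVECTOR macro). Because no logarithm is involved for int exponents, the base may be negative or zero:

# include <cppad/cppad.hpp>

bool pow_int_sketch(void)
{    using CppAD::AD;

     CPPAD_TESTVECTOR( AD<double> ) ax(1), ay(1);
     ax[0] = 3.0;
     CppAD::Independent(ax);

     // second argument is an int
     ay[0] = CppAD::pow(ax[0], 2);

     CppAD::ADFun<double> f(ax, ay);

     CPPAD_TESTVECTOR(double) x(1), y(1);
     x[0] = -2.0;   // a negative base is fine for an int exponent
     y    = f.Forward(0, x);
     return y[0] == 4.0;
}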

12.7.14.ay: 05-14
Change all of the include syntax
     # include "filename"
to the syntax
     # include <filename>
so that examples and other uses better reflect how one would use CppAD after it was installed in a standard include directory; for example /usr/local/include/cppad.

The user documentation was moved from the directory cppad/User to the directory cppad/Doc.

The directory cppad/Lib was moved to cppad/library to reflect the fact that it is not what one expects in a standard lib directory or a standard include directory.

12.7.14.az: 05-12
The string YY-MM-DD in the preprocessor symbol CppADVersion was not being replaced by the current date during distribution. This resulted in the CppADExternalAssert macro printing YY-MM-DD where it should have printed the date of distribution. This has been fixed.

All of the include commands of the form
     # include "include/name.h"
     # include "lib/name.h"
have been changed to the form
     # include "cppad/include/name.h"
     # include "cppad/lib/name.h"
This will avoid mistakenly loading a file from another package that is in the set of directories being searched by the compiler. It is therefore necessary to specify that the directory above the CppAD directory be searched by the compiler. For example, if CppAD is in /usr/local/cppad, you must specify that /usr/local be searched by the compiler. Note that if /usr/local/cppad/ is no longer searched, you will have to change
     # include "cppad.hpp"
to
     # include "cppad/cppad.hpp"

The Windows nmake file Speed/Speed.mak was out of date. This has been fixed.

12.7.14.ba: 05-09
Move 8.13: Poly and 8.4: SpeedTest into the cppad/Lib directory and the CppAD namespace.

12.7.14.bb: 05-07
The 4.4.1.3.4: divide operator tests were extended to include a second order derivative calculation using reverse mode.

The 8.13: Poly routine was modified to be more efficient in the derivative case. In addition, it was changed to use an arbitrary vector for the coefficients (not just a CppADvector).
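
For reference, a small sketch of using Poly with an arbitrary coefficient vector (not from the original entry; it assumes the current Poly syntax, where the first argument is the order of the derivative to evaluate):

# include <cppad/cppad.hpp>
# include <vector>

bool poly_sketch(void)
{    // p(z) = 1 + 2 * z + 3 * z^2 with std::vector coefficients
     std::vector<double> a(3);
     a[0] = 1.0;  a[1] = 2.0;  a[2] = 3.0;

     double z  = 2.0;
     double p  = CppAD::Poly(0, a, z);  // p(2)  = 1 + 4 + 12 = 17
     double dp = CppAD::Poly(1, a, z);  // p'(2) = 2 + 6 * 2  = 14

     return p == 17.0 && dp == 14.0;
}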

12.7.14.bc: 05-04
A reloading of the database caused the files include/atan.h and include/cos.h to mistakenly start with lower case letters. These have been moved to include/Atan.h and include/Cos.h respectively.

12.7.14.bd: 05-03
The 5.4: Reverse mode calculations for 4.4.4: conditional expressions were mistakenly left out. This has been fixed.

12.7.14.be: 04-29
The unary functions, such as 4.4.2.9: sin and 4.4.2.4: cos , were not defined for elements of an 4.6: VecAD vector. This has been fixed.

12.7.14.bf: 04-28
The operator 8.22.i: << was added to the default 12.8.9: test_vector template class.

A FADBAD correctness and speed comparison with CppAD was added.

12.7.14.bg: 04-25
Factor out common sub-expressions in order to make 10.3.3: lu_vec_ad.cpp faster.

Convert description from C++ Automatic Differentiation to C++ Algorithmic Differentiation.

12.7.14.bh: 04-24
The 4.6: VecAD element class is no longer a derived class of the 4: AD class. This enabled a decrease in tape memory and an increase in the speed for 4.6: VecAD operations.

The 4.4.2.8: log10 function was added.

12.7.14.bi: 04-22
Add 4.4.4: CondExp and use it to speed up 10.3.3: lu_vec_ad.cpp .

12.7.14.bj: 04-21
Use 4.4.2.14: abs to speed up 10.3.3: lu_vec_ad.cpp .

12.7.14.bk: 04-20
The 4.4.2.14: absolute value function was added.

The value n for OdeExplicit and OdeImplicit is deduced from the argument x0 and is not passed as a separate argument. The documentation has been fixed to reflect this.

12.7.14.bl: 04-19
The 4.4.1.4: += operator did not function correctly when the left hand operand was a 12.4.h: parameter and the right hand operand was a variable (found by Mike Dodds (mailto:magister@u.washington.edu) ). This has been fixed.

12.7.14.bm: 04-09
Adding special operators for using parameters to index VecAD objects increased the speed and reduced the memory requirements (by about 20%) for the 4.6: VecAD case in the speed_cppad/LuSolveSpeed.cpp test.

The 4.6: VecAD objects were not being handled correctly by the 5.4: Reverse function. The VecAD test was extended to demonstrate the problem and the problem was fixed (it is now part of TestMore/VecAD).

12.7.14.bn: 04-08
The example 10.3.3.1: lu_vec_ad_ok.cpp uses 4.6: VecAD to execute different pivoting operations during the solution of linear equations without having to retape.

The speed test speed_cppad/LuSolveSpeed.cpp has been added. It shows that the initial implementation of 4.6: VecAD is slow (and uses a lot of memory). In fact, it is faster to use 8.14.1: LuSolve and retape for each set of equations than it is to use 10.3.3: lu_vec_ad.cpp and not have to retape. This test will help us improve the speed of 10.3.3: lu_vec_ad.cpp .

12.7.14.bo: 04-07
There were bugs in the assignment to 4.6: VecAD elements during taping that have been fixed. In addition, an example of taping the pivoting operations in an 10.3.3: Lu factorization has been added.

12.7.14.bp: 04-03
Added size_t indexing to the 4.6: VecAD class.

Fixed a bug connected to the 4.6: VecAD class and erasing the tape.

12.7.14.bq: 04-02
Some memory savings are obtained by storing equal parameter values only once in the tape. There was a bug in this logic when a parameter in the AD< AD<Base> > class had a value that was a variable in the AD<Base> class. This has been fixed.

12.7.14.br: 04-01
The name of the class that tapes indexing operations was changed from ADVec to 4.6: VecAD . This class was extended so that the value of elements in these vectors can be variables (need not be 12.4.h: parameters ).

12.7.14.bs: 03-30
Do some simple searching of the parameter table during taping to avoid multiple copies of parameters on the tape (and use less tape memory).

12.7.14.bt: 03-28
The class 4.6: ADVec , a vector class that tapes indexing operations, is now available. It is currently restricted by the fact that all the values in the vector must be 12.4.h: parameters .

12.7.14.bu: 03-25
The internal taping structure has been changed to have variable length instructions. This is to save memory on the tape. In addition, it may help in the implementation of the vector class that tracks indexing. (A now functioning version of this class is described in 4.6: VecAD .)

12.7.14.bv: 03-18
A change was made to the way parameter values are stored on the tape. This resulted in a significant savings in the amount of memory required.

12.7.14.bw: 03-17
Change the return type for 8.4: SpeedTest from const char * to std::string. The memory required for the largest test cases was added to the 11.5: speed_cppad tests output.

12.7.14.bx: 03-15
The comparison between ADOLC and CppAD for the DetLuADOLC.cpp example was returning an error (because it was checking for exact equality of calculated derivatives instead of near equality). This has been fixed.

12.7.14.by: 03-12
The user defined unary functions were removed and the user defined 4.4.5: discrete functions were added. These discrete functions add the capability of conditional expressions (alternate calculations) being included in an 5: ADFun object.

12.7.14.bz: 03-11
The classes 11.2.3: det_by_minor and 11.2.1: det_by_lu were added and used to simplify the examples that compute determinants.

12.7.14.ca: 03-09
The routines Grad and Hess have been removed. You should use 5.2.1: Jacobian and 5.2.2: Hessian instead.

12.7.14.cb: 03-07
The driver routines 5.2.2: Hessian and 5.2.6: RevTwo have been added. They compute specialized subsets of the second order partials.

Documentation errors in 5.2.5: ForTwo and 5.4: Reverse were fixed. The 10: example documentation was reorganized.

12.7.14.cc: 03-06
The driver 5.2.5: ForTwo has been added. It uses forward mode to compute a subset of the second order partials.

Split all of the "example" and "test" index entries that come from cppad/example/*.cpp into sorted subheadings.

12.7.14.cd: 03-05
The Grad routine, which only computed first derivatives of scalar valued functions, has been replaced by the 5.2.1: Jacobian routine which computes the derivative of vector valued functions.

12.7.14.ce: 03-04
The bug reported on 12.7.14.cl: 02-17 was present in all the operators. These have all been fixed and tests for all the operators have been added to the cppad/Test directory.

The 5.1.5.f: f.Parameter() function was added so that one can count how many components of the range space depend on the value of the domain space components. This helps when deciding whether to use forward or reverse mode.
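
A small sketch of this kind of counting (not from the original entry; it assumes the f.Parameter(i) and f.Range() member functions and the CPPAD_TESTVECTOR macro):

# include <cppad/cppad.hpp>

bool count_parameters(void)
{    using CppAD::AD;

     CPPAD_TESTVECTOR( AD<double> ) ax(1), ay(2);
     ax[0] = 2.0;
     CppAD::Independent(ax);
     ay[0] = ax[0] + 1.0;   // depends on the independent variable
     ay[1] = 5.0;           // a parameter: identically constant
     CppAD::ADFun<double> f(ax, ay);

     // count the range components that are parameters
     size_t count = 0;
     for(size_t i = 0; i < f.Range(); i++)
     {    if( f.Parameter(i) )
               count++;
     }
     return count == 1;
}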

12.7.14.cf: 03-03
Special operators were added to distinguish the cases where one of the operands is a 12.4.h: parameter . This reduced the amount of branching that is necessary when executing 5.3: Forward and 5.4: Reverse calculations.

The 5.1.1: Independent and 5.1.5.f: Parameter functions were moved below 5: ADFun in the documentation.

12.7.14.cg: 03-01
The DetLuADOLC.cpp, DetLu case was added to the ADOLC comparison tests.

12.7.14.ch: 02-29
Under certain optimization flag values, and on certain systems, an error was reported by the ADOLC correctness comparison. It turned out that CppAD was not initializing a particular index when debugging was turned off. This has been fixed.

12.7.14.ci: 02-28
A set of routines for comparing CppAD with ADOLC has been added to the distribution. In addition, documentation for compiling and linking the 10: Examples and 11.5: Speed Tests has been added.

12.7.14.cj: 02-21
If you use the user defined unary atomic functions there is a restriction on the order of the derivatives that can be calculated. This restriction was documented in the user defined unary function 5.3: Forward and 5.4: Reverse . (These unary functions were removed on 12.7.14.by: 03-12 .)

12.7.14.ck: 02-20
A user interface to arbitrary order 5.4: reverse mode calculations was implemented. In addition, the 5: ADFun member functions Rev and RevTwo were removed because it is easier to use the uniform syntax below:
Old Syntax             Uniform Syntax
r1 = f.Rev(v)          r1 = f.Reverse(1, v)
q1 = f.RevTwo(v)       r2 = f.Reverse(2, v)
                       q1[i] == r2[2 * i + 1]
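
The following sketch (not part of the original entry; it assumes the CPPAD_TESTVECTOR macro) shows the uniform Reverse syntax for f(x) = x * x:

# include <cppad/cppad.hpp>

bool reverse_sketch(void)
{    using CppAD::AD;

     CPPAD_TESTVECTOR( AD<double> ) ax(1), ay(1);
     ax[0] = 2.0;
     CppAD::Independent(ax);
     ay[0] = ax[0] * ax[0];
     CppAD::ADFun<double> f(ax, ay);

     // weight vector selecting the only range component
     CPPAD_TESTVECTOR(double) w(1), dw(1);
     w[0] = 1.0;

     // first order reverse: dw[0] = f'(x) = 2 * x = 4
     dw = f.Reverse(1, w);
     return dw[0] == 4.0;
}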


The 12.3: Theory section has been completely changed so that it corresponds to the arbitrary order calculations. (Some of this change was made when the arbitrary order forward mode interface was added on 12.7.14.cn: 02-15 .)

The directory cppad/Test has been added. It contains test cases that are not intended as examples.

12.7.14.cl: 02-17
There was a bug in the way CppAD handled the parameters zero and one when they were variables on a lower level tape; i.e., x might be a parameter on an AD< AD<Base> > tape and its value might be a variable on the AD<Base> tape. This bug in the multiply and divide routines has been fixed.

There was a bug that in some cases reported a divide by zero error when the numerator was zero. This has been fixed.

12.7.14.cm: 02-16
A bug in 5.3: Forward prevented the calculation of derivatives of order higher than two. In addition, the checking for user errors in the use of Forward was also faulty. This has been fixed.

The Microsoft project file example\Example.dsp was out of date. This has been fixed.

The example that 10.2.10: tapes derivative calculations has been changed to an application of 10.2.12: Taylor's method for solving ordinary differential equations.

12.7.14.cn: 02-15
A user interface to arbitrary order 5.3: forward mode calculations was implemented. In addition, the 5: ADFun member functions Arg, For and ForTwo were removed because it is easier to use the uniform syntax below:
Old Syntax             Uniform Syntax
v0 = f.Arg(u0)         v0 = f.Forward(0, u0)
v1 = f.For(u1)         v1 = f.Forward(1, u1)
v2 = f.For(u2)         v2 = f.Forward(2, u2)
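
A companion sketch for the uniform Forward syntax (not part of the original entry; it assumes the CPPAD_TESTVECTOR macro), again with f(x) = x * x:

# include <cppad/cppad.hpp>

bool forward_sketch(void)
{    using CppAD::AD;

     CPPAD_TESTVECTOR( AD<double> ) ax(1), ay(1);
     ax[0] = 2.0;
     CppAD::Independent(ax);
     ay[0] = ax[0] * ax[0];
     CppAD::ADFun<double> f(ax, ay);

     CPPAD_TESTVECTOR(double) u0(1), u1(1), v0(1), v1(1);

     // zero order: function value f(3) = 9
     u0[0] = 3.0;
     v0    = f.Forward(0, u0);

     // first order: directional derivative f'(3) * 1 = 6
     u1[0] = 1.0;
     v1    = f.Forward(1, u1);

     return v0[0] == 9.0 && v1[0] == 6.0;
}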

12.7.14.co: 02-12
All of the derivative calculations are now done using arbitrary order Taylor arithmetic routines. The 12.3: Theory section was changed to document this method of calculation.

12.7.14.cp: 02-01
The definition of a 12.4.l: Taylor coefficient was changed to include the factorial factor. This change was also made to the output specifications for the FunForTwo routine.

12.7.14.cq: 01-29
There were some bugs in the FunArg function that were fixed.
  1. If one of the dependent variables was a 12.4.h: parameter , FunArg did not set its value properly. (All its derivatives are zero and this was handled properly.)
  2. The user defined unary functions were not computed correctly.
The specifications for the usage and unknown CppAD error macros were modified so that they could be used without side effects.

12.7.14.cr: 01-28
Some corrections and improvements were made to the documentation including: CppADvector was placed before its use, a reference to Ode_ind and Ode_dep was fixed in OdeImplicit.

12.7.14.cs: 01-22
The specifications for the routine FunForTwo was changed to use 12.4.l: Taylor coefficients . This makes the interface to CppAD closer to the interface for ADOLC (https://projects.coin-or.org/ADOL-C) .
Input File: omh/appendix/whats_new/whats_new_04.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.7.15: Changes and Additions to CppAD During 2003

12.7.15.a: Introduction
This section contains a list of the changes to CppAD during 2003 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions.

12.7.15.b: 12-24
Some references to double should have been references to the 12.4.e: base type (in reverse mode and in the Grad and Hess functions). This has been fixed.

12.7.15.c: 12-22
The preprocessor symbol WIN32 was being used to determine if one was using Microsoft's C++ compiler. This symbol is predefined by the MinGW (http://www.mingw.org) version of the GNU C++ compiler and hence CppAD had errors during installation using MinGW. This has been fixed by using the preprocessor symbol _MSC_VER to determine if one is using the Microsoft C++ compiler.

12.7.15.d: 12-14
The extended system solvers OdeOne and OdeTwo have been removed from the distribution. In addition, the interface to the ODE solvers has been simplified.

12.7.15.e: 12-13
Remove the CppADCreateTape macro and have the tapes created and grow automatically.

12.7.15.f: 12-12
The old method where one directly accesses the tape has been removed and the following functions are no longer available:
          size_t TapeName.Independent(AD<Base> &indvar)
          size_t TapeName.Record(size_t order)
          size_t TapeName.Stop(void)
          bool Dependent(const AD<Base> &var) const
          bool TapeName.Dependent(const AD<Base> &var) const
          size_t TapeName.Total(void) const
          size_t TapeName.Required(void) const
          size_t TapeName.Erase(void)
          TapeState TapeName.State(void) const
          size_t TapeName.Order(void) const
          size_t TapeName.Required(void) const
          bool Parameter(CppADvector< AD<Base> > &u)
          TapeName.Forward(indvar)
          TapeName.Reverse(var)
          TapeName.Partial(var)
          TapeName.ForwardTwo(indvar)
          TapeName.ReverseTwo(var)
          TapeName.PartialTwo(var)

12.7.15.g: 12-10
The change on 12.7.15.i: 12-01 makes the taping process simpler if one does not directly access CppADCreateTape. The 10: examples were changed to not use TapeName . The following examples were skipped because they document the functions that access TapeName : DefFun.cpp, For.cpp, for_two.cpp, Rev.cpp, and rev_two.cpp.

12.7.15.h: 12-05
There was a bug in f.Rev and f.RevTwo when two dependent variables were always equal and hence shared the same location in the tape. This has been fixed.

The ODE Example was changed to tape the solution (and not use OdeOne or OdeTwo). This is simpler to use and the resulting speed tests gave much faster results.

12.7.15.i: 12-01
The following function has been added:
     void Independent(const CppADvector<Base> &x)
which will declare the independent variables and begin recording AD<Base> operations (see 5.1.1: Independent ). The 5: ADFun constructor was modified so that it stops the recording, erases the tape, and creates the 5: ADFun object. In addition, the tape no longer needs to be specified in the constructor.

12.7.15.j: 11-21
Add StiffZero to set of ODE solvers.

12.7.15.k: 11-20
The AbsGeq and LeqZero in 8.14.1: LuSolve were changed to template functions so they could have default definitions in the case where the <= and >= operators are defined. This made the double and AD<double> use of LuSolve simpler because the user need not worry about these functions. On the other hand, it made the std::complex and AD<std::complex> use of LuSolve more complex.

The member function names for the fun argument to ODE were changed from fun.f to fun.Ode and from fun.g to fun.Ode_ini .

12.7.15.l: 11-16
The 1: table of contents was reorganized to provide a better grouping of the documentation.

The 8.14.1: LuSolve utility is now part of the distribution and not just an example; i.e., it is automatically included by cppad.hpp.
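
As a quick reference, a sketch of calling LuSolve directly (not from the original entry; it assumes the current six argument LuSolve interface and row-major matrix storage):

# include <cppad/utility/lu_solve.hpp>
# include <vector>

bool lu_solve_sketch(void)
{    size_t n = 2;   // number of equations
     size_t m = 1;   // number of right hand sides
     std::vector<double> A(n * n), B(n), X(n);
     double logdet;

     A[0] = 2.0;  A[1] = 0.0;   // A = [ 2 0 ]
     A[2] = 0.0;  A[3] = 4.0;   //     [ 0 4 ]
     B[0] = 2.0;  B[1] = 8.0;

     // returns the sign of det(A); logdet = log of |det(A)|
     int signdet = CppAD::LuSolve(n, m, A, B, X, logdet);

     return signdet == 1 && X[0] == 1.0 && X[1] == 2.0;
}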

12.7.15.m: 11-15
The ODE solver was modified so that it can be used with any type (not just an AD type). This was useful for speed testing. It is also useful for determining how large the integrator steps should be before starting the tape.

The template argument Type was changed to Base wherever it was the 12.4.e: base type of an AD class.

12.7.15.n: 11-14
A speed_cppad/OdeSpeed.cpp test was added and some changes were made to the ODE interface in order to make it faster. The most significant change was in the specifications for the ODE function object fun .

12.7.15.o: 11-12
The user defined unary function example example/UnaryFun.cpp was incorrect. It has been corrected and extended.

12.7.15.p: 11-11
The 8.22: CppAD::vector template class is now used where the std::vector template class was previously used. You can replace the CppAD::vector class with a vector template class of your choosing during the 2: Install procedure.

12.7.15.q: 11-06
The documentation for 10.2.10: taping derivative calculations was improved as well as the corresponding example. In order to make this simpler, the example tape name DoubleTape was changed to ADdoubleTape (and the other example tape names were also changed).

12.7.15.r: 11-04
The ODE utility was changed from an example to part of the distribution. In addition, it was extended so that it now supports taping the solution of the differential equations (case order equal zero) or solving the extended set of differential equations for both first and second derivatives (cases order equal one and two). In addition, an initial condition that depends on the parameter values is also allowed.

12.7.15.s: 11-02
It is now legal to differentiate a 12.4.h: parameter with respect to an 12.4.k.c: independent variable (parameter derivatives are always equal to zero). This is an extension of the Reverse, Partial, ReverseTwo, and PartialTwo functions.

12.7.15.t: 10-21
All the CppAD include files, except cppad.hpp were moved into an include subdirectory.

12.7.15.u: 10-16
The 5: ADFun template class was added so that one can save a tape recording and use it as a differentiable function. The ADFun object supports directional derivatives in both 5.3: Forward and 5.4: Reverse mode whereas the tape only supports partial derivatives.

12.7.15.v: 10-14
The sqrt function was added to the 4.4.2: unary_standard_math functions. In addition, a definition of the power function for the types float and double was automatically included in the CppAD namespace.

The 4.3.1: Value function was changed so that it can be called when the tape is in the Empty state.

12.7.15.w: 10-10
The atan function was added to the 4.4.2: unary_standard_math functions.

12.7.15.x: 10-06
In the notation below, zero and one are parameters that are exactly equal to zero and one. If the variables z and x are related in any of the following ways, they can share the same record on the tape because they will have the same derivatives:
     z = x + zero        z = x * one
     z = zero + x        z = one * x
     z = x - zero        z = x / one
Furthermore, in the following cases, the result z is a parameter (equal to zero) and need not be recorded in the tape:
     z = x * zero        z = zero / x
     z = zero * x
The 4.4.1: arithmetic operators were all checked to make sure they did not add to the tape in these special cases. The total record count for the program in the Example directory was 552 before this change and 458 after.

12.7.15.y: 10-05
The process of converting the tape to operators was completed. In order to make this conversion, the binary user defined functions were removed. (Bob Goddard suggested a very nice way to keep the unary functions.) Another significant change was made to the user interface during this procedure: the standard math library functions are now part of the CppAD distribution and are no longer defined by the user.

The function TapeName.Total was added to make it easy to track how many tape records are used by the test suite. This will help with future optimization of the CppAD recording process.

There was a bug (found by Mike Dodds (mailto:magister@u.washington.edu) ) in the error checking of the TapeName.Erase function. If Erase was called twice in a row, and NDEBUG was not defined during compilation, the program would abort. This has been fixed.

12.7.15.z: 09-30
A process of changing the tape from storing partial derivatives to storing operators has been started. This will make the tape smaller and it will enable the computation of higher derivatives without having to tape the tape (see 10.2.10: mul_level ). The Add, Subtract, Multiply and Divide operators have been converted. The user defined functions are presenting some difficulties, so this process has not yet been completed.

There was a bug in reverse mode when a dependent variable was exactly equal to an independent variable. In this case, it was possible for it to be located on the tape before some of the other independent variables. The partials for these other independent variables were not initialized to zero before the reverse calculation and hence contained whatever values were left over from the previous reverse mode calculation. This has been fixed and the Eq.cpp example has been changed to test for this case.

The following tape functions were changed to be declared const because they do not modify the tape in any way: State, Order, Required, Dependent, and 4.5.4: Parameter .

12.7.15.aa: 09-20
The functions Grad and Hess were changed to use function objects instead of function pointers.

12.7.15.ab: 09-19
The higher order constructors (in standard valarray) were removed from the ODE example in order to avoid memory allocation of temporaries (and hence increase speed). In addition, the function objects in the ODE examples were changed to be const.

12.7.15.ac: 09-18
An ordinary differential equation solver was added. In addition, the extended system to differentiate the solution was included.

12.7.15.ad: 09-15
The linked list of AD variables was not being maintained correctly by the AD destructor. This was fixed by having the destructor use RemoveFromVarList to remove variables from the list. (RemoveFromVarList is a private AD member function not visible to the user.)

12.7.15.ae: 09-14
There is a new Faq question about evaluating derivatives at multiple values for the 12.1.f: independent variables .

12.7.15.af: 09-13
An example that uses AD< AD<double> > to compute higher derivatives was added.

The name GaussEliminate was changed to 8.14.1: LuSolve to better reflect the solution method.

12.7.15.ag: 09-06
Changed the 10.1: get_started.cpp and 4.7.9.6.1: complex_poly.cpp examples so they use a template function with both base type and AD type arguments. (The resulting code is simpler and a good use of templates.)

12.7.15.ah: 09-05
A 10.1: getting started example was added and the organization of the 10: Examples was changed.

12.7.15.ai: 09-04
The AbsOfDoubleNotDefine flag is no longer used and it was removed from the Windows 2: install instructions.

The 03-09-03 distribution did not have the proper date attached to it. The distribution script has been changed so that attaching the proper date is automated (i.e., this should not happen again).

A 12.1: Frequently Asked Questions and Answers section was started.

12.7.15.aj: 09-03
Added the 4.3.1: Value function which returns the 12.4.e: base type value corresponding to an AD object.

12.7.15.ak: 08-23
A new version of Cygwin was installed on the development system (this may affect the timing tests reported in this document). In addition, 8.14.1: LuSolve was changed to use back substitution instead of reduction to an identity matrix. This reduced the number of floating point operations corresponding to evaluation of the determinant. The following results correspond to the speed test of DetLu on a 9 by 9 matrix:
Version     double Rate    AD<double> Rate    Gradient Rate    Hessian Rate    Tape Length
03-08-20    8,524          5,278              4,260            2,450           532
03-08-23    7,869          4,989              4,870            2,637           464

12.7.15.al: 08-22
The 4.4.1.2: unary minus operator was added to the AD operations.

12.7.15.am: 08-19
The standard math function examples were extended to include the complex case.

The 8.14.1: LuSolve routine was changed to use std::vector<Base> & arguments in place of Base * arguments. This removes the need to use new and delete with LuSolve.

When testing the speed of the change to using standard vectors, it was noticed that the LuSolve routine was much slower (see the times for 03-08-16 below). This was due to computing the determinant instead of the log of the determinant. Converting back to the log of the determinant regained the high speeds. The following results correspond to the speed test of DetLu on a 9 by 9 matrix:
Version     double Rate    AD<double> Rate    Gradient Rate    Hessian Rate    Tape Length
03-08-16    9,509          5,565              3,587            54              537
03-08-19    8,655          5,313              4,307            2,495           532

12.7.15.an: 08-17
The macro CppADTapeOverflow was added so that CppAD can check for tape overflow even when the NDEBUG preprocessor flag is defined.

12.7.15.ao: 08-16
The 8.14.1: LuSolve routine was extended to handle complex arguments. Because the complex absolute value function is nowhere differentiable, this required allowing for user defined 4.5.3: boolean valued functions with AD arguments . The examples 8.14.1.1: lu_solve.cpp and GradLu.cpp were converted to a complex case.

12.7.15.ap: 08-11
The routine 8.14.1: LuSolve was made more efficient so that it is more useful as a tool for differentiating linear algebra calculations. The following results correspond to the speed test of DetLu on a 9 by 9 matrix:
Version     double Rate    AD<double> Rate    Gradient Rate    Hessian Rate    Tape Length
03-08-10    49,201         7,787              2,655            1,809           824
03-08-11    35,178         12,681             4,521            2,541           540
In addition the corresponding test case 8.14.1.1: lu_solve.cpp was changed to a Hilbert matrix case.

12.7.15.aq: 08-10
A 4.7.9.6.1: complex polynomial example was added.

The documentation and type conversion in 8.14.1: LuSolve was improved.

The absolute value function was removed from the examples because some systems do not yet properly support double abs(double x).

12.7.15.ar: 08-07
Because the change to the multiplication operator had such a large positive effect, all of the 4.4.1: arithmetic operators were modified to reduce the amount of information in the tape (where possible).

12.7.15.as: 08-06
During Lu factorization, certain elements of the matrix are known to be zero or one and do not depend on the variables. The 4.4.1.3: multiplication operator was modified to take advantage of this fact. This reduced the size of the tape and increased the speed for the calculation of the gradient and Hessian for the Lu determinant test of a 5 by 5 matrix as follows:
Version     Tape Length    Gradient Rate    Hessian Rate
03-08-05    176            11,362           1,149
03-08-06    167            12,780           10,625

12.7.15.at: 08-05
Fixed a mistake in the calculation of the sign of the determinant in the 8.14.1: LuSolve example.

12.7.15.au: 08-04
Added the compiler flag
     AbsOfDoubleNotDefined
to the make files so that it could be removed on systems where the function
     double abs(double x)
was defined in math.h.

12.7.15.av: 08-03
The Grad and Hess functions were modified to handle the case where the function does not depend on the independent variables.

The 8.14.1: LuSolve example was added to show how one can differentiate linear algebra calculations. In addition, it was used to add another set of 11.5: speed tests .

The standard Math functions were added both as examples of defining atomic operations and to support mathematical operations for the AD<double> case.

The 4.3.5: << operator was added to the AD template class for output to streams.

12.7.15.aw: 08-01
The 4.4.1: compound assignment operators were added to the AD template class.

The name of the Speed/SpeedTest program was changed to 11.5: Speed/Speed . In addition, Speed/SpeedRun was changed to Speed/SpeedTest.

12.7.15.ax: 07-30
The 4.2: assignment operator was changed so that it returns a reference to the target. This allows for statements of the form
     x = y = z;
i.e., multiple assignments.

12.7.15.ay: 07-29
If the 4.1: AD copy constructor or 4.2: assignment operator used an 12.4.k.c: independent variable for its source value, the result was also an independent variable. This has been fixed so that the result is a dependent variable in these cases.

12.7.15.az: 07-26
The AD<Base> data structure was changed to include a doubly linked list of variables. This enabled the 4.1: AD copy constructor and 4.2: assignment operator to create multiple references to the same place in the tape. This reduced the size of the tape and increased the speed for the calculation of the gradient and Hessian for the determinant of a 5 by 5 matrix as follows:
Version     Tape Length    Gradient Rate    Hessian Rate
03-07-22    1668           1,363            53
03-07-26    436            3,436            213

12.7.15.ba: 07-22
The facility was added so that the user can define binary functions together with their derivatives. (This facility has been removed because it is better to define binary functions using AD variables.)

The Windows version make file directive /I ..\.. in example\Example.mak and Speed\Speed.mak was changed to /I .. (as it should have been).

12.7.15.bb: 07-20
The facility was added so that the user can define unary functions, together with their derivatives. For example, the standard math functions such as 4.4.2.6.1: exp are good candidates for such definitions. (This feature has been replaced; the standard math functions are now part of the AD types, see 4: AD .)

The first Alpha for the Windows 2: installation was released.

12.7.15.bc: 07-18
Computing the determinant of a minor of a matrix 11.2.2: det_of_minor was documented as a realistic example using CppAD.

12.7.15.bd: 07-16
Fixed some non-standard constructions that caused problems with the installation on other machines.

Compiled and ran the tests under Microsoft Windows. (The Windows release should not take much more work.)

12.7.15.be: 07-14
This is the first alpha release of CppAD; it is being released under the 12.12: Gnu Public License . It is intended for use on Unix systems. A Microsoft release is intended in the near future.
Input File: omh/appendix/whats_new/whats_new_03.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.8: CppAD Deprecated API Features

12.8.a: Contents
include_deprecated: 12.8.1 Deprecated Include Files
FunDeprecated: 12.8.2 ADFun Object Deprecated Member Functions
CompareChange: 12.8.3 Comparison Changes During Zero Order Forward Mode
omp_max_thread: 12.8.4 OpenMP Parallel Setup
TrackNewDel: 12.8.5 Routines That Track Use of New and Delete
omp_alloc: 12.8.6 A Quick OpenMP Memory Allocator Used by CppAD
memory_leak: 12.8.7 Memory Leak Detection
epsilon: 12.8.8 Machine Epsilon For AD Types
test_vector: 12.8.9 Choosing The Vector Testing Template Class
cppad_ipopt_nlp: 12.8.10 Nonlinear Programming Using the CppAD Interface to Ipopt
old_atomic: 12.8.11 User Defined Atomic AD Functions
zdouble: 12.8.12 zdouble: An AD Base Type With Absolute Zero
autotools: 12.8.13 Autotools Unix Test and Installation

12.8.b: Name Changes
4.5.3.n: CppADCreateUnaryBool AD Boolean Functions
4.4.5.n: CppADCreateDiscrete Discrete AD Functions
8.11.f: nan(zero) nan(zero)
colpack.star coloring see 5.6.3.j.e: sparse_hes and 5.6.4.i.b: sparse_hessian

12.8.c: Atomic Functions
The following are links to deprecated 4.4.7.2: atomic_base interfaces: 4.4.7.2.6.b: for_sparse_jac , 4.4.7.2.7.b: rev_sparse_jac , 4.4.7.2.8.b: for_sparse_hes , 4.4.7.2.9.b: rev_sparse_hes .
Input File: omh/appendix/deprecated/deprecated.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.8.1: Deprecated Include Files

12.8.1.a: Deprecated 2015-11-30
The individual 8: utility include files have been deprecated; e.g.,
 
     # include <cppad/runge_45.hpp>
You must instead use
 
     # include <cppad/utility.hpp>
or you can include individual utility files; e.g.,
 
     # include <cppad/utility/runge_45.hpp>

12.8.1.b: Deprecated 2006-12-17
The following is a list of deprecated include file names and the corresponding names that should be used. For example, if your program uses the deprecated preprocessor command
 
     # include <CppAD/CppAD.h>
you must change it to the command
 
     # include <cppad/cppad.hpp>
Deprecated    Should Use    Documentation
CppAD/CheckNumericType.h    cppad/check_numeric_type.hpp    8.8: CheckNumericType
CppAD/CheckSimpleVector.h    cppad/check_simple_vector.hpp    8.10: CheckSimpleVector
CppAD/CppAD.h    cppad/cppad.hpp    : CppAD
CppAD/CppAD_vector.h    cppad/vector.hpp    8.22: CppAD_vector
CppAD/ErrorHandler.h    cppad/error_handler.hpp    8.1: ErrorHandler
CppAD/LuFactor.h    cppad/lu_factor.hpp    8.14.2: LuFactor
CppAD/LuInvert.h    cppad/lu_invert.hpp    8.14.3: LuInvert
CppAD/LuSolve.h    cppad/lu_solve.hpp    8.14.1: LuSolve
CppAD/NearEqual.h    cppad/near_equal.hpp    8.2: NearEqual
CppAD/OdeErrControl.h    cppad/ode_err_control.hpp    8.19: OdeErrControl
CppAD/OdeGear.h    cppad/ode_gear.hpp    8.20: OdeGear
CppAD/OdeGearControl.h    cppad/ode_gear_control.hpp    8.21: OdeGearControl
CppAD/Poly.h    cppad/poly.hpp    8.13: Poly
CppAD/PowInt.h    cppad/pow_int.hpp    8.12: pow_int
CppAD/RombergMul.h    cppad/romberg_mul.hpp    8.16: RombergMul
CppAD/RombergOne.h    cppad/romberg_one.hpp    8.15: RombergOne
CppAD/Rosen34.h    cppad/rosen_34.hpp    8.18: Rosen34
CppAD/Runge45.h    cppad/runge_45.hpp    8.17: Runge45
CppAD/SpeedTest.h    cppad/speed_test.hpp    8.4: SpeedTest
CppAD/TrackNewDel.h    cppad/track_new_del.hpp    12.8.5: TrackNewDel

Input File: omh/appendix/deprecated/include_deprecated.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.8.2: ADFun Object Deprecated Member Functions

12.8.2.a: Syntax
f.Dependent(y)
o = f.Order()
m = f.Memory()
s = f.Size()
t = f.taylor_size()
u = f.use_VecAD()
v = f.size_taylor()
w = f.capacity_taylor()

12.8.2.b: Purpose
The ADFun<Base> functions documented here have been deprecated; i.e., they are no longer approved of and may be removed from some future version of CppAD.

12.8.2.c: Dependent
A recording of an AD of Base 12.4.g.b: operation sequence is started by a call of the form
     Independent(x)
If there is only one such recording at the current time, you can use f.Dependent(y) in place of
     f.Dependent(x, y)
See 5.1.3: Dependent for a description of this operation.

12.8.2.c.a: Deprecated 2007-08-07
This syntax was deprecated when CppAD was extended to allow for more than one AD<Base> recording to be active at one time. This was necessary to allow for multiple threading applications.
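
A minimal sketch of the preferred two argument form (not from the original text; it assumes the CPPAD_TESTVECTOR macro):

# include <cppad/cppad.hpp>

bool dependent_two_arguments(void)
{    using CppAD::AD;

     CPPAD_TESTVECTOR( AD<double> ) ax(1), ay(1);
     ax[0] = 1.0;
     CppAD::Independent(ax);
     ay[0] = 2.0 * ax[0];

     // identify the recording by its independent variable vector
     CppAD::ADFun<double> f;
     f.Dependent(ax, ay);   // stops the recording started by Independent
     return f.Domain() == 1 && f.Range() == 1;
}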

12.8.2.d: Order
The result o has prototype
     size_t o
and is the order of the previous forward operation using the function f . This is the highest order of the 12.4.l: Taylor coefficients that are currently stored in f .

12.8.2.d.a: Deprecated 2006-03-31
Zero order corresponds to function values being stored in f . In the future, we would like to be able to erase the function values so that f uses less memory. In this case, the return value of Order would not make sense. Use 5.3.6: size_order to obtain the number of Taylor coefficients currently stored in the ADFun object f (which is equal to the order plus one).

12.8.2.e: Memory
The result m has prototype
     size_t m
and is the number of memory units (sizeof) required for the information currently stored in f . This memory is returned to the system when the destructor for f is called.

12.8.2.e.a: Deprecated 2006-03-31
It used to be the case that an ADFun object just kept increasing its buffers to the maximum size necessary during its lifetime. It would then return the buffers to the system when its destructor was called. This is no longer the case, an ADFun object now returns memory when it no longer needs the values stored in that memory. Thus the Memory function is no longer well defined.

12.8.2.f: Size
The result s has prototype
     size_t s
and is the number of variables in the operation sequence plus the following: one for a phantom variable with tape address zero, one for each component of the domain that is a parameter. The amount of work and memory necessary for computing function values and derivatives using f is roughly proportional to s .

12.8.2.f.a: Deprecated 2006-04-03
There are other sizes attached to an ADFun object, for example, the number of operations in the sequence. In order to avoid confusion with these other sizes, use 5.1.5.g: size_var to obtain the number of variables in the operation sequence.

12.8.2.g: taylor_size
The result t has prototype
     size_t t
and is the number of Taylor coefficient orders currently calculated and stored in the ADFun object f .

12.8.2.g.a: Deprecated 2006-06-17
This function has been replaced by 5.3.6: size_order .

12.8.2.h: use_VecAD
The result u has prototype
     bool u
If it is true, the AD of Base 12.4.g.b: operation sequence stored in f contains 4.6.d: VecAD operands. Otherwise u is false.

12.8.2.h.a: Deprecated 2006-04-08
You can instead use
     u = f.size_VecAD() > 0

12.8.2.i: size_taylor
The result v has prototype
     size_t v
and is the number of Taylor coefficient orders currently calculated and stored in the ADFun object f .

12.8.2.i.a: Deprecated 2014-03-18
This function has been replaced by 5.3.6: size_order .

12.8.2.j: capacity_taylor
The result w has prototype
     size_t w
and is the number of Taylor coefficient orders currently allocated in the ADFun object f .

12.8.2.j.a: Deprecated 2014-03-18
This function has been replaced by 5.3.8: capacity_order .
Input File: omh/appendix/deprecated/fun_deprecated.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.8.3: Comparison Changes During Zero Order Forward Mode

12.8.3.a: Syntax
c = f.CompareChange()
See Also 5.9: FunCheck

12.8.3.b: Deprecated 2015-01-20
This routine has been deprecated, use 5.3.7: compare_change instead.

12.8.3.c: Purpose
We use @(@ F : B^n \rightarrow B^m @)@ to denote the 12.4.a: AD function corresponding to f . This function may not agree with the algorithm that was used to create the corresponding AD of Base 12.4.g.b: operation sequence because of changes in AD 4.5.1: comparison results. The CompareChange function can be used to detect these changes.

12.8.3.d: f
The object f has prototype
     const ADFun<Base> f

12.8.3.e: c
The result c has prototype
     size_t c
It is the number of AD<Base> 4.5.1: comparison operations, corresponding to the previous call to 5.3: Forward
     f.Forward(0, x)
that have a different result from when F was created by taping an algorithm.

12.8.3.f: Discussion
If c is not zero, the boolean values resulting from some of the 4.5.1: comparison operations corresponding to x are different from when the AD of Base 12.4.g.b: operation sequence was created. In this case, you may want to re-tape the algorithm with the 12.4.k.c: independent variables equal to the values in x (so the AD operation sequence properly represents the algorithm for this value of the independent variables). On the other hand, re-taping the AD operation sequence usually takes significantly more time than evaluation using 5.3.1: forward_zero . If the function values have not changed (see 5.9: FunCheck ) it may not be worth re-taping a new AD operation sequence.
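
A short sketch of this check (not from the original text; it assumes the CPPAD_TESTVECTOR macro and that comparison operations are recorded, which requires that NDEBUG is not defined):

# include <cppad/cppad.hpp>

bool compare_change_sketch(void)
{    using CppAD::AD;

     CPPAD_TESTVECTOR( AD<double> ) ax(1), ay(1);
     ax[0] = 2.0;                      // x > 1 while taping
     CppAD::Independent(ax);
     if( ax[0] > 1.0 )                 // comparison recorded on the tape
          ay[0] = ax[0];
     else ay[0] = 1.0;
     CppAD::ADFun<double> f(ax, ay);

     CPPAD_TESTVECTOR(double) x(1);
     x[0] = 0.5;                       // now x <= 1, the comparison flips
     f.Forward(0, x);

     // one comparison has a different result than during taping
     return f.CompareChange() == 1;
}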
Input File: omh/appendix/deprecated/compare_change.omh
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.8.4: OpenMP Parallel Setup

12.8.4.a: Deprecated 2011-06-23
Use 8.23.2: thread_alloc::parallel_setup to set the number of threads.

12.8.4.b: Syntax
AD<Base>::omp_max_thread(number)

12.8.4.c: Purpose
By default, for each AD<Base> class there is only one tape that records 12.4.b: AD of Base operations. This tape is a global variable and hence it cannot be used by multiple OpenMP threads at the same time. The omp_max_thread function is used to set the maximum number of OpenMP threads that can be active. In this case, there is a different tape corresponding to each AD<Base> class and thread pair.

12.8.4.d: number
The argument number has prototype
     size_t number
It must be greater than zero and specifies the maximum number of OpenMP threads that will be active at one time.

12.8.4.e: Independent
Each call to 5.1.1: Independent(x) creates a new 12.4.k.a: active tape. All of the operations with the corresponding variables must be performed by the same OpenMP thread. This includes the corresponding call to 5.1.3: f.Dependent(x,y) or the 5.1.2.g: ADFun f(x, y) constructor during which the tape stops recording and the variables become parameters.

12.8.4.f: Restriction
No tapes can be 12.4.k.a: active when this function is called.
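
A minimal sketch of the replacement setup (not from the original text; it assumes OpenMP and the thread_alloc::parallel_setup and parallel_ad interfaces referenced above):

# include <omp.h>
# include <cppad/cppad.hpp>

namespace {
     // functions that thread_alloc uses to query the threading system
     bool in_parallel(void)
     {    return omp_in_parallel() != 0; }
     size_t thread_num(void)
     {    return static_cast<size_t>( omp_get_thread_num() ); }
}

void setup(size_t num_threads)
{    // tell the allocator how many threads there are and how to
     // identify the current thread
     CppAD::thread_alloc::parallel_setup(num_threads, in_parallel, thread_num);

     // set up the AD<double> tapes for use by multiple threads
     CppAD::parallel_ad<double>();
}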
Input File: cppad/core/omp_max_thread.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.8.5: Routines That Track Use of New and Delete

12.8.5.a: Deprecated 2007-07-23
All these routines have been deprecated. You should use the 8.23: thread_alloc memory allocator instead (which works properly in both single-threaded and multi-threaded environments).

12.8.5.b: Syntax
# include <cppad/utility/track_new_del.hpp>
newptr = TrackNewVec(file, line, newlen, oldptr)
TrackDelVec(file, line, oldptr)
newptr = TrackExtend(file, line, newlen, ncopy, oldptr)
count = TrackCount(file, line)

12.8.5.c: Purpose
These routines aid in the use of new[] and delete[] during the execution of a C++ program.

12.8.5.d: Include
The file cppad/track_new_del.hpp is included by cppad/cppad.hpp, but it can also be included separately without the rest of the CppAD include files.

12.8.5.e: file
The argument file has prototype
     const char *file
It should be the source code file name where the call to TrackNew is located. The best way to accomplish this is to use the preprocessor symbol __FILE__ for this argument.

12.8.5.f: line
The argument line has prototype
     int line
It should be the source code file line number where the call to TrackNew is located. The best way to accomplish this is to use the preprocessor symbol __LINE__ for this argument.

12.8.5.g: oldptr
The argument oldptr has prototype
     Type *oldptr
This argument is used to identify the type Type .

12.8.5.h: newlen
The argument newlen has prototype
     size_t newlen

12.8.5.i: newptr
The return value newptr has prototype
     Type *newptr
It points to the newly allocated vector of objects that were allocated using
     new Type[newlen]

12.8.5.j: ncopy
The argument ncopy has prototype
     size_t ncopy
This specifies the number of elements that are copied from the old array to the new array. The value of ncopy must be less than or equal to newlen .

12.8.5.k: TrackNewVec
If NDEBUG is defined, this routine only sets
     newptr = new Type[newlen]
The value of oldptr does not matter (except that it is used to identify Type ). If NDEBUG is not defined, TrackNewVec also tracks this memory allocation. In this case, if memory cannot be allocated 8.1: ErrorHandler is used to generate a message stating that there was not sufficient memory.

12.8.5.k.a: Macro
The preprocessor macro call
     CPPAD_TRACK_NEW_VEC(newlen, oldptr)
expands to
     CppAD::TrackNewVec(__FILE__, __LINE__, newlen, oldptr)

12.8.5.k.b: Previously Deprecated
The preprocessor macro CppADTrackNewVec is the same as CPPAD_TRACK_NEW_VEC and was previously deprecated.

12.8.5.l: TrackDelVec
This routine is used to delete a vector of objects that have been allocated using TrackNewVec or TrackExtend. If NDEBUG is defined, this routine only frees memory with
     delete [] oldptr
If NDEBUG is not defined, TrackDelVec also checks that oldptr was allocated by TrackNewVec or TrackExtend and has not yet been freed. If this is not the case, 8.1: ErrorHandler is used to generate an error message.

12.8.5.l.a: Macro
The preprocessor macro call
     CPPAD_TRACK_DEL_VEC(oldptr)
expands to
     CppAD::TrackDelVec(__FILE__, __LINE__, oldptr)

12.8.5.l.b: Previously Deprecated
The preprocessor macro CppADTrackDelVec is the same as CPPAD_TRACK_DEL_VEC and was previously deprecated.

12.8.5.m: TrackExtend
This routine is used to allocate a new vector (using TrackNewVec) and copy ncopy elements from the old vector to the new vector. If ncopy is greater than zero, oldptr must have been allocated using TrackNewVec or TrackExtend. In this case, the vector pointed to by oldptr must have at least ncopy elements and it will be deleted (using TrackDelVec). Note that the dependence of TrackExtend on NDEBUG is indirectly through the routines TrackNewVec and TrackDelVec.

12.8.5.m.a: Macro
The preprocessor macro call
     CPPAD_TRACK_EXTEND(newlen, ncopy, oldptr)
expands to
     CppAD::TrackExtend(__FILE__, __LINE__, newlen, ncopy, oldptr)

12.8.5.m.b: Previously Deprecated
The preprocessor macro CppADTrackExtend is the same as CPPAD_TRACK_EXTEND and was previously deprecated.

12.8.5.n: TrackCount
The return value count has prototype
     size_t count
If NDEBUG is defined, count will be zero. Otherwise, it will be the number of vectors that have been allocated (by TrackNewVec or TrackExtend) and not yet freed (by TrackDelVec).

12.8.5.n.a: Macro
The preprocessor macro call
     CPPAD_TRACK_COUNT()
expands to
     CppAD::TrackCount(__FILE__, __LINE__)

12.8.5.n.b: Previously Deprecated
The preprocessor macro CppADTrackCount is the same as CPPAD_TRACK_COUNT and was previously deprecated.

12.8.5.o: Multi-Threading
These routines cannot be used in 8.23.4: in_parallel execution mode. Use the 8.23: thread_alloc routines instead.

12.8.5.p: Example
The file 12.8.5.1: TrackNewDel.cpp contains an example and test of these functions. It returns true, if it succeeds, and false otherwise.
Input File: cppad/utility/track_new_del.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.8.5.1: Tracking Use of New and Delete: Example and Test

# include <cppad/utility/track_new_del.hpp>

bool track_new_del(void)
{     bool ok = true;

     // initial count
     size_t count = CPPAD_TRACK_COUNT();

     // allocate an array of length 5
     double *ptr = CPPAD_NULL;
     size_t  newlen = 5;
     ptr = CPPAD_TRACK_NEW_VEC(newlen, ptr);

     // copy data into the array
     size_t ncopy = newlen;
     size_t i;
     for(i = 0; i < ncopy; i++)
          ptr[i] = double(i);

     // extend the buffer to be length 10
     newlen = 10;
     ptr    = CPPAD_TRACK_EXTEND(newlen, ncopy, ptr);

     // copy data into the new part of the array
     for(i = ncopy; i < newlen; i++)
          ptr[i] = double(i);

     // check the values in the array
     for(i = 0; i < newlen; i++)
          ok &= (ptr[i] == double(i));

     // free the memory allocated since previous call to TrackCount
     CPPAD_TRACK_DEL_VEC(ptr);

     // check for memory leak
     ok &= (count == CPPAD_TRACK_COUNT());

     return ok;
}

Input File: example/deprecated/track_new_del.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.8.6: A Quick OpenMP Memory Allocator Used by CppAD

12.8.6.a: Syntax
# include <cppad/omp_alloc.hpp>


12.8.6.b: Purpose
The C++ new and delete operators are thread safe, but this means that a thread may have to wait for a lock on these operations. Once memory is obtained for a thread, the omp_alloc memory allocator keeps that memory 12.8.6.8: omp_available for the thread so that it can be re-used without waiting for a lock. All the CppAD memory allocations use this utility. The 12.8.6.6: omp_free_available function should be used to return memory to the system (once it is no longer required by a thread).

12.8.6.c: Include
The routines in the sections below are defined by cppad/omp_alloc.hpp. This file is included by cppad/cppad.hpp, but it can also be included separately without the rest of CppAD.

12.8.6.d: Deprecated 2011-08-23
Use 8.23: thread_alloc instead.

12.8.6.e: Contents
omp_max_num_threads: 12.8.6.1 Set and Get Maximum Number of Threads for omp_alloc Allocator
omp_in_parallel: 12.8.6.2 Is The Current Execution in OpenMP Parallel Mode
omp_get_thread_num: 12.8.6.3 Get the Current OpenMP Thread Number
omp_get_memory: 12.8.6.4 Get At Least A Specified Amount of Memory
omp_return_memory: 12.8.6.5 Return Memory to omp_alloc
omp_free_available: 12.8.6.6 Free Memory Currently Available for Quick Use by a Thread
omp_inuse: 12.8.6.7 Amount of Memory a Thread is Currently Using
omp_available: 12.8.6.8 Amount of Memory Available for Quick Use by a Thread
omp_create_array: 12.8.6.9 Allocate Memory and Create A Raw Array
omp_delete_array: 12.8.6.10 Return A Raw Array to The Available Memory for a Thread
omp_efficient: 12.8.6.11 Check If A Memory Allocation is Efficient for Another Use
old_max_num_threads: 12.8.6.12 Set Maximum Number of Threads for omp_alloc Allocator
omp_alloc.cpp: 12.8.6.13 OpenMP Memory Allocator: Example and Test

Input File: omh/appendix/deprecated/omp_alloc.omh
12.8.6.1: Set and Get Maximum Number of Threads for omp_alloc Allocator

12.8.6.1.a: Deprecated 2011-08-31
Use the functions 8.23.2: thread_alloc::parallel_setup and 8.23.3: thread_alloc::num_threads instead.

12.8.6.1.b: Syntax
# include <cppad/utility/omp_alloc.hpp>
omp_alloc::set_max_num_threads(number)
number = omp_alloc::get_max_num_threads()

12.8.6.1.c: Purpose
By default there is only one thread and all execution is in sequential mode (not 12.8.6.2: parallel ).

12.8.6.1.d: number
The argument and return value number has prototype
     size_t number
and must be greater than zero.

12.8.6.1.e: set_max_num_threads
Informs 12.8.6: omp_alloc of the maximum number of OpenMP threads.

12.8.6.1.f: get_max_num_threads
Returns the value used in the previous call to set_max_num_threads. If there was no such previous call, the value one is returned (and only thread number zero can use 12.8.6: omp_alloc ).

12.8.6.1.g: Restrictions
The function set_max_num_threads must be called before the program enters 12.8.6.2: parallel execution mode. In addition, this function cannot be called while in parallel mode.
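
A short sketch of this ordering (the thread count four is just an example):

# include <cppad/utility/omp_alloc.hpp>

void setup_omp_alloc(void)
{     // must run while execution is still sequential
     CppAD::omp_alloc::set_max_num_threads(4);
     // OpenMP parallel regions may be entered after this point
}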
Input File: cppad/utility/omp_alloc.hpp
12.8.6.2: Is The Current Execution in OpenMP Parallel Mode

12.8.6.2.a: Deprecated 2011-08-31
Use the function 8.23.4: thread_alloc::in_parallel instead.

12.8.6.2.b: Syntax
# include <cppad/utility/omp_alloc.hpp>
flag = omp_alloc::in_parallel()

12.8.6.2.c: Purpose
Some of the 12.8.6: omp_alloc allocation routines have different specifications for parallel (not sequential) execution mode. This routine enables you to determine if the current execution mode is sequential or parallel.

12.8.6.2.d: flag
The return value has prototype
     bool flag
It is true if the current execution is in parallel mode (possibly multi-threaded) and false otherwise (sequential mode).
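
As a minimal sketch, a routine that is only valid in sequential mode might guard itself as follows:

# include <cppad/utility/omp_alloc.hpp>
# include <cassert>

void sequential_only(void)
{     // abort (in debug builds) if called in parallel mode
     assert( ! CppAD::omp_alloc::in_parallel() );
     // ... work that assumes sequential execution mode ...
}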

12.8.6.2.e: Example
12.8.6.13: omp_alloc.cpp
Input File: cppad/utility/omp_alloc.hpp
12.8.6.3: Get the Current OpenMP Thread Number

12.8.6.3.a: Deprecated 2011-08-31
Use the function 8.23.5: thread_alloc::thread_num instead.

12.8.6.3.b: Syntax
# include <cppad/utility/omp_alloc.hpp>
thread = omp_alloc::get_thread_num()

12.8.6.3.c: Purpose
Some of the 12.8.6: omp_alloc allocation routines have a thread number. This routine enables you to determine the current thread.

12.8.6.3.d: thread
The return value thread has prototype
     size_t thread
and is the currently executing thread number. If _OPENMP is not defined, thread is zero.

12.8.6.3.e: Example
12.8.6.13: omp_alloc.cpp
Input File: cppad/utility/omp_alloc.hpp
12.8.6.4: Get At Least A Specified Amount of Memory

12.8.6.4.a: Deprecated 2011-08-31
Use the function 8.23.6: thread_alloc::get_memory instead.

12.8.6.4.b: Syntax
# include <cppad/utility/omp_alloc.hpp>
v_ptr = omp_alloc::get_memory(min_bytes, cap_bytes)

12.8.6.4.c: Purpose
Use 12.8.6: omp_alloc to obtain a minimum number of bytes of memory (for use by the 12.8.6.3: current thread ).

12.8.6.4.d: min_bytes
This argument has prototype
     size_t min_bytes
It specifies the minimum number of bytes to allocate.

12.8.6.4.e: cap_bytes
This argument has prototype
     size_t& cap_bytes
Its input value does not matter. Upon return, it is the actual number of bytes (capacity) that have been allocated for use,
     min_bytes <= cap_bytes

12.8.6.4.f: v_ptr
The return value v_ptr has prototype
     void* v_ptr
It points to the beginning of the cap_bytes of memory allocated for use.

12.8.6.4.g: Allocation Speed
This allocation should be faster if the following conditions hold (see the sketch after this list):
  1. The memory allocated by a previous call to get_memory is currently available for use.
  2. The current min_bytes is between the previous min_bytes and previous cap_bytes .
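
A sketch of both conditions (the byte counts are only for illustration; setting the maximum number of threads greater than one makes omp_alloc hold onto returned memory):

# include <cppad/utility/omp_alloc.hpp>

void fast_reuse_sketch(void)
{     using CppAD::omp_alloc;

     // with more than one thread allowed, omp_alloc holds memory
     omp_alloc::set_max_num_threads(2);

     // first allocation obtains a block from the system
     size_t cap_bytes;
     void* v_ptr = omp_alloc::get_memory(100, cap_bytes);
     omp_alloc::return_memory(v_ptr);  // now available for re-use

     // min_bytes below is between the previous min_bytes and
     // cap_bytes, so the same block should be re-used quickly
     size_t min_bytes = cap_bytes;
     v_ptr = omp_alloc::get_memory(min_bytes, cap_bytes);
     omp_alloc::return_memory(v_ptr);

     // return held memory to the system and restore the default
     omp_alloc::free_available( omp_alloc::get_thread_num() );
     omp_alloc::set_max_num_threads(1);
}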


12.8.6.4.h: Example
12.8.6.13: omp_alloc.cpp
Input File: cppad/utility/omp_alloc.hpp
12.8.6.5: Return Memory to omp_alloc

12.8.6.5.a: Deprecated 2011-08-31
Use the function 8.23.7: thread_alloc::return_memory instead.

12.8.6.5.b: Syntax
# include <cppad/utility/omp_alloc.hpp>
omp_alloc::return_memory(v_ptr)

12.8.6.5.c: Purpose
If 12.8.6.1: omp_max_num_threads is one, the memory is returned to the system. Otherwise, the memory is retained by 12.8.6: omp_alloc for quick future use by the thread that allocated the memory.

12.8.6.5.d: v_ptr
This argument has prototype
     void* v_ptr
It must be a pointer to memory that is currently in use; i.e., obtained by a previous call to 12.8.6.4: omp_get_memory and not yet returned.

12.8.6.5.e: Thread
Either the 12.8.6.3: current thread must be the same as during the corresponding call to 12.8.6.4: omp_get_memory , or the current execution mode must be sequential (not 12.8.6.2: parallel ).

12.8.6.5.f: NDEBUG
If NDEBUG is defined, v_ptr is not checked (this is faster). Otherwise, a list of in use pointers is searched to make sure that v_ptr is in the list.

12.8.6.5.g: Example
12.8.6.13: omp_alloc.cpp
Input File: cppad/utility/omp_alloc.hpp
12.8.6.6: Free Memory Currently Available for Quick Use by a Thread

12.8.6.6.a: Deprecated 2011-08-31
Use the function 8.23.8: thread_alloc::free_available instead.

12.8.6.6.b: Syntax
# include <cppad/utility/omp_alloc.hpp>
omp_alloc::free_available(thread)

12.8.6.6.c: Purpose
Free memory, currently available for quick use by a specific thread, for general future use.

12.8.6.6.d: thread
This argument has prototype
     size_t thread
Either 12.8.6.3: omp_get_thread_num must be the same as thread , or the current execution mode must be sequential (not 12.8.6.2: parallel ).

12.8.6.6.e: Example
12.8.6.13: omp_alloc.cpp
Input File: cppad/utility/omp_alloc.hpp
12.8.6.7: Amount of Memory a Thread is Currently Using

12.8.6.7.a: Deprecated 2011-08-31
Use the function 8.23.10: thread_alloc::inuse instead.

12.8.6.7.b: Syntax
# include <cppad/utility/omp_alloc.hpp>
num_bytes = omp_alloc::inuse(thread)

12.8.6.7.c: Purpose
Memory being managed by 12.8.6: omp_alloc has two states, currently in use by the specified thread, and quickly available for future use by the specified thread. This function informs the program how much memory is in use.

12.8.6.7.d: thread
This argument has prototype
     size_t thread
Either 12.8.6.3: omp_get_thread_num must be the same as thread , or the current execution mode must be sequential (not 12.8.6.2: parallel ).

12.8.6.7.e: num_bytes
The return value has prototype
     size_t num_bytes
It is the number of bytes currently in use by the specified thread.

12.8.6.7.f: Example
12.8.6.13: omp_alloc.cpp
Input File: cppad/utility/omp_alloc.hpp
12.8.6.8: Amount of Memory Available for Quick Use by a Thread

12.8.6.8.a: Deprecated 2011-08-31
Use the function 8.23.11: thread_alloc::available instead.

12.8.6.8.b: Syntax
# include <cppad/utility/omp_alloc.hpp>
num_bytes = omp_alloc::available(thread)

12.8.6.8.c: Purpose
Memory being managed by 12.8.6: omp_alloc has two states, currently in use by the specified thread, and quickly available for future use by the specified thread. This function informs the program how much memory is available.

12.8.6.8.d: thread
This argument has prototype
     size_t thread
Either 12.8.6.3: omp_get_thread_num must be the same as thread , or the current execution mode must be sequential (not 12.8.6.2: parallel ).

12.8.6.8.e: num_bytes
The return value has prototype
     size_t num_bytes
It is the number of bytes currently available for use by the specified thread.

12.8.6.8.f: Example
12.8.6.13: omp_alloc.cpp
Input File: cppad/utility/omp_alloc.hpp
12.8.6.9: Allocate Memory and Create A Raw Array

12.8.6.9.a: Deprecated 2011-08-31
Use the function 8.23.12: thread_alloc::create_array instead.

12.8.6.9.b: Syntax
# include <cppad/utility/omp_alloc.hpp>
array = omp_alloc::create_array<Type>(size_min, size_out)

12.8.6.9.c: Purpose
Create a new raw array using 12.8.6: omp_alloc , a fast memory allocator that works well in a multi-threading OpenMP environment.

12.8.6.9.d: Type
The type of the elements of the array.

12.8.6.9.e: size_min
This argument has prototype
     size_t size_min
This is the minimum number of elements in the resulting array .

12.8.6.9.f: size_out
This argument has prototype
     size_t& size_out
The input value of this argument does not matter. Upon return, it is the actual number of elements in array ( size_min <= size_out ).

12.8.6.9.g: array
The return value array has prototype
     Type* array
It is an array with size_out elements. The default constructor for Type is used to initialize the elements of array . Note that 12.8.6.10: omp_delete_array should be used to destroy the array when it is no longer needed.

12.8.6.9.h: Delta
The amount of memory 12.8.6.7: omp_inuse by the current thread will increase by delta where
     sizeof(Type) * (size_out + 1) > delta >= sizeof(Type) * size_out
The 12.8.6.8: omp_available memory will decrease by delta (and the allocation will be faster) if a previous allocation with size_min between its current value and size_out is available.
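
A minimal sketch of the create and delete cycle (the element type and length are only for illustration):

# include <cppad/utility/omp_alloc.hpp>

void array_cycle(void)
{     using CppAD::omp_alloc;

     // need at least 3 elements; size_out reports how many we got
     size_t size_out;
     double* array = omp_alloc::create_array<double>(3, size_out);

     // elements were initialized by the double default constructor
     for(size_t i = 0; i < size_out; i++)
          array[i] = double(i);

     // return the array to the available pool for this thread
     omp_alloc::delete_array(array);
}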

12.8.6.9.i: Example
12.8.6.13: omp_alloc.cpp
Input File: cppad/utility/omp_alloc.hpp
12.8.6.10: Return A Raw Array to The Available Memory for a Thread

12.8.6.10.a: Deprecated 2011-08-31
Use the function 8.23.13: thread_alloc::delete_array instead.

12.8.6.10.b: Syntax
# include <cppad/utility/omp_alloc.hpp>
omp_alloc::delete_array(array) .

12.8.6.10.c: Purpose
Returns memory corresponding to a raw array (created by 12.8.6.9: omp_create_array ) to the 12.8.6.8: omp_available memory pool for the current thread.

12.8.6.10.d: Type
The type of the elements of the array.

12.8.6.10.e: array
The argument array has prototype
     Type* array
It is a value returned by 12.8.6.9: omp_create_array and not yet deleted. The Type destructor is called for each element in the array.

12.8.6.10.f: Thread
The 12.8.6.3: current thread must be the same as when 12.8.6.9: omp_create_array returned the value array . There is an exception to this rule: when the current execution mode is sequential (not 12.8.6.2: parallel ) the current thread number does not matter.

12.8.6.10.g: Delta
The amount of memory 12.8.6.7: omp_inuse will decrease by delta , and the 12.8.6.8: omp_available memory will increase by delta , where 12.8.6.9.h: delta is the same as for the corresponding call to create_array.

12.8.6.10.h: Example
12.8.6.13: omp_alloc.cpp
Input File: cppad/utility/omp_alloc.hpp
12.8.6.11: Check If A Memory Allocation is Efficient for Another Use

12.8.6.11.a: Removed
This function has been removed because speed tests seem to indicate it is just as fast, or faster, to free and then reallocate the memory.

12.8.6.11.b: Syntax
# include <cppad/utility/omp_alloc.hpp>
flag = omp_alloc::efficient(v_ptrnum_bytes)

12.8.6.11.c: Purpose
Check if memory that is currently in use is an efficient allocation for a specified number of bytes.

12.8.6.11.d: v_ptr
This argument has prototype
     const void* v_ptr
It must be a pointer to memory that is currently in use; i.e., obtained by a previous call to 12.8.6.4: omp_get_memory and not yet returned.

12.8.6.11.e: num_bytes
This argument has prototype
     size_t num_bytes
It specifies the number of bytes of the memory allocated by v_ptr that we want to use.

12.8.6.11.f: flag
The return value has prototype
     bool flag
It is true if a call to get_memory with 12.8.6.4.d: min_bytes equal to num_bytes would result in a value for 12.8.6.4.e: cap_bytes that is the same as when v_ptr was returned by get_memory; i.e., v_ptr is an efficient memory block for num_bytes bytes of information.

12.8.6.11.g: Thread
Either the 12.8.6.3: current thread must be the same as during the corresponding call to 12.8.6.4: omp_get_memory , or the current execution mode must be sequential (not 12.8.6.2: parallel ).

12.8.6.11.h: NDEBUG
If NDEBUG is defined, v_ptr is not checked (this is faster). Otherwise, a list of in use pointers is searched to make sure that v_ptr is in the list.
Input File: cppad/utility/omp_alloc.hpp
12.8.6.12: Set Maximum Number of Threads for omp_alloc Allocator

12.8.6.12.a: Removed
This function has been removed from the CppAD API. Use the function 8.23.2: thread_alloc::parallel_setup in its place.

12.8.6.12.b: Syntax
# include <cppad/utility/omp_alloc.hpp>
omp_alloc::max_num_threads(number)

12.8.6.12.c: Purpose
By default there is only one thread and all execution is in sequential mode (not 12.8.6.2: parallel ).

12.8.6.12.d: number
The argument number has prototype
     size_t number
It must be greater than zero and specifies the maximum number of OpenMP threads that will be active at one time.

12.8.6.12.e: Restrictions
This function must be called before the program enters 12.8.6.2: parallel execution mode.
Input File: cppad/utility/omp_alloc.hpp
12.8.6.13: OpenMP Memory Allocator: Example and Test

12.8.6.13.a: Deprecated 2011-08-31
This example is only intended to help convert calls to 12.8.6: omp_alloc to calls to 8.23: thread_alloc .
# include <cppad/utility/omp_alloc.hpp>
# include <cppad/utility/memory_leak.hpp>
# include <vector>

namespace { // Begin empty namespace

bool omp_alloc_bytes(void)
{     bool ok = true;
     using CppAD::omp_alloc;
     size_t thread;

     // check initial memory values
     ok &= ! CppAD::memory_leak();

     // amount of static memory used by thread zero
     size_t static_inuse = omp_alloc::inuse(0);

     // determine the currently executing thread
     // (should be zero because not in parallel mode)
     thread = omp_alloc::get_thread_num();

     // repeatedly allocate enough memory for at least two size_t values.
     size_t min_size_t = 2;
     size_t min_bytes  = min_size_t * sizeof(size_t);
     size_t n_outer    = 10;
     size_t n_inner    = 5;
     size_t cap_bytes(0), i, j, k;
     for(i = 0; i < n_outer; i++)
     {     // Do not use CppAD::vector here because its use of omp_alloc
          // complicates the inuse and available results.
          std::vector<void*> v_ptr(n_inner);
          for( j = 0; j < n_inner; j++)
          {     // allocate enough memory for min_size_t size_t objects
               v_ptr[j]    = omp_alloc::get_memory(min_bytes, cap_bytes);
               size_t* ptr = reinterpret_cast<size_t*>(v_ptr[j]);
               // determine the number of size_t values we have obtained
               size_t  cap_size_t = cap_bytes / sizeof(size_t);
               ok                &= min_size_t <= cap_size_t;
               // use placement new to call the size_t copy constructor
               for(k = 0; k < cap_size_t; k++)
                    new(ptr + k) size_t(i + j + k);
               // check that the constructor worked
               for(k = 0; k < cap_size_t; k++)
                    ok &= ptr[k] == (i + j + k);
          }
          // check that n_inner * cap_bytes are inuse and none are available
          ok &= omp_alloc::inuse(thread) == n_inner*cap_bytes + static_inuse;
          ok &= omp_alloc::available(thread) == 0;
          // return the memory to omp_alloc
          for(j = 0; j < n_inner; j++)
               omp_alloc::return_memory(v_ptr[j]);
          // check that n_inner * cap_bytes are now available
          // and none are in use
          ok &= omp_alloc::inuse(thread) == static_inuse;
          ok &= omp_alloc::available(thread) == n_inner * cap_bytes;
     }
     // return all the available memory to the system
     omp_alloc::free_available(thread);
     ok &= ! CppAD::memory_leak();

     return ok;
}

class my_char {
public:
     char ch_ ;
     my_char(void) : ch_(' ')
     { }
     my_char(const my_char& my_ch) : ch_(my_ch.ch_)
     { }
};

bool omp_alloc_array(void)
{     bool ok = true;
     using CppAD::omp_alloc;
     size_t i;

     // check initial memory values
     size_t thread = omp_alloc::get_thread_num();
     ok &= thread == 0;
     ok &= ! CppAD::memory_leak();
     size_t static_inuse = omp_alloc::inuse(0);

     // initial allocation of an array
     size_t  size_min  = 3;
     size_t  size_one;
     my_char *array_one  =
          omp_alloc::create_array<my_char>(size_min, size_one);

     // check the values and change them to 'x'
     for(i = 0; i < size_one; i++)
     {     ok &= array_one[i].ch_ == ' ';
          array_one[i].ch_ = 'x';
     }

     // now create a longer array
     size_t size_two;
     my_char *array_two =
          omp_alloc::create_array<my_char>(2 * size_min, size_two);

     // check the values in array one
     for(i = 0; i < size_one; i++)
          ok &= array_one[i].ch_ == 'x';

     // check the values in array two
     for(i = 0; i < size_two; i++)
          ok &= array_two[i].ch_ == ' ';

     // check the amount of inuse and available memory
     // (an extra size_t value is used for each memory block).
     size_t check = static_inuse + sizeof(my_char)*(size_one + size_two);
     ok   &= omp_alloc::inuse(thread) - check < sizeof(my_char);
     ok   &= omp_alloc::available(thread) == 0;

     // delete the arrays
     omp_alloc::delete_array(array_one);
     omp_alloc::delete_array(array_two);
     ok   &= omp_alloc::inuse(thread) == static_inuse;
     check = sizeof(my_char)*(size_one + size_two);
     ok   &= omp_alloc::available(thread) - check < sizeof(my_char);

     // free the memory for use by this thread
     omp_alloc::free_available(thread);
     ok &= ! CppAD::memory_leak();

     return ok;
}
} // End empty namespace

bool omp_alloc(void)
{     bool ok  = true;
     using CppAD::omp_alloc;

     // check initial state of allocator
     ok  &= omp_alloc::get_max_num_threads() == 1;

     // set the maximum number of threads greater than one
     // so that omp_alloc holds onto memory
     CppAD::omp_alloc::set_max_num_threads(2);
     ok  &= omp_alloc::get_max_num_threads() == 2;
     ok  &= ! CppAD::memory_leak();

     // now use memory allocator in state where it holds onto memory
     ok   &= omp_alloc_bytes();
     ok   &= omp_alloc_array();

     // check that the tests have not held onto memory
     ok  &= ! CppAD::memory_leak();

     // set the maximum number of threads back to one
     // so that omp_alloc no longer holds onto memory
     CppAD::omp_alloc::set_max_num_threads(1);

     return ok;
}


Input File: example/deprecated/omp_alloc.cpp
12.8.7: Memory Leak Detection

12.8.7.a: Deprecated 2012-04-06
This routine has been deprecated. You should instead use the routine 8.23.14: ta_free_all .

12.8.7.b: Syntax
# include <cppad/utility/memory_leak.hpp>
flag = memory_leak()
flag = memory_leak(add_static)

12.8.7.c: Purpose
This routine checks that there are no memory leaks caused by improper use of the 8.23: thread_alloc memory allocator. The deprecated memory allocator 12.8.5: TrackNewDel is also checked. Memory errors in the deprecated 12.8.6: omp_alloc allocator are reported as being in thread_alloc.

12.8.7.d: thread
It is assumed that 8.23.4: in_parallel() is false and 8.23.5: thread_num is zero when memory_leak is called.

12.8.7.e: add_static
This argument has prototype
     size_t add_static
and its default value is zero. Static variables hold onto memory forever. If the argument add_static is present (and non-zero), memory_leak adds this amount of memory to the 8.23.10: inuse sum that corresponds to static variables in the program. A call with add_static should be made after a routine that has static variables which use 8.23.6: get_memory to allocate memory. The value of add_static should be the difference of
     thread_alloc::inuse(0)
before and after the call. Since multiple statics may be allocated in different places in the program, it is expected that there will be multiple calls that use this option.
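
A sketch of this bookkeeping, where my_routine and its static variables are hypothetical:

# include <cppad/utility/memory_leak.hpp>
# include <cppad/utility/thread_alloc.hpp>

// hypothetical routine whose static variables use get_memory
extern void my_routine(void);

void record_static_memory(void)
{     using CppAD::thread_alloc;

     // inuse memory for thread zero before and after the call
     size_t before = thread_alloc::inuse(0);
     my_routine();
     size_t add_static = thread_alloc::inuse(0) - before;

     // credit this amount of inuse memory to static variables
     CppAD::memory_leak(add_static);
}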

12.8.7.f: flag
The return value flag has prototype
     bool flag
If add_static is non-zero, the return value for memory_leak is false. Otherwise, the return value for memory_leak should be false (indicating that the only allocated memory corresponds to static variables).

12.8.7.g: inuse
It is assumed that, when memory_leak is called, there should not be any memory 8.23.10: inuse or 12.8.6.7: omp_inuse for any thread (except for inuse memory corresponding to static variables). If there is, a message is printed and memory_leak returns true.

12.8.7.h: available
It is assumed that, when memory_leak is called, there should not be any memory 8.23.11: available or 12.8.6.8: omp_available for any thread; i.e., it all has been returned to the system. If there is memory still available for any thread, memory_leak returns true.

12.8.7.i: TRACK_COUNT
It is assumed that, when memory_leak is called, 12.8.5.n: TrackCount will return a zero value. If it returns a non-zero value, memory_leak returns true.

12.8.7.j: Error Message
If this is the first call to memory_leak, no message is printed. Otherwise, if it returns true, an error message is printed to standard output describing the memory leak that was detected.
Input File: cppad/utility/memory_leak.hpp
12.8.8: Machine Epsilon For AD Types

12.8.8.a: Deprecated 2012-06-17
This routine has been deprecated. You should use the 4.4.6: numeric_limits epsilon instead.

12.8.8.b: Syntax
eps = epsilon<Float>()

12.8.8.c: Purpose
Obtain the value of machine epsilon corresponding to the type Float .

12.8.8.d: Float
This type can either be AD<Base> , or it can be Base for any AD<Base> type.

12.8.8.e: eps
The result eps has prototype
     Float eps
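
As a minimal sketch, the deprecated call and its recommended replacement should agree:

# include <cppad/cppad.hpp>

bool epsilon_sketch(void)
{     using CppAD::AD;

     // deprecated way to get machine epsilon for an AD type
     AD<double> eps_old = CppAD::epsilon< AD<double> >();

     // recommended replacement
     AD<double> eps_new = CppAD::numeric_limits< AD<double> >::epsilon();

     return eps_old == eps_new;
}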

Input File: cppad/core/epsilon.hpp
12.8.9: Choosing The Vector Testing Template Class

12.8.9.a: Deprecated 2012-07-03
The CPPAD_TEST_VECTOR macro has been deprecated, use 10.5: CPPAD_TESTVECTOR instead.

12.8.9.b: Syntax
CPPAD_TEST_VECTOR<Scalar>

12.8.9.c: Introduction
Many of the CppAD 10: examples and tests use the CPPAD_TEST_VECTOR template class to pass information. The default definition for this template class is 8.22: CppAD::vector .

12.8.9.d: MS Windows
The include path for boost is not defined in the Windows project files. If we are using Microsoft's compiler, the following code overrides the setting of CPPAD_BOOSTVECTOR:
// The next 7 lines are C++ source code.
# ifdef _MSC_VER
# if CPPAD_BOOSTVECTOR
# undef  CPPAD_BOOSTVECTOR
# define CPPAD_BOOSTVECTOR 0
# undef  CPPAD_CPPADVECTOR
# define CPPAD_CPPADVECTOR 1
# endif
# endif

12.8.9.e: CppAD::vector
By default CPPAD_CPPADVECTOR is true and CPPAD_TEST_VECTOR is defined by the following source code
// The next 3 lines are C++ source code.
# if CPPAD_CPPADVECTOR
# define CPPAD_TEST_VECTOR CppAD::vector
# endif
If you specify --with-eigenvector on the 12.8.13.d: configure command line, CPPAD_EIGENVECTOR is true. This vector type cannot be supported by CPPAD_TEST_VECTOR (use 10.5: CPPAD_TESTVECTOR for this support) so CppAD::vector is used in this case:
// The next 3 lines are C++ source code.
# if CPPAD_EIGENVECTOR
# define CPPAD_TEST_VECTOR CppAD::vector
# endif

12.8.9.f: std::vector
If you specify --with-stdvector on the 12.8.13.d: configure command line during CppAD installation, CPPAD_STDVECTOR is true and CPPAD_TEST_VECTOR is defined by the following source code
// The next 4 lines are C++ source code.
# if CPPAD_STDVECTOR
# include <vector>
# define CPPAD_TEST_VECTOR std::vector
# endif
In this case CppAD will use std::vector for its examples and tests. Use of CppAD::vector, std::vector, and std::valarray with CppAD is always tested to some degree. Specifying --with-stdvector will increase the amount of std::vector testing.

12.8.9.g: boost::numeric::ublas::vector
If you specify a value for boost_dir on the configure command line during CppAD installation, CPPAD_BOOSTVECTOR is true and CPPAD_TEST_VECTOR is defined by the following source code
// The next 4 lines are C++ source code.
# if CPPAD_BOOSTVECTOR
# include <boost/numeric/ublas/vector.hpp>
# define CPPAD_TEST_VECTOR boost::numeric::ublas::vector
# endif
In this case CppAD will use Ublas vectors for its examples and tests. Use of CppAD::vector, std::vector, and std::valarray with CppAD is always tested to some degree. Specifying boost_dir will increase the amount of Ublas vector testing.

12.8.9.h: CppADvector Deprecated 2007-07-28
The preprocessor symbol CppADvector is defined to have the same value as CPPAD_TEST_VECTOR but its use is deprecated:

# define CppADvector CPPAD_TEST_VECTOR

Input File: cppad/core/test_vector.hpp
12.8.10: Nonlinear Programming Using the CppAD Interface to Ipopt

12.8.10.a: Deprecated 2012-11-28
This interface to Ipopt is deprecated, use 9: ipopt_solve instead.

12.8.10.b: Syntax
# include "cppad_ipopt_nlp.hpp"
cppad_ipopt_solution solution;
cppad_ipopt_nlp cppad_nlp(
     n, m, x_i, x_l, x_u, g_l, g_u, &fg_info, &solution
)

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:ipopt_library_paths


12.8.10.c: Purpose
The class cppad_ipopt_nlp is used to solve nonlinear programming problems of the form @[@ \begin{array}{rll} {\rm minimize} & f(x) \\ {\rm subject \; to} & g^l \leq g(x) \leq g^u \\ & x^l \leq x \leq x^u \end{array} @]@ This is done using the Ipopt (http://www.coin-or.org/projects/Ipopt.xml) optimizer and the CppAD (http://www.coin-or.org/CppAD/) Algorithmic Differentiation package.

12.8.10.d: cppad_ipopt namespace
All of the declarations for these routines are in the cppad_ipopt namespace (not the CppAD namespace). For example; 12.8.10.h: SizeVector below actually denotes the type cppad_ipopt::SizeVector.

12.8.10.e: ipopt_library_paths
If you are linking to a shared version of the Ipopt library, you may have to add some paths to the LD_LIBRARY_PATH shell variable using the export command in the syntax above. For example, if the Ipopt library file is
     ipopt_prefix/lib64/libipopt.a
you will need to add the corresponding directory; e.g.,
     export LD_LIBRARY_PATH="ipopt_prefix/lib64:$LD_LIBRARY_PATH"
see 2.2.5: ipopt_prefix .

12.8.10.f: fg(x)
The function @(@ fg : \B{R}^n \rightarrow \B{R}^{m+1} @)@ is defined by @[@ \begin{array}{rcl} fg_0 (x) & = & f(x) \\ fg_1 (x) & = & g_0 (x) \\ & \vdots & \\ fg_m (x) & = & g_{m-1} (x) \end{array} @]@

12.8.10.f.a: Index Vector
We define an index vector as a vector of non-negative integers for which none of the values are equal; i.e., it is both a vector and a set. If @(@ I @)@ is an index vector @(@ |I| @)@ is used to denote the number of elements in @(@ I @)@ and @(@ \| I \| @)@ is used to denote the value of the maximum element in @(@ I @)@.

12.8.10.f.b: Projection
Given an index vector @(@ J @)@ and a positive integer @(@ n @)@ where @(@ n > \| J \| @)@, we use @(@ J \otimes n @)@ for the mapping @(@ ( J \otimes n ) : \B{R}^n \rightarrow \B{R}^{|J|} @)@ defined by @[@ [ J \otimes n ] (x)_j = x_{J(j)} @]@ for @(@ j = 0 , \ldots |J| - 1 @)@.

12.8.10.f.c: Injection
Given an index vector @(@ I @)@ and a positive integer @(@ m @)@ where @(@ m > \| I \| @)@, we use @(@ m \otimes I @)@ for the mapping @(@ ( m \otimes I ): \B{R}^{|I|} \rightarrow \B{R}^m @)@ defined by @[@ [ m \otimes I ] (y)_i = \left\{ \begin{array}{ll} y_k & {\rm if} \; i = I(k) \; {\rm for \; some} \; k \in \{ 0 , \cdots, |I|-1 \} \\ 0 & {\rm otherwise} \end{array} \right. @]@

12.8.10.f.d: Representation
In many applications, each of the component functions of @(@ fg(x) @)@ only depend on a few of the components of @(@ x @)@. In this case, expressing @(@ fg(x) @)@ in terms of simpler functions with fewer arguments can greatly reduce the amount of work required to compute its derivatives.

We use the functions @(@ r_k : \B{R}^{q(k)} \rightarrow \B{R}^{p(k)} @)@ for @(@ k = 0 , \ldots , K - 1 @)@ to express our representation of @(@ fg(x) @)@ in terms of simpler functions as follows @[@ fg(x) = \sum_{k=0}^{K-1} \; \sum_{\ell=0}^{L(k) - 1} [ (m+1) \otimes I_{k,\ell} ] \; \circ \; r_k \; \circ \; [ J_{k,\ell} \otimes n ] \; (x) @]@ where @(@ \circ @)@ represents function composition, for @(@ k = 0 , \ldots , K - 1 @)@, and @(@ \ell = 0 , \ldots , L(k) - 1 @)@, @(@ I_{k,\ell} @)@ and @(@ J_{k,\ell} @)@ are index vectors with @(@ | J_{k,\ell} | = q(k) @)@, @(@ \| J_{k,\ell} \| < n @)@, @(@ | I_{k,\ell} | = p(k) @)@, and @(@ \| I_{k,\ell} \| \leq m @)@.

12.8.10.g: Simple Representation
In the simple representation, @(@ r_0 (x) = fg(x) @)@, @(@ K = 1 @)@, @(@ q(0) = n @)@, @(@ p(0) = m+1 @)@, @(@ L(0) = 1 @)@, @(@ I_{0,0} = (0 , \ldots , m) @)@, and @(@ J_{0,0} = (0 , \ldots , n-1) @)@.

12.8.10.h: SizeVector
The type SizeVector is defined by the cppad_ipopt_nlp.hpp include file to be a 8.9: SimpleVector class with elements of type size_t.

12.8.10.i: NumberVector
The type NumberVector is defined by the cppad_ipopt_nlp.hpp include file to be a 8.9: SimpleVector class with elements of type Ipopt::Number.

12.8.10.j: ADNumber
The type ADNumber is defined by the cppad_ipopt_nlp.hpp include file to be an AD type that can be used to compute derivatives.

12.8.10.k: ADVector
The type ADVector is defined by the cppad_ipopt_nlp.hpp include file to be a 8.9: SimpleVector class with elements of type ADNumber.

12.8.10.l: n
The argument n has prototype
     size_t n
It specifies the dimension of the argument space; i.e., @(@ x \in \B{R}^n @)@.

12.8.10.m: m
The argument m has prototype
     size_t m
It specifies the dimension of the range space for @(@ g @)@; i.e., @(@ g : \B{R}^n \rightarrow \B{R}^m @)@.

12.8.10.n: x_i
The argument x_i has prototype
     const NumberVector& x_i
and its size is equal to @(@ n @)@. It specifies the initial point where Ipopt starts the optimization process.

12.8.10.o: x_l
The argument x_l has prototype
     const NumberVector& x_l
and its size is equal to @(@ n @)@. It specifies the lower limits for the argument in the optimization problem; i.e., @(@ x^l @)@.

12.8.10.p: x_u
The argument x_u has prototype
     const NumberVector& x_u
and its size is equal to @(@ n @)@. It specifies the upper limits for the argument in the optimization problem; i.e., @(@ x^u @)@.

12.8.10.q: g_l
The argument g_l has prototype
     const NumberVector& g_l
and its size is equal to @(@ m @)@. It specifies the lower limits for the constraints in the optimization problem; i.e., @(@ g^l @)@.

12.8.10.r: g_u
The argument g_u has prototype
     const NumberVector& g_u
and its size is equal to @(@ m @)@. It specifies the upper limits for the constraints in the optimization problem; i.e., @(@ g^u @)@.

12.8.10.s: fg_info
The argument fg_info has prototype
     FG_info fg_info
where the class FG_info is derived from the base class cppad_ipopt_fg_info. Certain virtual member functions of fg_info are used to compute the value of @(@ fg(x) @)@. The specifications for these member functions are given below:

12.8.10.s.a: fg_info.number_functions
This member function has prototype
     virtual size_t cppad_ipopt_fg_info::number_functions(void)
If K has type size_t, the syntax
     K = fg_info.number_functions()
sets K to the number of functions used in the representation of @(@ fg(x) @)@; i.e., @(@ K @)@ in the 12.8.10.f.d: representation above.

The cppad_ipopt_fg_info implementation of this function corresponds to the simple representation mentioned above; i.e. K = 1 .

12.8.10.s.b: fg_info.eval_r
This member function has the prototype
     virtual ADVector cppad_ipopt_fg_info::eval_r(size_t k, const ADVector& u) = 0;
Thus it is a pure virtual function and must be defined in the derived class FG_info .

This function computes the value of @(@ r_k (u) @)@ used in the 12.8.10.f.d: representation for @(@ fg(x) @)@. If k in @(@ \{0 , \ldots , K-1 \} @)@ has type size_t, u is an ADVector of size q(k) and r is an ADVector of size p(k) , the syntax
     r = fg_info.eval_r(k, u)
sets r to the vector @(@ r_k (u) @)@.

12.8.10.s.c: fg_info.retape
This member function has the prototype
     virtual bool cppad_ipopt_fg_info::retape(size_t k)
If k in @(@ \{0 , \ldots , K-1 \} @)@ has type size_t, and retape has type bool, the syntax
     retape = fg_info.retape(k)
sets retape to true or false. If retape is true, cppad_ipopt_nlp will retape the operation sequence corresponding to @(@ r_k (u) @)@ for every value of u . A cppad_ipopt_nlp object should use much less memory and run faster if retape is false. You can test both the true and false cases to make sure the operation sequence does not depend on u .

The cppad_ipopt_fg_info implementation of this function sets retape to true (while slower it is also safer to always retape).

12.8.10.s.d: fg_info.domain_size
This member function has prototype
     virtual size_t cppad_ipopt_fg_info::domain_size(size_t k)
If k in @(@ \{0 , \ldots , K-1 \} @)@ has type size_t, and q has type size_t, the syntax
     q = fg_info.domain_size(k)
sets q to the dimension of the domain space for @(@ r_k (u) @)@; i.e., @(@ q(k) @)@ in the 12.8.10.f.d: representation above.

The cppad_ipopt_h_base implementation of this function corresponds to the simple representation mentioned above; i.e., @(@ q = n @)@.

12.8.10.s.e: fg_info.range_size
This member function has prototype
     virtual size_t cppad_ipopt_fg_info::range_size(size_t k)
If k in @(@ \{0 , \ldots , K-1 \} @)@ has type size_t, and p has type size_t, the syntax
     p = fg_info.range_size(k)
sets p to the dimension of the range space for @(@ r_k (u) @)@; i.e., @(@ p(k) @)@ in the 12.8.10.f.d: representation above.

The cppad_ipopt_h_base implementation of this function corresponds to the simple representation mentioned above; i.e., @(@ p = m+1 @)@.

12.8.10.s.f: fg_info.number_terms
This member function has prototype
     virtual size_t cppad_ipopt_fg_info::number_terms(size_t k)
If k in @(@ \{0 , \ldots , K-1 \} @)@ has type size_t, and L has type size_t, the syntax
     L = fg_info.number_terms(k)
sets L to the number of terms in representation for this value of k ; i.e., @(@ L(k) @)@ in the 12.8.10.f.d: representation above.

The cppad_ipopt_h_base implementation of this function corresponds to the simple representation mentioned above; i.e., @(@ L = 1 @)@.

12.8.10.s.g: fg_info.index
This member function has prototype
     virtual void cppad_ipopt_fg_info::index(
          size_t k, size_t ell, SizeVector& I, SizeVector& J
     )
The argument k has type size_t and is a value between zero and @(@ K-1 @)@ inclusive. The argument ell has type size_t and is a value between zero and @(@ L(k)-1 @)@ inclusive. The argument I is a 8.9: SimpleVector with elements of type size_t and size greater than or equal to @(@ p(k) @)@. The input value of the elements of I does not matter. The output value of the first @(@ p(k) @)@ elements of I must be the corresponding elements of @(@ I_{k,\ell} @)@ in the 12.8.10.f.d: representation above. The argument J is a 8.9: SimpleVector with elements of type size_t and size greater than or equal to @(@ q(k) @)@. The input value of the elements of J does not matter. The output value of the first @(@ q(k) @)@ elements of J must be the corresponding elements of @(@ J_{k,\ell} @)@ in the 12.8.10.f.d: representation above.

The cppad_ipopt_h_base implementation of this function corresponds to the simple representation mentioned above; i.e., for @(@ i = 0 , \ldots , m @)@, I[i] = i , and for @(@ j = 0 , \ldots , n-1 @)@, J[j] = j .

12.8.10.t: solution
After the optimization process is completed, solution contains the following information:

12.8.10.t.a: status
The status field of solution has prototype
     cppad_ipopt_solution::solution_status solution.status
It is the final Ipopt status for the optimizer. Here is a list of the possible values for the status:
status Meaning
not_defined The optimizer did not return a final status to this cppad_ipopt_nlp object.
unknown The status returned by the optimizer is not defined in the Ipopt documentation for finalize_solution.
success Algorithm terminated successfully at a point satisfying the convergence tolerances (see Ipopt options).
maxiter_exceeded The maximum number of iterations was exceeded (see Ipopt options).
stop_at_tiny_step Algorithm terminated because progress was very slow.
stop_at_acceptable_point Algorithm stopped at a point that was converged, not to the 'desired' tolerances, but to 'acceptable' tolerances (see Ipopt options).
local_infeasibility Algorithm converged to a non-feasible point (problem may have no solution).
user_requested_stop This return value should not happen.
diverging_iterates The iterates appear to be diverging.
restoration_failure Restoration phase failed, algorithm doesn't know how to proceed.
error_in_step_computation An unrecoverable error occurred while Ipopt tried to compute the search direction.
invalid_number_detected Algorithm received an invalid number (such as nan or inf) from the user's function fg_info.eval or from the CppAD evaluations of its derivatives (see the Ipopt option check_derivatives_for_naninf).
internal_error An unknown Ipopt internal error occurred. Contact the Ipopt authors through the mailing list.

12.8.10.t.b: x
The x field of solution has prototype
     NumberVector solution.x
and its size is equal to @(@ n @)@. It is the final @(@ x @)@ value for the optimizer.

12.8.10.t.c: z_l
The z_l field of solution has prototype
     NumberVector solution.z_l
and its size is equal to @(@ n @)@. It is the final Lagrange multipliers for the lower bounds on @(@ x @)@.

12.8.10.t.d: z_u
The z_u field of solution has prototype
     NumberVector solution.z_u
and its size is equal to @(@ n @)@. It is the final Lagrange multipliers for the upper bounds on @(@ x @)@.

12.8.10.t.e: g
The g field of solution has prototype
     NumberVector solution.g
and its size is equal to @(@ m @)@. It is the final value for the constraint function @(@ g(x) @)@.

12.8.10.t.f: lambda
The lambda field of solution has prototype
     NumberVector solution.lambda
and its size is equal to @(@ m @)@. It is the final value for the Lagrange multipliers corresponding to the constraint function.

12.8.10.t.g: obj_value
The obj_value field of solution has prototype
     Number solution.obj_value
It is the final value of the objective function @(@ f(x) @)@.

12.8.10.u: Example
The file 12.8.10.1: ipopt_nlp_get_started.cpp is an example and test of cppad_ipopt_nlp that uses the 12.8.10.g: simple representation . It returns true if it succeeds and false otherwise. The section 12.8.10.2: ipopt_nlp_ode discusses an example that uses a more complex representation.

12.8.10.v: Wish List
This is a list of possible future improvements to cppad_ipopt_nlp that would require changes to the user interface:
  1. The routine fg_info.eval_r(k, u) should also support NumberVector for the type of the argument u (this would certainly be more efficient when fg_info.retape(k) is true and @(@ L(k) > 1 @)@). It could be an option for the user to provide this as well as the necessary ADVector definition.
  2. There should be a 4.4.5: Discrete routine that the user can call to determine the value of @(@ \ell @)@ during the evaluation of fg_info.eval_r(k, u) . This way data, which does not affect the derivative values, can be included in the function recording and evaluation.

Input File: cppad_ipopt/src/cppad_ipopt_nlp.hpp
12.8.10.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test

12.8.10.1.a: Purpose
This example program demonstrates how to use the class cppad_ipopt_nlp to solve the example problem in the Ipopt documentation; i.e., the problem @[@ \begin{array}{lc} {\rm minimize \; } & x_1 * x_4 * (x_1 + x_2 + x_3) + x_3 \\ {\rm subject \; to \; } & x_1 * x_2 * x_3 * x_4 \geq 25 \\ & x_1^2 + x_2^2 + x_3^2 + x_4^2 = 40 \\ & 1 \leq x_1, x_2, x_3, x_4 \leq 5 \end{array} @]@

12.8.10.1.b: Configuration Requirement
This example will be compiled and tested provided that a value for ipopt_prefix is specified on the 2.2: cmake command line.

# include <cppad_ipopt_nlp.hpp>

namespace {
     using namespace cppad_ipopt;

     class FG_info : public cppad_ipopt_fg_info
     {
     private:
          bool retape_;
     public:
          // derived class part of constructor
          FG_info(bool retape_in)
          : retape_ (retape_in)
          { }
          // Evaluation of the objective f(x), and constraints g(x)
          // using an Algorithmic Differentiation (AD) class.
          ADVector eval_r(size_t k, const ADVector&  x)
          {     ADVector fg(3);

               // Fortran style indexing
               ADNumber x1 = x[0];
               ADNumber x2 = x[1];
               ADNumber x3 = x[2];
               ADNumber x4 = x[3];
               // f(x)
               fg[0] = x1 * x4 * (x1 + x2 + x3) + x3;
               // g_1 (x)
               fg[1] = x1 * x2 * x3 * x4;
               // g_2 (x)
               fg[2] = x1 * x1 + x2 * x2 + x3 * x3 + x4 * x4;
               return fg;
          }
          bool retape(size_t k)
          {     return retape_; }
     };
}

bool ipopt_get_started(void)
{     bool ok = true;
     size_t j;


     // number of independent variables (domain dimension for f and g)
     size_t n = 4;
     // number of constraints (range dimension for g)
     size_t m = 2;
     // initial value of the independent variables
     NumberVector x_i(n);
     x_i[0] = 1.0;
     x_i[1] = 5.0;
     x_i[2] = 5.0;
     x_i[3] = 1.0;
     // lower and upper limits for x
     NumberVector x_l(n);
     NumberVector x_u(n);
     for(j = 0; j < n; j++)
     {     x_l[j] = 1.0;
          x_u[j] = 5.0;
     }
     // lower and upper limits for g
     NumberVector g_l(m);
     NumberVector g_u(m);
     g_l[0] = 25.0;     g_u[0] = 1.0e19;
     g_l[1] = 40.0;     g_u[1] = 40.0;

     size_t icase;
     for(icase = 0; icase <= 1; icase++)
     {     // Should cppad_ipopt_nlp retape the operation sequence for
          // every new x. Can test both true and false cases because
          // the operation sequence does not depend on x (for this case).
          bool retape = icase != 0;

          // object in derived class
          FG_info fg_info(retape);

          // create the Ipopt interface
          cppad_ipopt_solution solution;
          Ipopt::SmartPtr<Ipopt::TNLP> cppad_nlp = new cppad_ipopt_nlp(
          n, m, x_i, x_l, x_u, g_l, g_u, &fg_info, &solution
          );

          // Create an instance of the IpoptApplication
          using Ipopt::IpoptApplication;
          Ipopt::SmartPtr<IpoptApplication> app = new IpoptApplication();

          // turn off any printing
          app->Options()->SetIntegerValue("print_level", 0);
          app->Options()->SetStringValue("sb", "yes");

          // maximum number of iterations
          app->Options()->SetIntegerValue("max_iter", 10);

          // approximate accuracy in first order necessary conditions;
          // see Mathematical Programming, Volume 106, Number 1,
          // Pages 25-57, Equation (6)
          app->Options()->SetNumericValue("tol", 1e-9);

          // derivative testing
          app->Options()->
          SetStringValue("derivative_test", "second-order");
          app->Options()-> SetNumericValue(
               "point_perturbation_radius", 0.
          );

          // Initialize the IpoptApplication and process the options
          Ipopt::ApplicationReturnStatus status = app->Initialize();
          ok    &= status == Ipopt::Solve_Succeeded;

          // Run the IpoptApplication
          status = app->OptimizeTNLP(cppad_nlp);
          ok    &= status == Ipopt::Solve_Succeeded;

          /*
          Check some of the solution values
          */
          ok &= solution.status == cppad_ipopt_solution::success;
          //
          double check_x[]   = { 1.000000, 4.743000, 3.82115, 1.379408 };
          double check_z_l[] = { 1.087871, 0.,       0.,      0.       };
          double check_z_u[] = { 0.,       0.,       0.,      0.       };
          double rel_tol     = 1e-6;  // relative tolerance
          double abs_tol     = 1e-6;  // absolute tolerance
          for(j = 0; j < n; j++)
          {     ok &= CppAD::NearEqual(
               check_x[j],   solution.x[j],   rel_tol, abs_tol
               );
               ok &= CppAD::NearEqual(
               check_z_l[j], solution.z_l[j], rel_tol, abs_tol
               );
               ok &= CppAD::NearEqual(
               check_z_u[j], solution.z_u[j], rel_tol, abs_tol
               );
          }
     }

     return ok;
}

Input File: cppad_ipopt/example/get_started.cpp
12.8.10.2: Example Simultaneous Solution of Forward and Inverse Problem

12.8.10.2.a: Contents
12.8.10.2.1: An ODE Inverse Problem Example
12.8.10.2.2: ODE Fitting Using Simple Representation
12.8.10.2.3: ODE Fitting Using Fast Representation
12.8.10.2.4: Driver for Running the Ipopt ODE Example
12.8.10.2.5: Correctness Check for Both Simple and Fast Representations

Input File: cppad_ipopt/example/ode1.omh
12.8.10.2.1: An ODE Inverse Problem Example

12.8.10.2.1.a: Notation
The table below contains the name of a variable, the meaning of the variable value, and the value for this particular example. If the value is not specified in the table below, the corresponding value in 12.8.10.2.1.1: ipopt_nlp_ode_problem.hpp can be changed and the example should still run (with no other changes).
Name Meaning Value
@(@ Na @)@ number of parameters to fit 3
@(@ Ny @)@ number of components in the ODE 2
@(@ Nz @)@ number of measurements 4
@(@ N(i) @)@ number of grid points between the i-1-th and i-th measurements
@(@ S(i) @)@ number of grid points up to and including the i-th measurement

12.8.10.2.1.b: Forward Problem
We consider the following ordinary differential equation: @[@ \begin{array}{rcl} \partial_t y_0 ( t , a ) & = & - a_1 * y_0 (t, a ) \\ \partial_t y_1 (t , a ) & = & + a_1 * y_0 (t, a ) - a_2 * y_1 (t, a ) \end{array} @]@ with the initial conditions @[@ y_0 (0 , a) = F(a) = \left( \begin{array}{c} a_0 \\ 0 \end{array} \right) @]@ where @(@ Na @)@ is the number of parameters and @(@ a \in \B{R}^{Na} @)@ is an unknown parameter vector. The function @(@ F : \B{R}^{Na} \rightarrow \B{R}^{Ny} @)@ is defined by the equation above, where @(@ Ny @)@ is the number of components in @(@ y(t, a) @)@. Our forward problem is stated as follows: Given @(@ a \in \B{R}^{Na} @)@ determine the value of @(@ y ( t , a ) @)@, for @(@ t \in \B{R} @)@, that solves the initial value problem above.

12.8.10.2.1.c: Measurements
We use @(@ Nz @)@ to denote the number of measurements. Suppose we are also given a measurement vector @(@ z \in \B{R}^{Nz} @)@ and for @(@ i = 1, \ldots, Nz @)@, we model @(@ z_i @)@ by @[@ z_i = y_1 ( s_i , a) + e_i @]@ where @(@ s_i \in \B{R} @)@ is the time for the i-th measurement, @(@ e_i \sim {\bf N} (0 , \sigma^2 ) @)@ is the corresponding noise, and @(@ \sigma \in \B{R}_+ @)@ is the corresponding standard deviation.

12.8.10.2.1.c.a: Simulation Analytic Solution
The following analytic solution to the forward problem is used to simulate a data set: @[@ \begin{array}{rcl} y_0 (t , a) & = & a_0 * \exp( - a_1 * t ) \\ y_1 (t , a) & = & a_0 * a_1 * \frac{\exp( - a_2 * t ) - \exp( -a_1 * t )}{ a_1 - a_2 } \end{array} @]@

12.8.10.2.1.c.b: Simulation Parameter Values
@(@ \bar{a}_0 = 1 @)@   initial value of @(@ y_0 (t, a) @)@
@(@ \bar{a}_1 = 2 @)@   transfer rate from compartment zero to compartment one
@(@ \bar{a}_2 = 1 @)@   transfer rate from compartment one to outside world
@(@ \sigma = 0 @)@   standard deviation of measurement noise
@(@ e_i = 0 @)@   simulated measurement noise, @(@ i = 1 , \ldots , Nz @)@
@(@ s_i = i * .5 @)@   time corresponding to the i-th measurement, @(@ i = 1 , \ldots , Nz @)@

12.8.10.2.1.c.c: Simulated Measurement Values
The simulated measurement values are given by the equation @[@ \begin{array}{rcl} z_i & = & e_i + y_1 ( s_i , \bar{a} ) \\ & = & e_i + \bar{a}_0 * \bar{a}_1 * \frac{\exp( - \bar{a}_2 * s_i ) - \exp( -\bar{a}_1 * s_i )} { \bar{a}_1 - \bar{a}_2 } \end{array} @]@ for @(@ i = 1, \ldots , Nz @)@.

12.8.10.2.1.d: Inverse Problem
The maximum likelihood estimate for @(@ a @)@ given @(@ z @)@ solves the following inverse problem @[@ \begin{array}{rcl} {\rm minimize} \; & \sum_{i=1}^{Nz} H_i [ y( s_i , a ) , a ] & \;{\rm w.r.t} \; a \in \B{R}^{Na} \end{array} @]@ where the function @(@ H_i : \B{R}^{Ny} \times \B{R}^{Na} \rightarrow \B{R} @)@ is defined by @[@ H_i (y, a) = ( z_i - y_1 )^2 @]@

12.8.10.2.1.e: Trapezoidal Approximation
This example uses a trapezoidal approximation to solve the ODE. This approximation procedure starts with @[@ y^0 = y(0, a) = \left( \begin{array}{c} a_0 \\ 0 \end{array} \right) @]@ Given a time grid @(@ \{ t_i \} @)@ and an approximate value @(@ y^{i-1} @)@ for @(@ y ( t_{i-1} , a ) @)@, the trapezoidal method approximates @(@ y ( t_i , a ) @)@ (denoted by @(@ y^i @)@ ) by solving the equation @[@ y^i = y^{i-1} + \left[ G( y^i , a ) + G( y^{i-1} , a ) \right] * \frac{t_i - t_{i-1} }{ 2 } @]@ where @(@ G : \B{R}^{Ny} \times \B{R}^{Na} \rightarrow \B{R}^{Ny} @)@ is the function representing this ODE; i.e., @[@ G(y, a) = \left( \begin{array}{c} - a_1 * y_0 \\ + a_1 * y_0 - a_2 * y_1 \end{array} \right) @]@ This @(@ G(y, a) @)@ is linear with respect to @(@ y @)@, hence the implicit equation defining @(@ y^i @)@ can be solved by inverting a set of linear equations. In the general case, where @(@ G(y, a) @)@ is non-linear with respect to @(@ y @)@, an iterative procedure is used to calculate @(@ y^i @)@ from @(@ y^{i-1} @)@.
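
To spell out the linear solve for this example: since @(@ G(y, a) = A(a) y @)@ where @[@ A(a) = \left( \begin{array}{cc} - a_1 & 0 \\ + a_1 & - a_2 \end{array} \right) @]@ the trapezoidal equation above can be rewritten as @[@ \left[ I - \frac{ t_i - t_{i-1} }{ 2 } A(a) \right] y^i = \left[ I + \frac{ t_i - t_{i-1} }{ 2 } A(a) \right] y^{i-1} @]@ a two by two linear system that determines @(@ y^i @)@ directly from @(@ y^{i-1} @)@.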

12.8.10.2.1.e.a: Trapezoidal Time Grid
The discrete time grid, used for the trapezoidal approximation, is denoted by @(@ \{ t_i \} @)@ which is defined by: @(@ t_0 = 0 @)@ and for @(@ i = 1 , \ldots , Nz @)@ and for @(@ j = 1 , \ldots , N(i) @)@, @[@ \begin{array}{rcl} \Delta t_i & = & ( s_i - s_{i-1} ) / N(i) \\ t_{S(i-1)+j} & = & s_{i-1} + \Delta t_i * j \end{array} @]@ where @(@ s_0 = 0 @)@, @(@ N(i) @)@ is the number of time grid points between @(@ s_{i-1} @)@ and @(@ s_i @)@, @(@ S(0) = 0 @)@, and @(@ S(i) = N(1) + \ldots + N(i) @)@. Note that for @(@ i = 0 , \ldots , S(Nz) @)@, @(@ y^i @)@ denotes our approximation for @(@ y( t_i , a ) @)@ and @(@ t_{S(i)} @)@ is equal to @(@ s_i @)@.
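For example, if @(@ Nz = 4 @)@, @(@ s_i = i / 2 @)@, and @(@ N(i) = 2 @)@ for each @(@ i @)@, then @(@ \Delta t_i = 1/4 @)@ for each @(@ i @)@, @(@ S(4) = 8 @)@, and the grid points are @(@ t_j = j / 4 @)@ for @(@ j = 0 , \ldots , 8 @)@.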

12.8.10.2.1.f: Black Box Method
A common approach to an inverse problem is to treat the forward problem as a black box (that we do not look inside of or try to understand). In this approach, for each value of the parameter vector @(@ a @)@ one uses the 12.8.10.2.1.e: trapezoidal approximation (on a finer grid than @(@ \{ s_i \} @)@) to solve for @(@ y_1 ( s_i , a ) @)@ for @(@ i = 1 , \ldots , Nz @)@.

12.8.10.2.1.f.a: Two levels of Iteration
As noted above, the trapezoidal approximation often requires an iterative procedure. Thus, in this approach, there are two levels of iterations, one with respect to the parameter values during the minimization and the other for solving the trapezoidal approximation equation.

12.8.10.2.1.f.b: Derivatives
In addition, in the black box approach, differentiating the ODE solution involves differentiating an iterative procedure. Direct application of AD to compute these derivatives requires a large amount of memory and computation, because the entire iterative procedure must be differentiated. (There are special techniques for applying AD to the solutions of iterative procedures, but they are outside the scope of this presentation.)

12.8.10.2.1.g: Simultaneous Method
The simultaneous forward and inverse method uses constraints to include the solution of the forward problem in the inverse problem. To be specific, for our example, @[@ \begin{array}{rcl} {\rm minimize} & \sum_{i=1}^{Nz} H_i ( y^{S(i)} , a ) & \; {\rm w.r.t} \; y^1 \in \B{R}^{Ny} , \ldots , y^{S(Nz)} \in \B{R}^{Ny} , \; a \in \B{R}^{Na} \\ {\rm subject \; to} & y^j = y^{j-1} + \left[ G( y^{j-1} , a ) + G( y^j , a ) \right] * \frac{ t_j - t_{j-1} }{ 2 } & \; {\rm for} \; j = 1 , \ldots , S(Nz) \\ & y^0 = F(a) \end{array} @]@ where for @(@ i = 1, \ldots , Nz @)@, @(@ N(i) @)@ is the number of time intervals between @(@ s_{i-1} @)@ and @(@ s_i @)@ (with @(@ s_0 = 0 @)@) and @(@ S(i) = N(1) + \ldots + N(i) @)@. Note that, in this form, the iterations of the optimization procedure also solve the forward problem equations. In addition, the functions that need to be differentiated do not involve an iterative procedure.
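For example, with @(@ Ny = 2 @)@, @(@ Na = 3 @)@, @(@ Nz = 4 @)@, and @(@ N(i) = 10 @)@ for each @(@ i @)@ (the values used by the speed test below), @(@ S(Nz) = 40 @)@, so this formulation has @(@ 2 * 41 + 3 = 85 @)@ optimization variables and @(@ 2 * 41 = 82 @)@ equality constraints.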

12.8.10.2.1.h: Source
The file 12.8.10.2.1.1: ipopt_nlp_ode_problem.hpp contains source code that defines the example values and functions defined above.
Input File: cppad_ipopt/example/ode2.omh
12.8.10.2.1.1: ODE Inverse Problem Definitions: Source Code
# include "../src/cppad_ipopt_nlp.hpp"

namespace {
     //------------------------------------------------------------------
     typedef Ipopt::Number Number;
     Number a0 = 1.;  // simulation value for a[0]
     Number a1 = 2.;  // simulation value for a[1]
     Number a2 = 1.;  // simulation value for a[2]

     // function used to simulate data
     Number y_one(Number t)
     {     Number y_1 =  a0*a1 * (exp(-a2*t) - exp(-a1*t)) / (a1 - a2);
          return y_1;
     }

     // time points where we have data (no data at first point)
     double s[] = { 0.0,        0.5,        1.0,        1.5,        2.0 };
     // Simulated data for case with no noise (first point is not used)
     double z[] = { 0.0,  y_one(0.5), y_one(1.0), y_one(1.5), y_one(2.0) };
     // Number of measurement values
     size_t Nz  = sizeof(z) / sizeof(z[0]) - 1;
     // Number of components in the function y(t, a)
     size_t Ny  = 2;
     // Number of components in the vector a
     size_t Na  = 3;

     // Initial Condition function, F(a) = y(t, a) at t = 0
     // (for this particular example)
     template <class Vector>
     Vector eval_F(const Vector &a)
     {     Vector F(Ny);
          // y_0 (t) = a[0]*exp(-a[1] * t)
          F[0] = a[0];
          // y_1 (t) =
          // a[0]*a[1]*(exp(-a[2] * t) - exp(-a[1] * t))/(a[1] - a[2])
          F[1] = 0.;
          return F;
     }
     // G(y, a) =  \partial_t y(t, a); i.e. the differential equation
     // (for this particular example)
     template <class Vector>
     Vector eval_G(const Vector &y , const Vector &a)
     {     Vector G(Ny);
          // y_0 (t) = a[0]*exp(-a[1] * t)
          G[0] = -a[1] * y[0];
          // y_1 (t) =
          // a[0]*a[1]*(exp(-a[2] * t) - exp(-a[1] * t))/(a[1] - a[2])
          G[1] = +a[1] * y[0] - a[2] * y[1];
          return G;
     }
     // H(i, y, a) = contribution to objective at i-th data point
     // (for this particular example)
     template <class Scalar, class Vector>
     Scalar eval_H(size_t i, const Vector &y, const Vector &a)
     {     // This particular H is for a case where y_1 (t) is measured
          Scalar diff = z[i] - y[1];
          return diff * diff;
     }
     // function used to count the number of calls to eval_r
     size_t count_eval_r(void)
     {     static size_t count = 0;
          ++count;
          return count;
     }
}

Input File: cppad_ipopt/example/ode_problem.hpp
12.8.10.2.2: ODE Fitting Using Simple Representation

12.8.10.2.2.a: Purpose
In this section we represent the objective and constraint functions (in the simultaneous forward and inverse optimization problem) using the 12.8.10.g: simple representation in the sense of cppad_ipopt_nlp.

12.8.10.2.2.b: Argument Vector
The argument vector that we are optimizing with respect to ( @(@ x @)@ in 12.8.10: cppad_ipopt_nlp ) has the following structure @[@ x = ( y^0 , \cdots , y^{S(Nz)} , a ) @]@ Note that @(@ x \in \B{R}^{Ny * (S(Nz) + 1) + Na} @)@ and @[@ \begin{array}{rcl} y^i & = & ( x_{Ny * i} , \ldots , x_{Ny * i + Ny - 1} ) \\ a & = & ( x_{Ny * S(Nz) + Ny} , \ldots , x_{Ny * S(Nz) + Ny + Na - 1} ) \end{array} @]@
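For example, hypothetical helper functions (an illustration of the indexing convention above, not part of the example sources) that extract @(@ y^i @)@ and @(@ a @)@ from @(@ x @)@ could be written as:

# include <cstddef>

// extract y^i = ( x[Ny*i] , ... , x[Ny*i + Ny - 1] ) from x
template <class Vector>
Vector get_y_i(const Vector& x, size_t Ny, size_t i)
{    Vector y(Ny);
     for(size_t j = 0; j < Ny; j++)
          y[j] = x[Ny * i + j];
     return y;
}
// extract a, which starts at index Ny * (S(Nz) + 1) in x
template <class Vector>
Vector get_a(const Vector& x, size_t Ny, size_t S_Nz, size_t Na)
{    Vector a(Na);
     for(size_t j = 0; j < Na; j++)
          a[j] = x[Ny * S_Nz + Ny + j];
     return a;
}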

12.8.10.2.2.c: Objective Function
The objective function ( @(@ fg_0 (x) @)@ in 12.8.10: cppad_ipopt_nlp ) has the following representation, @[@ fg_0 (x) = \sum_{i=1}^{Nz} H_i ( y^{S(i)} , a ) @]@

12.8.10.2.2.d: Initial Condition Constraint
For @(@ i = 1 , \ldots , Ny @)@, we define the component functions @(@ fg_i (x) @)@, and corresponding constraint equations, by @[@ 0 = fg_i ( x ) = y_i^0 - F_i (a) @]@

12.8.10.2.2.e: Trapezoidal Approximation Constraint
For @(@ i = 1, \ldots , S(Nz) @)@, and for @(@ j = 1 , \ldots , Ny @)@, we define the component functions @(@ fg_{Ny*i + j} (x) @)@, and corresponding constraint equations, by @[@ 0 = fg_{Ny*i + j } = y_j^{i} - y_j^{i-1} - \left[ G_j ( y^i , a ) + G_j ( y^{i-1} , a ) \right] * \frac{t_i - t_{i-1} }{ 2 } @]@

12.8.10.2.2.f: Source
The file 12.8.10.2.2.1: ipopt_nlp_ode_simple.hpp contains source code for this representation of the objective and constraints.
Input File: cppad_ipopt/example/ode2.omh
12.8.10.2.2.1: ODE Fitting Using Simple Representation
# include "ode_problem.hpp"

// define in the empty namespace
namespace {
     using namespace cppad_ipopt;

     class FG_simple : public cppad_ipopt_fg_info
     {
     private:
          bool       retape_;
          SizeVector N_;
          SizeVector S_;
     public:
          // derived class part of constructor
          FG_simple(bool retape_in, const SizeVector& N)
          : retape_ (retape_in), N_(N)
          {     assert( N_[0] == 0 );
               S_.resize( N.size() );
               S_[0] = 0;
               for(size_t i = 1; i < N_.size(); i++)
                    S_[i] = S_[i-1] + N_[i];
          }
          // Evaluation of the objective f(x), and constraints g(x)
          // using an Algorithmic Differentiation (AD) class.
          ADVector eval_r(size_t not_used, const ADVector&  x)
          {     count_eval_r();

               // temporary indices
               size_t i, j, k;
               // # of components of x corresponding to values for y
               size_t ny_inx = (S_[Nz] + 1) * Ny;
               // # of constraints (range dimension of g)
               size_t m = ny_inx;
               // # of components in x (domain dimension for f and g)
               assert ( x.size() == ny_inx + Na );
               // vector for return value
               ADVector fg(m + 1);
               // vector of parameters
               ADVector a(Na);
               for(j = 0; j < Na; j++)
                    a[j] = x[ny_inx + j];
               // vector for value of y(t)
               ADVector y(Ny);
               // objective function -------------------------------
               fg[0] = 0.;
               for(k = 0; k < Nz; k++)
               {     for(j = 0; j < Ny; j++)
                         y[j] = x[Ny*S_[k+1] + j];
                    fg[0] += eval_H<ADNumber>(k+1, y, a);
               }
               // initial condition ---------------------------------
               ADVector F = eval_F(a);
               for(j = 0; j < Ny; j++)
               {     y[j]    = x[j];
                    fg[1+j] = y[j] - F[j];
               }
               // trapezoidal approximation --------------------------
               ADVector ym(Ny), G(Ny), Gm(Ny);
               G = eval_G(y, a);
               ADNumber dy;
               for(k = 0; k < Nz; k++)
               {     // interval between data points
                    Number T  = s[k+1] - s[k];
                    // integration step size
                    Number dt = T / Number( N_[k+1] );
                    for(j = 0; j < N_[k+1]; j++)
                    {     size_t Index = (j + S_[k]) * Ny;
                         // y(t) at end of last step
                         ym = y;
                         // G(y, a) at end of last step
                         Gm = G;
                         // value of y(t) at end of this step
                         for(i = 0; i < Ny; i++)
                              y[i] = x[Ny + Index + i];
                         // G(y, a) at end of this step
                         G = eval_G(y, a);
                         // trapezoidal approximation residual
                         for(i = 0; i < Ny; i++)
                         {     dy = (G[i] + Gm[i]) * dt / 2;
                              fg[1+Ny+Index+i] =
                                   y[i] - ym[i] - dy;
                         }
                    }
               }
               return fg;
          }
          // The operation sequence for eval_r does not depend on u,
          // hence retape = false should work and be faster.
          bool retape(size_t k)
          {     return retape_; }
     };

}

Input File: cppad_ipopt/example/ode_simple.hpp
12.8.10.2.3: ODE Fitting Using Fast Representation

12.8.10.2.3.a: Purpose
In this section we present a more complex representation of the simultaneous forward and inverse ODE fitting problem (described above). This representation defines the problem using simpler functions that are faster to differentiate (either by hand coding or by using AD).

12.8.10.2.3.b: Objective Function
We use the following representation for the 12.8.10.2.2.c: objective function : For @(@ k = 0 , \ldots , Nz - 1 @)@, we define the function @(@ r^k : \B{R}^{Ny+Na} \rightarrow \B{R} @)@ by @[@ \begin{array}{rcl} fg_0 (x) & = & \sum_{i=1}^{Nz} H_i ( y^{S(i)} , a ) \\ fg_0 (x) & = & \sum_{k=0}^{Nz-1} r^k ( u^{k,0} ) \end{array} @]@ where for @(@ k = 0 , \ldots , Nz-1 @)@, @(@ u^{k,0} \in \B{R}^{Ny + Na} @)@ is defined by @(@ u^{k,0} = ( y^{S(k+1)} , a ) @)@

12.8.10.2.3.b.a: Range Indices I(k,0)
For @(@ k = 0 , \ldots , Nz - 1 @)@, the range index in the vector @(@ fg (x) @)@ corresponding to @(@ r^k ( u^{k,0} ) @)@ is 0. Thus, the range indices are given by @(@ I(k,0) = \{ 0 \} @)@ for @(@ k = 0 , \ldots , Nz-1 @)@.

12.8.10.2.3.b.b: Domain Indices J(k,0)
For @(@ k = 0 , \ldots , Nz - 1 @)@, the components of the vector @(@ x @)@ corresponding to the vector @(@ u^{k,0} @)@ are @[@ \begin{array}{rcl} u^{k,0} & = & ( y^{S(k+1)} , a ) \\ & = & ( x_{Ny * S(k+1)} \; , \; \ldots \; , \; x_{Ny * S(k+1) + Ny - 1} \; , \; x_{Ny * S(Nz) + Ny } \; , \; \ldots \; , \; x_{Ny * S(Nz) + Ny + Na - 1} ) \end{array} @]@ Thus, the domain indices are given by @[@ J(k,0) = \{ Ny * S(k+1) \; , \; \ldots \; , \; Ny * S(k+1) + Ny - 1 \; , \; Ny * S(Nz) + Ny \; , \; \ldots \; , \; Ny * S(Nz) + Ny + Na - 1 \} @]@
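For example, with @(@ Ny = 2 @)@, @(@ Na = 3 @)@, @(@ Nz = 4 @)@, and @(@ N(i) = 2 @)@ for each @(@ i @)@ (so @(@ S(Nz) = 8 @)@), the objective term with @(@ k = 0 @)@ has @(@ J(0,0) = \{ 4, 5, 18, 19, 20 \} @)@; i.e., the components of @(@ x @)@ corresponding to @(@ y^{S(1)} = y^2 @)@ followed by those corresponding to @(@ a @)@.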

12.8.10.2.3.c: Initial Condition
We use the following representation for the 12.8.10.2.2.d: initial condition constraint : For @(@ k = Nz @)@ we define the function @(@ r^k : \B{R}^{Ny + Na} \rightarrow \B{R}^{Ny} @)@ by @[@ \begin{array}{rcl} 0 & = & fg_i ( x ) = y_i^0 - F_i (a) \\ 0 & = & r_{i-1}^k ( u^{k,0} ) = y_i^0 - F_i(a) \end{array} @]@ where @(@ i = 1 , \ldots , Ny @)@ and where @(@ u^{k,0} \in \B{R}^{Ny + Na} @)@ is defined by @(@ u^{k,0} = ( y^0 , a) @)@.

12.8.10.2.3.c.a: Range Indices I(k,0)
For @(@ k = Nz @)@, the range indices in the vector @(@ fg (x) @)@ corresponding to @(@ r^k ( u^{k,0} ) @)@ are @(@ I(k,0) = \{ 1 , \ldots , Ny \} @)@.

12.8.10.2.3.c.b: Domain Indices J(k,0)
For @(@ k = Nz @)@, the components of the vector @(@ x @)@ corresponding to the vector @(@ u^{k,0} @)@ are @[@ \begin{array}{rcl} u^{k,0} & = & ( y^0 , a) \\ & = & ( x_0 \; , \; \ldots \; , \; x_{Ny-1} \; , \; x_{Ny * S(Nz) + Ny } \; , \; \ldots \; , \; x_{Ny * S(Nz) + Ny + Na - 1} ) \end{array} @]@ Thus, the domain indices are given by @[@ J(k,0) = \{ 0 \; , \; \ldots \; , \; Ny - 1 \; , \; Ny * S(Nz) + Ny \; , \; \ldots \; , \; Ny * S(Nz) + Ny + Na - 1 \} @]@

12.8.10.2.3.d: Trapezoidal Approximation
We use the following representation for the 12.8.10.2.2.e: trapezoidal approximation constraint : For @(@ k = 1 , \ldots , Nz @)@, we define the function @(@ r^{Nz+k} : \B{R}^{2*Ny+Na} \rightarrow \B{R}^{Ny} @)@ by @[@ r^{Nz+k} ( y , w , a ) = y - w - [ G( y , a ) + G( w , a ) ] * \frac{ \Delta t_k }{ 2 } @]@ For @(@ \ell = 0 , \ldots , N(k)-1 @)@, using the grid index notation @(@ i = S(k-1) + \ell + 1 @)@, the corresponding trapezoidal approximation is represented, for @(@ j = 1 , \ldots , Ny @)@, by @[@ \begin{array}{rcl} 0 & = & fg_{Ny*i + j} (x) \\ & = & y_j^i - y_j^{i-1} - \left[ G_j ( y^i , a ) + G_j ( y^{i-1} , a ) \right] * \frac{\Delta t_k }{ 2 } \\ & = & r_j^{Nz+k} ( u^{Nz+k , \ell} ) \end{array} @]@ where @(@ u^{Nz+k,\ell} \in \B{R}^{2*Ny + Na} @)@ is defined by @(@ u^{Nz+k,\ell} = ( y^{i-1} , y^i , a) @)@.

12.8.10.2.3.d.a: Range Indices I(k,0)
For @(@ k = Nz + 1, \ldots , 2*Nz @)@, and @(@ \ell = 0 , \ldots , N(k-Nz)-1 @)@, the range indices in the vector @(@ fg (x) @)@ corresponding to @(@ r^k ( u^{k,\ell} ) @)@ are @(@ I(k,\ell) = \{ Ny * i + 1 , \ldots , Ny * i + Ny \} @)@ where @(@ i = S(k-Nz-1) + \ell + 1 @)@.

12.8.10.2.3.d.b: Domain Indices J(k,0)
For @(@ k = Nz + 1, \ldots , 2*Nz @)@, and @(@ \ell = 0 , \ldots , N(k-Nz)-1 @)@, define @(@ i = S(k-Nz-1) + \ell + 1 @)@. The components of the vector @(@ x @)@ corresponding to the vector @(@ u^{k,\ell} @)@ (see the function @(@ fg (x) @)@ in 12.8.10: cppad_ipopt_nlp ) are @[@ \begin{array}{rcl} u^{k, \ell} & = & ( y^{i-1} , y^i , a ) \\ & = & ( x_{Ny * (i-1)} \; , \; \ldots \; , \; x_{Ny * (i+1) - 1} \; , \; x_{Ny * S(Nz) + Ny } \; , \; \ldots \; , \; x_{Ny * S(Nz) + Ny + Na - 1} ) \end{array} @]@ Thus, the domain indices are given by @[@ J(k,\ell) = \{ Ny * (i-1) \; , \; \ldots \; , \; Ny * (i+1) - 1 \; , \; Ny * S(Nz) + Ny \; , \; \ldots \; , \; Ny * S(Nz) + Ny + Na - 1 \} @]@

12.8.10.2.3.e: Source
The file 12.8.10.2.3.1: ipopt_nlp_ode_fast.hpp contains source code for this representation of the objective and constraints.
Input File: cppad_ipopt/example/ode2.omh
12.8.10.2.3.1: ODE Fitting Using Fast Representation
# include "ode_problem.hpp"

namespace {
     using namespace cppad_ipopt;

     class FG_fast : public cppad_ipopt_fg_info
     {
     private:
          bool       retape_;
          SizeVector N_;
          SizeVector S_;
     public:
          // derived class part of constructor
          FG_fast(bool retape_in, const SizeVector& N)
          : retape_ (retape_in), N_(N)
          {     assert( N_[0] == 0 );
               S_.resize( N_.size() );
               S_[0] = 0;
               for(size_t i = 1; i < N_.size(); i++)
                    S_[i] = S_[i-1] + N_[i];
          }
          // r^k for k = 0, 1, ..., Nz-1 used for measurements
          // r^k for k = Nz              used for initial condition
          // r^k for k = Nz+1, ..., 2*Nz used for trapezoidal approx
          size_t number_functions(void)
          {     return Nz + 1 + Nz; }
          ADVector eval_r(size_t k, const ADVector &u)
          {     count_eval_r();

               size_t j;
               ADVector y(Ny), a(Na);
               // objective function --------------------------------
               if( k < Nz )
               {     // used for measurement with index k+1
                    ADVector r(1); // return value is a scalar
                    // u is [y( s[k+1] ) , a]
                    for(j = 0; j < Ny; j++)
                         y[j] = u[j];
                    for(j = 0; j < Na; j++)
                         a[j] = u[Ny + j];
                    r[0] = eval_H<ADNumber>(k+1, y, a);
                    return r;
               }
               // initial condition ---------------------------------
               if( k == Nz )
               {     ADVector r(Ny), F(Ny);
                    // u is [y(t), a] at t = 0
                    for(j = 0; j < Ny; j++)
                         y[j] = u[j];
                    for(j = 0; j < Na; j++)
                         a[j] = u[Ny + j];
                    F    = eval_F(a);
                    for(j = 0; j < Ny; j++)
                         r[j]   = y[j] - F[j];
                    return  r;
               }
               // trapezoidal approximation -------------------------
               ADVector ym(Ny), G(Ny), Gm(Ny), r(Ny);
               // r^k for k = Nz+1, ... , 2*Nz
               // interval between data samples
               Number T = s[k-Nz] - s[k-Nz-1];
               // integration step size
               Number dt = T / Number( N_[k-Nz] );
               // u = [ y(t[i-1], a) , y(t[i], a) , a ]
               for(j = 0; j < Ny; j++)
               {     ym[j] = u[j];
                    y[j]  = u[Ny + j];
               }
               for(j = 0; j < Na; j++)
                    a[j] = u[2 * Ny + j];
               Gm  = eval_G(ym, a);
               G   = eval_G(y,  a);
               for(j = 0; j < Ny; j++)
                    r[j] = y[j] - ym[j] - (G[j] + Gm[j]) * dt / 2.;
               return r;
          }
          // The operation sequence for eval_r does not depend on u,
          // hence retape = false should work and be faster.
          bool retape(size_t k)
          {     return retape_; }
          // size of the vector u in eval_r
          size_t domain_size(size_t k)
          {     if( k < Nz )
                    return Ny + Na;   // objective function
               if( k == Nz )
                    return Ny + Na;  // initial value constraint
               return 2 * Ny + Na;      // trapezoidal constraints
          }
          // size of the return value from eval_r
          size_t range_size(size_t k)
          {     if( k < Nz )
                    return 1;
               return Ny;
          }
          // number of terms that use this value of k
          size_t number_terms(size_t k)
          {     if( k <= Nz )
                    return 1;  // r^k used once for k <= Nz
               // r^k used N_[k-Nz] times for k > Nz
               return N_[k-Nz];
          }
          void index(size_t k, size_t ell, SizeVector& I, SizeVector& J)
          {     size_t i, j;
               // # of components of x corresponding to values for y
               size_t ny_inx = (S_[Nz] + 1) * Ny;
               // objective function -------------------------------
               if( k < Nz )
               {     // index in fg corresponding to objective
                    I[0] = 0;
                    // u = [ y(t, a) , a ]
                    // The first Ny components of u is y(t) at
                    //     t = s[k+1] = t[S_[k+1]]
                    // x indices corresponding to this value of y
                    for(j = 0; j < Ny; j++)
                         J[j] = S_[k + 1] * Ny + j;
                     // components of x corresponding to a
                    for(j = 0; j < Na; j++)
                         J[Ny + j] = ny_inx + j;
                    return;
               }
               // initial conditions --------------------------------
               if( k == Nz )
                {     // index in fg for initial condition constraint
                    for(j = 0; j < Ny; j++)
                         I[j] = 1 + j;
                    // u = [ y(t, a) , a ] where t = 0
                    // x indices corresponding to this value of y
                    for(j = 0; j < Ny; j++)
                         J[j] = j;
                    // following that, u contains the vector a
                    for(j = 0; j < Na; j++)
                         J[Ny + j] = ny_inx + j;
                    return;
               }
               // trapezoidal approximation -------------------------
               // index of first grid point in this approximation
               i = S_[k - Nz - 1]  + ell;
               // There are Ny difference equations for each time
               // point.  Add one for the objective function, and Ny
               // for the initial value constraints.
               for(j = 0; j < Ny; j++)
                    I[j] = 1 + Ny + i * Ny + j;
               // u = [ y(t, a) , y(t+dt, a) , a ] at t = t[i]
               for(j = 0; j < Ny; j++)
               {     J[j]      = i * Ny  + j; // y^i indices
                    J[Ny + j] = J[j] + Ny;   // y^{i+1} indices
               }
               for(j = 0; j < Na; j++)
                    J[2 * Ny + j] = ny_inx + j; // a indices
          }
     };

}

Input File: cppad_ipopt/example/ode_fast.hpp
12.8.10.2.4: Driver for Running the Ipopt ODE Example
# include "ode_problem.hpp"

namespace { // BEGIN empty namespace -----------------------------------------
using namespace cppad_ipopt;

template <class FG_info>
void ipopt_ode_case(
     bool  retape        ,
     const SizeVector& N ,
     NumberVector&     x )
{     bool ok = true;
     size_t i, j;

     // compute the partial sums of the number of grid points
     assert( N.size() == Nz + 1);
     assert( N[0] == 0 );
     SizeVector S(Nz+1);
     S[0] = 0;
     for(i = 1; i <= Nz; i++)
          S[i] = S[i-1] + N[i];

     // number of components of x corresponding to values for y
     size_t ny_inx = (S[Nz] + 1) * Ny;
     // number of constraints (range dimension of g)
     size_t m      = ny_inx;
     // number of components in x (domain dimension for f and g)
     size_t n      = ny_inx + Na;
     // the argument vector for the optimization is
     // y(t) at t[0] , ... , t[S[Nz]] , followed by a
     NumberVector x_i(n), x_l(n), x_u(n);
     for(j = 0; j < ny_inx; j++)
     {     x_i[j] = 0.;       // initial y(t) for optimization
          x_l[j] = -1.0e19;  // no lower limit
          x_u[j] = +1.0e19;  // no upper limit
     }
     for(j = 0; j < Na; j++)
     {     x_i[ny_inx + j ] = .5;       // initial a for optimization
          x_l[ny_inx + j ] =  -1.e19;  // no lower limit
          x_u[ny_inx + j ] =  +1.e19;  // no upper limit
     }
     // all of the difference equations are constrained to the value zero
     NumberVector g_l(m), g_u(m);
     for(i = 0; i < m; i++)
     {     g_l[i] = 0.;
          g_u[i] = 0.;
     }

     // object defining the objective f(x) and constraints g(x)
     FG_info fg_info(retape, N);

     // create the CppAD Ipopt interface
     cppad_ipopt_solution solution;
     Ipopt::SmartPtr<Ipopt::TNLP> cppad_nlp = new cppad_ipopt_nlp(
          n, m, x_i, x_l, x_u, g_l, g_u, &fg_info, &solution
     );

     // Create an Ipopt application
     using Ipopt::IpoptApplication;
     Ipopt::SmartPtr<IpoptApplication> app = new IpoptApplication();

     // turn off any printing
     app->Options()->SetIntegerValue("print_level", 0);
     app->Options()->SetStringValue("sb", "yes");

     // maximum number of iterations
     app->Options()->SetIntegerValue("max_iter", 30);

     // approximate accuracy in first order necessary conditions;
     // see Mathematical Programming, Volume 106, Number 1,
     // Pages 25-57, Equation (6)
     app->Options()->SetNumericValue("tol", 1e-9);

     // Derivative testing is very slow for large problems
     // so comment this out if you use a large value for N[].
     app->Options()-> SetStringValue( "derivative_test", "second-order");
     app->Options()-> SetNumericValue( "point_perturbation_radius", 0.);

     // Initialize the application and process the options
     Ipopt::ApplicationReturnStatus status = app->Initialize();
     ok    &= status == Ipopt::Solve_Succeeded;

     // Run the application
     status = app->OptimizeTNLP(cppad_nlp);
     ok    &= status == Ipopt::Solve_Succeeded;

     // return the solution
     x.resize( solution.x.size() );
     for(j = 0; j < x.size(); j++)
          x[j] = solution.x[j];

     return;
}
} // END empty namespace ----------------------------------------------------

Input File: cppad_ipopt/example/ode_run.hpp
12.8.10.2.5: Correctness Check for Both Simple and Fast Representations
# include "ode_run.hpp"

bool ode_check(const SizeVector& N, const NumberVector& x)
{     bool ok = true;
     size_t i, j;

     // number of components of x corresponding to values for y
     size_t ny_inx = x.size() - Na;

     // compute the partial sums of the number of grid points
     // and the maximum step size for the trapezoidal approximation
     SizeVector S(Nz+1);
     S[0] = 0;
     Number max_step = 0.;
     for(i = 1; i <= Nz; i++)
     {     S[i] = S[i-1] + N[i];
          max_step = std::max(max_step, Number(s[i] - s[i-1]) / Number(N[i]) );
     }

     // split out return values
     NumberVector a(Na), y_0(Ny), y_1(Ny), y_2(Ny);
     for(j = 0; j < Na; j++)
          a[j] = x[ny_inx+j];
     for(j = 0; j < Ny; j++)
     {     y_0[j] = x[j];
          y_1[j] = x[Ny + j];
          y_2[j] = x[2 * Ny + j];
     }

     // Check some of the optimal a value
     Number rel_tol = max_step * max_step;
     Number abs_tol = rel_tol;
     Number check_a[] = {a0, a1, a2}; // see the y_one function
     for(j = 0; j < Na; j++)
     {
          ok &= CppAD::NearEqual(
               check_a[j], a[j], rel_tol, abs_tol
          );
     }

     // check accuracy of constraint equations
     rel_tol = 1e-9;
     abs_tol = 1e-9;

     // check the initial value constraint
     NumberVector F = eval_F(a);
     for(j = 0; j < Ny; j++)
          ok &= CppAD::NearEqual(F[j], y_0[j], rel_tol, abs_tol);

     // check the first trapezoidal equation
     NumberVector G_0 = eval_G(y_0, a);
     NumberVector G_1 = eval_G(y_1, a);
     Number dt = (s[1] - s[0]) / Number(N[1]);
     Number check;
     for(j = 0; j < Ny; j++)
     {     check = y_1[j] - y_0[j] - (G_1[j]+G_0[j])*dt/2;
          ok &= CppAD::NearEqual( check, 0., rel_tol, abs_tol);
     }
     //
     // check the second trapezoidal equation
     NumberVector G_2 = eval_G(y_2, a);
     if( N[1] == 1 )
          dt = (s[2] - s[1]) / Number(N[2]);
     for(j = 0; j < Ny; j++)
     {     check = y_2[j] - y_1[j] - (G_2[j]+G_1[j])*dt/2;
          ok &= CppAD::NearEqual( check, 0., rel_tol, abs_tol);
     }
     //
     // check the objective function (specialized to this case)
     check = 0.;
     NumberVector y_i(Ny);
     for(size_t k = 0; k < Nz; k++)
     {     for(j = 0; j < Ny; j++)
               y_i[j] =  x[S[k+1] * Ny + j];
          check += eval_H<Number>(k + 1, y_i, a);
     }
     Number obj_value = 0.; // optimal objective value (no noise in simulation)
     ok &= CppAD::NearEqual(check, obj_value, rel_tol, abs_tol);

     // Use this empty namespace function to avoid warning that it is not used
     static size_t ode_check_count = 0;
     ode_check_count++;
     ok &= count_eval_r() == ode_check_count;

     return ok;
}

Input File: cppad_ipopt/example/ode_check.cpp
12.8.10.3: Speed Test for Both Simple and Fast Representations
# include "../example/ode_run.hpp"
# include "../example/ode_simple.hpp"
# include "../example/ode_fast.hpp"
# include <cassert>
# include <cstring>

# if CPPAD_HAS_GETTIMEOFDAY & CPPAD_NO_MICROSOFT
# include <sys/time.h>
# else
# include <ctime>
# endif

namespace {
     double current_second(void)
     {
# if CPPAD_HAS_GETTIMEOFDAY & CPPAD_NO_MICROSOFT
          struct timeval value;
          gettimeofday(&value, 0);
          return double(value.tv_sec) + double(value.tv_usec) * 1e-6;
# else
          return (double) clock() / (double) CLOCKS_PER_SEC;
# endif
     }
}

double ode_speed(const char* name, size_t& count)
{
     // determine simple and retape flags
     bool simple = true, retape = true;
     if( std::strcmp(name, "simple_retape_no") == 0 )
     {     simple = true; retape = false; }
     else if( std::strcmp(name, "simple_retape_yes") == 0 )
     {     simple = true; retape = true; }
     else if( std::strcmp(name, "fast_retape_no") == 0 )
     {     simple = false; retape = false; }
     else if( std::strcmp(name, "fast_retape_yes") == 0 )
     {     simple = false; retape = true; }
     else     assert(false);

     size_t i;
        double s0, s1;
     size_t  c0, c1;

     // solution vector
     NumberVector x;

     // number of time grid intervals between measurement values
     SizeVector N(Nz + 1);
     N[0] = 0;
     for(i = 1; i <= Nz; i++)
     {     N[i] = 10;
          // n   += N[i] * Ny;
     }
     // n += Na;

     s0              = current_second();
     c0              = count_eval_r();
     if( simple )
          ipopt_ode_case<FG_simple>(retape, N, x);
     else     ipopt_ode_case<FG_fast>(retape, N, x);
     s1              = current_second();
     c1              = count_eval_r();
     count           = c1 - c0 - 1;
     return s1 - s0;
}

Input File: cppad_ipopt/speed/ode_speed.cpp
12.8.11: User Defined Atomic AD Functions

12.8.11.a: Deprecated 2013-05-27
Using CPPAD_USER_ATOMIC has been deprecated. Use 4.4.7.2: atomic_base instead.

12.8.11.b: Syntax Function
CPPAD_USER_ATOMIC(afun, Tvector, Base,
     forward, reverse, for_jac_sparse, rev_jac_sparse, rev_hes_sparse
)

12.8.11.b.a: Use Function
afun(id, ax, ay)

12.8.11.b.b: Callback Routines
ok = forward(id, k, n, m, vx, vy, tx, ty)
ok = reverse(id, k, n, m, tx, ty, px, py)
ok = for_jac_sparse(id, n, m, q, r, s)
ok = rev_jac_sparse(id, n, m, q, r, s)
ok = rev_hes_sparse(id, n, m, q, r, s, t, u, v)

12.8.11.b.c: Free Static Memory
user_atomic<Base>::clear()

12.8.11.c: Purpose
In some cases, the user knows how to compute the derivative of a function @[@ y = f(x) \; {\rm where} \; f : B^n \rightarrow B^m @]@ more efficiently than by coding it using AD<Base> 12.4.g.a: atomic operations and letting CppAD do the rest. In this case, CPPAD_USER_ATOMIC can be used to add the user code for @(@ f(x) @)@, and its derivatives, to the set of AD<Base> atomic operations.

Another possible purpose is to reduce the size of the tape; see 12.8.11.t.b: use AD .

12.8.11.d: Partial Implementation
The routines 12.8.11.n: forward , 12.8.11.o: reverse , 12.8.11.p: for_jac_sparse , 12.8.11.q: rev_jac_sparse , and 12.8.11.r: rev_hes_sparse must be defined by the user. The forward routine, for the case k = 0 , must be implemented. Functions with the correct prototype, that just return false, can be used for the other cases (unless they are required by your calculations). For example, you need not implement forward for the case k == 2 until you require forward mode calculation of second derivatives.
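For example, if your application never needs Hessian sparsity patterns, a placeholder with the correct prototype is sufficient. The following sketch (the routine name is hypothetical) always reports failure:

# include <cppad/cppad.hpp>
# include <set>

// placeholder rev_hes_sparse callback: correct prototype,
// returns false if CppAD ever calls it
bool my_rev_hes_sparse(
     size_t                                     id ,
     size_t                                      n ,
     size_t                                      m ,
     size_t                                      q ,
     const CppAD::vector< std::set<size_t> >&    r ,
     const CppAD::vector<bool>&                  s ,
     CppAD::vector<bool>&                        t ,
     const CppAD::vector< std::set<size_t> >&    u ,
     CppAD::vector< std::set<size_t> >&          v )
{     return false; }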

12.8.11.e: CPPAD_USER_ATOMIC
The macro
CPPAD_USER_ATOMIC(afun, Tvector, Base,
     forward, reverse, for_jac_sparse, rev_jac_sparse, rev_hes_sparse
)
defines the AD<Base> routine afun . This macro can be placed within a namespace (not the CppAD namespace) but must be outside of any routine.

12.8.11.e.a: Tvector
The macro argument Tvector must be a 8.9: simple vector template class . It determines the type of vectors used as arguments to the routine afun .

12.8.11.e.b: Base
The macro argument Base specifies the 4.7: base type corresponding to AD<Base> operation sequences. Calling the routine afun will add the operator defined by this macro to an AD<Base> operation sequence.

12.8.11.f: ok
For all routines documented below, the return value ok has prototype
     bool ok
If it is true, the corresponding evaluation succeeded, otherwise it failed.

12.8.11.g: id
For all routines documented below, the argument id has prototype
     size_t id
Its value in all other calls is the same as in the corresponding call to afun . It can be used to store and retrieve extra information about a specific call to afun .

12.8.11.h: k
For all routines documented below, the argument k has prototype
     size_t k
The value k is the order of the Taylor coefficient that we are evaluating (12.8.11.n: forward ) or taking the derivative of (12.8.11.o: reverse ).

12.8.11.i: n
For all routines documented below, the argument n has prototype
     size_t n
It is the size of the vector ax in the corresponding call to afun(id, ax, ay) ; i.e., the dimension of the domain space for @(@ y = f(x) @)@.

12.8.11.j: m
For all routines documented below, the argument m has prototype
     size_t m
It is the size of the vector ay in the corresponding call to afun(id, ax, ay) ; i.e., the dimension of the range space for @(@ y = f(x) @)@.

12.8.11.k: tx
For all routines documented below, the argument tx has prototype
     const CppAD::vector<Base>& tx
and tx.size() >= (k + 1) * n . For @(@ j = 0 , \ldots , n-1 @)@ and @(@ \ell = 0 , \ldots , k @)@, we use the Taylor coefficient notation @[@ \begin{array}{rcl} x_j^\ell & = & tx [ j * ( k + 1 ) + \ell ] \\ X_j (t) & = & x_j^0 + x_j^1 t^1 + \cdots + x_j^k t^k \end{array} @]@ If tx.size() > (k + 1) * n , the other components of tx are not specified and should not be used. Note that superscripts represent an index for @(@ x_j^\ell @)@ and an exponent for @(@ t^\ell @)@. Also note that the Taylor coefficients for @(@ X(t) @)@ correspond to the derivatives of @(@ X(t) @)@ at @(@ t = 0 @)@ in the following way: @[@ x_j^\ell = \frac{1}{ \ell ! } X_j^{(\ell)} (0) @]@
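For example, if @(@ n = 2 @)@ and @(@ k = 1 @)@, then @(@ tx = ( x_0^0 , x_0^1 , x_1^0 , x_1^1 ) @)@ and @(@ X_0 (t) = x_0^0 + x_0^1 * t @)@.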

12.8.11.l: ty
In calls to 12.8.11.n: forward , the argument ty has prototype
     CppAD::vector<Base>& ty
while in calls to 12.8.11.o: reverse it has prototype
     const CppAD::vector<Base>& ty
For all calls, ty.size() >= (k + 1) * m . For @(@ i = 0 , \ldots , m-1 @)@ and @(@ \ell = 0 , \ldots , k @)@, we use the Taylor coefficient notation @[@ \begin{array}{rcl} y_i^\ell & = & ty [ i * ( k + 1 ) + \ell ] \\ Y_i (t) & = & y_i^0 + y_i^1 t^1 + \cdots + y_i^k t^k + o ( t^k ) \end{array} @]@ where @(@ o( t^k ) / t^k \rightarrow 0 @)@ as @(@ t \rightarrow 0 @)@. If ty.size() > (k + 1) * m , the other components of ty are not specified and should not be used. Note that superscripts represent an index for @(@ y_i^\ell @)@ and an exponent for @(@ t^\ell @)@. Also note that the Taylor coefficients for @(@ Y(t) @)@ correspond to the derivatives of @(@ Y(t) @)@ at @(@ t = 0 @)@ in the following way: @[@ y_i^\ell = \frac{1}{ \ell ! } Y_i^{(\ell)} (0) @]@

12.8.11.l.a: forward
In the case of forward , for @(@ i = 0 , \ldots , m-1 @)@, @(@ ty[ i *( k + 1) + k ] @)@ is an output and all the other components of ty are inputs.

12.8.11.l.b: reverse
In the case of reverse , all the components of ty are inputs.

12.8.11.m: afun
The macro argument afun is the name of the AD function corresponding to this atomic operation (as it is used in the source code). CppAD uses the other functions, where the arguments are vectors with elements of type Base , to implement the function
     afun(id, ax, ay)
where the arguments are vectors with elements of type AD<Base> .

12.8.11.m.a: ax
The afun argument ax has prototype
     const Tvector< AD<Base> >& ax
It is the argument vector @(@ x \in B^n @)@ at which the AD<Base> version of @(@ y = f(x) @)@ is to be evaluated. The dimension of the domain space for @(@ y = f (x) @)@ is specified by 12.8.11.i: n = ax.size() , which must be greater than zero.

12.8.11.m.b: ay
The afun result ay has prototype
     Tvector< AD<Base> >& ay
The input values of its elements are not specified (must not matter). Upon return, it is the AD<Base> version of the result vector @(@ y = f(x) @)@. The dimension of the range space for @(@ y = f (x) @)@ is specified by 12.8.11.j: m = ay.size() , which must be greater than zero.

12.8.11.m.c: Parallel Mode
The first call to
     afun(id, ax, ay)
must not be in 8.23.4: parallel mode. In addition, the 12.8.11.s: old_atomic clear routine cannot be called while in parallel mode.

12.8.11.n: forward
The macro argument forward is a user defined function
     ok = forward(id, k, n, m, vx, vy, tx, ty)
that computes results during a 5.3: forward mode sweep. For this call, we are given the Taylor coefficients in tx for orders zero through k , and the Taylor coefficients in ty for orders less than k . The forward routine computes the order k Taylor coefficients for @(@ y @)@ using the definition @(@ Y(t) = f[ X(t) ] @)@. For example, for @(@ i = 0 , \ldots , m-1 @)@, @[@ \begin{array}{rcl} y_i^0 & = & Y_i (0) = f_i ( x^0 ) \\ y_i^1 & = & Y_i^{(1)} ( 0 ) = f_i^{(1)} ( x^0 ) X^{(1)} ( 0 ) = f_i^{(1)} ( x^0 ) x^1 \\ y_i^2 & = & \frac{1}{2 !} Y_i^{(2)} (0) \\ & = & \frac{1}{2} X^{(1)} (0)^\R{T} f_i^{(2)} ( x^0 ) X^{(1)} ( 0 ) + \frac{1}{2} f_i^{(1)} ( x^0 ) X^{(2)} ( 0 ) \\ & = & \frac{1}{2} (x^1)^\R{T} f_i^{(2)} ( x^0 ) x^1 + f_i^{(1)} ( x^0 ) x^2 \end{array} @]@ Then, for @(@ i = 0 , \ldots , m-1 @)@, it sets @[@ ty [ i * (k + 1) + k ] = y_i^k @]@ The other components of ty must be left unchanged.

12.8.11.n.a: Usage
This routine is used, with vx.size() > 0 and k == 0 , by calls to afun . It is used, with vx.size() = 0 and k equal to the order of the derivative being computed, by calls to 5.3.4: forward .

12.8.11.n.b: vx
The forward argument vx has prototype
     const CppAD::vector<bool>& vx
The case vx.size() > 0 occurs once for each call to afun , during the call, and before any of the other callbacks corresponding to that call. Hence such a call can be used to cache information attached to the corresponding id (such as the elements of vx ). If vx.size() > 0 then k == 0 , vx.size() >= n , and for @(@ j = 0 , \ldots , n-1 @)@, vx[j] is true if and only if ax[j] is a 12.4.m: variable .

If vx.size() == 0 , then vy.size() == 0 and neither of these vectors should be used.

12.8.11.n.c: vy
The forward argument vy has prototype
     CppAD::vector<bool>& vy
If vy.size() == 0 , it should not be used. Otherwise, k == 0 and vy.size() >= m . The input values of the elements of vy are not specified (must not matter). Upon return, for @(@ i = 0 , \ldots , m-1 @)@, vy[i] is true if and only if ay[i] is a variable. (CppAD uses vy to reduce the necessary computations.)

12.8.11.o: reverse
The macro argument reverse is a user defined function
     ok = reverse(id, k, n, m, tx, ty, px, py)
that computes results during a 5.4: reverse mode sweep. The input values of the vectors tx and ty contain Taylor coefficients, up to order k , for @(@ X(t) @)@ and @(@ Y(t) @)@ respectively. We use the @(@ \{ x_j^\ell \} @)@ and @(@ \{ y_i^\ell \} @)@ to denote these Taylor coefficients where the implicit range indices are @(@ i = 0 , \ldots , m-1 @)@, @(@ j = 0 , \ldots , n-1 @)@, @(@ \ell = 0 , \ldots , k @)@. Using the calculations done by 12.8.11.n: forward , the Taylor coefficients @(@ \{ y_i^\ell \} @)@ are a function of the Taylor coefficients for @(@ \{ x_j^\ell \} @)@; i.e., given @(@ y = f(x) @)@ we define the function @(@ F : B^{n \times (k+1)} \rightarrow B^{m \times (k+1)} @)@ by @[@ y_i^\ell = F_i^\ell ( \{ x_j^\ell \} ) @]@ We use @(@ G : B^{m \times (k+1)} \rightarrow B @)@ to denote an arbitrary scalar valued function of the Taylor coefficients for @(@ Y(t) @)@ and write @(@ z = G( \{ y_i^\ell \} ) @)@. The reverse routine is given the derivative of @(@ z @)@ with respect to @(@ \{ y_i^\ell \} @)@ and computes its derivative with respect to @(@ \{ x_j^\ell \} @)@.

12.8.11.o.a: Usage
This routine is used, with k + 1 equal to the order of the derivative being calculated, by calls to 5.4.3: reverse .

12.8.11.o.b: py
The reverse argument py has prototype
     const CppAD::vector<Base>& py
and py.size() >= (k + 1) * m . For @(@ i = 0 , \ldots , m-1 @)@ and @(@ \ell = 0 , \ldots , k @)@, @[@ py[ i * (k + 1 ) + \ell ] = \partial G / \partial y_i^\ell @]@ If py.size() > (k + 1) * m , the other components of py are not specified and should not be used.

12.8.11.o.c: px
We define the function @[@ H ( \{ x_j^\ell \} ) = G[ F( \{ x_j^\ell \} ) ] @]@ The reverse argument px has prototype
     CppAD::vector<Base>& px
and px.size() >= (k + 1) * n . The input values of the elements of px are not specified (must not matter). Upon return, for @(@ j = 0 , \ldots , n-1 @)@ and @(@ p = 0 , \ldots , k @)@, @[@ \begin{array}{rcl} px [ j * (k + 1) + p ] & = & \partial H / \partial x_j^p \\ & = & ( \partial G / \partial \{ y_i^\ell \} ) ( \partial \{ y_i^\ell \} / \partial x_j^p ) \\ & = & \sum_{i=0}^{m-1} \sum_{\ell=0}^k ( \partial G / \partial y_i^\ell ) ( \partial y_i^\ell / \partial x_j^p ) \\ & = & \sum_{i=0}^{m-1} \sum_{\ell=p}^k py[ i * (k + 1 ) + \ell ] ( \partial F_i^\ell / \partial x_j^p ) \end{array} @]@ Note that we have used the fact that for @(@ \ell < p @)@, @(@ \partial F_i^\ell / \partial x_j^p = 0 @)@. If px.size() > (k + 1) * n , the other components of px are not specified and should not be used.

12.8.11.p: for_jac_sparse
The macro argument for_jac_sparse is a user defined function
     ok = for_jac_sparse(id, n, m, q, r, s)
that is used to compute results during a forward Jacobian sparsity sweep. For a fixed @(@ n \times q @)@ matrix @(@ R @)@, the Jacobian of @(@ f( x + R * u) @)@ with respect to @(@ u \in B^q @)@ is @[@ S(x) = f^{(1)} (x) * R @]@ Given a 12.4.j: sparsity pattern for @(@ R @)@, for_jac_sparse computes a sparsity pattern for @(@ S(x) @)@.

12.8.11.p.a: Usage
This routine is used by calls to 5.5.2: ForSparseJac .

12.8.11.p.b: q
The for_jac_sparse argument q has prototype
     size_t q
It specifies the number of columns in @(@ R \in B^{n \times q} @)@ and the Jacobian @(@ S(x) \in B^{m \times q} @)@.

12.8.11.p.c: r
The for_jac_sparse argument r has prototype
     const CppAD::vector< std::set<size_t> >& r
and r.size() >= n . For @(@ j = 0 , \ldots , n-1 @)@, all the elements of r[j] are between zero and q-1 inclusive. This specifies a sparsity pattern for the matrix @(@ R @)@.

12.8.11.p.d: s
The for_jac_sparse return value s has prototype
     CppAD::vector< std::set<size_t> >& s
and s.size() >= m . The input values of its sets are not specified (must not matter). Upon return for @(@ i = 0 , \ldots , m-1 @)@, all the elements of s[i] are between zero and q-1 inclusive. This represents a sparsity pattern for the matrix @(@ S(x) @)@.

12.8.11.q: rev_jac_sparse
The macro argument rev_jac_sparse is a user defined function
     ok = rev_jac_sparse(id, n, m, q, r, s)
that is used to compute results during a reverse Jacobian sparsity sweep. For a fixed @(@ q \times m @)@ matrix @(@ S @)@, the Jacobian of @(@ S * f( x ) @)@ with respect to @(@ x \in B^n @)@ is @[@ R(x) = S * f^{(1)} (x) @]@ Given a 12.4.j: sparsity pattern for @(@ S @)@, rev_jac_sparse computes a sparsity pattern for @(@ R(x) @)@.

12.8.11.q.a: Usage
This routine is used by calls to 5.5.4: RevSparseJac and to 5.7: optimize .

12.8.11.q.b: q
The rev_jac_sparse argument q has prototype
     size_t q
It specifies the number of rows in @(@ S \in B^{q \times m} @)@ and the Jacobian @(@ R(x) \in B^{q \times n} @)@.

12.8.11.q.c: s
The rev_jac_sparse argument s has prototype
     const CppAD::vector< std::set<size_t> >& s
and s.size() >= m . For @(@ i = 0 , \ldots , m-1 @)@, all the elements of s[i] are between zero and q-1 inclusive. This specifies a sparsity pattern for the matrix @(@ S^\R{T} @)@.

12.8.11.q.d: r
The rev_jac_sparse return value r has prototype
     CppAD::vector< std::set<size_t> >& r
and r.size() >= n . The input values of its sets are not specified (must not matter). Upon return for @(@ j = 0 , \ldots , n-1 @)@, all the elements of r[j] are between zero and q-1 inclusive. This represents a sparsity pattern for the matrix @(@ R(x)^\R{T} @)@.

12.8.11.r: rev_hes_sparse
The macro argument rev_hes_sparse is a user defined function
     ok = rev_hes_sparse(id, n, m, q, r, s, t, u, v)
There is an unspecified scalar valued function @(@ g : B^m \rightarrow B @)@. Given a sparsity pattern for @(@ R @)@ and information about the function @(@ z = g(y) @)@, this routine computes the sparsity pattern for @[@ V(x) = (g \circ f)^{(2)}( x ) R @]@

12.8.11.r.a: Usage
This routine is used by calls to 5.5.6: RevSparseHes .

12.8.11.r.b: q
The rev_hes_sparse argument q has prototype
     size_t q
It specifies the number of columns in the sparsity patterns.

12.8.11.r.c: r
The rev_hes_sparse argument r has prototype
     const CppAD::vector< std::set<size_t> >& r
and r.size() >= n . For @(@ j = 0 , \ldots , n-1 @)@, all the elements of r[j] are between zero and q-1 inclusive. This specifies a sparsity pattern for the matrix @(@ R \in B^{n \times q} @)@.

12.8.11.r.d: s
The rev_hes_sparse argument s has prototype
     const CppAD::vector<bool>& s
and s.size() >= m . This specifies a sparsity pattern for the matrix @(@ S(x) = g^{(1)} (y) \in B^{1 \times m} @)@.

12.8.11.r.e: t
The rev_hes_sparse argument t has prototype
     CppAD::vector<bool>& t
and t.size() >= n . The input values of its elements are not specified (must not matter). Upon return it represents a sparsity pattern for the matrix @(@ T(x) \in B^{1 \times n} @)@ defined by @[@ T(x) = (g \circ f)^{(1)} (x) = S(x) * f^{(1)} (x) @]@

12.8.11.r.f: u
The rev_hes_sparse argument u has prototype
     const CppAD::vector< std::set<size_t> >& u
and u.size() >= m . For @(@ i = 0 , \ldots , m-1 @)@, all the elements of u[i] are between zero and q-1 inclusive. This specifies a sparsity pattern for the matrix @(@ U(x) \in B^{m \times q} @)@ defined by @[@ \begin{array}{rcl} U(x) & = & \partial_u \{ \partial_y g[ y + f^{(1)} (x) R u ] \}_{u=0} \\ & = & \partial_u \{ g^{(1)} [ y + f^{(1)} (x) R u ] \}_{u=0} \\ & = & g^{(2)} (y) f^{(1)} (x) R \end{array} @]@

12.8.11.r.g: v
The rev_hes_sparse argument v has prototype
     CppAD::vector< std::set<size_t> >& v
and v.size() >= n . The input values of its elements are not specified (must not matter). Upon return, for @(@ j = 0, \ldots , n-1 @)@, all the elements of v[j] are between zero and q-1 inclusive. This represents a sparsity pattern for the matrix @(@ V(x) \in B^{n \times q} @)@ defined by @[@ \begin{array}{rcl} V(x) & = & \partial_u [ \partial_x (g \circ f) ( x + R u ) ]_{u=0} \\ & = & \partial_u [ (g \circ f)^{(1)}( x + R u ) ]_{u=0} \\ & = & (g \circ f)^{(2)}( x ) R \\ & = & f^{(1)} (x)^\R{T} g^{(2)} ( y ) f^{(1)} (x) R + \sum_{i=1}^m [ g^{(1)} (y) ]_i \; f_i^{(2)} (x) R \\ & = & f^{(1)} (x)^\R{T} U(x) + \sum_{i=1}^m S(x)_i \; f_i^{(2)} (x) R \end{array} @]@

12.8.11.s: clear
User atomic functions hold onto static work space in order to increase speed by avoiding system memory allocation calls. The function call
     user_atomic<Base>::clear()
makes the work space 8.23.11: available for other uses by the same thread. This should be called when you are done using the user atomic functions for a specific value of Base .

12.8.11.s.a: Restriction
The user atomic clear routine cannot be called while in 8.23.4: parallel execution mode.

12.8.11.t: Example

12.8.11.t.a: Simple
The file 12.8.11.1: old_reciprocal.cpp contains the simplest example and test of a user atomic operation.

12.8.11.t.b: Use AD
The examples 12.8.11.2: old_usead_1.cpp and 12.8.11.3: old_usead_2.cpp use AD to compute the derivatives inside a user defined atomic function. This may have the advantage of reducing the size of the tape, because a repeated section of code would only be taped once.

12.8.11.t.c: Tangent Function
The file 12.8.11.4: old_tan.cpp contains an example and test implementation of the tangent function as a user atomic operation.

12.8.11.t.d: Matrix Multiplication
The file 12.8.11.5: old_mat_mul.cpp contains an example and test implementation of matrix multiplication as a user atomic operation.
Input File: cppad/core/old_atomic.hpp
12.8.11.1: Old Atomic Operation Reciprocal: Example and Test

12.8.11.1.a: Deprecated 2013-05-27
This example has been deprecated; see 4.4.7.2.13: atomic_reciprocal.cpp instead.

12.8.11.1.b: Theory
The example below defines the user atomic function @(@ f : \B{R}^n \rightarrow \B{R}^m @)@ where @(@ n = 1 @)@, @(@ m = 1 @)@, and @(@ f(x) = 1 / x @)@.
# include <cppad/cppad.hpp>

namespace { // Begin empty namespace
     using CppAD::vector;
     // ----------------------------------------------------------------------
     // a utility to compute the union of two sets.
     using CppAD::set_union;

     // ----------------------------------------------------------------------
     // forward mode routine called by CppAD
     bool reciprocal_forward(
          size_t                   id ,
          size_t                    k ,
          size_t                    n ,
          size_t                    m ,
          const vector<bool>&      vx ,
          vector<bool>&            vy ,
          const vector<double>&    tx ,
          vector<double>&          ty
     )
     {     assert( id == 0 );
          assert( n == 1 );
          assert( m == 1 );
          assert( k == 0 || vx.size() == 0 );
          bool ok = false;
          double f, fp, fpp;

          // Must always define the case k = 0.
          // Do not need case k if not using f.Forward(q, xp) for q >= k.
          switch(k)
          {     case 0:
               // this case must be implemented
               if( vx.size() > 0 )
                    vy[0] = vx[0];
               // y^0 = f( x^0 ) = 1 / x^0
               ty[0] = 1. / tx[0];
               ok    = true;
               break;

               case 1:
               // needed if first order forward mode is used
               assert( vx.size() == 0 );
               // y^1 = f'( x^0 ) x^1
               f     = ty[0];
               fp    = - f / tx[0];
               ty[1] = fp * tx[1];
               ok    = true;
               break;

               case 2:
               // needed if second order forward mode is used
               assert( vx.size() == 0 );
               // Y''(t) = X'(t)^\R{T} f''[X(t)] X'(t) + f'[X(t)] X''(t)
               // 2 y^2  = x^1 * f''( x^0 ) x^1 + 2 f'( x^0 ) x^2
               f     = ty[0];
               fp    = - f / tx[0];
               fpp   = - 2.0 * fp / tx[0];
               ty[2] = tx[1] * fpp * tx[1] / 2.0 + fp * tx[2];
               ok    = true;
               break;
          }
          return ok;
     }
     // ----------------------------------------------------------------------
     // reverse mode routine called by CppAD
     bool reciprocal_reverse(
          size_t                   id ,
          size_t                    k ,
          size_t                    n ,
          size_t                    m ,
          const vector<double>&    tx ,
          const vector<double>&    ty ,
          vector<double>&          px ,
          const vector<double>&    py
     )
     {     // Do not need case k if not using f.Reverse(k+1, w).
          assert( id == 0 );
          assert( n == 1 );
          assert( m == 1 );
          bool ok = false;

          double f, fp, fpp, fppp;
          switch(k)
          {     case 0:
               // needed if first order reverse mode is used
               // reverse: F^0 ( tx ) = y^0 = f( x^0 )
               f     = ty[0];
               fp    = - f / tx[0];
               px[0] = py[0] * fp;
               ok    = true;
               break;

               case 1:
               // needed if second order reverse mode is used
               // reverse: F^1 ( tx ) = y^1 = f'( x^0 ) x^1
               f      = ty[0];
               fp     = - f / tx[0];
               fpp    = - 2.0 * fp / tx[0];
               px[1]  = py[1] * fp;
               px[0]  = py[1] * fpp * tx[1];
               // reverse: F^0 ( tx ) = y^0 = f( x^0 );
               px[0] += py[0] * fp;

               ok     = true;
               break;

               case 2:
               // needed if third order reverse mode is used
               // reverse: F^2 ( tx ) = y^2 =
               //            = x^1 * f''( x^0 ) x^1 / 2 + f'( x^0 ) x^2
               f      = ty[0];
               fp     = - f / tx[0];
               fpp    = - 2.0 * fp / tx[0];
               fppp   = - 3.0 * fpp / tx[0];
               px[2]  = py[2] * fp;
               px[1]  = py[2] * fpp * tx[1];
               px[0]  = py[2] * (tx[1] * fppp * tx[1] / 2.0 + fpp * tx[2]);
               // reverse: F^1 ( tx ) = y^1 = f'( x^0 ) x^1
               px[1] += py[1] * fp;
               px[0] += py[1] * fpp * tx[1];
               // reverse: F^0 ( tx ) = y^0 = f( x^0 );
               px[0] += py[0] * fp;

               ok = true;
               break;
          }
          return ok;
     }
     // ----------------------------------------------------------------------
     // forward Jacobian sparsity routine called by CppAD
     bool reciprocal_for_jac_sparse(
          size_t                               id ,
          size_t                                n ,
          size_t                                m ,
          size_t                                p ,
          const vector< std::set<size_t> >&     r ,
          vector< std::set<size_t> >&           s )
     {     // Can just return false if not using f.ForSparseJac
          assert( id == 0 );
          assert( n == 1 );
          assert( m == 1 );

          // sparsity for S(x) = f'(x) * R is same as sparsity for R
          s[0] = r[0];

          return true;
     }
     // ----------------------------------------------------------------------
     // reverse Jacobian sparsity routine called by CppAD
     bool reciprocal_rev_jac_sparse(
          size_t                               id ,
          size_t                                n ,
          size_t                                m ,
          size_t                                p ,
          vector< std::set<size_t> >&           r ,
          const vector< std::set<size_t> >&     s )
     {     // Can just return false if not using RevSparseJac.
          assert( id == 0 );
          assert( n == 1 );
          assert( m == 1 );

          // sparsity for R(x) = S * f'(x) is same as sparsity for S
          for(size_t q = 0; q < p; q++)
               r[q] = s[q];

          return true;
     }
     // ----------------------------------------------------------------------
     // reverse Hessian sparsity routine called by CppAD
     bool reciprocal_rev_hes_sparse(
          size_t                               id ,
          size_t                                n ,
          size_t                                m ,
          size_t                                p ,
          const vector< std::set<size_t> >&     r ,
          const vector<bool>&                   s ,
                vector<bool>&                   t ,
          const vector< std::set<size_t> >&     u ,
                vector< std::set<size_t> >&     v )
     {     // Can just return false if not using RevSparseHes.
          assert( id == 0 );
          assert( n == 1 );
          assert( m == 1 );

          // sparsity for T(x) = S(x) * f'(x) is same as sparsity for S
          t[0] = s[0];

          // V(x) = [ f'(x)^T * g''(y) * f'(x) + g'(y) * f''(x) ] * R
          // U(x) = g''(y) * f'(x) * R
          // S(x) = g'(y)

          // back propagate the sparsity for U because derivative of
          // reciprocal may be non-zero
          v[0] = u[0];

          // convert forward Jacobian sparsity to Hessian sparsity
          // because second derivative of reciprocal may be non-zero
          if( s[0] )
               v[0] = set_union(v[0], r[0] );


          return true;
     }
     // ---------------------------------------------------------------------
     // Declare the AD<double> routine reciprocal(id, ax, ay)
     CPPAD_USER_ATOMIC(
          reciprocal                 ,
          CppAD::vector              ,
          double                     ,
          reciprocal_forward         ,
          reciprocal_reverse         ,
          reciprocal_for_jac_sparse  ,
          reciprocal_rev_jac_sparse  ,
          reciprocal_rev_hes_sparse
     )
} // End empty namespace

bool old_reciprocal(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

     // --------------------------------------------------------------------
     // Create the function f(x)
     //
     // domain space vector
     size_t n  = 1;
     double  x0 = 0.5;
     vector< AD<double> > ax(n);
     ax[0]     = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     vector< AD<double> > ay(m);

     // call user function and store reciprocal(x) in au[0]
     vector< AD<double> > au(m);
     size_t id = 0;           // not used
     reciprocal(id, ax, au);     // u = 1 / x

     // call user function and store reciprocal(u) in ay[0]
     reciprocal(id, au, ay);     // y = 1 / u = x

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f;
     f.Dependent (ax, ay);  // f(x) = x

     // --------------------------------------------------------------------
     // Check forward mode results
     //
     // check function value
     double check = x0;
     ok &= NearEqual( Value(ay[0]) , check,  eps, eps);

     // check zero order forward mode
     size_t q;
     vector<double> x_q(n), y_q(m);
     q      = 0;
     x_q[0] = x0;
     y_q    = f.Forward(q, x_q);
     ok &= NearEqual(y_q[0] , check,  eps, eps);

     // check first order forward mode
     q      = 1;
     x_q[0] = 1;
     y_q    = f.Forward(q, x_q);
     check  = 1.;
     ok &= NearEqual(y_q[0] , check,  eps, eps);

     // check second order forward mode
     q      = 2;
     x_q[0] = 0;
     y_q    = f.Forward(q, x_q);
     check  = 0.;
     ok &= NearEqual(y_q[0] , check,  eps, eps);

     // --------------------------------------------------------------------
     // Check reverse mode results
     //
     // third order reverse mode
     q     = 3;
     vector<double> w(m), dw(n * q);
     w[0]  = 1.;
     dw    = f.Reverse(q, w);
     check = 1.;
     ok &= NearEqual(dw[0] , check,  eps, eps);
     check = 0.;
     ok &= NearEqual(dw[1] , check,  eps, eps);
     ok &= NearEqual(dw[2] , check,  eps, eps);

     // --------------------------------------------------------------------
     // forward mode sparsity pattern
     size_t p = n;
     CppAD::vectorBool r1(n * p), s1(m * p);
     r1[0] = true;          // compute sparsity pattern for x[0]
     s1    = f.ForSparseJac(p, r1);
     ok  &= s1[0] == true;  // f[0] depends on x[0]

     // --------------------------------------------------------------------
     // reverse mode sparsity pattern
     q = m;
     CppAD::vectorBool s2(q * m), r2(q * n);
     s2[0] = true;          // compute sparsity pattern for f[0]
     r2    = f.RevSparseJac(q, s2);
     ok  &= r2[0] == true;  // f[0] depends on x[0]

     // --------------------------------------------------------------------
     // Hessian sparsity (using previous ForSparseJac call)
     CppAD::vectorBool s3(m), h(p * n);
     s3[0] = true;        // compute sparsity pattern for f[0]
     h     = f.RevSparseHes(p, s3);
     ok  &= h[0] == true; // second partial of f[0] w.r.t. x[0] may be non-zero

     // -----------------------------------------------------------------
     // Free all temporary work space associated with old_atomic objects.
     // (If there are future calls to user atomic functions, they will
     // create new temporary work space.)
     CppAD::user_atomic<double>::clear();

     return ok;
}

Input File: example/deprecated/old_reciprocal.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.8.11.2: Using AD to Compute Atomic Function Derivatives

12.8.11.2.a: Deprecated 2013-05-27
This example has been deprecated because it is easier to use the 4.4.7.1: checkpoint class instead.
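For comparison, the following is a minimal sketch of the same reciprocal atomic operation written with the checkpoint class; the names checkpoint_reciprocal, reciprocal_algo, and atom_reciprocal are illustrative and not part of this package:
# include <cppad/cppad.hpp>

namespace { // Begin empty namespace
     typedef CppAD::vector< CppAD::AD<double> > ADVector;
     // inner algorithm that the checkpoint object tapes: y = 1 / x
     void reciprocal_algo(const ADVector& ax, ADVector& ay)
     {     ay[0] = 1.0 / ax[0]; }
} // End empty namespace

bool checkpoint_reciprocal(void)
{     bool ok = true;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

     // tape reciprocal_algo once; forward, reverse, and sparsity
     // calculations are computed from this inner tape
     ADVector ax(1), au(1), ay(1);
     ax[0] = 1.0;
     CppAD::checkpoint<double> atom_reciprocal(
          "atom_reciprocal", reciprocal_algo, ax, ay
     );

     // record f(x) = 1 / (1 / x) using the atomic operation twice
     ax[0] = 0.5;
     CppAD::Independent(ax);
     atom_reciprocal(ax, au);     // u = 1 / x
     atom_reciprocal(au, ay);     // y = 1 / u = x
     CppAD::ADFun<double> f(ax, ay);

     // check zero order forward mode
     CppAD::vector<double> x(1), y(1);
     x[0] = 0.5;
     y    = f.Forward(0, x);
     ok  &= CppAD::NearEqual(y[0], x[0], eps, eps);
     return ok;
}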

12.8.11.2.b: Purpose
Consider the case where an inner function is used repeatedly in the definition of an outer function. In this case, recording the inner function as a single atomic operation may reduce the number of variables 5.1.5.g: size_var , and hence the required memory.

12.8.11.2.c: Simple Case
This example is the same as 12.8.11.1: old_reciprocal.cpp , except that it uses AD to compute the derivatives needed by an atomic function. This is a simple example of an inner function, and hence not really useful for the purpose above; see 12.8.11.3: old_usead_2.cpp for a more complete example.
# include <cppad/cppad.hpp>

namespace { // Begin empty namespace
     using CppAD::AD;
     using CppAD::ADFun;
     using CppAD::vector;

     // ----------------------------------------------------------------------
     // function that computes reciprocal
     ADFun<double>* r_ptr_;
     void create_r(void)
     {     vector< AD<double> > ax(1), ay(1);
          ax[0]  = 1;
          CppAD::Independent(ax);
          ay[0]  = 1.0 / ax[0];
          r_ptr_ = new ADFun<double>(ax, ay);
     }
     void destroy_r(void)
     {     delete r_ptr_;
          r_ptr_ = CPPAD_NULL;
     }

     // ----------------------------------------------------------------------
     // forward mode routine called by CppAD
     bool reciprocal_forward(
          size_t                   id ,
          size_t                    k ,
          size_t                    n ,
          size_t                    m ,
          const vector<bool>&      vx ,
          vector<bool>&            vy ,
          const vector<double>&    tx ,
          vector<double>&          ty
     )
     {     assert( id == 0 );
          assert( n == 1 );
          assert( m == 1 );
          assert( k == 0 || vx.size() == 0 );
          bool ok = true;
          vector<double> x_q(1), y_q(1);

          // check for special case
          if( vx.size() > 0 )
               vy[0] = vx[0];

          // make sure r_ has proper lower order Taylor coefficients stored
          // then compute ty[k]
          for(size_t q = 0; q <= k; q++)
          {     x_q[0] = tx[q];
               y_q    = r_ptr_->Forward(q, x_q);
               if( q == k )
                    ty[k] = y_q[0];
               assert( q == k || ty[q] == y_q[0] );
          }
          return ok;
     }
     // ----------------------------------------------------------------------
     // reverse mode routine called by CppAD
     bool reciprocal_reverse(
          size_t                   id ,
          size_t                    k ,
          size_t                    n ,
          size_t                    m ,
          const vector<double>&    tx ,
          const vector<double>&    ty ,
          vector<double>&          px ,
          const vector<double>&    py
     )
     {     assert( id == 0 );
          assert( n == 1 );
          assert( m == 1 );
          bool ok = true;
          vector<double> x_q(1), w(k+1), dw(k+1);

          // make sure r_ has proper forward mode coefficients
          size_t q;
          for(q = 0; q <= k; q++)
          {     x_q[0] = tx[q];
# ifdef NDEBUG
               r_ptr_->Forward(q, x_q);
# else
               vector<double> y_q(1);
               y_q    = r_ptr_->Forward(q, x_q);
               assert( ty[q] == y_q[0] );
# endif
          }
          for(q = 0; q <=k; q++)
               w[q] = py[q];
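          // one reverse sweep of order k+1 on the inner tape yields
          // dw[q] = partial of sum_q' py[q'] * y^{(q')} w.r.t. x^{(q)}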
          dw = r_ptr_->Reverse(k+1, w);
          for(q = 0; q <=k; q++)
               px[q] = dw[q];

          return ok;
     }
     // ----------------------------------------------------------------------
     // forward Jacobian sparsity routine called by CppAD
     bool reciprocal_for_jac_sparse(
          size_t                               id ,
          size_t                                n ,
          size_t                                m ,
          size_t                                p ,
          const vector< std::set<size_t> >&     r ,
          vector< std::set<size_t> >&           s )
     {     assert( id == 0 );
          assert( n == 1 );
          assert( m == 1 );
          bool ok = true;

          vector< std::set<size_t> > R(1), S(1);
          R[0] = r[0];
          S = r_ptr_->ForSparseJac(p, R);
          s[0] = S[0];

          return ok;
     }
     // ----------------------------------------------------------------------
     // reverse Jacobian sparsity routine called by CppAD
     bool reciprocal_rev_jac_sparse(
          size_t                               id ,
          size_t                                n ,
          size_t                                m ,
          size_t                                p ,
          vector< std::set<size_t> >&           r ,
          const vector< std::set<size_t> >&     s )
     {
          assert( id == 0 );
          assert( n == 1 );
          assert( m == 1 );
          bool ok = true;

          vector< std::set<size_t> > R(p), S(p);
          size_t q;
          for(q = 0; q < p; q++)
               S[q] = s[q];
          R = r_ptr_->RevSparseJac(p, S);
          for(q = 0; q < p; q++)
               r[q] = R[q];

          return ok;
     }
     // ----------------------------------------------------------------------
     // reverse Hessian sparsity routine called by CppAD
     bool reciprocal_rev_hes_sparse(
          size_t                               id ,
          size_t                                n ,
          size_t                                m ,
          size_t                                p ,
          const vector< std::set<size_t> >&     r ,
          const vector<bool>&                   s ,
          vector<bool>&                         t ,
          const vector< std::set<size_t> >&     u ,
          vector< std::set<size_t> >&           v )
     {     // Can just return false if not using RevSparseHes.
          assert( id == 0 );
          assert( n == 1 );
          assert( m == 1 );
          bool ok = true;

          // compute sparsity pattern for T(x) = S(x) * f'(x)
          vector<bool> T(1), S(1);
          S[0]   = s[0];
          T      = r_ptr_->RevSparseJac(1, S);
          t[0]   = T[0];

          // compute sparsity pattern for A(x) = U(x)^T * f'(x)
          vector<bool> Ut(p), A(p);
          size_t q;
          for(q = 0; q < p; q++)
               Ut[q] = false;
          std::set<size_t>::iterator itr;
          for(itr = u[0].begin(); itr != u[0].end(); itr++)
               Ut[*itr] = true;
          A = r_ptr_->RevSparseJac(p, Ut);

          // compute sparsity pattern for H(x) = R^T * (S * F)''(x)
          vector<bool> H(p), R(p);     // n == 1, so R has n * p = p entries
          for(q = 0; q < p; q++)
               R[q] = false;
          for(itr = r[0].begin(); itr != r[0].end(); itr++)
               R[*itr] = true;
          r_ptr_->ForSparseJac(p, R);
          H = r_ptr_->RevSparseHes(p, S);

          // compute sparsity pattern for V(x) = A(x)^T + H(x)^T
          v[0].clear();
          for(q = 0; q < p; q++)
               if( A[q] | H[q] )
                    v[0].insert(q);

          return ok;
     }
     // ---------------------------------------------------------------------
     // Declare the AD<double> routine reciprocal(id, ax, ay)
     CPPAD_USER_ATOMIC(
          reciprocal                 ,
          CppAD::vector              ,
          double                     ,
          reciprocal_forward         ,
          reciprocal_reverse         ,
          reciprocal_for_jac_sparse  ,
          reciprocal_rev_jac_sparse  ,
          reciprocal_rev_hes_sparse
     )
} // End empty namespace

bool old_usead_1(void)
{     bool ok = true;
     using CppAD::NearEqual;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

     // --------------------------------------------------------------------
     // Create the ADFun<double> r_
     create_r();

     // --------------------------------------------------------------------
     // Create the function f(x)
     //
     // domain space vector
     size_t n  = 1;
     double  x0 = 0.5;
     vector< AD<double> > ax(n);
     ax[0]     = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 1;
     vector< AD<double> > ay(m);

     // call user function and store reciprocal(x) in au[0]
     vector< AD<double> > au(m);
     size_t id = 0;           // not used
     reciprocal(id, ax, au);     // u = 1 / x

     // call user function and store reciprocal(u) in ay[0]
     reciprocal(id, au, ay);     // y = 1 / u = x

     // create f: x -> y and stop tape recording
     ADFun<double> f;
     f.Dependent(ax, ay);  // f(x) = x

     // --------------------------------------------------------------------
     // Check function value results
     //
     // check function value
     double check = x0;
     ok &= NearEqual( Value(ay[0]) , check,  eps, eps);

     // check zero order forward mode
     size_t q;
     vector<double> x_q(n), y_q(m);
     q      = 0;
     x_q[0] = x0;
     y_q    = f.Forward(q, x_q);
     ok &= NearEqual(y_q[0] , check,  eps, eps);

     // check first order forward mode
     q      = 1;
     x_q[0] = 1;
     y_q    = f.Forward(q, x_q);
     check  = 1.;
     ok &= NearEqual(y_q[0] , check,  eps, eps);

     // check second order forward mode
     q      = 2;
     x_q[0] = 0;
     y_q    = f.Forward(q, x_q);
     check  = 0.;
     ok &= NearEqual(y_q[0] , check,  eps, eps);

     // --------------------------------------------------------------------
     // Check reverse mode results
     //
     // third order reverse mode
     q     = 3;
     vector<double> w(m), dw(n * q);
     w[0]  = 1.;
     dw    = f.Reverse(q, w);
     check = 1.;
     ok &= NearEqual(dw[0] , check,  eps, eps);
     check = 0.;
     ok &= NearEqual(dw[1] , check,  eps, eps);
     ok &= NearEqual(dw[2] , check,  eps, eps);

     // --------------------------------------------------------------------
     // forward mode sparsity pattern
     size_t p = n;
     CppAD::vectorBool r1(n * p), s1(m * p);
     r1[0] = true;          // compute sparsity pattern for x[0]
     s1    = f.ForSparseJac(p, r1);
     ok  &= s1[0] == true;  // f[0] depends on x[0]

     // --------------------------------------------------------------------
     // reverse mode sparsity pattern
     q = m;
     CppAD::vectorBool s2(q * m), r2(q * n);
     s2[0] = true;          // compute sparsity pattern for f[0]
     r2    = f.RevSparseJac(q, s2);
     ok  &= r2[0] == true;  // f[0] depends on x[0]

     // --------------------------------------------------------------------
     // Hessian sparsity (using previous ForSparseJac call)
     CppAD::vectorBool s3(m), h(p * n);
     s3[0] = true;        // compute sparsity pattern for f[0]
     h     = f.RevSparseHes(p, s3);
     ok  &= h[0] == true; // second partial of f[0] w.r.t. x[0] may be non-zero

     // -----------------------------------------------------------------
     // Free all memory associated with the object r_ptr
     destroy_r();

     // -----------------------------------------------------------------
     // Free all temporary work space associated with old_atomic objects.
     // (If there are future calls to user atomic functions, they will
     // create new temporary work space.)
     CppAD::user_atomic<double>::clear();

     return ok;
}

Input File: example/deprecated/old_usead_1.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.8.11.3: Using AD to Compute Atomic Function Derivatives

12.8.11.3.a: Deprecated 2013-05-27
This example has been deprecated because it is easier to use the 4.4.7.1: checkpoint class instead.

12.8.11.3.b: Purpose
Consider the case where an inner function is used repeatedly in the definition of an outer function. In this case, recording the inner function as a single atomic operation may reduce the number of variables 5.1.5.g: size_var , and hence the required memory.
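As a rough illustration: if taping one evaluation of the inner function uses V variables and it is evaluated M times, taping it directly contributes about M * V variables to the outer operation sequence, while recording it as an atomic operation contributes only its arguments and results, about n + m variables per call.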
# include <cppad/cppad.hpp>

namespace { // Begin empty namespace
     using CppAD::AD;
     using CppAD::ADFun;
     using CppAD::vector;

     // ----------------------------------------------------------------------
     // ODE for [t, t^2 / 2 ] in form required by Runge45
     class Fun {
     public:
          void Ode(
               const AD<double>           &t,
               const vector< AD<double> > &z,
               vector< AD<double> >       &f)
          {     assert( z.size() == 2 );
               assert( f.size() == 2 );
               f[0] =  1.0;
               f[1] =  z[0];
          }
     };
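     // Note: with initial condition z(0) = (x_0, x_1), the solution is
     //     z_0 (t) = x_0 + t  and  z_1 (t) = x_1 + x_0 * t + t^2 / 2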

     // ----------------------------------------------------------------------
     // Create the function r that integrates the ODE above using Runge45
     ADFun<double>* r_ptr_;
     void create_r(void)
     {     size_t n = 3, m = 2;
          vector< AD<double> > x(n), zi(m), y(m), e(m);
          // The value of x does not matter because the operation sequence
          // does not depend on x.
          x[0]  = 0.0;  // initial value z_0 (t) at t = ti
          x[1]  = 0.0;  // initial value z_1 (t) at t = ti
          x[2]  = 0.1;  // final time for this integration
          CppAD::Independent(x);
          zi[0]         = x[0];  // z_0 (t) at t = ti
          zi[1]         = x[1];  // z_1 (t) at t = ti
          AD<double> ti = 0.0;   // t does not appear in ODE so does not matter
          AD<double> tf = x[2];  // final time
          size_t M      = 3;     // number of Runge45 steps to take
          Fun F;
          y             = CppAD::Runge45(F, M, ti, tf, zi, e);
          r_ptr_        = new ADFun<double>(x, y);
     }
     void destroy_r(void)
     {     delete r_ptr_;
          r_ptr_ = CPPAD_NULL;
     }

     // ----------------------------------------------------------------------
     // forward mode routine called by CppAD
     bool solve_ode_forward(
          size_t                   id ,
          size_t                    k ,
          size_t                    n ,
          size_t                    m ,
          const vector<bool>&      vx ,
          vector<bool>&            vy ,
          const vector<double>&    tx ,
          vector<double>&          ty
     )
     {     assert( id == 0 );
          assert( n == 3 );
          assert( m == 2 );
          assert( k == 0 || vx.size() == 0 );
          bool ok = true;
          vector<double> xp(n), yp(m);
          size_t i, j;

          // check for special case
          if( vx.size() > 0 )
          {     // Compute r, a Jacobian sparsity pattern.
               // Use reverse mode because m < n.
               vector< std::set<size_t> > s(m), r(m);
               for(i = 0; i < m; i++)
                    s[i].insert(i);
               r = r_ptr_->RevSparseJac(m, s);
               std::set<size_t>::const_iterator itr;
               for(i = 0; i < m; i++)
               {     vy[i] = false;
                     for(itr = r[i].begin(); itr != r[i].end(); itr++)
                    {     j = *itr;
                         assert( j < n );
                         // y[i] depends on the value of x[j]
                         // Visual Studio 2013 generates warning without bool below
                         vy[i] |= bool( vx[j] );
                    }
               }
          }
          // make sure r_ has proper lower order Taylor coefficients stored
          // then compute ty[k]
          for(size_t q = 0; q <= k; q++)
          {     for(j = 0; j < n; j++)
                    xp[j] = tx[j * (k+1) + q];
               yp    = r_ptr_->Forward(q, xp);
               if( q == k )
               {     for(i = 0; i < m; i++)
                         ty[i * (k+1) + q] = yp[i];
               }
# ifndef NDEBUG
               else
               {     for(i = 0; i < m; i++)
                         assert( ty[i * (k+1) + q] == yp[i] );
               }
# endif
          }
          // no longer need the Taylor coefficients in r_ptr_
          // (have to reconstruct them every time)
          r_ptr_->capacity_order(0);
          return ok;
     }
     // ----------------------------------------------------------------------
     // reverse mode routine called by CppAD
     bool solve_ode_reverse(
          size_t                   id ,
          size_t                    k ,
          size_t                    n ,
          size_t                    m ,
          const vector<double>&    tx ,
          const vector<double>&    ty ,
          vector<double>&          px ,
          const vector<double>&    py
     )
     {     assert( id == 0 );
          assert( n == 3 );
          assert( m == 2 );
          bool ok = true;
          vector<double> xp(n), w( (k+1) * m ), dw( (k+1) * n );

          // make sure r_ has proper forward mode coefficients
          size_t i, j, q;
          for(q = 0; q <= k; q++)
          {     for(j = 0; j < n; j++)
                    xp[j] = tx[j * (k+1) + q];
# ifdef NDEBUG
               r_ptr_->Forward(q, xp);
# else
               vector<double> yp(m);
               yp = r_ptr_->Forward(q, xp);
               for(i = 0; i < m; i++)
                    assert( ty[i * (k+1) + q] == yp[i] );
# endif
          }
          for(i = 0; i < m; i++)
          {     for(q = 0; q <=k; q++)
                    w[ i * (k+1) + q] = py[ i * (k+1) + q];
          }
          dw = r_ptr_->Reverse(k+1, w);
          for(j = 0; j < n; j++)
          {     for(q = 0; q <=k; q++)
                    px[ j * (k+1) + q] = dw[ j * (k+1) + q];
          }
          // no longer need the Taylor coefficients in r_ptr_
          // (have to reconstruct them every time)
          r_ptr_->capacity_order(0);

          return ok;
     }
     // ----------------------------------------------------------------------
     // forward Jacobian sparsity routine called by CppAD
     bool solve_ode_for_jac_sparse(
          size_t                               id ,
          size_t                                n ,
          size_t                                m ,
          size_t                                p ,
          const vector< std::set<size_t> >&     r ,
          vector< std::set<size_t> >&           s )
     {     assert( id == 0 );
          assert( n == 3 );
          assert( m == 2 );
          bool ok = true;

          vector< std::set<size_t> > R(n), S(m);
          for(size_t j = 0; j < n; j++)
               R[j] = r[j];
          S = r_ptr_->ForSparseJac(p, R);
          for(size_t i = 0; i < m; i++)
               s[i] = S[i];

          // no longer need the forward mode sparsity pattern
          // (have to reconstruct them every time)
          r_ptr_->size_forward_set(0);

          return ok;
     }
     // ----------------------------------------------------------------------
     // reverse Jacobian sparsity routine called by CppAD
     bool solve_ode_rev_jac_sparse(
          size_t                               id ,
          size_t                                n ,
          size_t                                m ,
          size_t                                p ,
          vector< std::set<size_t> >&           r ,
          const vector< std::set<size_t> >&     s )
     {
          assert( id == 0 );
          assert( n == 3 );
          assert( m == 2 );
          bool ok = true;

          vector< std::set<size_t> > R(p), S(p);
          std::set<size_t>::const_iterator itr;
          size_t i;
          // untranspose s
          for(i = 0; i < m; i++)
          {     for(itr = s[i].begin(); itr != s[i].end(); itr++)
                    S[*itr].insert(i);
          }
          R = r_ptr_->RevSparseJac(p, S);
          // transpose r
          for(i = 0; i < m; i++)
               r[i].clear();
          for(i = 0; i < p; i++)
          {     for(itr = R[i].begin(); itr != R[i].end(); itr++)
                    r[*itr].insert(i);
          }
          return ok;
     }
     // ----------------------------------------------------------------------
     // reverse Hessian sparsity routine called by CppAD
     bool solve_ode_rev_hes_sparse(
          size_t                               id ,
          size_t                                n ,
          size_t                                m ,
          size_t                                p ,
          const vector< std::set<size_t> >&     r ,
          const vector<bool>&                   s ,
          vector<bool>&                         t ,
          const vector< std::set<size_t> >&     u ,
          vector< std::set<size_t> >&           v )
     {     // Can just return false if not using RevSparseHes.
          assert( id == 0 );
          assert( n == 3 );
          assert( m == 2 );
          bool ok = true;
          std::set<size_t>::const_iterator itr;

          // compute sparsity pattern for T(x) = S(x) * f'(x)
          vector< std::set<size_t> > S(1);
          size_t i, j;
          S[0].clear();
          for(i = 0; i < m; i++)
               if( s[i] )
                    S[0].insert(i);
          t = r_ptr_->RevSparseJac(1, s);

          // compute sparsity pattern for A(x)^T = U(x)^T * f'(x)
          vector< std::set<size_t> > Ut(p), At(p);
          for(i = 0; i < m; i++)
          {     for(itr = u[i].begin(); itr != u[i].end(); itr++)
                    Ut[*itr].insert(i);
          }
          At = r_ptr_->RevSparseJac(p, Ut);

          // compute sparsity pattern for H(x)^T = R^T * (S * F)''(x)
          vector< std::set<size_t> > R(n), Ht(p);
          for(j = 0; j < n; j++)
               R[j] = r[j];
          r_ptr_->ForSparseJac(p, R);
          Ht = r_ptr_->RevSparseHes(p, S);

          // compute sparsity pattern for V(x) = A(x) + H(x)^T
          for(j = 0; j < n; j++)
               v[j].clear();
          for(i = 0; i < p; i++)
          {     for(itr = At[i].begin(); itr != At[i].end(); itr++)
                    v[*itr].insert(i);
               for(itr = Ht[i].begin(); itr != Ht[i].end(); itr++)
                    v[*itr].insert(i);
          }

          // no longer need the forward mode sparsity pattern
          // (have to reconstruct them every time)
          r_ptr_->size_forward_set(0);

          return ok;
     }
     // ---------------------------------------------------------------------
     // Declare the AD<double> routine solve_ode(id, ax, ay)
     CPPAD_USER_ATOMIC(
          solve_ode                 ,
          CppAD::vector             ,
          double                    ,
          solve_ode_forward         ,
          solve_ode_reverse         ,
          solve_ode_for_jac_sparse  ,
          solve_ode_rev_jac_sparse  ,
          solve_ode_rev_hes_sparse
     )
} // End empty namespace

bool old_usead_2(void)
{     bool ok = true;
     using CppAD::NearEqual;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

     // --------------------------------------------------------------------
     // Create the ADFun<double> r_
     create_r();

     // --------------------------------------------------------------------
     // domain and range space vectors
     size_t n = 3, m = 2;
     vector< AD<double> > au(n), ax(n), ay(m);
     au[0]         = 0.0;        // value of z_0 (t) = t, at t = 0
     au[1]         = 0.0;        // value of z_1 (t) = t^2/2, at t = 0
     au[2]         = 1.0;        // final t
     CppAD::Independent(au);
     size_t M      = 2;          // number of r steps to take
     ax[0]         = au[0];      // value of z_0 (t) = t, at t = 0
     ax[1]         = au[1];      // value of z_1 (t) = t^2/2, at t = 0
     AD<double> dt = au[2] / double(M);  // size of each r step
     ax[2]         = dt;
     for(size_t i_step = 0; i_step < M; i_step++)
     {     size_t id = 0;               // not used
          solve_ode(id, ax, ay);
          ax[0] = ay[0];
          ax[1] = ay[1];
     }

     // create f: u -> y and stop tape recording
     // y_0(t) = u_0 + t                   = u_0 + u_2
     // y_1(t) = u_1 + u_0 * t + t^2 / 2   = u_1 + u_0 * u_2 + u_2^2 / 2
     // where t = u_2
     ADFun<double> f;
     f.Dependent(au, ay);

     // --------------------------------------------------------------------
     // Check forward mode results
     //
     // zero order forward
     vector<double> up(n), yp(m);
     size_t q  = 0;
     double u0 = 0.5;
     double u1 = 0.25;
     double u2 = 0.75;
     double check;
     up[0]     = u0;
     up[1]     = u1;
     up[2]     = u2;
     yp        = f.Forward(q, up);
     check     = u0 + u2;
     ok       &= NearEqual( yp[0], check,  eps, eps);
     check     = u1 + u0 * u2 + u2 * u2 / 2.0;
     ok       &= NearEqual( yp[1], check,  eps, eps);
     //
     // forward mode first derivative w.r.t t
     q         = 1;
     up[0]     = 0.0;
     up[1]     = 0.0;
     up[2]     = 1.0;
     yp        = f.Forward(q, up);
     check     = 1.0;
     ok       &= NearEqual( yp[0], check,  eps, eps);
     check     = u0 + u2;
     ok       &= NearEqual( yp[1], check,  eps, eps);
     //
     // forward mode second order Taylor coefficient w.r.t t
     q         = 2;
     up[0]     = 0.0;
     up[1]     = 0.0;
     up[2]     = 0.0;
     yp        = f.Forward(q, up);
     check     = 0.0;
     ok       &= NearEqual( yp[0], check,  eps, eps);
     check     = 1.0 / 2.0;
     ok       &= NearEqual( yp[1], check,  eps, eps);
     // --------------------------------------------------------------------
     // reverse mode derivatives of \partial_t y_1 (t)
     vector<double> w(m * q), dw(n * q);
     w[0 * q + 0]  = 0.0;
     w[1 * q + 0]  = 0.0;
     w[0 * q + 1]  = 0.0;
     w[1 * q + 1]  = 1.0;
     dw        = f.Reverse(q, w);
     // derivative of y_1(u) = u_1 + u_0 * u_2 + u_2^2 / 2,  w.r.t. u,
     // which equals the derivative of the first order coefficient for y_1
     // w.r.t. the first order coefficient for u
     check     = u2;
     ok       &= NearEqual( dw[0 * q + 1], check,  eps, eps);
     check     = 1.0;
     ok       &= NearEqual( dw[1 * q + 1], check,  eps, eps);
     check     = u0 + u2;
     ok       &= NearEqual( dw[2 * q + 1], check,  eps, eps);
     // derivative of \partial_t y_1 = u_0 + t,  w.r.t. u
     check     = 1.0;
     ok       &= NearEqual( dw[0 * q + 0], check,  eps, eps);
     check     = 0.0;
     ok       &= NearEqual( dw[1 * q + 0], check,  eps, eps);
     check     = 1.0;
     ok       &= NearEqual( dw[2 * q + 0], check,  eps, eps);
     // --------------------------------------------------------------------
     // forward mode sparsity pattern for the Jacobian
     // f_u = [   1, 0,   1 ]
     //       [ u_2, 1, u_2 ]
     size_t i, j, p = n;
     CppAD::vectorBool r(n * p), s(m * p);
     // r = identity sparsity pattern
     for(i = 0; i < n; i++)
          for(j = 0; j < p; j++)
               r[i*n +j] = (i == j);
     s   = f.ForSparseJac(p, r);
     ok &= s[ 0 * p + 0] == true;
     ok &= s[ 0 * p + 1] == false;
     ok &= s[ 0 * p + 2] == true;
     ok &= s[ 1 * p + 0] == true;
     ok &= s[ 1 * p + 1] == true;
     ok &= s[ 1 * p + 2] == true;
     // --------------------------------------------------------------------
     // reverse mode sparsity pattern for the Jacobian
     q = m;
     s.resize(q * m);
     r.resize(q * n);
     // s = identity sparsity pattern
     for(i = 0; i < q; i++)
          for(j = 0; j < m; j++)
               s[i*m +j] = (i == j);
     r   = f.RevSparseJac(q, s);
     ok &= r[ 0 * n + 0] == true;
     ok &= r[ 0 * n + 1] == false;
     ok &= r[ 0 * n + 2] == true;
     ok &= r[ 1 * n + 0] == true;
     ok &= r[ 1 * n + 1] == true;
     ok &= r[ 1 * n + 2] == true;

     // --------------------------------------------------------------------
     // Hessian sparsity for y_1 (u) = u_1 + u_0 * u_2 + u_2^2 / 2
     s.resize(m);
     s[0] = false;
     s[1] = true;
     r.resize(n * n);
     for(i = 0; i < n; i++)
          for(j = 0; j < n; j++)
               r[ i * n + j ] = (i == j);
     CppAD::vectorBool h(n * n);
     h   = f.RevSparseHes(n, s);
     ok &= h[0 * n + 0] == false;
     ok &= h[0 * n + 1] == false;
     ok &= h[0 * n + 2] == true;
     ok &= h[1 * n + 0] == false;
     ok &= h[1 * n + 1] == false;
     ok &= h[1 * n + 2] == false;
     ok &= h[2 * n + 0] == true;
     ok &= h[2 * n + 1] == false;
     ok &= h[2 * n + 2] == true;

     // --------------------------------------------------------------------
     destroy_r();

     // Free all temporary work space associated with old_atomic objects.
     // (If there are future calls to user atomic functions, they will
     // create new temporary work space.)
     CppAD::user_atomic<double>::clear();

     return ok;
}

Input File: example/deprecated/old_usead_2.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.8.11.4: Old Tan and Tanh as User Atomic Operations: Example and Test

12.8.11.4.a: Deprecated 2013-05-27
This example has been deprecated; see 4.4.7.2.15: atomic_tangent.cpp instead.

12.8.11.4.b: Theory
The code below uses the 12.3.1.8: tan_forward and 12.3.2.8: tan_reverse to implement the tangent ( id == 0 ) and hyperbolic tangent ( id == 1 ) functions as user atomic operations.
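In terms of Taylor coefficients, writing @(@ z = \tan (x) @)@ or @(@ z = \tanh (x) @)@ and @(@ y = z * z @)@, the identity @(@ z' = (1 \pm y) x' @)@ (plus sign for tan, minus sign for tanh) yields the recurrence @(@ z^{(j)} = x^{(j)} \pm \frac{1}{j} \sum_{k=1}^{j} k \, x^{(k)} y^{(j-k)} @)@ together with @(@ y^{(j)} = \sum_{k=0}^{j} z^{(k)} z^{(j-k)} @)@, which is the recurrence implemented by old_tan_forward below.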
# include <cppad/cppad.hpp>

namespace { // Begin empty namespace
     using CppAD::vector;

     // a utility to compute the union of two sets.
     using CppAD::set_union;

     // ----------------------------------------------------------------------
     // forward mode routine called by CppAD
     bool old_tan_forward(
          size_t                   id ,
          size_t                order ,
          size_t                    n ,
          size_t                    m ,
          const vector<bool>&      vx ,
          vector<bool>&           vzy ,
          const vector<float>&     tx ,
          vector<float>&          tzy
     )
     {
          assert( id == 0 || id == 1 );
          assert( n == 1 );
          assert( m == 2 );
          assert( tx.size() >= (order+1) * n );
          assert( tzy.size() >= (order+1) * m );

          size_t n_order = order + 1;
          size_t j = order;
          size_t k;

          // check if this is during the call to old_tan(id, ax, ay)
          if( vx.size() > 0 )
          {     assert( vx.size() >= n );
               assert( vzy.size() >= m );

                // now set vzy
               vzy[0] = vx[0];
               vzy[1] = vx[0];
          }

          if( j == 0 )
          {     // z^{(0)} = tan( x^{(0)} ) or tanh( x^{(0)} )
               if( id == 0 )
                    tzy[0] = float( tan( tx[0] ) );
               else     tzy[0] = float( tanh( tx[0] ) );

               // y^{(0)} = z^{(0)} * z^{(0)}
               tzy[n_order + 0] = tzy[0] * tzy[0];
          }
          else
          {     float j_inv = 1.f / float(j);
               if( id == 1 )
                    j_inv = - j_inv;

               // z^{(j)} = x^{(j)} +- sum_{k=1}^j k x^{(k)} y^{(j-k)} / j
               tzy[j] = tx[j];
               for(k = 1; k <= j; k++)
                    tzy[j] += tx[k] * tzy[n_order + j-k] * float(k) * j_inv;

               // y^{(j)} = sum_{k=0}^j z^{(k)} z^{(j-k)}
               tzy[n_order + j] = 0.;
               for(k = 0; k <= j; k++)
                    tzy[n_order + j] += tzy[k] * tzy[j-k];
          }

          // All orders are implemented and there are no possible errors
          return true;
     }
     // ----------------------------------------------------------------------
     // reverse mode routine called by CppAD
     bool old_tan_reverse(
          size_t                   id ,
          size_t                order ,
          size_t                    n ,
          size_t                    m ,
          const vector<float>&     tx ,
          const vector<float>&    tzy ,
          vector<float>&           px ,
          const vector<float>&    pzy
     )
     {
          assert( id == 0 || id == 1 );
          assert( n == 1 );
          assert( m == 2 );
          assert( tx.size() >= (order+1) * n );
          assert( tzy.size() >= (order+1) * m );
          assert( px.size() >= (order+1) * n );
          assert( pzy.size() >= (order+1) * m );

          size_t n_order = order + 1;
          size_t j, k;

          // copy because partials w.r.t. y and z need to change
          vector<float> qzy = pzy;

          // initialize accumulation of reverse mode partials
          for(k = 0; k < n_order; k++)
               px[k] = 0.;

          // eliminate positive orders
          for(j = order; j > 0; j--)
          {     float j_inv = 1.f / float(j);
               if( id == 1 )
                    j_inv = - j_inv;

               // H_{x^{(k)}} += delta(j-k) +- H_{z^{(j)}} y^{(j-k)} * k / j
               px[j] += qzy[j];
               for(k = 1; k <= j; k++)
                    px[k] += qzy[j] * tzy[n_order + j-k] * float(k) * j_inv;

               // H_{y^{(j-k)}} += +- H_{z^{(j)}} x^{(k)} * k / j
               for(k = 1; k <= j; k++)
                    qzy[n_order + j-k] += qzy[j] * tx[k] * float(k) * j_inv;

               // H_{z^{(k)}} += H_{y^{(j-1)}} * z^{(j-k-1)} * 2.
               for(k = 0; k < j; k++)
                    qzy[k] += qzy[n_order + j-1] * tzy[j-k-1] * 2.f;
          }

          // eliminate order zero
          if( id == 0 )
               px[0] += qzy[0] * (1.f + tzy[n_order + 0]);
          else
               px[0] += qzy[0] * (1.f - tzy[n_order + 0]);

          return true;
     }
     // ----------------------------------------------------------------------
     // forward Jacobian sparsity routine called by CppAD
     bool old_tan_for_jac_sparse(
          size_t                               id ,
          size_t                                n ,
          size_t                                m ,
          size_t                                p ,
          const vector< std::set<size_t> >&     r ,
          vector< std::set<size_t> >&           s )
     {
          assert( n == 1 );
          assert( m == 2 );
          assert( id == 0 || id == 1 );
          assert( r.size() >= n );
          assert( s.size() >= m );

          // sparsity for z and y are the same as for x
          s[0] = r[0];
          s[1] = r[0];

          return true;
     }
     // ----------------------------------------------------------------------
     // reverse Jacobian sparsity routine called by CppAD
     bool old_tan_rev_jac_sparse(
          size_t                               id ,
          size_t                                n ,
          size_t                                m ,
          size_t                                p ,
          vector< std::set<size_t> >&           r ,
          const vector< std::set<size_t> >&     s )
     {
          assert( n == 1 );
          assert( m == 2 );
          assert( id == 0 || id == 1 );
          assert( r.size() >= n );
          assert( s.size() >= m );

          // note that, if the user's code only uses z, and not y,
          // we could just set r[0] = s[0]
          r[0] = set_union(s[0], s[1]);
          return true;
     }
     // ----------------------------------------------------------------------
     // reverse Hessian sparsity routine called by CppAD
     bool old_tan_rev_hes_sparse(
          size_t                               id ,
          size_t                                n ,
          size_t                                m ,
          size_t                                p ,
          const vector< std::set<size_t> >&     r ,
          const vector<bool>&                   s ,
          vector<bool>&                         t ,
          const vector< std::set<size_t> >&     u ,
          vector< std::set<size_t> >&           v )
     {
          assert( n == 1 );
          assert( m == 2 );
          assert( id == 0 || id == 1 );
          assert( r.size() >= n );
          assert( s.size() >= m );
          assert( t.size() >= n );
          assert( u.size() >= m );
          assert( v.size() >= n );

          // back propagate Jacobian sparsity. If the user's code only uses z,
          // we could just set t[0] = s[0];
          t[0] =  s[0] | s[1];

          // back propagate Hessian sparsity, ...
          v[0] = set_union(u[0], u[1]);

          // convert forward Jacobian sparsity to Hessian sparsity
          // because tan and tanh are nonlinear
          if( t[0] )
               v[0] = set_union(v[0], r[0]);

          return true;
     }
     // ---------------------------------------------------------------------
     // Declare the AD<float> routine old_tan(id, ax, ay)
     CPPAD_USER_ATOMIC(
          old_tan                 ,
          CppAD::vector           ,
          float                   ,
          old_tan_forward         ,
          old_tan_reverse         ,
          old_tan_for_jac_sparse  ,
          old_tan_rev_jac_sparse  ,
          old_tan_rev_hes_sparse
     )
} // End empty namespace

bool old_tan(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     float eps = 10.f * CppAD::numeric_limits<float>::epsilon();

     // domain space vector
     size_t n  = 1;
     float  x0 = 0.5;
     CppAD::vector< AD<float> > ax(n);
     ax[0]     = x0;

     // declare independent variables and start tape recording
     CppAD::Independent(ax);

     // range space vector
     size_t m = 3;
     CppAD::vector< AD<float> > af(m);

     // temporary vector for old_tan computations
     // (old_tan computes tan or tanh and its square)
     CppAD::vector< AD<float> > az(2);

     // call user tan function and store tan(x) in f[0] (ignore tan(x)^2)
     size_t id = 0;
     old_tan(id, ax, az);
     af[0] = az[0];

     // call user tanh function and store tanh(x) in f[1] (ignore tanh(x)^2)
     id = 1;
     old_tan(id, ax, az);
     af[1] = az[0];

     // put a constant in f[2] = tanh(1.) (for sparsity pattern testing)
     CppAD::vector< AD<float> > one(1);
     one[0] = 1.;
     old_tan(id, one, az);
     af[2] = az[0];

     // create f: x -> f and stop tape recording
     CppAD::ADFun<float> F;
     F.Dependent(ax, af);

     // check function value
     float tan = std::tan(x0);
     ok &= NearEqual(af[0] , tan,  eps, eps);
     float tanh = std::tanh(x0);
     ok &= NearEqual(af[1] , tanh,  eps, eps);

     // check zero order forward
     CppAD::vector<float> x(n), f(m);
     x[0] = x0;
     f    = F.Forward(0, x);
     ok &= NearEqual(f[0] , tan,  eps, eps);
     ok &= NearEqual(f[1] , tanh,  eps, eps);

     // compute first partial of f w.r.t. x[0] using forward mode
     CppAD::vector<float> dx(n), df(m);
     dx[0] = 1.;
     df    = F.Forward(1, dx);

     // compute derivative of tan - tanh using reverse mode
     CppAD::vector<float> w(m), dw(n);
     w[0]  = 1.;
     w[1]  = 1.;
     w[2]  = 0.;
     dw    = F.Reverse(1, w);

     // tan'(x)   = 1 + tan(x)  * tan(x)
     // tanh'(x)  = 1 - tanh(x) * tanh(x)
     float tanp  = 1.f + tan * tan;
     float tanhp = 1.f - tanh * tanh;
     ok   &= NearEqual(df[0], tanp, eps, eps);
     ok   &= NearEqual(df[1], tanhp, eps, eps);
     ok   &= NearEqual(dw[0], w[0]*tanp + w[1]*tanhp, eps, eps);

     // compute second partial of f w.r.t. x[0] using forward mode
     CppAD::vector<float> ddx(n), ddf(m);
     ddx[0] = 0.;
     ddf    = F.Forward(2, ddx);

     // compute second derivative of tan - tanh using reverse mode
     CppAD::vector<float> ddw(2);
     ddw   = F.Reverse(2, w);

     // tan''(x)   = 2 *  tan(x) * tan'(x)
     // tanh''(x)  = - 2 * tanh(x) * tanh'(x)
     // Note that the second order Taylor coefficient is one half the
     // corresponding second derivative.
     float two    = 2;
     float tanpp  =   two * tan * tanp;
     float tanhpp = - two * tanh * tanhp;
     ok   &= NearEqual(two * ddf[0], tanpp, eps, eps);
     ok   &= NearEqual(two * ddf[1], tanhpp, eps, eps);
     ok   &= NearEqual(ddw[0], w[0]*tanp  + w[1]*tanhp , eps, eps);
     ok   &= NearEqual(ddw[1], w[0]*tanpp + w[1]*tanhpp, eps, eps);

     // Forward mode computation of sparsity pattern for F.
     size_t p = n;
     // use vectorBool because m and n are small
     CppAD::vectorBool r1(p), s1(m * p);
     r1[0] = true;            // propagate sparsity for x[0]
     s1    = F.ForSparseJac(p, r1);
     ok  &= (s1[0] == true);  // f[0] depends on x[0]
     ok  &= (s1[1] == true);  // f[1] depends on x[0]
     ok  &= (s1[2] == false); // f[2] does not depend on x[0]

     // Reverse mode computation of sparsity pattern for F.
     size_t q = m;
     CppAD::vectorBool s2(q * m), r2(q * n);
     // Sparsity pattern for identity matrix
     size_t i, j;
     for(i = 0; i < q; i++)
     {     for(j = 0; j < m; j++)
               s2[i * q + j] = (i == j);
     }
     r2   = F.RevSparseJac(q, s2);
     ok  &= (r2[0] == true);  // f[0] depends on x[0]
     ok  &= (r2[1] == true);  // f[1] depends on x[0]
     ok  &= (r2[2] == false); // f[2] does not depend on x[0]

     // Hessian sparsity for f[0]
     CppAD::vectorBool s3(m), h(p * n);
     s3[0] = true;
     s3[1] = false;
     s3[2] = false;
     h    = F.RevSparseHes(p, s3);
     ok  &= (h[0] == true);  // Hessian is non-zero

     // Hessian sparsity for f[2]
     s3[0] = false;
     s3[2] = true;
     h    = F.RevSparseHes(p, s3);
     ok  &= (h[0] == false);  // Hessian is zero

     // check tanh results for a large value of x
     x[0]  = std::numeric_limits<float>::max() / two;
     f     = F.Forward(0, x);
     tanh  = 1.;
     ok   &= NearEqual(f[1], tanh, eps, eps);
     df    = F.Forward(1, dx);
     tanhp = 0.;
     ok   &= NearEqual(df[1], tanhp, eps, eps);

     // --------------------------------------------------------------------
     // Free all temporary work space associated with old_atomic objects.
     // (If there are future calls to user atomic functions, they will
     // create new temporary work space.)
     CppAD::user_atomic<float>::clear();

     return ok;
}

Input File: example/deprecated/old_tan.cpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.8.11.5: Old Matrix Multiply as a User Atomic Operation: Example and Test

12.8.11.5.a: Deprecated 2013-05-27
This example has been deprecated; use 4.4.7.2.19: atomic_mat_mul.cpp instead.

12.8.11.5.b: Include File
This routine uses the include file 12.8.11.5.1: old_mat_mul.hpp .
# include <cppad/cppad.hpp>
# include "old_mat_mul.hpp"

bool old_mat_mul(void)
{     bool ok = true;
     using CppAD::AD;

     // matrix sizes for this test
     size_t nr_result = 2;
     size_t n_middle  = 2;
     size_t nc_result = 2;

     // declare the AD<double> vectors ax, ay, and X
     size_t n = nr_result * n_middle + n_middle * nc_result;
     size_t m = nr_result * nc_result;
     CppAD::vector< AD<double> > X(4), ax(n), ay(m);
     size_t i, j;
     for(j = 0; j < X.size(); j++)
          X[j] = (j + 1);

     // X is the vector of independent variables
     CppAD::Independent(X);
     // left matrix
     ax[0]  = X[0];  // left[0,0]   = x[0] = 1
     ax[1]  = X[1];  // left[0,1]   = x[1] = 2
     ax[2]  = 5.;    // left[1,0]   = 5
     ax[3]  = 6.;    // left[1,1]   = 6
     // right matrix
     ax[4]  = X[2];  // right[0,0]  = x[2] = 3
     ax[5]  = 7.;    // right[0,1]  = 7
     ax[6]  = X[3];  // right[1,0]  = x[3] = 4
     ax[7]  = 8.;    // right[1,1]  = 8
     /*
     [ x0 , x1 ] * [ x2 , 7 ] = [ x0*x2 + x1*x3 , x0*7 + x1*8 ]
     [ 5  , 6 ]    [ x3 , 8 ]   [ 5*x2  + 6*x3  , 5*7 + 6*8 ]
     */
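     // with x = (1, 2, 3, 4) the product above is [ 11, 23 ; 39, 83 ]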

     // The call back routines need to know the dimensions of the matrices.
     // Store information about the matrix multiply for this call to mat_mul.
     call_info info;
     info.nr_result = nr_result;
     info.n_middle  = n_middle;
     info.nc_result = nc_result;
     // info.vx gets set by forward during call to mat_mul below
     assert( info.vx.size() == 0 );
     size_t id      = info_.size();
     info_.push_back(info);

     // user defined AD<double> version of matrix multiply
     mat_mul(id, ax, ay);
     //----------------------------------------------------------------------
     // check AD<double>  results
     ok &= ay[0] == (1*3 + 2*4); ok &= Variable( ay[0] );
     ok &= ay[1] == (1*7 + 2*8); ok &= Variable( ay[1] );
     ok &= ay[2] == (5*3 + 6*4); ok &= Variable( ay[2] );
     ok &= ay[3] == (5*7 + 6*8); ok &= Parameter( ay[3] );
     //----------------------------------------------------------------------
     // use mat_mul to define a function g : X -> ay
     CppAD::ADFun<double> G;
     G.Dependent(X, ay);
     // g(x) = [ x0*x2 + x1*x3 , x0*7 + x1*8 , 5*x2  + 6*x3  , 5*7 + 6*8 ]^T
     //----------------------------------------------------------------------
     // Test zero order forward mode evaluation of g(x)
     CppAD::vector<double> x( X.size() ), y(m);
     for(j = 0; j <  X.size() ; j++)
          x[j] = double(j + 2);
     y = G.Forward(0, x);
     ok &= y[0] == x[0] * x[2] + x[1] * x[3];
     ok &= y[1] == x[0] * 7.   + x[1] * 8.;
     ok &= y[2] == 5. * x[2]   + 6. * x[3];
     ok &= y[3] == 5. * 7.     + 6. * 8.;

     //----------------------------------------------------------------------
     // Test first order forward mode evaluation of g'(x) * [1, 2, 3, 4]^T
     // g'(x) = [ x2, x3, x0, x1 ]
     //         [ 7 ,  8,  0, 0  ]
     //         [ 0 ,  0,  5, 6  ]
     //         [ 0 ,  0,  0, 0  ]
     CppAD::vector<double> dx( X.size() ), dy(m);
     for(j = 0; j <  X.size() ; j++)
          dx[j] = double(j + 1);
     dy = G.Forward(1, dx);
     ok &= dy[0] == 1. * x[2] + 2. * x[3] + 3. * x[0] + 4. * x[1];
     ok &= dy[1] == 1. * 7.   + 2. * 8.   + 3. * 0.   + 4. * 0.;
     ok &= dy[2] == 1. * 0.   + 2. * 0.   + 3. * 5.   + 4. * 6.;
     ok &= dy[3] == 1. * 0.   + 2. * 0.   + 3. * 0.   + 4. * 0.;

     //----------------------------------------------------------------------
     // Test second order forward mode
     // g_0^2 (x) = [ 0, 0, 1, 0 ], g_0^2 (x) * [1] = [3]
     //             [ 0, 0, 0, 1 ]              [2]   [4]
     //             [ 1, 0, 0, 0 ]              [3]   [1]
     //             [ 0, 1, 0, 0 ]              [4]   [2]
     CppAD::vector<double> ddx( X.size() ), ddy(m);
     for(j = 0; j <  X.size() ; j++)
          ddx[j] = 0.;
     ddy = G.Forward(2, ddx);
     // [1, 2, 3, 4] * g_0^2 (x) * [1, 2, 3, 4]^T = 1*3 + 2*4 + 3*1 + 4*2
     ok &= 2. * ddy[0] == 1. * 3. + 2. * 4. + 3. * 1. + 4. * 2.;
     // for i > 0, [1, 2, 3, 4] * g_i^2 (x) * [1, 2, 3, 4]^T = 0
     ok &= ddy[1] == 0.;
     ok &= ddy[2] == 0.;
     ok &= ddy[3] == 0.;

     //----------------------------------------------------------------------
     // Test second order reverse mode
     CppAD::vector<double> w(m), dw(2 *  X.size() );
     for(i = 0; i < m; i++)
          w[i] = 0.;
     w[0] = 1.;
     dw = G.Reverse(2, w);
     // g_0'(x) = [ x2, x3, x0, x1 ]
     ok &= dw[0*2 + 0] == x[2];
     ok &= dw[1*2 + 0] == x[3];
     ok &= dw[2*2 + 0] == x[0];
     ok &= dw[3*2 + 0] == x[1];
     // g_0'(x)   * [1, 2, 3, 4]  = 1 * x2 + 2 * x3 + 3 * x0 + 4 * x1
     // g_0^2 (x) * [1, 2, 3, 4]  = [3, 4, 1, 2]
     ok &= dw[0*2 + 1] == 3.;
     ok &= dw[1*2 + 1] == 4.;
     ok &= dw[2*2 + 1] == 1.;
     ok &= dw[3*2 + 1] == 2.;

     //----------------------------------------------------------------------
     // Test forward Jacobian sparsity pattern
     /*
     [ x0 , x1 ] * [ x2 , 7 ] = [ x0*x2 + x1*x3 , x0*7 + x1*8 ]
     [ 5  , 6 ]    [ x3 , 8 ]   [ 5*x2  + 6*x3  , 5*7 + 6*8 ]
     so the sparsity pattern should be
     s[0] = {0, 1, 2, 3}
     s[1] = {0, 1}
     s[2] = {2, 3}
     s[3] = {}
     */
     CppAD::vector< std::set<size_t> > r( X.size() ), s(m);
     for(j = 0; j <  X.size() ; j++)
     {     assert( r[j].empty() );
          r[j].insert(j);
     }
     s = G.ForSparseJac( X.size() , r);
     for(j = 0; j <  X.size() ; j++)
     {     // s[0] = {0, 1, 2, 3}
          ok &= s[0].find(j) != s[0].end();
          // s[1] = {0, 1}
          if( j == 0 || j == 1 )
               ok &= s[1].find(j) != s[1].end();
          else     ok &= s[1].find(j) == s[1].end();
          // s[2] = {2, 3}
          if( j == 2 || j == 3 )
               ok &= s[2].find(j) != s[2].end();
          else     ok &= s[2].find(j) == s[2].end();
     }
     // s[3] == {}
     ok &= s[3].empty();

     //----------------------------------------------------------------------
     // Test reverse Jacobian sparsity pattern
     /*
     [ x0 , x1 ] * [ x2 , 7 ] = [ x0*x2 + x1*x3 , x0*7 + x1*8 ]
     [ 5  , 6 ]    [ x3 , 8 ]   [ 5*x2  + 6*x3  , 5*7 + 6*8 ]
     so the sparsity pattern should be
     r[0] = {0, 1, 2, 3}
     r[1] = {0, 1}
     r[2] = {2, 3}
     r[3] = {}
     */
     for(i = 0; i <  m; i++)
     {     s[i].clear();
          s[i].insert(i);
     }
     r = G.RevSparseJac(m, s);
     for(j = 0; j <  X.size() ; j++)
     {     // r[0] = {0, 1, 2, 3}
          ok &= r[0].find(j) != r[0].end();
          // r[1] = {0, 1}
          if( j == 0 || j == 1 )
               ok &= r[1].find(j) != r[1].end();
          else     ok &= r[1].find(j) == r[1].end();
          // r[2] = {2, 3}
          if( j == 2 || j == 3 )
               ok &= r[2].find(j) != r[2].end();
          else     ok &= r[2].find(j) == r[2].end();
     }
     // r[3] == {}
     ok &= r[3].empty();

     //----------------------------------------------------------------------
     /* Test reverse Hessian sparsity pattern
     g_0^2 (x) = [ 0, 0, 1, 0 ] and for i > 0, g_i^2 = 0
                 [ 0, 0, 0, 1 ]
                 [ 1, 0, 0, 0 ]
                 [ 0, 1, 0, 0 ]
     so the sparsity pattern for the first component of g is
     h[0] = {2}
     h[1] = {3}
     h[2] = {0}
     h[3] = {1}
     */
     CppAD::vector< std::set<size_t> > h( X.size() ), t(1);
     t[0].clear();
     t[0].insert(0);
     h = G.RevSparseHes(X.size() , t);
     size_t check[] = {2, 3, 0, 1};
     for(j = 0; j <  X.size() ; j++)
     {     // h[j] = { check[j] }
          for(i = 0; i < n; i++)
          {     if( i == check[j] )
                    ok &= h[j].find(i) != h[j].end();
               else     ok &= h[j].find(i) == h[j].end();
          }
     }
     t[0].clear();
     for( j = 1; j < X.size(); j++)
               t[0].insert(j);
     h = G.RevSparseHes(X.size() , t);
     for(j = 0; j <  X.size() ; j++)
     {     // h[j] = { }
          for(i = 0; i < X.size(); i++)
               ok &= h[j].find(i) == h[j].end();
     }

     // --------------------------------------------------------------------
     // Free temporary work space. (If there were future calls to
     // old_mat_mul, they would create new temporary work space.)
     CppAD::user_atomic<double>::clear();
     info_.clear();

     return ok;
}

Input File: example/deprecated/old_mat_mul.cpp
12.8.11.5.1: Define Matrix Multiply as a User Atomic Operation

12.8.11.5.1.a: Syntax
This file is located in the example directory. It can be copied to the current working directory and included with the syntax
     # include "old_mat_mul.hpp"

12.8.11.5.1.b: Example
The file 12.8.11.5: old_mat_mul.cpp contains an example use of old_mat_mul.hpp. It returns true if it succeeds and false otherwise.

12.8.11.5.1.c: Begin Source

# include <cppad/cppad.hpp>      // Include CppAD definitions
namespace {                      // Begin empty namespace
     using CppAD::vector;        // Let vector denote CppAD::vector

12.8.11.5.1.d: Extra Call Information
     // Information we will attach to each mat_mul call
     struct call_info {
          size_t nr_result;
          size_t n_middle;
          size_t nc_result;
          vector<bool>  vx;
     };
     vector<call_info> info_; // vector of call information

     // number of orders for this operation (k + 1)
     size_t n_order_ = 0;
     // number of rows in the result matrix
     size_t nr_result_ = 0;
     // number of columns in left matrix and number of rows in right matrix
     size_t n_middle_ = 0;
     // number of columns in the result matrix
     size_t nc_result_ = 0;
     // which components of x are variables
     vector<bool>* vx_ = CPPAD_NULL;

     // get the information corresponding to this call
     void get_info(size_t id, size_t k, size_t n, size_t m)
     {     n_order_   = k + 1;
          nr_result_ = info_[id].nr_result;
          n_middle_  = info_[id].n_middle;
          nc_result_ = info_[id].nc_result;
          vx_        = &(info_[id].vx);

          assert(n == nr_result_ * n_middle_ + n_middle_ * nc_result_);
          assert(m ==  nr_result_ * nc_result_);
     }
12.8.11.5.1.e: Matrix Indexing
     // Convert left matrix index pair and order to a single argument index
     size_t left(size_t i, size_t j, size_t ell)
     {     assert( i < nr_result_ );
          assert( j < n_middle_ );
          return (i * n_middle_ + j) * n_order_ + ell;
     }
     // Convert right matrix index pair and order to a single argument index
     size_t right(size_t i, size_t j, size_t ell)
     {     assert( i < n_middle_ );
          assert( j < nc_result_ );
          size_t offset = nr_result_ * n_middle_;
          return (offset + i * nc_result_ + j) * n_order_ + ell;
     }
     // Convert result matrix index pair and order to a single result index
     size_t result(size_t i, size_t j, size_t ell)
     {     assert( i < nr_result_ );
          assert( j < nc_result_ );
          return (i * nc_result_ + j) * n_order_ + ell;
     }
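     // For example, if nr_result_ = 1, n_middle_ = 2, nc_result_ = 1 and
     // n_order_ = 2, the argument vector has (1*2 + 2*1) * 2 = 8 entries and
     //      left(0, 1, 1)  == (0 * 2 + 1) * 2 + 1 == 3
     //      right(1, 0, 0) == (2 + 1 * 1 + 0) * 2 + 0 == 6
     // i.e., all the orders for one matrix element are stored contiguously.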

12.8.11.5.1.f: One Matrix Multiply
Forward mode matrix multiply left times right and sum into result:
     void multiply_and_sum(
          size_t                order_left ,
          size_t                order_right,
          const vector<double>&         tx ,
          vector<double>&               ty )
     {     size_t i, j;
          size_t order_result = order_left + order_right;
          for(i = 0; i < nr_result_; i++)
          {     for(j = 0; j < nc_result_; j++)
               {     double sum = 0.;
                    size_t middle, im_left, mj_right, ij_result;
                    for(middle = 0; middle < n_middle_; middle++)
                    {     im_left  = left(i, middle, order_left);
                         mj_right = right(middle, j, order_right);
                         sum     += tx[im_left] * tx[mj_right];
                    }
                    ij_result = result(i, j, order_result);
                    ty[ ij_result ] += sum;
               }
          }
          return;
     }
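The sum above is one term of the Taylor coefficient convolution for a matrix product. In the notation below (introduced only for this discussion), @(@ X^{(\ell)} @)@ and @(@ Y^{(\ell)} @)@ denote the order @(@ \ell @)@ coefficients of the left and right matrices and @(@ Z = X Y @)@ is the result: @[@ Z_{i,j}^{(k)} = \sum_{\ell=0}^{k} \sum_{h} X_{i,h}^{(\ell)} Y_{h,j}^{(k-\ell)} @]@ One call to multiply_and_sum accumulates the terms with @(@ \ell = @)@ order_left .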

12.8.11.5.1.g: Reverse Partials One Order
Compute reverse mode partials for one order and sum into px:
     void reverse_multiply(
          size_t                order_left ,
          size_t                order_right,
          const vector<double>&         tx ,
          const vector<double>&         ty ,
          vector<double>&               px ,
          const vector<double>&         py )
     {     size_t i, j;
          size_t order_result = order_left + order_right;
          for(i = 0; i < nr_result_; i++)
          {     for(j = 0; j < nc_result_; j++)
               {     size_t middle, im_left, mj_right, ij_result;
                    for(middle = 0; middle < n_middle_; middle++)
                    {     ij_result = result(i, j, order_result);
                         im_left   = left(i, middle, order_left);
                         mj_right  = right(middle, j, order_right);
                         // sum       += tx[im_left]  * tx[mj_right];
                         px[im_left]  += tx[mj_right] * py[ij_result];
                         px[mj_right] += tx[im_left]  * py[ij_result];
                    }
               }
          }
          return;
     }
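Differentiating the convolution above, the partial of @(@ Z_{i,j}^{(k)} @)@ with respect to @(@ X_{i,h}^{(\ell)} @)@ is @(@ Y_{h,j}^{(k-\ell)} @)@ and the partial with respect to @(@ Y_{h,j}^{(k-\ell)} @)@ is @(@ X_{i,h}^{(\ell)} @)@. Hence reverse_multiply accumulates @[@ \bar{X}_{i,h}^{(\ell)} = \bar{X}_{i,h}^{(\ell)} + Y_{h,j}^{(k-\ell)} \bar{Z}_{i,j}^{(k)} \; , \;\;\; \bar{Y}_{h,j}^{(k-\ell)} = \bar{Y}_{h,j}^{(k-\ell)} + X_{i,h}^{(\ell)} \bar{Z}_{i,j}^{(k)} @]@ where a bar denotes the partial of the scalar function being differentiated with respect to the corresponding Taylor coefficient.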

12.8.11.5.1.h: Set Union

     using CppAD::set_union;

12.8.11.5.1.i: CppAD User Atomic Callback Functions
     // ----------------------------------------------------------------------
     // forward mode routine called by CppAD
     bool mat_mul_forward(
          size_t                   id ,
          size_t                    k ,
          size_t                    n ,
          size_t                    m ,
          const vector<bool>&      vx ,
          vector<bool>&            vy ,
          const vector<double>&    tx ,
          vector<double>&          ty
     )
     {     size_t i, j, ell;
          get_info(id, k, n, m);

          // check if this is during the call to mat_mul(id, ax, ay)
          if( vx.size() > 0 )
          {     assert( k == 0 && vx.size() > 0 );

               // store the vx information in info_
               assert( vx_->size() == 0 );
               info_[id].vx.resize(n);
               for(j = 0; j < n; j++)
                    info_[id].vx[j] = vx[j];
               assert( vx_->size() == n );

               // now compute vy
               for(i = 0; i < nr_result_; i++)
               {     for(j = 0; j < nc_result_; j++)
                    {     // compute vy[ result(i, j, 0) ]
                         bool   var = false;
                         bool   nz_left, nz_right;
                         size_t middle, im_left, mj_right, ij_result;
                         for(middle = 0; middle < n_middle_; middle++)
                         {     im_left  = left(i, middle, k);
                              mj_right = right(middle, j, k);
                              nz_left  = vx[im_left]  | (tx[im_left] != 0.);
                              nz_right = vx[mj_right] | (tx[mj_right]!= 0.);
                              // if not multiplying by the constant zero
                              if( nz_left & nz_right )
                                   var |= (vx[im_left] | vx[mj_right]);
                         }
                         ij_result     = result(i, j, k);
                         vy[ij_result] = var;
                    }
               }
          }

          // initialize result as zero
          for(i = 0; i < nr_result_; i++)
          {     for(j = 0; j < nc_result_; j++)
                    ty[ result(i, j, k) ] = 0.;
          }
          // sum the product of proper orders
          for(ell = 0; ell <=k; ell++)
               multiply_and_sum(ell, k-ell, tx, ty);

          // All orders are implemented and there are no possible error
          // conditions, so always return true.
          return true;
     }
     // ----------------------------------------------------------------------
     // reverse mode routine called by CppAD
     bool mat_mul_reverse(
          size_t                   id ,
          size_t                    k ,
          size_t                    n ,
          size_t                    m ,
          const vector<double>&    tx ,
          const vector<double>&    ty ,
          vector<double>&          px ,
          const vector<double>&    py
     )
     {     get_info(id, k, n, m);

          size_t ell = n * n_order_;
          while(ell--)
               px[ell] = 0.;

          size_t order = n_order_;
          while(order--)
          {     // reverse sum the products for specified order
               for(ell = 0; ell <=order; ell++)
                    reverse_multiply(ell, order-ell, tx, ty, px, py);
          }

          // All orders are implemented and there are no possible error
          // conditions, so always return true.
          return true;
     }

     // ----------------------------------------------------------------------
     // forward Jacobian sparsity routine called by CppAD
     bool mat_mul_for_jac_sparse(
          size_t                               id ,
          size_t                                n ,
          size_t                                m ,
          size_t                                p ,
          const vector< std::set<size_t> >&     r ,
          vector< std::set<size_t> >&           s )
     {     size_t i, j, k, im_left, middle, mj_right, ij_result;
          k = 0;
          get_info(id, k, n, m);

          for(i = 0; i < nr_result_; i++)
          {     for(j = 0; j < nc_result_; j++)
               {     ij_result = result(i, j, k);
                    s[ij_result].clear();
                    for(middle = 0; middle < n_middle_; middle++)
                    {     im_left   = left(i, middle, k);
                         mj_right  = right(middle, j, k);

                         // s[ij_result] = union( s[ij_result], r[im_left] )
                         s[ij_result] = set_union(s[ij_result], r[im_left]);

                         // s[ij_result] = union( s[ij_result], r[mj_right] )
                         s[ij_result] = set_union(s[ij_result], r[mj_right]);
                    }
               }
          }
          return true;
     }
     // ----------------------------------------------------------------------
     // reverse Jacobian sparsity routine called by CppAD
     bool mat_mul_rev_jac_sparse(
          size_t                               id ,
          size_t                                n ,
          size_t                                m ,
          size_t                                p ,
          vector< std::set<size_t> >&           r ,
          const vector< std::set<size_t> >&     s )
     {     size_t i, j, k, im_left, middle, mj_right, ij_result;
          k = 0;
          get_info(id, k, n, m);

          for(j = 0; j < n; j++)
               r[j].clear();

          for(i = 0; i < nr_result_; i++)
          {     for(j = 0; j < nc_result_; j++)
               {     ij_result = result(i, j, k);
                    for(middle = 0; middle < n_middle_; middle++)
                    {     im_left   = left(i, middle, k);
                         mj_right  = right(middle, j, k);

                         // r[im_left] = union( r[im_left], s[ij_result] )
                         r[im_left] = set_union(r[im_left], s[ij_result]);

                         // r[mj_right] = union( r[mj_right], s[ij_result] )
                         r[mj_right] = set_union(r[mj_right], s[ij_result]);
                    }
               }
          }
          return true;
     }
     // ----------------------------------------------------------------------
     // reverse Hessian sparsity routine called by CppAD
     bool mat_mul_rev_hes_sparse(
          size_t                               id ,
          size_t                                n ,
          size_t                                m ,
          size_t                                p ,
          const vector< std::set<size_t> >&     r ,
          const vector<bool>&                   s ,
          vector<bool>&                         t ,
          const vector< std::set<size_t> >&     u ,
          vector< std::set<size_t> >&           v )
     {     size_t i, j, k, im_left, middle, mj_right, ij_result;
          k = 0;
          get_info(id, k, n, m);

          for(j = 0; j < n; j++)
          {     t[j] = false;
               v[j].clear();
          }

          assert( vx_->size() == n );
          for(i = 0; i < nr_result_; i++)
          {     for(j = 0; j < nc_result_; j++)
               {     ij_result = result(i, j, k);
                    for(middle = 0; middle < n_middle_; middle++)
                    {     im_left   = left(i, middle, k);
                         mj_right  = right(middle, j, k);

                         // back propagate Jacobian sparsity
                         t[im_left]   = (t[im_left] | s[ij_result]);
                         t[mj_right]  = (t[mj_right] | s[ij_result]);
                         // Visual Studio C++ 2008 warns unsafe mix of int and
                         // bool if we use the following code directly above:
                         // t[im_left]  |= s[ij_result];
                         // t[mj_right] |= s[ij_result];

                         // back propagate Hessian sparsity
                         // v[im_left]  = union( v[im_left],  u[ij_result] )
                         // v[mj_right] = union( v[mj_right], u[ij_result] )
                         v[im_left] = set_union(v[im_left],  u[ij_result] );
                         v[mj_right] = set_union(v[mj_right], u[ij_result] );

                         // Check for case where the (i,j) result element
                         // is in reverse Jacobian and both left and right
                         // operands in multiplication are variables
                         if(s[ij_result] & (*vx_)[im_left] & (*vx_)[mj_right])
                         {     // v[im_left] = union( v[im_left], r[mj_right] )
                              v[im_left] = set_union(v[im_left], r[mj_right] );
                              // v[mj_right] = union( v[mj_right], r[im_left] )
                              v[mj_right] = set_union(v[mj_right], r[im_left] );
                         }
                    }
               }
          }
          return true;
     }

12.8.11.5.1.j: Declare mat_mul Function
Declare the AD<double> routine mat_mul(id, ax, ay) and end the empty namespace (we could use any 8.9: simple vector template class instead of CppAD::vector):
     CPPAD_USER_ATOMIC(
          mat_mul                 ,
          CppAD::vector           ,
          double                  ,
          mat_mul_forward         ,
          mat_mul_reverse         ,
          mat_mul_for_jac_sparse  ,
          mat_mul_rev_jac_sparse  ,
          mat_mul_rev_hes_sparse
     )
} // End empty namespace

Input File: example/deprecated/old_mat_mul.hpp
12.8.12: zdouble: An AD Base Type With Absolute Zero

12.8.12.a: Deprecated 2015-09-26
Use the function 4.4.3.3: azmul instead.

12.8.12.b: Absolute Zero
The zdouble class acts like the double type with the added property that zero times any value is zero. This includes zero times 8.11: nan and zero times infinity. In addition, zero divided by any value and any value times zero are also zero.
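For example, the following function (a minimal sketch; it is not part of the CppAD distribution or its tests) returns true if and only if the absolute zero property holds for two cases that are nan for the double type:
     # include <cppad/cppad.hpp>
     # include <limits>
     bool zdouble_absolute_zero(void)
     {    bool ok = true;
          CppAD::zdouble zero = 0.0;
          double         inf  = std::numeric_limits<double>::infinity();
          // for double, 0.0 * inf and 0.0 / 0.0 are nan;
          // for zdouble both results are zero
          ok &= zero * inf == 0.0;
          ok &= zero / 0.0 == 0.0;
          return ok;
     }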

12.8.12.c: Syntax

12.8.12.c.a: Constructor and Assignment
    zdouble z
    zdouble z(x)
    z1 op x
where x is a double or zdouble object and op is =, +=, -=, *= or /=.

12.8.12.c.b: Comparison Operators
    b = z op x
    b = x op z
where b is a bool object, z is a zdouble object, x is a double or zdouble object, and op is ==, !=, <=, >=, < or >.

12.8.12.c.c: Arithmetic Operators
    z2 = z1 op x
    z2 = x op z1
where z1 , z2 are zdouble objects, x is a double or zdouble object, and op is +, -, * or /.

12.8.12.c.d: Standard Math
    z2 = fun(z1)
    z3 = pow(z1, z2)
where z1 , z2 , z3 are zdouble objects and fun is a 4.4.2: unary_standard_math function.

12.8.12.c.e: Nan
There is a specialization of 8.11: nan so that
    z2 = nan(z1)
returns 'not a number' when z1 has type zdouble. Note that this template function needs to be specialized because zdouble(0.0) == zdouble(0.0) / zdouble(0.0).

12.8.12.d: Motivation

12.8.12.d.a: General
Often during computation (and more so in parallel computing), alternative values for an expression are computed and one of the alternatives is chosen using a boolean variable. This is often represented by
     result = flag * value_if_true + (1 - flag) * value_if_false
where flag is one for true and zero for false. This representation does not work for double when the value being multiplied by zero is +inf, -inf, or nan.

12.8.12.d.b: CppAD
In CppAD one can use 4.4.4: conditional expressions to achieve the representation
     result = flag * value_if_true + (1 - flag) * value_if_false
This works fine except when there are 10.2.10: multiple levels of AD ; e.g., when using AD< AD<double> > . In this case the corresponding AD function objects have type 5.1.2: ADFun< AD<double> > . When these AD function objects compute derivatives using 5.4: reverse mode, the conditional expressions are represented using zeros to multiply the expression that is not used. Using AD< AD<zdouble> > instead of AD< AD<double> > makes this representation work and fixes the problem.

12.8.12.e: Base Type Requirements
The type zdouble satisfies all of the CppAD 4.7: base type requirements .

12.8.12.f: Example
The file 12.8.12.1: zdouble.cpp contains an example and test of this class. It returns true if it succeeds and false otherwise.
Input File: cppad/core/zdouble.hpp
12.8.12.1: zdouble: Example and Test
# include <cppad/cppad.hpp>

namespace {
     template <class Base> bool test_one(void)
     {     bool ok = true;
          Base eps99 = 99. * std::numeric_limits<double>::epsilon();

          typedef CppAD::AD<Base>   a1type;
          typedef CppAD::AD<a1type> a2type;

          // value during taping
          size_t n = 2;
          CPPAD_TESTVECTOR(Base) x(n);
          x[0] = 0.0;
          x[1] = 0.0;

          // declare independent variable
          CPPAD_TESTVECTOR(a2type) a2x(n);
          for (size_t j = 0; j < n; j++)
               a2x[j] = a2type( a1type(x[j]) );
          Independent(a2x);

          // zero and one as a2type values
          a2type a2zero = a2type(0.0);
          a2type a2one  = a2type(1.0);

          // h(x) = x[0] / x[1] if x[1] > x[0] else 1.0
          a2type h_x = CondExpGt(a2x[1], a2x[0], a2x[0] / a2x[1], a2one);

          // f(x) = h(x) if x[0] > 0.0 else 0.0
          //      = x[0] / x[1] if x[1] > x[0]  and x[0] > 0.0
          //      = 1.0         if x[0] >= x[1] and x[0] > 0.0
          //      = 0.0         if x[0] <= 0.0
          a2type f_x = CondExpGt(a2x[0], a2zero, h_x, a2one);

          // define the function f(x)
          size_t m = 1;
          CPPAD_TESTVECTOR(a2type) a2y(m);
          a2y[0] = f_x;
          CppAD::ADFun<a1type> af1;
          af1.Dependent(a2x, a2y);

          // Define function g(x) = gradient of f(x)
          CPPAD_TESTVECTOR(a1type) a1x(n), a1z(n), a1w(m);
          for (size_t j = 0; j < n; j++)
               a1x[j] = a1type(x[j]);
          a1w[0] = a1type(1.0);
          Independent(a1x);
          af1.Forward(0, a1x);
          a1z = af1.Reverse(1, a1w);
          CppAD::ADFun<Base> g;
          g.Dependent(a1x, a1z);

          // check result for a case where f(x) = 0.0;
          CPPAD_TESTVECTOR(Base) z(2);
          x[0] = 0.0;
          x[1] = 0.0;
          z    = g.Forward(0, x);
          ok &= z[0] == 0.0;
          ok &= z[1] == 0.0;

          // check result for a case where f(x) = 1.0;
          x[0] = 1.0;
          x[1] = 0.5;
          z    = g.Forward(0, x);
          ok &= z[0] == 0.0;
          ok &= z[1] == 0.0;

          // check result for a case where f(x) = x[0] / x[1];
          x[0] = 1.0;
          x[1] = 2.0;
          z    = g.Forward(0, x);
          ok &= CppAD::NearEqual(z[0], 1.0/x[1], eps99, eps99);
          ok &= CppAD::NearEqual(z[1], - x[0]/(x[1]*x[1]), eps99, eps99);

          return ok;
     }
     bool test_two(void)
     {     bool ok = true;
          using CppAD::zdouble;
          //
          zdouble eps = CppAD::numeric_limits<zdouble>::epsilon();
          ok          &= eps == std::numeric_limits<double>::epsilon();
          //
          zdouble min = CppAD::numeric_limits<zdouble>::min();
          ok          &= min == std::numeric_limits<double>::min();
          //
          zdouble max = CppAD::numeric_limits<zdouble>::max();
          ok          &= max == std::numeric_limits<double>::max();
          //
          zdouble nan = CppAD::numeric_limits<zdouble>::quiet_NaN();
          ok          &= nan != nan;
          //
          int digits10 = CppAD::numeric_limits<zdouble>::digits10;
          ok          &= digits10 == std::numeric_limits<double>::digits10;
          //
          return ok;
     }
}

bool zdouble(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     using CppAD::zdouble;
     //
     ok &= test_one<zdouble>();
     ok &= test_one<double>();
     //
     ok &= test_two();
     //
     return ok;
}

Input File: example/deprecated/zdouble.cpp
12.8.13: Autotools Unix Test and Installation

12.8.13.a: Deprecated 2012-12-26
This install procedure has been deprecated and no improvements have been added since 2012. For example, this install procedure will not detect any of the c++11 extensions. You should use the 2.2: cmake instructions to install CppAD.

12.8.13.b: Distribution Directory
You must first obtain a copy of the CppAD distribution directory using the 2.1: download instructions. We refer to the corresponding 2.1.b: distribution directory as dist_dir .

12.8.13.c: Build Directory
Create the directory dist_dir/build , which will be referred to as the build directory below.

12.8.13.d: Configure
Execute the following command in the build directory:
./configure                                  \
     --prefix=prefix_dir                     \
     --with-Documentation                    \
     --with-testvector                       \
     MAX_NUM_THREADS=max_num_threads         \
     CXX_FLAGS=cxx_flags                     \
     OPENMP_FLAGS=openmp_flags               \
     POSTFIX_DIR=postfix_dir                 \
     ADOLC_DIR=adolc_dir                     \
     BOOST_DIR=boost_dir                     \
     EIGEN_DIR=eigen_dir                     \
     FADBAD_DIR=fadbad_dir                   \
     SACADO_DIR=sacado_dir                   \
     IPOPT_DIR=ipopt_dir                     \
     TAPE_ADDR_TYPE=tape_addr_type           \
     TAPE_ID_TYPE=tape_id_type
where only the configure line need appear; i.e., the entries on all of the other lines are optional. The text in italics above is replaced by values that you choose (see the discussion below).
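For instance (this particular choice is only an illustration, not a recommendation), a minimal configuration that installs below $HOME/cppad and adds warning flags is
     ./configure --prefix=$HOME/cppad CXX_FLAGS="-Wall"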

12.8.13.e: make
The following command, in the build directory, copies the file configure.hpp from the build to the source directory and then builds some object libraries that are used by the tests:
     make

12.8.13.e.a: Examples and Tests
Once you have executed the make command, you can run the correctness and speed tests. The following command will build and run all the correctness and speed tests.
 
     make test

12.8.13.f: Profiling CppAD
The CppAD derivative speed tests mentioned above can be profiled. You can test that the results computed during this profiling are correct by executing the following commands starting in the build directory:
     cd speed/profile
     make test
After executing make test, you can run a profile speed test by executing the command
     ./speed_profile test seed option_list
see 11.1: speed_main for the meaning of the command line arguments to this program. After you have run a profiling speed test, you can then obtain the profiling results with
     gprof -b speed_profile
In C++, template parameters and argument types become part of a routine's name. This can make the gprof output hard to read (the routine names can be very long). You can remove the template parameters and argument types from the routine names by executing the following command:
 
     gprof -b speed_profile | sed -f gprof.sed

12.8.13.g: prefix_dir
The default value for the prefix directory is $HOME; i.e., by default the CppAD include files will 12.8.13.v: install below $HOME. If you want to install elsewhere, you will have to use this option. As an example of using the --prefix=prefix_dir option, if you specify
 
     ./configure --prefix=/usr/local
the CppAD include files will be installed in the directory
     /usr/local/include/cppad
If 12.8.13.h: --with-Documentation is specified, the CppAD documentation files will be installed in the directory
     /usr/local/share/doc/cppad-yyyymmdd
where yyyymmdd is the year, month, and day corresponding to the version of CppAD.

12.8.13.h: --with-Documentation
If the command line argument --with-Documentation is specified, the CppAD documentation HTML and XML files are copied to the directory
     prefix_dir/share/doc/postfix_dir/cppad-yyyymmdd
(see 12.8.13.m: postfix_dir ). The top of the CppAD HTML documentation tree (with mathematics displayed using LaTeX) will be located at
     prefix_dir/share/doc/postfix_dir/cppad-yyyymmdd/cppad.htm

12.8.13.i: --with-testvector
The 10.5: CPPAD_TESTVECTOR template class is used for many of the CppAD examples and tests. The default for this template class is CppAD::vector<Scalar> . If one, and only one, of the following command line arguments is specified:
 
     --with-stdvector
     --with-boostvector
     --with-eigenvector
the corresponding one of the following template classes is used:
     std::vector<Scalar>
     boost::numeric::ublas::vector<Scalar>
     Eigen::Matrix<Scalar, Eigen::Dynamic, 1>
See also, 12.8.13.o: boost_dir and 12.8.13.p: eigen_dir .

12.8.13.j: max_num_threads
This specifies the default value for the preprocessor symbol 7.b: CPPAD_MAX_NUM_THREADS . It must be greater than or equal to four; i.e., max_num_threads >= 4 .

12.8.13.k: cxx_flags
If the command line argument cxx_flags is present, it specifies compiler flags. For example,
     CXX_FLAGS="-Wall -ansi"
would specify that the warning flags -Wall and -ansi should be included in all the C++ compile commands. The error and warning flags chosen must be valid options for the C++ compiler. The default value for cxx_flags is the empty string.

12.8.13.l: openmp_flags
If the command line argument openmp_flags is present, it specifies the necessary flags so that the compiler will properly interpret OpenMP directives. For example, when using the GNU g++ compiler, the following setting includes the OpenMP tests:
     OPENMP_FLAGS=-fopenmp
If you specify this option on the configure command line, the CppAD OpenMP correctness and speed tests will be built; see the 7.2.c: threading multi-threading tests.

12.8.13.m: postfix_dir
By default, the postfix directory is empty; i.e., there is no postfix directory. As an example of using the POSTFIX_DIR=postfix_dir option, if you specify
 
     ./configure --prefix=/usr/local POSTFIX_DIR=coin
the CppAD include files will be 12.8.13.v: installed in the directory
     /usr/local/include/coin/cppad
If 12.8.13.h: --with-Documentation is specified, the CppAD documentation files will be installed in the directory
     /usr/local/share/doc/coin/cppad-yyyymmdd

12.8.13.n: adolc_dir
If you have ADOL-C (https://projects.coin-or.org/ADOL-C) installed on your system, you can specify a value for adolc_dir in the 12.8.13.d: configure command line. The value of adolc_dir must be such that
     adolc_dir/include/adolc/adouble.h
is a valid way to reference adouble.h. In this case, you can run the Adolc speed correctness tests by executing the following commands starting in the build directory:
     cd speed/adolc
     make test
After executing make test, you can run the Adolc speed tests by executing the command ./adolc; see 11.1: speed_main for the meaning of the command line options to this program. Note that these speed tests assume Adolc has been configured with its sparse matrix computations enabled using
     --with-colpack=colpack_dir

12.8.13.n.a: Linux
If you are using Linux, you will have to add adolc_dir/lib to LD_LIBRARY_PATH. For example, if you use the bash shell to run your programs, you could include
     LD_LIBRARY_PATH=adolc_dir/lib:${LD_LIBRARY_PATH}
     export LD_LIBRARY_PATH
in your $HOME/.bash_profile file.

12.8.13.n.b: Cygwin
If you are using Cygwin, you will have to add the following lines to the file .bash_profile in your home directory:
     PATH=adolc_dir/bin:${PATH}
     export PATH
in order for Adolc to run properly. If adolc_dir begins with a disk specification, you must use the Cygwin format for the disk specification. For example, if d:/adolc_base is the proper directory, /cygdrive/d/adolc_base should be used for adolc_dir .

12.8.13.o: boost_dir
If the command line argument
     BOOST_DIR=boost_dir
is present, it must be such that the files
     boost_dir/include/boost/numeric/ublas/vector.hpp
     boost_dir/include/boost/thread.hpp
are present. In this case, these files will be used by CppAD. See also, 12.8.13.i: --with-boostvector .

12.8.13.p: eigen_dir
If you have Eigen (http://eigen.tuxfamily.org) installed on your system, you can specify a value for eigen_dir . It must be such that
     eigen_dir/include/Eigen/Core
is a valid include file. In this case CppAD will compile and test the Eigen examples; e.g., 10.2.4.2: eigen_array.cpp . See also, 12.8.13.i: --with-eigenvector .

12.8.13.q: fadbad_dir
If you have Fadbad 2.1 (http://www.fadbad.com/) installed on your system, you can specify a value for fadbad_dir . It must be such that
     fadbad_dir/include/FADBAD++/badiff.h
is a valid reference to badiff.h. In this case, you can run the Fadbad speed correctness tests by executing the following commands starting in the build directory:
     cd speed/fadbad
     make test
After executing make test, you can run the Fadbad speed tests by executing the command ./fadbad; see 11.1: speed_main for the meaning of the command line options to this program.

12.8.13.r: ipopt_dir
If you have Ipopt (http://www.coin-or.org/projects/Ipopt.xml) installed on your system, you can specify a value for ipopt_dir . It must be such that
     ipopt_dir/include/coin/IpIpoptApplication.hpp
is a valid reference to IpIpoptApplication.hpp. In this case, the CppAD interface to Ipopt 12.8.10.u: examples can be built and tested by executing the following commands starting in the build directory:
     make
     #
     cd cppad_ipopt/example
     make test
     #
     cd ../test
     make test
     #
     cd ../speed
     make test
Once this has been done, you can execute the program ./speed in the build/cppad_ipopt/speed directory; see 12.8.10.3: ipopt_ode_speed.cpp .

12.8.13.s: sacado_dir
If you have Sacado (http://trilinos.sandia.gov/packages/sacado/) installed on your system, you can specify a value for sacado_dir . It must be such that
     sacado_dir/include/Sacado.hpp
is a valid reference to Sacado.hpp. In this case, you can run the Sacado speed correctness tests by executing the following commands starting in the build directory:
     cd speed/sacado
     make test
After executing make test, you can run the Sacado speed tests by executing the command ./sacado; see 11.1: speed_main for the meaning of the command line options to this program.

12.8.13.t: tape_addr_type
If the command line argument tape_addr_type is present, it specifies the type used for addresses in the AD recordings (tapes). The valid values for this argument are unsigned short int, unsigned int, and size_t. The smaller the value of sizeof(tape_addr_type) , the less memory is used. On the other hand, the value
     std::numeric_limits<tape_addr_type>::max()
must be larger than any of the following: 5.1.5.i: size_op , 5.1.5.j: size_op_arg , 5.1.5.k: size_par , size_text , size_VecAD .

12.8.13.u: tape_id_type
If the command line argument tape_id_type is present, it specifies the type used for identifying tapes. The valid values for this argument are unsigned short int, unsigned int, and size_t. The smaller the value of sizeof(tape_id_type) , the less memory is used. On the other hand, the value
     std::numeric_limits<tape_id_type>::max()
must be larger than the maximum number of tapes per thread times 12.8.13.j: max_num_threads .

12.8.13.v: make install
Once you are satisfied that the tests are giving correct results, you can install CppAD into easy to use directories by executing the command
 
     make install
This will install CppAD in the location specified by 12.8.13.g: prefix_dir . You must have permission to write in the prefix_dir directory to execute this command. You may optionally specify a destination directory for the install; i.e.,
     make install DESTDIR=DestinationDirectory

Input File: omh/install/autotools.omh
12.9: Compare Speed of C and C++

12.9.a: Syntax
test_more/compare_c/det_by_minor_c
test_more/compare_c/det_by_minor_cpp

12.9.b: Purpose
Compares the speed of the exact same source code compiled using C versus C++.

12.9.c: Contents
det_of_minor_c: 12.9.1: Determinant of a Minor
det_by_minor_c: 12.9.2: Compute Determinant using Expansion by Minors
uniform_01_c: 12.9.3: Simulate a [0,1] Uniform Random Variate
correct_det_by_minor_c: 12.9.4: Correctness Test of det_by_minor Routine
repeat_det_by_minor_c: 12.9.5: Repeat det_by_minor Routine A Specified Number of Times
elapsed_seconds_c: 12.9.6: Returns Elapsed Number of Seconds
time_det_by_minor_c: 12.9.7: Determine Amount of Time to Execute det_by_minor
main_compare_c: 12.9.8: Main Program For Comparing C and C++ Speed

Input File: test_more/compare_c/CMakeLists.txt
12.9.1: Determinant of a Minor

12.9.1.a: Syntax
d = det_of_minor(a, m, n, r, c)

12.9.1.b: Purpose
returns the determinant of a minor of the matrix @(@ A @)@ using expansion by minors. The elements of the @(@ n \times n @)@ minor @(@ M @)@ of the matrix @(@ A @)@ are defined, for @(@ i = 0 , \ldots , n-1 @)@ and @(@ j = 0 , \ldots , n-1 @)@, by @[@ M_{i,j} = A_{R(i), C(j)} @]@ where the function @(@ R(i) @)@ is defined by the 11.2.2.h: argument r and @(@ C(j) @)@ is defined by the 11.2.2.i: argument c .

This function is for example and testing purposes only. Expansion by minors is chosen as an example because it uses a lot of floating point operations yet does not require much source code (on the order of m factorial floating point operations and about 70 lines of source code including comments). This is not an efficient method for computing a determinant; for example, using an LU factorization would be better.

12.9.1.c: Determinant of A
If the following conditions hold, the minor is the entire matrix @(@ A @)@ and hence det_of_minor will return the determinant of @(@ A @)@:
  1. @(@ n = m @)@.
  2. for @(@ i = 0 , \ldots , m-1 @)@, @(@ r[i] = i+1 @)@, and @(@ r[m] = 0 @)@.
  3. for @(@ j = 0 , \ldots , m-1 @)@, @(@ c[j] = j+1 @)@, and @(@ c[m] = 0 @)@.


12.9.1.d: a
The argument a has prototype
     const double* a
and is a vector with size @(@ m * m @)@. The elements of the @(@ m \times m @)@ matrix @(@ A @)@ are defined, for @(@ i = 0 , \ldots , m-1 @)@ and @(@ j = 0 , \ldots , m-1 @)@, by @[@ A_{i,j} = a[ i * m + j] @]@

12.9.1.e: m
The argument m has prototype
     size_t m
and is the size of the square matrix @(@ A @)@.

12.9.1.f: n
The argument n has prototype
     size_t n
and is the size of the square minor @(@ M @)@.

12.9.1.g: r
The argument r has prototype
     size_t* r
and is a vector with @(@ m + 1 @)@ elements. This vector defines the function @(@ R(i) @)@ which specifies the rows of the minor @(@ M @)@. To be specific, the function @(@ R(i) @)@ for @(@ i = 0, \ldots , n-1 @)@ is defined by @[@ \begin{array}{rcl} R(0) & = & r[m] \\ R(i+1) & = & r[ R(i) ] \end{array} @]@ All the elements of r must have value less than or equal to m . The elements of vector r are modified during the computation, and restored to their original value before the return from det_of_minor.
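For example, if @(@ m = 3 @)@ and the elements of r are @(@ (1, 2, 3, 0) @)@, then @(@ R(0) = r[3] = 0 @)@, @(@ R(1) = r[ R(0) ] = r[0] = 1 @)@, and @(@ R(2) = r[ R(1) ] = r[1] = 2 @)@; i.e., the minor uses all the rows of @(@ A @)@ in order (this is the setting used by 12.9.2: det_by_minor ).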

12.9.1.h: c
The argument c has prototype
     size_t* c
and is a vector with @(@ m + 1 @)@ elements. This vector defines the function @(@ C(j) @)@ which specifies the columns of the minor @(@ M @)@. To be specific, the function @(@ C(j) @)@ for @(@ j = 0, \ldots , n-1 @)@ is defined by @[@ \begin{array}{rcl} C(0) & = & c[m] \\ C(j+1) & = & c[ C(j) ] \end{array} @]@ All the elements of c must have value less than or equal to m . The elements of vector c are modified during the computation, and restored to their original value before the return from det_of_minor.

12.9.1.i: d
The result d has prototype
     double d
and is equal to the determinant of the minor @(@ M @)@.

12.9.1.j: Source Code
double det_of_minor(
     const double*        a  ,
     size_t               m  ,
     size_t               n  ,
     size_t*              r  ,
     size_t*              c  )
{     size_t R0, Cj, Cj1, j;
     double detM, M0j, detS;
     int s;

     R0 = r[m]; /* R(0) */
     Cj = c[m]; /* C(j)    (case j = 0) */
     Cj1 = m;   /* C(j-1)  (case j = 0) */

     /* check for 1 by 1 case */
     if( n == 1 ) return a[ R0 * m + Cj ];

     /* initialize determinant of the minor M */
     detM = 0.;

     /* initialize sign of factor for next sub-minor */
     s = 1;

     /* remove row with index 0 in M from all the sub-minors of M */
     r[m] = r[R0];

     /* for each column of M */
     for(j = 0; j < n; j++)
     {     /* element with index (0,j) in the minor M */
          M0j = a[ R0 * m + Cj ];

          /* remove column with index j in M to form next sub-minor S of M */
          c[Cj1] = c[Cj];

          /* compute determinant of the current sub-minor S */
          detS = det_of_minor(a, m, n - 1, r, c);

          /* restore column Cj to representation of M as a minor of A */
          c[Cj1] = Cj;

          /* include this sub-minor term in the summation */
          if( s > 0 )
               detM = detM + M0j * detS;
          else     detM = detM - M0j * detS;

          /* advance to next column of M */
          Cj1 = Cj;
          Cj  = c[Cj];
          s   = - s;
     }

     /* restore row zero to the minor representation for M */
     r[m] = R0;

     /* return the determinant of the minor M */
     return detM;
}

Input File: test_more/compare_c/det_by_minor.c
12.9.2: Compute Determinant using Expansion by Minors

12.9.2.a: Syntax
d = det_by_minor(a, m)

12.9.2.b: Purpose
returns the determinant of the matrix @(@ A @)@ using expansion by minors. The elements of the @(@ n \times n @)@ minor @(@ M @)@ of the matrix @(@ A @)@ are defined, for @(@ i = 0 , \ldots , n-1 @)@ and @(@ j = 0 , \ldots , n-1 @)@, by @[@ M_{i,j} = A_{i, j} @]@

12.9.2.c: a
The argument a has prototype
     const double* a
and is a vector with size @(@ m * m @)@. The elements of the @(@ m \times m @)@ matrix @(@ A @)@ are defined, for @(@ i = 0 , \ldots , m-1 @)@ and @(@ j = 0 , \ldots , m-1 @)@, by @[@ A_{i,j} = a[ i * m + j] @]@

12.9.2.d: m
The argument m has prototype
     size_t m
and is the number of rows (and columns) in the square matrix @(@ A @)@.

12.9.2.e: Source Code
double det_by_minor(double* a, size_t m)
{     size_t *r, *c, i;
     double value;

     r = (size_t*) malloc( (m+1) * sizeof(size_t) );
     c = (size_t*) malloc( (m+1) * sizeof(size_t) );

     assert(m <= 100);
     for(i = 0; i < m; i++)
     {     r[i] = i+1;
          c[i] = i+1;
     }
     r[m] = 0;
     c[m] = 0;

     value = det_of_minor(a, m, m, r, c);

     free(r);
     free(c);
     return value;
}

Input File: test_more/compare_c/det_by_minor.c
12.9.3: Simulate a [0,1] Uniform Random Variate

12.9.3.a: Syntax
random_seed(seed)
uniform_01(n, a)

12.9.3.b: Purpose
This routine is used to create random values for speed testing purposes.

12.9.3.c: seed
The argument seed has prototype
     size_t seed
It specifies a seed for the uniform random number generator.

12.9.3.d: n
The argument n has prototype
     size_t n
It specifies the number of elements in the random vector a .

12.9.3.e: a
The argument a has prototype
     double* a
The input value of the elements of a does not matter. Upon return, the elements of a are set to values randomly sampled over the interval [0,1].

12.9.3.f: Source Code
void random_seed(size_t seed)
{     srand( (unsigned int) seed );
}
void uniform_01(size_t n, double* a)
{     static double factor = 1. / (double) RAND_MAX;
     while(n--)
          a[n] = rand() * factor;
}

Input File: test_more/compare_c/det_by_minor.c
12.9.4: Correctness Test of det_by_minor Routine

12.9.4.a: Syntax
flag = correct_det_by_minor()

12.9.4.b: flag
The return value has prototype
     bool flag
Its value is true if the test passes and false otherwise.

12.9.4.c: Source Code
bool correct_det_by_minor(void)
{     double a[9], det, check;

     random_seed(123);
     uniform_01(9, a);

     /* compute determinant using expansion by minors */
     det = det_by_minor(a, 3);

     /* use expansion by minors to hand code the determinant  */
     check = 0.;
     check += a[0] * ( a[4] * a[8] - a[5] * a[7] );
     check -= a[1] * ( a[3] * a[8] - a[5] * a[6] );
     check += a[2] * ( a[3] * a[7] - a[4] * a[6] );

     double eps99 = 99.0 * DBL_EPSILON;
     if( fabs(det / check - 1.0) < eps99 )
          return true;
     return false;
}

Input File: test_more/compare_c/det_by_minor.c
12.9.5: Repeat det_by_minor Routine A Specified Number of Times

12.9.5.a: Syntax
repeat_det_by_minor(repeat, size)

12.9.5.b: repeat
The argument has prototype
     size_t repeat
It specifies the number of times to repeat the calculation.

12.9.5.c: size
The argument has prototype
     size_t size
It specifies the number of rows (and columns) in the square matrix we are computing the determinant of.

12.9.5.d: Source Code
void repeat_det_by_minor(size_t repeat, size_t size)
{     double *a;
     a = (double*) malloc( (size * size) * sizeof(double) );

     while(repeat--)
     {     uniform_01(size * size, a);
          det_by_minor(a, size);
     }

     free(a);
     return;
}

Input File: test_more/compare_c/det_by_minor.c
12.9.6: Returns Elapsed Number of Seconds

12.9.6.a: Syntax
s = elapsed_seconds()

12.9.6.b: Purpose
This routine is accurate to within .02 seconds. It does not necessarily work for time intervals that are greater than a day.

12.9.6.c: s
is a double equal to the number of seconds since the first call to elapsed_seconds.

12.9.6.d: Source Code
# if _MSC_VER
// ---------------------------------------------------------------------------
// Microsoft version of timer
# include <windows.h>
# include <cassert>
double elapsed_seconds(void)
{     static bool       first_  = true;
     static SYSTEMTIME st_;
     double hour, minute, second, milli, diff;
     SYSTEMTIME st;

     if( first_ )
     {     GetSystemTime(&st_);
          first_ = false;
          return 0.;
     }
     GetSystemTime(&st);

     hour   = (double) st.wHour         - (double) st_.wHour;
     minute = (double) st.wMinute       - (double) st_.wMinute;
     second = (double) st.wSecond       - (double) st_.wSecond;
     milli  = (double) st.wMilliseconds - (double) st_.wMilliseconds;

     diff   = 1e-3*milli + second + 60.*minute + 3600.*hour;
     if( diff < 0. )
          diff += 3600.*24.;
     assert( 0 <= diff && diff < 3600.*24. );

     return diff;
}
# else
// ---------------------------------------------------------------------------
// Unix version of timer
# include <sys/time.h>
double elapsed_seconds(void)
{     double sec, usec, diff;

     static bool first_ = true;
     static struct timeval tv_first;
     struct timeval        tv;
     if( first_ )
     {     gettimeofday(&tv_first, NULL);
          first_ = false;
          return 0.;
     }
     gettimeofday(&tv, NULL);
     assert( tv.tv_sec >= tv_first.tv_sec );

     sec  = (double)(tv.tv_sec -  tv_first.tv_sec);
     usec = (double)tv.tv_usec - (double)tv_first.tv_usec;
     diff = sec + 1e-6*usec;

     return diff;
}
# endif

Input File: test_more/compare_c/det_by_minor.c
12.9.7: Determine Amount of Time to Execute det_by_minor

12.9.7.a: Syntax
time = time_det_by_minor(size, time_min)

12.9.7.b: Purpose
reports the amount of wall clock time for det_by_minor to compute the determinant of a square matrix. The argument size has prototype
     size_t size
It specifies the number of rows (and columns) in the square matrix that the determinant is being calculated for.

12.9.7.c: time_min
The argument time_min has prototype
     double time_min
It specifies the minimum amount of time in seconds that the test routine should take. The calculation is repeated as many times as necessary so that this amount of execution time (or more) is reached.

12.9.7.d: time
The return value time has prototype
     double time
and is the number of wall clock seconds that it took for det_by_minor to compute its determinant (plus overhead which includes choosing a random matrix).

12.9.7.e: Source Code
double time_det_by_minor(size_t size, double time_min)
{     size_t repeat;
     double s0, s1, time;
     repeat = 0;
     s0     = elapsed_seconds();
     s1     = s0;
     while( s1 - s0 < time_min )
     {     if( repeat == 0 )
               repeat = 1;
          else     repeat = 2 * repeat;
          s0     = elapsed_seconds();
          repeat_det_by_minor(repeat, size);
          s1     = elapsed_seconds();
     }
     time = (s1 - s0) / (double) repeat;
     return time;
}

Input File: test_more/compare_c/det_by_minor.c
12.9.8: Main Program For Comparing C and C++ Speed

12.9.8.a: Source Code
int main(void)
{     bool flag;
     size_t i;

     random_seed(123);

     printf("correct_det_by_minor: ");
     flag = correct_det_by_minor();
     if( flag )
          printf("OK\n");
     else     printf("Error\n");

     for(i = 0; i < 5; i++)
     {     double time_min = 1.0;
          size_t size     = 2 + i * 2;
          int   i_size    = (int) size;
          printf("time_det_minor for %d x %d matrix = ", i_size, i_size);
          printf("%g\n", time_det_by_minor(size, time_min) );
     }

     if( flag )
          return 0;
     return 1;
}

Input File: test_more/compare_c/det_by_minor.c
12.10: Some Numerical AD Utilities
The routines listed below are numerical utilities that are designed to work with CppAD in particular.

12.10.a: Contents
BenderQuad: 12.10.1: Computing Jacobian and Hessian of Bender's Reduced Objective
opt_val_hes: 12.10.2: Jacobian and Hessian of Optimal Values
LuRatio: 12.10.3: LU Factorization of A Square Matrix and Stability Calculation

Input File: omh/appendix/numeric_ad.omh
12.10.1: Computing Jacobian and Hessian of Bender's Reduced Objective

12.10.1.a: Syntax

# include <cppad/cppad.hpp>
BenderQuad(x, y, fun, g, gx, gxx)


12.10.1.b: See Also
12.10.2: opt_val_hes

12.10.1.c: Problem
The type 12.10.1.l: ADvector cannot be determined from the arguments above (currently the type ADvector must be CPPAD_TESTVECTOR(Base) ). This will be corrected in the future by requiring Fun to define Fun::vector_type which will specify the type ADvector .

12.10.1.d: Purpose
We are given the optimization problem @[@ \begin{array}{rcl} {\rm minimize} & F(x, y) & {\rm w.r.t.} \; (x, y) \in \B{R}^n \times \B{R}^m \end{array} @]@ that is convex with respect to @(@ y @)@. In addition, we are given a set of equations @(@ H(x, y) @)@ such that @[@ H[ x , Y(x) ] = 0 \;\; \Rightarrow \;\; F_y [ x , Y(x) ] = 0 @]@ (In fact, it is often the case that @(@ H(x, y) = F_y (x, y) @)@.) Furthermore, it is easy to calculate a Newton step for these equations; i.e., @[@ dy = - [ \partial_y H(x, y)]^{-1} H(x, y) @]@ The purpose of this routine is to compute the value, Jacobian, and Hessian of the reduced objective function @[@ G(x) = F[ x , Y(x) ] @]@ Note that if only the value and Jacobian are needed, they can be computed more quickly using the relations @[@ G^{(1)} (x) = \partial_x F [x, Y(x) ] @]@

12.10.1.e: x
The BenderQuad argument x has prototype
     const BAvector &x
(see 12.10.1.k: BAvector below) and its size must be equal to n . It specifies the point at which we are evaluating the reduced objective function and its derivatives.

12.10.1.f: y
The BenderQuad argument y has prototype
     const BAvector &y
and its size must be equal to m . It must be equal to @(@ Y(x) @)@; i.e., it must solve the problem in @(@ y @)@ for this given value of @(@ x @)@ @[@ \begin{array}{rcl} {\rm minimize} & F(x, y) & {\rm w.r.t.} \; y \in \B{R}^m \end{array} @]@

12.10.1.g: fun
The BenderQuad object fun must support the member functions listed below. The AD<Base> arguments will be variables for a tape created by a call to 5.1.1: Independent from BenderQuad (hence they can not be combined with variables corresponding to a different tape).

12.10.1.g.a: fun.f
The BenderQuad argument fun supports the syntax
     
f = fun.f(x, y)
The fun.f argument x has prototype
     const ADvector &x
(see 12.10.1.l: ADvector below) and its size must be equal to n . The fun.f argument y has prototype
     const ADvector &y
and its size must be equal to m . The fun.f result f has prototype
     ADvector f
and its size must be equal to one. The value of f is @[@ f = F(x, y) @]@.

12.10.1.g.b: fun.h
The BenderQuad argument fun supports the syntax
     
h = fun.h(x, y)
The fun.h argument x has prototype
     const ADvector &x
and its size must be equal to n . The fun.h argument y has prototype
     const 
BAvector &y
and its size must be equal to m . The fun.h result h has prototype
     
ADvector h
and its size must be equal to m . The value of h is @[@ h = H(x, y) @]@.

12.10.1.g.c: fun.dy
The BenderQuad argument fun supports the syntax
     
dy = fun.dy(x, y, h)

x
The fun.dy argument x has prototype
     const 
BAvector &x
and its size must be equal to n . Its value will be exactly equal to the BenderQuad argument x and values depending on it can be stored as private objects in fun and need not be recalculated.

y
The fun.dy argument y has prototype
     const 
BAvector &y
and its size must be equal to m . Its value will be exactly equal to the BenderQuad argument y and values depending on it can be stored as private objects in fun and need not be recalculated.

h
The fun.dy argument h has prototype
     const 
ADvector &h
and its size must be equal to m .

dy
The fun.dy result dy has prototype
     
ADvector dy
and its size must be equal to m . The return value dy is given by @[@ dy = - [ \partial_y H (x , y) ]^{-1} h @]@ Note that if h is equal to @(@ H(x, y) @)@, @(@ dy @)@ is the Newton step for finding a zero of @(@ H(x, y) @)@ with respect to @(@ y @)@; i.e., @(@ y + dy @)@ is an approximate solution for the equation @(@ H (x, y + dy) = 0 @)@.
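The Newton-step interpretation follows from a first order expansion (this step is left implicit above): when @(@ h = H(x, y) @)@, @[@ H(x, y + dy) \approx H(x, y) + \partial_y H(x, y) \, dy = h - \partial_y H(x, y) [ \partial_y H(x, y) ]^{-1} h = 0 @]@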

12.10.1.h: g
The argument g has prototype
     
BAvector &g
and has size one. The input value of its element does not matter. On output, it contains the value of @(@ G (x) @)@; i.e., @[@ g[0] = G (x) @]@

12.10.1.i: gx
The argument gx has prototype
     
BAvector &gx
and has size @(@ n @)@. The input values of its elements do not matter. On output, it contains the Jacobian of @(@ G (x) @)@; i.e., for @(@ j = 0 , \ldots , n-1 @)@, @[@ gx[ j ] = G^{(1)} (x)_j @]@

12.10.1.j: gxx
The argument gxx has prototype
     
BAvector &gxx
and has size @(@ n \times n @)@. The input values of its elements do not matter. On output, it contains the Hessian of @(@ G (x) @)@; i.e., for @(@ i = 0 , \ldots , n-1 @)@, and @(@ j = 0 , \ldots , n-1 @)@, @[@ gxx[ i * n + j ] = G^{(2)} (x)_{i,j} @]@

12.10.1.k: BAvector
The type BAvector must be a 8.9: SimpleVector class. We use Base to refer to the type of the elements of BAvector ; i.e.,
     
BAvector::value_type

12.10.1.l: ADvector
The type ADvector must be a 8.9: SimpleVector class with elements of type AD<Base> ; i.e.,
     
ADvector::value_type
must be the same type as
     AD< BAvector::value_type > .
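For example, the test below uses the CPPAD_TESTVECTOR choices, which satisfy both of these requirements when Base is double:

     typedef CPPAD_TESTVECTOR(double)              BAvector;
     typedef CPPAD_TESTVECTOR( CppAD::AD<double> ) ADvector;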

12.10.1.m: Example
The file 12.10.1.1: bender_quad.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/core/bender_quad.hpp
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
12.10.1.1: BenderQuad: Example and Test
Define @(@ F : \B{R} \times \B{R} \rightarrow \B{R} @)@ by @[@ F(x, y) = \frac{1}{2} \sum_{i=1}^N [ y * \sin ( x * t_i ) - z_i ]^2 @]@ where @(@ z \in \B{R}^N @)@ is a fixed vector of measurements and @(@ t \in \B{R}^N @)@ is a fixed vector of measurement times. It follows that @[@ \begin{array}{rcl} \partial_y F(x, y) & = & \sum_{i=1}^N [ y * \sin ( x * t_i ) - z_i ] \sin( x * t_i ) \\ \partial_y \partial_y F(x, y) & = & \sum_{i=1}^N \sin ( x * t_i )^2 \end{array} @]@ Furthermore if we define @(@ Y(x) @)@ as the argmin of @(@ F(x, y) @)@ with respect to @(@ y @)@, @[@ \begin{array}{rcl} Y(x) & = & y - [ \partial_y \partial_y F(x, y) ]^{-1} \partial_y F(x, y) \\ & = & \left. \sum_{i=1}^N z_i \sin ( x * t_i ) \right/ \sum_{i=1}^N \sin ( x * t_i )^2 \end{array} @]@

# include <cppad/cppad.hpp>

namespace {
     using CppAD::AD;
     typedef CPPAD_TESTVECTOR(double)         BAvector;
     typedef CPPAD_TESTVECTOR(AD<double>)   ADvector;

     class Fun {
     private:
          BAvector t_; // measurement times
          BAvector z_; // measurement values
     public:
          // constructor
          Fun(const BAvector &t, const BAvector &z)
          : t_(t), z_(z)
          { }
          // Fun.f(x, y) = F(x, y)
          ADvector f(const ADvector &x, const ADvector &y)
          {     size_t i;
               size_t N = size_t(z_.size());

               ADvector F(1);
               F[0] = 0.;

               AD<double> residual;
               for(i = 0; i < N; i++)
               {     residual = y[0] * sin( x[0] * t_[i] ) - z_[i];
                    F[0]    += .5 * residual * residual;
               }
               return F;
          }
          // Fun.h(x, y) = H(x, y) = F_y (x, y)
          ADvector h(const ADvector &x, const BAvector &y)
          {     size_t i;
               size_t N = size_t(z_.size());

               ADvector fy(1);
               fy[0] = 0.;

               AD<double> residual;
               for(i = 0; i < N; i++)
               {     residual = y[0] * sin( x[0] * t_[i] ) - z_[i];
                    fy[0]   += residual * sin( x[0] * t_[i] );
               }
               return fy;
          }
          // Fun.dy(x, y, h) = - H_y (x,y)^{-1} * h
          //                 = - F_yy (x, y)^{-1} * h
          ADvector dy(
               const BAvector &x ,
               const BAvector &y ,
               const ADvector &H )
          {     size_t i;
               size_t N = size_t(z_.size());

               ADvector Dy(1);
               AD<double> fyy = 0.;

               for(i = 0; i < N; i++)
               {     fyy += sin( x[0] * t_[i] ) * sin( x[0] * t_[i] );
               }
               Dy[0] = - H[0] / fyy;

               return Dy;
          }
     };

     // Used to test calculation of Hessian of G
     AD<double> G(const ADvector& x, const BAvector& t, const BAvector& z)
     {     // compute Y(x)
          AD<double> numerator = 0.;
          AD<double> denominator = 0.;
          size_t k;
          for(k = 0; k < size_t(t.size()); k++)
          {     numerator   += sin( x[0] * t[k] ) * z[k];
               denominator += sin( x[0] * t[k] ) * sin( x[0] * t[k] );
          }
          AD<double> y = numerator / denominator;

          // V(x) = F[x, Y(x)]
          AD<double> sum = 0;
          for(k = 0; k < size_t(t.size()); k++)
          {     AD<double> residual = y * sin( x[0] * t[k] ) - z[k];
               sum += .5 * residual * residual;
          }
          return sum;
     }
}

bool BenderQuad(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;

     // temporary indices
     size_t i, j;

     // x space vector
     size_t n = 1;
     BAvector x(n);
     x[0] = 2. * 3.141592653;

     // y space vector
     size_t m = 1;
     BAvector y(m);
     y[0] = 1.;

     // t and z vectors
     size_t N = 10;
     BAvector t(N);
     BAvector z(N);
     for(i = 0; i < N; i++)
     {     t[i] = double(i) / double(N);       // time of measurement
          z[i] = y[0] * sin( x[0] * t[i] );   // data without noise
     }

     // construct the function object
     Fun fun(t, z);

     // evaluate G(x), G'(x) and G''(x)
     BAvector g(1), gx(n), gxx(n * n);
     CppAD::BenderQuad(x, y, fun, g, gx, gxx);


     // create ADFun object Gfun corresponding to G(x)
     ADvector a_x(n), a_g(1);
     for(j = 0; j < n; j++)
          a_x[j] = x[j];
     Independent(a_x);
     a_g[0] = G(a_x, t, z);
     CppAD::ADFun<double> Gfun(a_x, a_g);

     // accuracy for checks
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

     // check Jacobian
     BAvector check_gx = Gfun.Jacobian(x);
     for(j = 0; j < n; j++)
          ok &= NearEqual(gx[j], check_gx[j], eps, eps);

     // check Hessian
     BAvector check_gxx = Gfun.Hessian(x, 0);
     for(j = 0; j < n*n; j++)
          ok &= NearEqual(gxx[j], check_gxx[j], eps, eps);

     return ok;
}

Input File: example/general/bender_quad.cpp
12.10.2: Jacobian and Hessian of Optimal Values

12.10.2.a: Syntax
signdet = opt_val_hes(x, y, fun, jac, hes)

12.10.2.b: See Also
12.10.1: BenderQuad

12.10.2.c: Reference
Algorithmic differentiation of implicit functions and optimal values, Bradley M. Bell and James V. Burke, Advances in Automatic Differentiation, 2008, Springer.

12.10.2.d: Purpose
We are given a function @(@ S : \B{R}^n \times \B{R}^m \rightarrow \B{R}^\ell @)@ and we define @(@ F : \B{R}^n \times \B{R}^m \rightarrow \B{R} @)@ and @(@ V : \B{R}^n \rightarrow \B{R} @)@ by @[@ \begin{array}{rcl} F(x, y) & = & \sum_{k=0}^{\ell-1} S_k ( x , y) \\ V(x) & = & F [ x , Y(x) ] \\ 0 & = & \partial_y F [x , Y(x) ] \end{array} @]@ We wish to compute the Jacobian, and possibly also the Hessian, of @(@ V (x) @)@.
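Although not stated above, the same chain rule argument as for 12.10.1: BenderQuad gives the first derivative: @[@ V^{(1)} (x) = \partial_x F [ x , Y(x) ] @]@ because @(@ \partial_y F [ x , Y(x) ] = 0 @)@ by the definition of @(@ Y(x) @)@; the reference above extends this reasoning to second derivatives.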

12.10.2.e: BaseVector
The type BaseVector must be a 8.9: SimpleVector class. We use Base to refer to the type of the elements of BaseVector ; i.e.,
     
BaseVector::value_type

12.10.2.f: x
The argument x has prototype
     const 
BaseVector &x
and its size must be equal to n . It specifies the point at which we are evaluating the Jacobian @(@ V^{(1)} (x) @)@ (and possibly the Hessian @(@ V^{(2)} (x) @)@).

12.10.2.g: y
The argument y has prototype
     const 
BaseVector &y
and its size must be equal to m . It must be equal to @(@ Y(x) @)@; i.e., it must solve the implicit equation @[@ 0 = \partial_y F ( x , y) @]@

12.10.2.h: Fun
The argument fun is an object of type Fun which must support the member functions listed below. CppAD may be recording operations of type AD<Base> when these member functions are called. These member functions must not stop such a recording; e.g., they must not call 5.1.4: AD<Base>::abort_recording .

12.10.2.h.a: Fun::ad_vector
The type Fun::ad_vector must be a 8.9: SimpleVector class with elements of type AD<Base> ; i.e.
     
Fun::ad_vector::value_type
is equal to AD<Base> .

12.10.2.h.b: fun.ell
The type Fun must support the syntax
     
ell = fun.ell()
where ell has prototype
     size_t 
ell
and is the value of @(@ \ell @)@; i.e., the number of terms in the summation.

One can choose ell equal to one, and have @(@ S(x,y) @)@ the same as @(@ F(x, y) @)@. Each of the functions @(@ S_k (x , y) @)@ (in the summation defining @(@ F(x, y) @)@) is differentiated separately using AD. For very large problems, breaking @(@ F(x, y) @)@ into the sum of separate simpler functions may reduce the amount of memory necessary for algorithmic differentiation and thereby speed up the process.
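For instance, here is a minimal sketch (not part of CppAD, and with a hypothetical objective) of a class that meets the Fun requirements with @(@ \ell = 1 @)@ and @(@ S_0 (x, y) = F(x, y) = [ y_0 - x_0 ]^2 / 2 @)@; in this case @(@ Y(x) = x_0 @)@ and @(@ V(x) = 0 @)@, so both jac and hes should come back zero:

# include <cppad/cppad.hpp>

class FunSketch {
public:
     typedef CPPAD_TESTVECTOR( CppAD::AD<double> ) ad_vector;
     // number of terms in the summation defining F
     size_t ell(void) const
     {     return 1; }
     // S_k (x, y); the index k is always zero here
     CppAD::AD<double> s(size_t k, const ad_vector& x, const ad_vector& y) const
     {     CppAD::AD<double> diff = y[0] - x[0];
          return .5 * diff * diff;
     }
     // sy_k = partial_y S_k (x, y)
     ad_vector sy(size_t k, const ad_vector& x, const ad_vector& y) const
     {     ad_vector sy_k(1);
          sy_k[0] = y[0] - x[0];
          return sy_k;
     }
};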

12.10.2.h.c: fun.s
The type Fun must support the syntax
     
s_k = fun.s(k, x, y)
The fun.s argument k has prototype
     size_t 
k
and is between zero and ell - 1 . The argument x to fun.s has prototype
     const 
Fun::ad_vector& x
and its size must be equal to n . The argument y to fun.s has prototype
     const 
Fun::ad_vector& y
and its size must be equal to m . The fun.s result s_k has prototype
     AD<Base> s_k
and its value must be given by @(@ s_k = S_k ( x , y ) @)@.

12.10.2.h.d: fun.sy
The type Fun must support the syntax
     
sy_k = fun.sy(k, x, y)
The argument k to fun.sy has prototype
     size_t 
k
The argument x to fun.sy has prototype
     const 
Fun::ad_vector& x
and its size must be equal to n . The argument y to fun.sy has prototype
     const 
Fun::ad_vector& y
and its size must be equal to m . The fun.sy result sy_k has prototype
     
Fun::ad_vector sy_k
its size must be equal to m , and its value must be given by @(@ sy_k = \partial_y S_k ( x , y ) @)@.

12.10.2.i: jac
The argument jac has prototype
     
BaseVector &jac
and has size n or zero. The input values of its elements do not matter. If it has size zero, it is not affected. Otherwise, on output it contains the Jacobian of @(@ V (x) @)@; i.e., for @(@ j = 0 , \ldots , n-1 @)@, @[@ jac[ j ] = V^{(1)} (x)_j @]@ where x is the first argument to opt_val_hes.

12.10.2.j: hes
The argument hes has prototype
     
BaseVector &hes
and has size n * n or zero. The input values of its elements do not matter. If it has size zero, it is not affected. Otherwise, on output it contains the Hessian of @(@ V (x) @)@; i.e., for @(@ i = 0 , \ldots , n-1 @)@, and @(@ j = 0 , \ldots , n-1 @)@, @[@ hes[ i * n + j ] = V^{(2)} (x)_{i,j} @]@

12.10.2.k: signdet
If hes has size zero, signdet is not defined. Otherwise the return value signdet is the sign of the determinant for @(@ \partial_{yy}^2 F(x , y) @)@. If it is zero, then the matrix is singular and the Hessian is not computed ( hes is not changed).

12.10.2.l: Example
The file 12.10.2.1: opt_val_hes.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/core/opt_val_hes.hpp
12.10.2.1: opt_val_hes: Example and Test
Fix @(@ z \in \B{R}^\ell @)@ and @(@ t \in \B{R}^\ell @)@, and define the functions @(@ S_k : \B{R} \times \B{R} \rightarrow \B{R} @)@ and @(@ F : \B{R} \times \B{R} \rightarrow \B{R} @)@ by @[@ \begin{array}{rcl} S_k (x, y) & = & \frac{1}{2} [ y * \sin ( x * t_k ) - z_k ]^2 \\ F(x, y) & = & \sum_{k=0}^{\ell-1} S_k (x, y) \end{array} @]@ It follows that @[@ \begin{array}{rcl} \partial_y F(x, y) & = & \sum_{k=0}^{\ell-1} [ y * \sin ( x * t_k ) - z_k ] \sin( x * t_k ) \\ \partial_y \partial_y F(x, y) & = & \sum_{k=0}^{\ell-1} \sin ( x * t_k )^2 \end{array} @]@ Furthermore if we define @(@ Y(x) @)@ as solving the equation @(@ \partial_y F[ x, Y(x) ] = 0 @)@ we have @[@ \begin{array}{rcl} 0 & = & \sum_{k=0}^{\ell-1} [ Y(x) * \sin ( x * t_k ) - z_k ] \sin( x * t_k ) \\ 0 & = & Y(x) \sum_{k=0}^{\ell-1} \sin ( x * t_k )^2 - \sum_{k=0}^{\ell-1} \sin ( x * t_k ) z_k \\ Y(x) & = & \frac{ \sum_{k=0}^{\ell-1} \sin( x * t_k ) z_k }{ \sum_{k=0}^{\ell-1} \sin ( x * t_k )^2 } \end{array} @]@

# include <limits>
# include <cppad/cppad.hpp>

namespace {
     using CppAD::AD;
     typedef CPPAD_TESTVECTOR(double)       BaseVector;
     typedef CPPAD_TESTVECTOR(AD<double>) ADVector;

     class Fun {
     private:
          const BaseVector t_;    // measurement times
          const BaseVector z_;    // measurement values
     public:
          typedef ADVector ad_vector;
          // constructor
          Fun(const BaseVector &t, const BaseVector &z)
          : t_(t) , z_(z)
          {     assert( t.size() == z.size() ); }
          // ell
          size_t ell(void) const
          {     return t_.size(); }
          // Fun.s
          AD<double> s(size_t k, const ad_vector& x, const ad_vector& y) const
          {
               AD<double> residual = y[0] * sin( x[0] * t_[k] ) - z_[k];
               AD<double> s_k      = .5 * residual * residual;

               return s_k;
          }
          // Fun.sy
          ad_vector sy(size_t k, const ad_vector& x, const ad_vector& y) const
          {     assert( y.size() == 1);
               ad_vector sy_k(1);

               AD<double> residual = y[0] * sin( x[0] * t_[k] ) - z_[k];
               sy_k[0] = residual * sin( x[0] * t_[k] );

               return sy_k;
          }
     };
     // Used to test calculation of Hessian of V
     AD<double> V(const ADVector& x, const BaseVector& t, const BaseVector& z)
     {     // compute Y(x)
          AD<double> numerator = 0.;
          AD<double> denominator = 0.;
          size_t k;
          for(k = 0; k < size_t(t.size()); k++)
          {     numerator   += sin( x[0] * t[k] ) * z[k];
               denominator += sin( x[0] * t[k] ) * sin( x[0] * t[k] );
          }
          AD<double> y = numerator / denominator;

          // V(x) = F[x, Y(x)]
          AD<double> sum = 0;
          for(k = 0; k < size_t(t.size()); k++)
          {     AD<double> residual = y * sin( x[0] * t[k] ) - z[k];
               sum += .5 * residual * residual;
          }
          return sum;
     }
}

bool opt_val_hes(void)
{     bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;

     // temporary indices
     size_t j, k;

     // x space vector
     size_t n = 1;
     BaseVector x(n);
     x[0] = 2. * 3.141592653;

     // y space vector
     size_t m = 1;
     BaseVector y(m);
     y[0] = 1.;

     // t and z vectors
     size_t ell = 10;
     BaseVector t(ell);
     BaseVector z(ell);
     for(k = 0; k < ell; k++)
     {     t[k] = double(k) / double(ell);       // time of measurement
          z[k] = y[0] * sin( x[0] * t[k] );     // data without noise
     }

     // construct the function object
     Fun fun(t, z);

     // evaluate the Jacobian and Hessian
     BaseVector jac(n), hes(n * n);
# ifndef NDEBUG
     int signdet =
# endif
     CppAD::opt_val_hes(x, y, fun, jac, hes);

     // we know that F_yy is positive definite for this case
     assert( signdet == 1 );

     // create ADFun object g corresponding to V(x)
     ADVector a_x(n), a_v(1);
     for(j = 0; j < n; j++)
          a_x[j] = x[j];
     Independent(a_x);
     a_v[0] = V(a_x, t, z);
     CppAD::ADFun<double> g(a_x, a_v);

     // accuracy for checks
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

     // check Jacobian
     BaseVector check_jac = g.Jacobian(x);
     for(j = 0; j < n; j++)
          ok &= NearEqual(jac[j], check_jac[j], eps, eps);

     // check Hessian
     BaseVector check_hes = g.Hessian(x, 0);
     for(j = 0; j < n*n; j++)
          ok &= NearEqual(hes[j], check_hes[j], eps, eps);

     return ok;
}

Input File: example/general/opt_val_hes.cpp
12.10.3: LU Factorization of A Square Matrix and Stability Calculation

12.10.3.a: Syntax
# include <cppad/cppad.hpp>

sign = LuRatio(ip, jp, LU, ratio)

12.10.3.b: Description
Computes an LU factorization of the square matrix A . A measure of the numerical stability, called ratio , is calculated. This ratio is useful when the results of LuRatio are used as part of an 5: ADFun object.

12.10.3.c: Include
This routine is designed to be used with AD objects and requires the cppad/cppad.hpp file to be included.

12.10.3.d: Matrix Storage
All matrices are stored in row major order. To be specific, if @(@ Y @)@ is a vector that contains a @(@ p @)@ by @(@ q @)@ matrix, the size of @(@ Y @)@ must be equal to @(@ p * q @)@ and for @(@ i = 0 , \ldots , p-1 @)@, @(@ j = 0 , \ldots , q-1 @)@, @[@ Y_{i,j} = Y[ i * q + j ] @]@

12.10.3.e: sign
The return value sign has prototype
     int 
sign
If A is invertible, sign is plus or minus one and is the sign of the permutation corresponding to the row ordering ip and column ordering jp . If A is not invertible, sign is zero.

12.10.3.f: ip
The argument ip has prototype
     
SizeVector &ip
(see description of 8.14.2.i: SizeVector below). The size of ip is referred to as n in the specifications below. The input values of the elements of ip do not matter. The output values of the elements of ip determine the order of the rows in the permuted matrix.

12.10.3.g: jp
The argument jp has prototype
     
SizeVector &jp
(see description of 8.14.2.i: SizeVector below). The size of jp must be equal to n . The input values of the elements of jp do not matter. The output values of the elements of jp determine the order of the columns in the permuted matrix.

12.10.3.h: LU
The argument LU has the prototype
     
ADvector &LU
and the size of LU must equal @(@ n * n @)@ (see description of 12.10.3.k: ADvector below).

12.10.3.h.a: A
We define A as the matrix corresponding to the input value of LU .

12.10.3.h.b: P
We define the permuted matrix P in terms of A by
     
P(i, j) = A[ ip[i] * n + jp[j] ]

12.10.3.h.c: L
We define the lower triangular matrix L in terms of the output value of LU . The matrix L is zero above the diagonal and the rest of the elements are defined by
     
L(i, j) = LU[ ip[i] * n + jp[j] ]
for @(@ i = 0 , \ldots , n-1 @)@ and @(@ j = 0 , \ldots , i @)@.

12.10.3.h.d: U
We define the upper triangular matrix U in terms of the output value of LU . The matrix U is zero below the diagonal, one on the diagonal, and the rest of the elements are defined by
     
U(i, j) = LU[ ip[i] * n + jp[j] ]
for @(@ i = 0 , \ldots , n-2 @)@ and @(@ j = i+1 , \ldots , n-1 @)@.

12.10.3.h.e: Factor
If the return value sign is non-zero,
     
L * U = P
If the return value of sign is zero, the contents of L and U are not defined.

12.10.3.h.f: Determinant
If the return value sign is zero, the determinant of A is zero. If sign is non-zero, using the output value of LU the determinant of the matrix A is equal to
sign * LU[ ip[0] * n + jp[0] ] * ... * LU[ ip[n-1] * n + jp[n-1] ]
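For example, a sketch (assuming Base is double and a successful factorization) of accumulating this determinant from the LuRatio outputs:

     // product of sign and the diagonal of L in the permuted LU output
     CppAD::AD<double> det = double(sign);
     for(size_t p = 0; p < n; p++)
          det *= LU[ ip[p] * n + jp[p] ];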

12.10.3.i: ratio
The argument ratio has prototype
     AD<Base> &ratio
On input, the value of ratio does not matter. On output it is a measure of how good the choice of pivots is. For @(@ p = 0 , \ldots , n-1 @)@, the p-th pivot element is the element of maximum absolute value in an @(@ (n-p) \times (n-p) @)@ sub-matrix. The ratio of each element of this sub-matrix divided by the pivot element is computed. The return value of ratio is the maximum absolute value of these ratios over all elements and all pivots.

12.10.3.i.a: Purpose
Suppose that the execution of a call to LuRatio is recorded in the ADFun<Base> object F . Then a call to 5.3: Forward of the form
     
F.Forward(k, xk)
with k equal to zero will re-evaluate this LU factorization with the same pivots and a new value for A . In this case, the resulting ratio may not be one. If ratio is too large (the meaning of too large is up to you), the current pivots do not yield a stable LU factorization of A . A better choice for the pivots (for this value of A ) will be made if you recreate the ADFun object starting with the 5.1.1: Independent variable values that correspond to the vector xk .

12.10.3.j: SizeVector
The type SizeVector must be a 8.9: SimpleVector class with 8.9.b: elements of type size_t . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

12.10.3.k: ADvector
The type ADvector must be a 8.9: SimpleVector class with elements of type AD<Base> . The routine 8.10: CheckSimpleVector will generate an error message if this is not the case.

12.10.3.l: Example
The file 12.10.3.1: lu_ratio.cpp contains an example and test of using LuRatio. It returns true if it succeeds and false otherwise.
Input File: cppad/core/lu_ratio.hpp
12.10.3.1: LuRatio: Example and Test
# include <cstdlib>               // for rand function
# include <cassert>
# include <cppad/cppad.hpp>

namespace { // Begin empty namespace

CppAD::ADFun<double> *NewFactor(
     size_t                           n ,
     const CPPAD_TESTVECTOR(double) &x ,
     bool                           &ok ,
     CPPAD_TESTVECTOR(size_t)      &ip ,
     CPPAD_TESTVECTOR(size_t)      &jp )
{     using CppAD::AD;
     using CppAD::ADFun;
     size_t i, j, k;

     // values for independent and dependent variables
     CPPAD_TESTVECTOR(AD<double>) Y(n*n+1), X(n*n);

     // values for the LU factor
     CPPAD_TESTVECTOR(AD<double>) LU(n*n);

     // record the LU factorization corresponding to this value of x
     AD<double> Ratio;
     for(k = 0; k < n*n; k++)
          X[k] = x[k];
     Independent(X);
     for(k = 0; k < n*n; k++)
          LU[k] = X[k];
     CppAD::LuRatio(ip, jp, LU, Ratio);
     for(k = 0; k < n*n; k++)
          Y[k] = LU[k];
     Y[n*n] = Ratio;

     // use a function pointer so we can return the ADFun object
     ADFun<double> *FunPtr = new ADFun<double>(X, Y);

     // check value of ratio during recording
     ok &= (Ratio == 1.);

     // check that ip and jp are permutations of the indices 0, ... , n-1
     for(i = 0; i < n; i++)
     {     ok &= (ip[i] < n);
          ok &= (jp[i] < n);
          for(j = 0; j < n; j++)
          {     if( i != j )
               {     ok &= (ip[i] != ip[j]);
                    ok &= (jp[i] != jp[j]);
               }
          }
     }
     return FunPtr;
}
bool CheckLuFactor(
     size_t                           n  ,
     const CPPAD_TESTVECTOR(double) &x  ,
     const CPPAD_TESTVECTOR(double) &y  ,
     const CPPAD_TESTVECTOR(size_t) &ip ,
     const CPPAD_TESTVECTOR(size_t) &jp )
{     bool     ok = true;

     using CppAD::NearEqual;
     double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

     double  sum;                          // element of L * U
     double  pij;                          // element of permuted x
     size_t  i, j, k;                      // temporary indices

     // L and U factors
     CPPAD_TESTVECTOR(double)  L(n*n), U(n*n);

     // Extract L from LU factorization
     for(i = 0; i < n; i++)
     {     // elements along and below the diagonal
          for(j = 0; j <= i; j++)
               L[i * n + j] = y[ ip[i] * n + jp[j] ];
          // elements above the diagonal
          for(j = i+1; j < n; j++)
               L[i * n + j] = 0.;
     }

     // Extract U from LU factorization
     for(i = 0; i < n; i++)
     {     // elements below the diagonal
          for(j = 0; j < i; j++)
               U[i * n + j] = 0.;
          // elements along the diagonal
          U[i * n + i] = 1.;
          // elements above the diagonal
          for(j = i+1; j < n; j++)
               U[i * n + j] = y[ ip[i] * n + jp[j] ];
     }

     // Compute L * U
     for(i = 0; i < n; i++)
     {     for(j = 0; j < n; j++)
          {     // compute element (i,j) entry in L * U
               sum = 0.;
               for(k = 0; k < n; k++)
                    sum += L[i * n + k] * U[k * n + j];
               // element (i,j) in permuted version of A
               pij  = x[ ip[i] * n + jp[j] ];
               // compare
               ok  &= NearEqual(pij, sum, eps99, eps99);
          }
     }
     return ok;
}

} // end Empty namespace

bool LuRatio(void)
{     bool  ok = true;

     size_t  n = 2; // number rows in A
     double  ratio;

     // values for independent and dependent variables
     CPPAD_TESTVECTOR(double)  x(n*n), y(n*n+1);

     // pivot vectors
     CPPAD_TESTVECTOR(size_t) ip(n), jp(n);

     // set x equal to the identity matrix
     x[0] = 1.; x[1] = 0;
     x[2] = 0.; x[3] = 1.;

     // create a function object corresponding to this value of x
     CppAD::ADFun<double> *FunPtr = NewFactor(n, x, ok, ip, jp);

     // use function object to factor matrix
     y     = FunPtr->Forward(0, x);
     ratio = y[n*n];
     ok   &= (ratio == 1.);
     ok   &= CheckLuFactor(n, x, y, ip, jp);

     // set x so that the pivot ratio will be infinite
     x[0] = 0. ; x[1] = 1.;
     x[2] = 1. ; x[3] = 0.;

     // try to use old function pointer to factor matrix
     y     = FunPtr->Forward(0, x);
     ratio = y[n*n];

     // check to see if we need to refactor matrix
     ok &= (ratio > 10.);
     if( ratio > 10. )
     {     delete FunPtr; // to avoid a memory leak
          FunPtr = NewFactor(n, x, ok, ip, jp);
     }

     //  now we can use the function object to factor matrix
     y     = FunPtr->Forward(0, x);
     ratio = y[n*n];
     ok    &= (ratio == 1.);
     ok    &= CheckLuFactor(n, x, y, ip, jp);

     delete FunPtr;  // avoid memory leak
     return ok;
}

Input File: example/general/lu_ratio.cpp
12.11: CppAD Addons

12.11.a: Name
Each CppAD addon has a short name, which we denote by name below, a longer name, denoted by longer , and a description :
name    longer    description
tmb (https://github.com/kaskr/adcomp) adcomp An R Interface to CppAD with Random Effects Modeling Utilities
cg (https://github.com/joaoleal/CppADCodeGen/) CppADCodeGen C++ Source Code Generation of CppAD Derivative Calculations
mixed (http://moby.ihme.washington.edu/bradbell/cppad_mixed) cppad_mixed A C++ Interface to Random Effects Laplace Approximation
swig (http://www.seanet.com/~bradbell/cppad_swig) cppad_swig A C++ AD Library with a Swig Interface to Perl, Octave, and Python
py (http://www.seanet.com/~bradbell/pycppad/pycppad.htm) pycppad A Python Interface to CppAD

12.11.b: Include Files
If includedir is the directory where the include files are installed, the file
     
includedir/cppad/name.hpp
and the directory
     
includedir/cppad/name
are reserved for use by the name addon.

12.11.c: Library Files
If libdir is the directory where CppAD library files are installed, files with the name
     
libdir/libcppad_name.ext
     
libdir/libcppad_name_anything.ext
where anything and ext are arbitrary, are reserved for use by the name addon.

12.11.d: Preprocessor Symbols
C++ preprocessor symbols that begin with
     CPPAD_NAME_
where NAME is an upper-case version of name , are reserved for use by the name addon.

12.11.e: Namespace
The C++ namespace
     CppAD::name
is reserved for use by the name addon.
Input File: omh/appendix/addon.omh
12.12: Your License for the CppAD Software

Eclipse Public License - v 1.0

THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE
PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR
DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS
AGREEMENT.

1. DEFINITIONS

"Contribution" means:

	a) in the case of the initial Contributor, the initial
	code and documentation distributed under this Agreement, and
	b) in the case of each subsequent Contributor:
	i) changes to the Program, and
	ii) additions to the Program;
	where such changes and/or additions to the Program
	originate from and are distributed by that particular Contributor. A
	Contribution 'originates' from a Contributor if it was added to the
	Program by such Contributor itself or anyone acting on such
	Contributor's behalf. Contributions do not include additions to the
	Program which: (i) are separate modules of software distributed in
	conjunction with the Program under their own license agreement, and (ii)
	are not derivative works of the Program.

"Contributor" means any person or entity that distributes
the Program.

"Licensed Patents" mean patent claims licensable by a
Contributor which are necessarily infringed by the use or sale of its
Contribution alone or when combined with the Program.

"Program" means the Contributions distributed in accordance
with this Agreement.

"Recipient" means anyone who receives the Program under
this Agreement, including all Contributors.

2. GRANT OF RIGHTS

	a) Subject to the terms of this Agreement, each
	Contributor hereby grants Recipient a non-exclusive, worldwide,
	royalty-free copyright license to reproduce, prepare derivative works
	of, publicly display, publicly perform, distribute and sublicense the
	Contribution of such Contributor, if any, and such derivative works, in
	source code and object code form.

	b) Subject to the terms of this Agreement, each
	Contributor hereby grants Recipient a non-exclusive, worldwide,
	royalty-free patent license under Licensed Patents to make, use, sell,
	offer to sell, import and otherwise transfer the Contribution of such
	Contributor, if any, in source code and object code form. This patent
	license shall apply to the combination of the Contribution and the
	Program if, at the time the Contribution is added by the Contributor,
	such addition of the Contribution causes such combination to be covered
	by the Licensed Patents. The patent license shall not apply to any other
	combinations which include the Contribution. No hardware per se is
	licensed hereunder.

	c) Recipient understands that although each Contributor
	grants the licenses to its Contributions set forth herein, no assurances
	are provided by any Contributor that the Program does not infringe the
	patent or other intellectual property rights of any other entity. Each
	Contributor disclaims any liability to Recipient for claims brought by
	any other entity based on infringement of intellectual property rights
	or otherwise. As a condition to exercising the rights and licenses
	granted hereunder, each Recipient hereby assumes sole responsibility to
	secure any other intellectual property rights needed, if any. For
	example, if a third party patent license is required to allow Recipient
	to distribute the Program, it is Recipient's responsibility to acquire
	that license before distributing the Program.

	d) Each Contributor represents that to its knowledge it
	has sufficient copyright rights in its Contribution, if any, to grant
	the copyright license set forth in this Agreement.

3. REQUIREMENTS

A Contributor may choose to distribute the Program in object code
form under its own license agreement, provided that:

	a) it complies with the terms and conditions of this
	Agreement; and

	b) its license agreement:

	i) effectively disclaims on behalf of all Contributors
	all warranties and conditions, express and implied, including warranties
	or conditions of title and non-infringement, and implied warranties or
	conditions of merchantability and fitness for a particular purpose;

	ii) effectively excludes on behalf of all Contributors
	all liability for damages, including direct, indirect, special,
	incidental and consequential damages, such as lost profits;

	iii) states that any provisions which differ from this
	Agreement are offered by that Contributor alone and not by any other
	party; and

	iv) states that source code for the Program is available
	from such Contributor, and informs licensees how to obtain it in a
	reasonable manner on or through a medium customarily used for software
	exchange.

When the Program is made available in source code form:

	a) it must be made available under this Agreement; and

	b) a copy of this Agreement must be included with each
	copy of the Program.

Contributors may not remove or alter any copyright notices contained
within the Program.

Each Contributor must identify itself as the originator of its
Contribution, if any, in a manner that reasonably allows subsequent
Recipients to identify the originator of the Contribution.

4. COMMERCIAL DISTRIBUTION

Commercial distributors of software may accept certain
responsibilities with respect to end users, business partners and the
like. While this license is intended to facilitate the commercial use of
the Program, the Contributor who includes the Program in a commercial
product offering should do so in a manner which does not create
potential liability for other Contributors. Therefore, if a Contributor
includes the Program in a commercial product offering, such Contributor
("Commercial Contributor") hereby agrees to defend and
indemnify every other Contributor ("Indemnified Contributor")
against any losses, damages and costs (collectively "Losses")
arising from claims, lawsuits and other legal actions brought by a third
party against the Indemnified Contributor to the extent caused by the
acts or omissions of such Commercial Contributor in connection with its
distribution of the Program in a commercial product offering. The
obligations in this section do not apply to any claims or Losses
relating to any actual or alleged intellectual property infringement. In
order to qualify, an Indemnified Contributor must: a) promptly notify
the Commercial Contributor in writing of such claim, and b) allow the
Commercial Contributor to control, and cooperate with the Commercial
Contributor in, the defense and any related settlement negotiations. The
Indemnified Contributor may participate in any such claim at its own
expense.

For example, a Contributor might include the Program in a commercial
product offering, Product X. That Contributor is then a Commercial
Contributor. If that Commercial Contributor then makes performance
claims, or offers warranties related to Product X, those performance
claims and warranties are such Commercial Contributor's responsibility
alone. Under this section, the Commercial Contributor would have to
defend claims against the other Contributors related to those
performance claims and warranties, and if a court requires any other
Contributor to pay any damages as a result, the Commercial Contributor
must pay those damages.

5. NO WARRANTY

EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS
PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION,
ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY
OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely
responsible for determining the appropriateness of using and
distributing the Program and assumes all risks associated with its
exercise of rights under this Agreement , including but not limited to
the risks and costs of program errors, compliance with applicable laws,
damage to or loss of data, programs or equipment, and unavailability or
interruption of operations.

6. DISCLAIMER OF LIABILITY

EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT
NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING
WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR
DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

7. GENERAL

If any provision of this Agreement is invalid or unenforceable under
applicable law, it shall not affect the validity or enforceability of
the remainder of the terms of this Agreement, and without further action
by the parties hereto, such provision shall be reformed to the minimum
extent necessary to make such provision valid and enforceable.

If Recipient institutes patent litigation against any entity
(including a cross-claim or counterclaim in a lawsuit) alleging that the
Program itself (excluding combinations of the Program with other
software or hardware) infringes such Recipient's patent(s), then such
Recipient's rights granted under Section 2(b) shall terminate as of the
date such litigation is filed.

All Recipient's rights under this Agreement shall terminate if it
fails to comply with any of the material terms or conditions of this
Agreement and does not cure such failure in a reasonable period of time
after becoming aware of such noncompliance. If all Recipient's rights
under this Agreement terminate, Recipient agrees to cease use and
distribution of the Program as soon as reasonably practicable. However,
Recipient's obligations under this Agreement and any licenses granted by
Recipient relating to the Program shall continue and survive.

Everyone is permitted to copy and distribute copies of this
Agreement, but in order to avoid inconsistency the Agreement is
copyrighted and may only be modified in the following manner. The
Agreement Steward reserves the right to publish new versions (including
revisions) of this Agreement from time to time. No one other than the
Agreement Steward has the right to modify this Agreement. The Eclipse
Foundation is the initial Agreement Steward. The Eclipse Foundation may
assign the responsibility to serve as the Agreement Steward to a
suitable separate entity. Each new version of the Agreement will be
given a distinguishing version number. The Program (including
Contributions) may always be distributed subject to the version of the
Agreement under which it was received. In addition, after a new version
of the Agreement is published, Contributor may elect to distribute the
Program (including its Contributions) under the new version. Except as
expressly stated in Sections 2(a) and 2(b) above, Recipient receives no
rights or licenses to the intellectual property of any Contributor under
this Agreement, whether expressly, by implication, estoppel or
otherwise. All rights in the Program not expressly granted under this
Agreement are reserved.

This Agreement is governed by the laws of the State of New York and
the intellectual property laws of the United States of America. No party
to this Agreement will bring a legal action under this Agreement more
than one year after the cause of action arose. Each party waives its
rights to a jury trial in any resulting litigation.

Input File: omh/appendix/license.omh
13: Alphabetic Listing of Cross Reference Tags
A
7.2.2: a11c_bthread.cpp
A Simple Boost Thread Example and Test
7.2.1: a11c_openmp.cpp
A Simple OpenMP Example and Test
7.2.3: a11c_pthread.cpp
A Simple Parallel Pthread Example and Test
5.1.4: abort_recording
Abort Recording of an Operation Sequence
5.1.4.1: abort_recording.cpp
Abort Current Recording: Example and Test
4.4.2.14: abs
AD Absolute Value Functions: abs, fabs
5.8.3: abs_eval
abs_normal: Evaluate First Order Approximation
5.8.3.1: abs_eval.cpp
abs_eval: Example and Test
5.8.3.2: abs_eval.hpp
abs_eval Source Code
5.8.1.1: abs_get_started.cpp
abs_normal Getting Started: Example and Test
5.8.6: abs_min_linear
abs_normal: Minimize a Linear Abs-normal Approximation
5.8.6.1: abs_min_linear.cpp
abs_min_linear: Example and Test
5.8.6.2: abs_min_linear.hpp
abs_min_linear Source Code
5.8.10: abs_min_quad
abs_normal: Minimize a Quadratic Abs-normal Approximation
5.8.10.1: abs_min_quad.cpp
abs_min_quad: Example and Test
5.8.10.2: abs_min_quad.hpp
abs_min_quad Source Code
5.8: abs_normal
Abs-normal Representation of Non-Smooth Functions
5.8.1: abs_normal_fun
Create An Abs-normal Representation of a Function
5.8.2: abs_print_mat
abs_normal: Print a Vector or Matrix
4.4.2.1: acos
Inverse Cosine Function: acos
4.4.2.1.1: acos.cpp
The AD acos Function: Example and Test
12.3.1.7: acos_forward
Inverse Cosine and Hyperbolic Cosine Forward Mode Theory
12.3.2.7: acos_reverse
Inverse Cosine and Hyperbolic Cosine Reverse Mode Theory
4.4.2.15: acosh
The Inverse Hyperbolic Cosine Function: acosh
4.4.2.15.1: acosh.cpp
The AD acosh Function: Example and Test
4: AD
AD Objects
4.2: ad_assign
AD Assignment Operator
4.2.1: ad_assign.cpp
AD Assignment: Example and Test
4.4.1.3: ad_binary
AD Binary Arithmetic Operators
4.1: ad_ctor
AD Constructors
4.1.1: ad_ctor.cpp
AD Constructors: Example and Test
10.2.1: ad_fun.cpp
Creating Your Own Interface to an ADFun Object
10.2.2: ad_in_c.cpp
Example and Test Linking CppAD to Languages Other than C++
4.3.4: ad_input
AD Input Stream Operator
4.3.4.1: ad_input.cpp
AD Input Operator: Example and Test
4.3.5: ad_output
AD Output Stream Operator
4.3.5.1: ad_output.cpp
AD Output Operator: Example and Test
4.3.3: ad_to_string
Convert An AD or Base Type to String
4.4.1.3.1: add.cpp
AD Binary Addition: Example and Test
4.4.1.4.1: AddEq.cpp
AD Compound Assignment Addition: Example and Test
12.11: addon
CppAD Addons
5: ADFun
ADFun Objects
11.4.8: adolc_alloc_mat
Adolc Test Utility: Allocate and Free Memory For a Matrix
11.4.2: adolc_det_lu.cpp
Adolc Speed: Gradient of Determinant Using Lu Factorization
11.4.1: adolc_det_minor.cpp
Adolc Speed: Gradient of Determinant by Minor Expansion
11.4.3: adolc_mat_mul.cpp
Adolc Speed: Matrix Multiplication
11.4.4: adolc_ode.cpp
Adolc Speed: Ode
11.4.5: adolc_poly.cpp
Adolc Speed: Second Derivative of a Polynomial
2.2.1: adolc_prefix
Including the ADOL-C Examples and Tests
11.4.6: adolc_sparse_hessian.cpp
Adolc Speed: Sparse Hessian
11.4.7: adolc_sparse_jacobian.cpp
adolc Speed: Sparse Jacobian
4.4: ADValued
AD Valued Operations and Functions
12: Appendix
Appendix
4.4.1: Arithmetic
AD Arithmetic Operators and Compound Assignments
4.4.2.2: asin
Inverse Sine Function: asin
4.4.2.2.1: asin.cpp
The AD asin Function: Example and Test
12.3.1.6: asin_forward
Inverse Sine and Hyperbolic Sine Forward Mode Theory
12.3.2.6: asin_reverse
Inverse Sine and Hyperbolic Sine Reverse Mode Theory
4.4.2.16: asinh
The Inverse Hyperbolic Sine Function: asinh
4.4.2.16.1: asinh.cpp
The AD asinh Function: Example and Test
4.4.2.3: atan
Inverse Tangent Function: atan
4.4.2.3.1: atan.cpp
The AD atan Function: Example and Test
4.4.3.1: atan2
AD Two Argument Inverse Tangent Function
4.4.3.1.1: atan2.cpp
The AD atan2 Function: Example and Test
12.3.1.5: atan_forward
Inverse Tangent and Hyperbolic Tangent Forward Mode Theory
12.3.2.5: atan_reverse
Inverse Tangent and Hyperbolic Tangent Reverse Mode Theory
4.4.2.17: atanh
The Inverse Hyperbolic Tangent Function: atanh
4.4.2.17.1: atanh.cpp
The AD atanh Function: Example and Test
4.4.7: atomic
Atomic AD Functions
4.4.7.2.3: atomic_afun
Using AD Version of Atomic Function
4.4.7.2: atomic_base
User Defined Atomic AD Functions
4.4.7.2.10: atomic_base_clear
Free Static Variables
4.4.7.2.1: atomic_ctor
Atomic Function Constructor
4.4.7.2.18: atomic_eigen_cholesky.cpp
Atomic Eigen Cholesky Factorization: Example and Test
4.4.7.2.18.2: atomic_eigen_cholesky.hpp
Atomic Eigen Cholesky Factorization Class
4.4.7.2.17: atomic_eigen_mat_inv.cpp
Atomic Eigen Matrix Inverse: Example and Test
4.4.7.2.17.1: atomic_eigen_mat_inv.hpp
Atomic Eigen Matrix Inversion Class
4.4.7.2.16: atomic_eigen_mat_mul.cpp
Atomic Eigen Matrix Multiply: Example and Test
4.4.7.2.16.1: atomic_eigen_mat_mul.hpp
Atomic Eigen Matrix Multiply Class
4.4.7.2.8: atomic_for_sparse_hes
Atomic Forward Hessian Sparsity Patterns
4.4.7.2.8.1: atomic_for_sparse_hes.cpp
Atomic Forward Hessian Sparsity: Example and Test
4.4.7.2.6: atomic_for_sparse_jac
Atomic Forward Jacobian Sparsity Patterns
4.4.7.2.6.1: atomic_for_sparse_jac.cpp
Atomic Forward Jacobian Sparsity: Example and Test
4.4.7.2.4: atomic_forward
Atomic Forward Mode
4.4.7.2.4.1: atomic_forward.cpp
Atomic Forward: Example and Test
4.4.7.2.11: atomic_get_started.cpp
Getting Started with Atomic Operations: Example and Test
4.4.7.2.19: atomic_mat_mul.cpp
User Atomic Matrix Multiply: Example and Test
4.4.7.2.19.1: atomic_mat_mul.hpp
Matrix Multiply as an Atomic Operation
4.4.7.1.2: atomic_mul_level.cpp
Atomic Operations and Multiple-Levels of AD: Example and Test
4.4.7.2.12: atomic_norm_sq.cpp
Atomic Euclidean Norm Squared: Example and Test
4.4.7.2.2: atomic_option
Set Atomic Function Options
4.4.7.2.13: atomic_reciprocal.cpp
Reciprocal as an Atomic Operation: Example and Test
4.4.7.2.9: atomic_rev_sparse_hes
Atomic Reverse Hessian Sparsity Patterns
4.4.7.2.9.1: atomic_rev_sparse_hes.cpp
Atomic Reverse Hessian Sparsity: Example and Test
4.4.7.2.7: atomic_rev_sparse_jac
Atomic Reverse Jacobian Sparsity Patterns
4.4.7.2.7.1: atomic_rev_sparse_jac.cpp
Atomic Reverse Jacobian Sparsity: Example and Test
4.4.7.2.5: atomic_reverse
Atomic Reverse Mode
4.4.7.2.5.1: atomic_reverse.cpp
Atomic Reverse: Example and Test
4.4.7.2.14: atomic_set_sparsity.cpp
Atomic Sparsity with Set Patterns: Example and Test
4.4.7.2.15: atomic_tangent.cpp
Tan and Tanh as User Atomic Operations: Example and Test
12.8.13: autotools
Autotools Unix Test and Installation
4.4.3.3: azmul
Absolute Zero Multiplication
4.4.3.3.1: azmul.cpp
AD Absolute Zero Multiplication: Example and Test
B
4.7.9.3: base_adolc.hpp
Enable use of AD<Base> where Base is Adolc's adouble Type
4.7.9.1: base_alloc.hpp
Example AD<Base> Where Base Constructor Allocates Memory
4.7.9.6: base_complex.hpp
Enable use of AD<Base> where Base is std::complex<double>
4.7.2: base_cond_exp
Base Type Requirements for Conditional Expressions
4.7.9.5: base_double.hpp
Enable use of AD<Base> where Base is double
4.7.9: base_example
Example AD Base Types That are not AD<OtherBase>
4.7.9.4: base_float.hpp
Enable use of AD<Base> where Base is float
4.7.8: base_hash
Base Type Requirements for Hash Coding Values
4.7.3: base_identical
Base Type Requirements for Identically Equal Comparisons
4.7.6: base_limits
Base Type Requirements for Numeric Limits
4.7.1: base_member
Required Base Class Member Functions
4.7.4: base_ordered
Base Type Requirements for Ordered Comparisons
4.7: base_require
AD<Base> Requirements for a CppAD Base Type
4.7.9.2: base_require.cpp
Using a User Defined AD Base Type: Example and Test
4.7.5: base_std_math
Base Type Requirements for Standard Math Functions
4.7.7: base_to_string
Extending to_string To Another Floating Point Type
12.10.1.1: bender_quad.cpp
BenderQuad: Example and Test
12.10.1: BenderQuad
Computing Jacobian and Hessian of Bender's Reduced Objective
12.5: Bib
Bibliography
4.4.3: binary_math
The Binary Math Functions
4.5.3.1: bool_fun.cpp
AD Boolean Functions: Example and Test
4.5.3: BoolFun
AD Boolean Functions
4.5: BoolValued
Bool Valued Operations and Functions with AD Arguments
C
5.3.8: capacity_order
Controlling Taylor Coefficients Memory Allocation
5.3.8.1: capacity_order.cpp
Controlling Taylor Coefficient Memory Allocation: Example and Test
10.2.10.2: change_param.cpp
Computing a Jacobian With Constants that Change
5.10: check_for_nan
Check an ADFun Object For Nan Results
5.10.1: check_for_nan.cpp
ADFun Checking For Nan: Example and Test
8.8.1: check_numeric_type.cpp
The CheckNumericType Function: Example and Test
8.10.1: check_simple_vector.cpp
The CheckSimpleVector Function: Example and Test
8.8: CheckNumericType
Check NumericType Class Concept
4.4.7.1: checkpoint
Checkpointing Functions
4.4.7.1.1: checkpoint.cpp
Simple Checkpointing: Example and Test
4.4.7.1.4: checkpoint_extended_ode.cpp
Checkpointing an Extended ODE Solver: Example and Test
4.4.7.1.3: checkpoint_ode.cpp
Checkpointing an ODE Solver: Example and Test
8.10: CheckSimpleVector
Check Simple Vector Concept
4.4.7.2.18.1: cholesky_theory
AD Theory for Cholesky Factorization
2.2: cmake
Using CMake to Configure CppAD
2.3: cmake_check
Checking the CppAD Examples and Tests
2.2.2.3: colpack_hes.cpp
ColPack: Sparse Hessian Example and Test
2.2.2.4: colpack_hessian.cpp
ColPack: Sparse Hessian Example and Test
2.2.2.1: colpack_jac.cpp
ColPack: Sparse Jacobian Example and Test
2.2.2.2: colpack_jacobian.cpp
ColPack: Sparse Jacobian Example and Test
2.2.2: colpack_prefix
Including the ColPack Sparsity Calculations
4.5.1: Compare
AD Binary Comparison Operators
4.5.1.1: compare.cpp
AD Binary Comparison Operators: Example and Test
12.9: compare_c
Compare Speed of C and C++
5.3.7: compare_change
Comparison Changes Between Taping and Zero Order Forward
5.3.7.1: compare_change.cpp
CompareChange and Re-Tape: Example and Test
12.8.3: CompareChange
Comparison Changes During Zero Order Forward Mode
4.7.9.6.1: complex_poly.cpp
Complex Polynomial: Example and Test
4.4.1.4: compound_assign
AD Compound Assignment Operators
4.4.4.1: cond_exp.cpp
Conditional Expressions: Example and Test
4.4.4: CondExp
AD Conditional Expressions
10.2.3: conj_grad.cpp
Differentiate Conjugate Gradient Algorithm: Example and Test
4.3: Convert
Conversion and I/O of AD Objects
12.9.4: correct_det_by_minor_c
Correctness Test of det_by_minor Routine
4.4.2.4: cos
The Cosine Function: cos
4.4.2.4.1: cos.cpp
The AD cos Function: Example and Test
4.4.2.5: cosh
The Hyperbolic Cosine Function: cosh
4.4.2.5.1: cosh.cpp
The AD cosh Function: Example and Test
: CppAD
cppad-20171217: A Package for Differentiation of C++ Algorithms
8.1.2: cppad_assert
CppAD Assertions During Execution
11.5.2: cppad_det_lu.cpp
CppAD Speed: Gradient of Determinant Using Lu Factorization
11.5.1: cppad_det_minor.cpp
CppAD Speed: Gradient of Determinant by Minor Expansion
10.2.4: cppad_eigen.hpp
Enable Use of Eigen Linear Algebra Package with CppAD
12.8.10: cppad_ipopt_nlp
Nonlinear Programming Using the CppAD Interface to Ipopt
11.5.3: cppad_mat_mul.cpp
CppAD Speed: Matrix Multiplication
11.5.4: cppad_ode.cpp
CppAD Speed: Gradient of Ode Solution
11.5.5: cppad_poly.cpp
CppAD Speed: Second Derivative of a Polynomial
11.5.6: cppad_sparse_hessian.cpp
CppAD Speed: Sparse Hessian
11.5.7: cppad_sparse_jacobian.cpp
CppAD Speed: Sparse Jacobian
2.2.7: cppad_testvector
Choosing the CppAD Test Vector Template Class
8.22: CppAD_vector
The CppAD::vector Template Class
8.22.1: cppad_vector.cpp
CppAD::vector Template Class: Example and Test
D
5.5.9: dependency.cpp
Computing Dependency: Example and Test
5.1.3: Dependent
Stop Recording and Store Operation Sequence
12.8: deprecated
CppAD Deprecated API Features
11.2.4: det_33
Check Determinant of 3 by 3 matrix
11.2.4.1: det_33.hpp
Source: det_33
11.2.1: det_by_lu
Determinant Using Expansion by Lu Factorization
11.2.1.1: det_by_lu.cpp
Determinant Using Lu Factorization: Example and Test
11.2.1.2: det_by_lu.hpp
Source: det_by_lu
11.2.3: det_by_minor
Determinant Using Expansion by Minors
11.2.3.1: det_by_minor.cpp
Determinant Using Expansion by Minors: Example and Test
11.2.3.2: det_by_minor.hpp
Source: det_by_minor
12.9.2: det_by_minor_c
Compute Determinant using Expansion by Minors
11.2.5: det_grad_33
Check Gradient of Determinant of 3 by 3 matrix
11.2.5.1: det_grad_33.hpp
Source: det_grad_33
11.2.2: det_of_minor
Determinant of a Minor
11.2.2.1: det_of_minor.cpp
Determinant of a Minor: Example and Test
11.2.2.2: det_of_minor.hpp
Source: det_of_minor
12.9.1: det_of_minor_c
Determinant of a Minor
12.2: directory
Directory Structure
4.4.5: Discrete
Discrete AD Functions
4.4.1.3.4: div.cpp
AD Binary Division: Example and Test
4.4.1.4.4: div_eq.cpp
AD Compound Assignment Division: Example and Test
11.3.2: double_det_lu.cpp
Double Speed: Determinant Using Lu Factorization
11.3.1: double_det_minor.cpp
Double Speed: Determinant by Minor Expansion
11.3.3: double_mat_mul.cpp
CppAD Speed: Matrix Multiplication (Double Version)
11.3.4: double_ode.cpp
Double Speed: Ode Solution
11.3.5: double_poly.cpp
Double Speed: Evaluate a Polynomial
11.3.6: double_sparse_hessian.cpp
Double Speed: Sparse Hessian
11.3.7: double_sparse_jacobian.cpp
Double Speed: Sparse Jacobian
2.1: download
Download The CppAD Source Code
5.2: drivers
First and Second Order Derivatives: Easy Drivers
E
10.2.4.2: eigen_array.cpp
Using Eigen Arrays: Example and Test
10.2.4.3: eigen_det.cpp
Using Eigen To Compute Determinant: Example and Test
10.2.4.1: eigen_plugin.hpp
Source Code for eigen_plugin.hpp
2.2.3: eigen_prefix
Including the Eigen Examples and Tests
8.5.1: elapsed_seconds
Returns Elapsed Number of Seconds
8.5.1.1: elapsed_seconds.cpp
Elapsed Seconds: Example and Test
12.9.6: elapsed_seconds_c
Returns Elapsed Number of Seconds
12.8.8: epsilon
Machine Epsilon For AD Types
4.5.5.1: equal_op_seq.cpp
EqualOpSeq: Example and Test
4.5.5: EqualOpSeq
Check if Two Value are Identically Equal
4.4.2.18: erf
The Error Function
4.4.2.18.1: erf.cpp
The AD erf Function: Example and Test
12.3.1.9: erf_forward
Error Function Forward Taylor Polynomial Theory
12.3.2.9: erf_reverse
Error Function Reverse Mode Theory
8.1.1: error_handler.cpp
Replacing The CppAD Error Handler: Example and Test
8.1: ErrorHandler
Replacing the CppAD Error Handler
10: Example
Examples
10.3: ExampleUtility
Utility Routines used by CppAD Examples
4.4.2.6: exp
The Exponential Function: exp
4.4.2.6.1: exp.cpp
The AD exp Function: Example and Test
3.1: exp_2
Second Order Exponential Approximation
3.1.2: exp_2.cpp
exp_2: Test
3.1.1: exp_2.hpp
exp_2: Implementation
3.1.8: exp_2_cppad
exp_2: CppAD Forward and Reverse Sweeps
3.1.3: exp_2_for0
exp_2: Operation Sequence and Zero Order Forward Mode
3.1.3.1: exp_2_for0.cpp
exp_2: Verify Zero Order Forward Sweep
3.1.4: exp_2_for1
exp_2: First Order Forward Mode
3.1.4.1: exp_2_for1.cpp
exp_2: Verify First Order Forward Sweep
3.1.6: exp_2_for2
exp_2: Second Order Forward Mode
3.1.6.1: exp_2_for2.cpp
exp_2: Verify Second Order Forward Sweep
3.1.5: exp_2_rev1
exp_2: First Order Reverse Mode
3.1.5.1: exp_2_rev1.cpp
exp_2: Verify First Order Reverse Sweep
3.1.7: exp_2_rev2
exp_2: Second Order Reverse Mode
3.1.7.1: exp_2_rev2.cpp
exp_2: Verify Second Order Reverse Sweep
3.3: exp_apx.cpp
Correctness Tests For Exponential Approximation in Introduction
3.2: exp_eps
An Epsilon Accurate Exponential Approximation
3.2.2: exp_eps.cpp
exp_eps: Test of exp_eps
3.2.1: exp_eps.hpp
exp_eps: Implementation
3.2.8: exp_eps_cppad
exp_eps: CppAD Forward and Reverse Sweeps
3.2.3: exp_eps_for0
exp_eps: Operation Sequence and Zero Order Forward Sweep
3.2.3.1: exp_eps_for0.cpp
exp_eps: Verify Zero Order Forward Sweep
3.2.4: exp_eps_for1
exp_eps: First Order Forward Sweep
3.2.4.1: exp_eps_for1.cpp
exp_eps: Verify First Order Forward Sweep
3.2.6: exp_eps_for2
exp_eps: Second Order Forward Sweep
3.2.6.1: exp_eps_for2.cpp
exp_eps: Verify Second Order Forward Sweep
3.2.5: exp_eps_rev1
exp_eps: First Order Reverse Sweep
3.2.5.1: exp_eps_rev1.cpp
exp_eps: Verify First Order Reverse Sweep
3.2.7: exp_eps_rev2
exp_eps: Second Order Reverse Sweep
3.2.7.1: exp_eps_rev2.cpp
exp_eps: Verify Second Order Reverse Sweep
12.3.1.1: exp_forward
Exponential Function Forward Mode Theory
12.3.2.1: exp_reverse
Exponential Function Reverse Mode Theory
4.4.2.19: expm1
The Exponential Function Minus One: expm1
4.4.2.19.1: expm1.cpp
The AD expm1 Function: Example and Test
F
4.4.2.14.1: fabs.cpp
AD Absolute Value Function: Example and Test
11.6.2: fadbad_det_lu.cpp
Fadbad Speed: Gradient of Determinant Using Lu Factorization
11.6.1: fadbad_det_minor.cpp
Fadbad Speed: Gradient of Determinant by Minor Expansion
11.6.3: fadbad_mat_mul.cpp
Fadbad Speed: Matrix Multiplication
11.6.4: fadbad_ode.cpp
Fadbad Speed: Ode
11.6.5: fadbad_poly.cpp
Fadbad Speed: Second Derivative of a Polynomial
2.2.4: fadbad_prefix
Including the FADBAD Speed Tests
11.6.6: fadbad_sparse_hessian.cpp
Fadbad Speed: Sparse Hessian
11.6.7: fadbad_sparse_jacobian.cpp
Fadbad Speed: Sparse Jacobian
12.1: Faq
Frequently Asked Questions and Answers
5.5.7: for_hes_sparsity
Forward Mode Hessian Sparsity Patterns
5.5.7.1: for_hes_sparsity.cpp
Forward Mode Hessian Sparsity: Example and Test
5.5.1: for_jac_sparsity
Forward Mode Jacobian Sparsity Patterns
5.5.1.1: for_jac_sparsity.cpp
Forward Mode Jacobian Sparsity: Example and Test
5.2.3.1: for_one.cpp
First Order Partial Driver: Example and Test
5.5.8.1: for_sparse_hes.cpp
Forward Mode Hessian Sparsity: Example and Test
5.5.2.1: for_sparse_jac.cpp
Forward Mode Jacobian Sparsity: Example and Test
5.2.5.1: for_two.cpp
Subset of Second Order Partials: Example and Test
5.2.3: ForOne
First Order Partial Derivative: Driver Routine
5.5.8: ForSparseHes
Hessian Sparsity Pattern: Forward Mode
5.5.2: ForSparseJac
Jacobian Sparsity Pattern: Forward Mode
5.2.5: ForTwo
Forward Mode Second Partial Derivative Driver
5.3: Forward
Forward Mode
5.3.4.1: forward.cpp
Forward Mode: Example and Test
5.3.5: forward_dir
Multiple Directions Forward Mode
5.3.5.1: forward_dir.cpp
Forward Mode: Example and Test of Multiple Directions
5.3.2: forward_one
First Order Forward Mode: Derivative Values
5.3.4: forward_order
Multiple Order Forward Mode
5.3.4.2: forward_order.cpp
Forward Mode: Example and Test of Multiple Orders
5.3.3: forward_two
Second Order Forward Mode: Derivative Values
5.3.1: forward_zero
Zero Order Forward Mode: Function Values
12.3.1: ForwardTheory
The Theory of Forward Mode
5.1.2.1: fun_assign.cpp
ADFun Assignment: Example and Test
5.9.1: fun_check.cpp
ADFun Check and Re-Tape: Example and Test
5.9: FunCheck
Check an ADFun Sequence of Operations
5.1.2: FunConstruct
Construct an ADFun Object and Stop Recording
12.8.2: FunDeprecated
ADFun Object Deprecated Member Functions
G
10.2: General
General Examples
10.3.1: general.cpp
CppAD Examples and Tests
2.2.1.1: get_adolc.sh
Download and Install Adolc in Build Directory
2.2.2.5: get_colpack.sh
Download and Install ColPack in Build Directory
2.2.3.1: get_eigen.sh
Download and Install Eigen in Build Directory
2.2.4.1: get_fadbad.sh
Download and Install Fadbad in Build Directory
2.2.5.1: get_ipopt.sh
Download and Install Ipopt in Build Directory
2.2.6.1: get_sacado.sh
Download and Install Sacado in Build Directory
10.1: get_started.cpp
Getting Started Using CppAD to Compute Derivatives
12.4: glossary
Glossary
H
7.2.8: harmonic.cpp
Multi-Threading Harmonic Summation Example / Test
7.2.8.1: harmonic_common
Common Variables Used by Multi-threading Sum of 1/i
7.2.8.2: harmonic_setup
Set Up Multi-threading Sum of 1/i
7.2.8.5: harmonic_sum
Multi-Threaded Implementation of Summation of 1/i
7.2.8.4: harmonic_takedown
Take Down Multi-threading Sum of 1/i
7.2.8.6: harmonic_time
Timing Test of Multi-Threaded Summation of 1/i
7.2.8.3: harmonic_worker
Do One Thread's Work for Sum of 1/i
5.2.2.2: hes_lagrangian.cpp
Hessian of Lagrangian and ADFun Default Constructor: Example and Test
10.2.6: hes_lu_det.cpp
Gradient of Determinant Using LU Factorization: Example and Test
10.2.5: hes_minor_det.cpp
Gradient of Determinant Using Expansion by Minors: Example and Test
5.4.2.2: hes_times_dir.cpp
Hessian Times Direction: Example and Test
5.2.2: Hessian
Hessian: Easy Driver
5.2.2.1: hessian.cpp
Hessian: Example and Test
I
12.8.1: include_deprecated
Deprecated Include Files
5.1.1: Independent
Declare Independent Variables and Start Recording
5.1.1.1: independent.cpp
Independent and ADFun Constructor: Example and Test
8.24: index_sort
Returns Indices that Sort a Vector
8.24.1: index_sort.cpp
Index Sort: Example and Test
2: Install
CppAD Download, Test, and Install Instructions
4.3.2: Integer
Convert From AD to Integer
4.3.2.1: integer.cpp
Convert From AD to Integer: Example and Test
10.2.7: interface2c.cpp
Interfacing to C: Example and Test
4.4.5.2: interp_onetape.cpp
Interpolation Without Retaping: Example and Test
4.4.5.3: interp_retape.cpp
Interpolation With Retaping: Example and Test
3: Introduction
An Introduction by Example to Algorithmic Differentiation
12.8.10.1: ipopt_nlp_get_started.cpp
Nonlinear Programming Using CppAD and Ipopt: Example and Test
12.8.10.2: ipopt_nlp_ode
Example Simultaneous Solution of Forward and Inverse Problem
12.8.10.2.5: ipopt_nlp_ode_check.cpp
Correctness Check for Both Simple and Fast Representations
12.8.10.2.3: ipopt_nlp_ode_fast
ODE Fitting Using Fast Representation
12.8.10.2.3.1: ipopt_nlp_ode_fast.hpp
ODE Fitting Using Fast Representation
12.8.10.2.1: ipopt_nlp_ode_problem
An ODE Inverse Problem Example
12.8.10.2.1.1: ipopt_nlp_ode_problem.hpp
ODE Inverse Problem Definitions: Source Code
12.8.10.2.4: ipopt_nlp_ode_run.hpp
Driver for Running the Ipopt ODE Example
12.8.10.2.2: ipopt_nlp_ode_simple
ODE Fitting Using Simple Representation
12.8.10.2.2.1: ipopt_nlp_ode_simple.hpp
ODE Fitting Using Simple Representation
12.8.10.3: ipopt_ode_speed.cpp
Speed Test for Both Simple and Fast Representations
2.2.5: ipopt_prefix
Including the cppad_ipopt Library and Tests
9: ipopt_solve
Use Ipopt to Solve a Nonlinear Programming Problem
9.1: ipopt_solve_get_started.cpp
Nonlinear Programming Using CppAD and Ipopt: Example and Test
9.3: ipopt_solve_ode_inverse.cpp
ODE Inverse Problem Definitions: Source Code
9.2: ipopt_solve_retape.cpp
Nonlinear Programming Retaping: Example and Test
J
10.2.9: jac_lu_det.cpp
Gradient of Determinant Using Lu Factorization: Example and Test
10.2.8: jac_minor_det.cpp
Gradient of Determinant Using Expansion by Minors: Example and Test
5.2.1: Jacobian
Jacobian: Driver Routine
5.2.1.1: jacobian.cpp
Jacobian: Example and Test
L
12.12: License
Your License for the CppAD Software
11.1.1: link_det_lu
Speed Testing Gradient of Determinant Using Lu Factorization
11.1.2: link_det_minor
Speed Testing Gradient of Determinant by Minor Expansion
11.1.3: link_mat_mul
Speed Testing Derivative of Matrix Multiply
11.1.4: link_ode
Speed Testing the Jacobian of Ode Solution
11.1.5: link_poly
Speed Testing Second Derivative of a Polynomial
11.1.6: link_sparse_hessian
Speed Testing Sparse Hessian
11.1.7: link_sparse_jacobian
Speed Testing Sparse Jacobian
10.4: ListAllExamples
List All (Except Deprecated) CppAD Examples
4.4.2.7: log
The Logarithm Function: log
4.4.2.7.1: log.cpp
The AD log Function: Example and Test
4.4.2.8: log10
The Base 10 Logarithm Function: log10
4.4.2.8.1: log10.cpp
The AD log10 Function: Example and Test
4.4.2.20: log1p
The Logarithm of One Plus Argument: log1p
4.4.2.20.1: log1p.cpp
The AD log1p Function: Example and Test
12.3.1.2: log_forward
Logarithm Function Forward Mode Theory
12.3.2.2: log_reverse
Logarithm Function Reverse Mode Theory
5.8.5: lp_box
abs_normal: Solve a Linear Program With Box Constraints
5.8.5.1: lp_box.cpp
abs_normal lp_box: Example and Test
5.8.5.2: lp_box.hpp
lp_box Source Code
8.14.2.1: lu_factor.cpp
LuFactor: Example and Test
8.14.2.2: lu_factor.hpp
Source: LuFactor
8.14.3.1: lu_invert.cpp
LuInvert: Example and Test
8.14.3.2: lu_invert.hpp
Source: LuInvert
12.10.3.1: lu_ratio.cpp
LuRatio: Example and Test
8.14.1.1: lu_solve.cpp
LuSolve With Complex Arguments: Example and Test
8.14.1.2: lu_solve.hpp
Source: LuSolve
10.3.3: lu_vec_ad.cpp
Lu Factor and Solve with Recorded Pivoting
10.3.3.1: lu_vec_ad_ok.cpp
Lu Factor and Solve With Recorded Pivoting: Example and Test
8.14: LuDetAndSolve
Compute Determinants and Solve Equations by LU Factorization
8.14.2: LuFactor
LU Factorization of A Square Matrix
8.14.3: LuInvert
Invert an LU Factored Equation
12.10.3: LuRatio
LU Factorization of A Square Matrix and Stability Calculation
8.14.1: LuSolve
Compute Determinant and Solve Linear Equations
M
12.9.8: main_compare_c
Main Program For Comparing C and C++ Speed
11.2.6: mat_sum_sq
Sum Elements of a Matrix Times Itself
11.2.6.1: mat_sum_sq.cpp
Sum of the Elements of the Square of a Matrix: Example and Test
11.2.6.2: mat_sum_sq.hpp
Source: mat_sum_sq
12.8.7: memory_leak
Memory Leak Detection
11.1.8: microsoft_timer
Microsoft Version of Elapsed Number of Seconds
5.8.7: min_nso_linear
Non-Smooth Optimization Using Abs-normal Linear Approximations
5.8.7.1: min_nso_linear.cpp
abs_normal min_nso_linear: Example and Test
5.8.7.2: min_nso_linear.hpp
min_nso_linear Source Code
5.8.11: min_nso_quad
Non-Smooth Optimization Using Abs-normal Quadratic Approximations
5.8.11.1: min_nso_quad.cpp
abs_normal min_nso_quad: Example and Test
5.8.11.2: min_nso_quad.hpp
min_nso_quad Source Code
4.4.1.3.3: mul.cpp
AD Binary Multiplication: Example and Test
4.4.1.4.3: mul_eq.cpp
AD Compound Assignment Multiplication: Example and Test
10.2.10: mul_level
Using Multiple Levels of AD
10.2.10.1: mul_level.cpp
Multiple Level of AD: Example and Test
4.7.9.3.1: mul_level_adolc.cpp
Using Adolc with Multiple Levels of Taping: Example and Test
10.2.13: mul_level_adolc_ode.cpp
Taylor's Ode Solver: A Multi-Level Adolc Example and Test
10.2.12: mul_level_ode.cpp
Taylor's Ode Solver: A Multi-Level AD Example and Test
7.2.9: multi_atomic.cpp
Multi-Threading User Atomic Example / Test
7.2.9.2: multi_atomic_common
Multi-Threaded User Atomic Common Information
7.2.9.6: multi_atomic_run
Run Multi-Threaded User Atomic Calculation
7.2.9.3: multi_atomic_setup
Multi-Threaded User Atomic Set Up
7.2.9.5: multi_atomic_takedown
Multi-Threaded User Atomic Take Down
7.2.9.7: multi_atomic_time
Timing Test for Multi-Threaded User Atomic Calculation
7.2.9.1: multi_atomic_user
Defines a User Atomic Operation that Computes Square Root
7.2.9.4: multi_atomic_worker
Multi-Threaded User Atomic Worker
7.2.10: multi_newton.cpp
Multi-Threaded Newton Method Example / Test
7.2.10.1: multi_newton_common
Common Variables Used by Multi-Threaded Newton Method
7.2.10.5: multi_newton_run
A Multi-Threaded Newton's Method
7.2.10.2: multi_newton_setup
Set Up Multi-Threaded Newton Method
7.2.10.4: multi_newton_takedown
Take Down Multi-threaded Newton Method
7.2.10.6: multi_newton_time
Timing Test of Multi-Threaded Newton Method
7.2.10.3: multi_newton_worker
Do One Thread's Work for Multi-Threaded Newton Method
7: multi_thread
Using CppAD in a Multi-Threading Environment
N
8.11: nan
Obtain Nan or Determine if a Value is Nan
8.11.1: nan.cpp
nan: Example and Test
8.2.1: near_equal.cpp
NearEqual Function: Example and Test
4.5.2.1: near_equal_ext.cpp
Compare AD with Base Objects: Example and Test
8.2: NearEqual
Determine if Two Values Are Nearly Equal
4.5.2: NearEqualExt
Compare AD and Base Objects for Nearly Equal
4.4.6.1: num_limits.cpp
Numeric Limits: Example and Test
5.3.9: number_skip
Number of Variables that Can be Skipped
5.3.9.1: number_skip.cpp
Number of Variables That Can be Skipped: Example and Test
12.10: numeric_ad
Some Numerical AD Utilities
4.4.6: numeric_limits
Numeric Limits For an AD and Base Types
8.7.1: numeric_type.cpp
The NumericType: Example and Test
8.7: NumericType
Definition of a Numeric Type
O
8.19.1: ode_err_control.cpp
OdeErrControl: Example and Test
8.19.2: ode_err_maxabs.cpp
OdeErrControl: Example and Test Using Maxabs Argument
11.2.7: ode_evaluate
Evaluate a Function Defined in Terms of an ODE
11.2.7.1: ode_evaluate.cpp
ode_evaluate: Example and Test
11.2.7.2: ode_evaluate.hpp
Source: ode_evaluate
8.20.1: ode_gear.cpp
OdeGear: Example and Test
8.21.1: ode_gear_control.cpp
OdeGearControl: Example and Test
10.2.11: ode_stiff.cpp
A Stiff Ode: Example and Test
10.2.14: ode_taylor.cpp
Taylor's Ode Solver: An Example and Test
8.19: OdeErrControl
An Error Controller for ODE Solvers
8.20: OdeGear
An Arbitrary Order Gear Method
8.21: OdeGearControl
An Error Controller for Gear's Ode Solvers
12.8.11: old_atomic
User Defined Atomic AD Functions
12.8.11.5: old_mat_mul.cpp
Old Matrix Multiply as a User Atomic Operation: Example and Test
12.8.11.5.1: old_mat_mul.hpp
Define Matrix Multiply as a User Atomic Operation
12.8.6.12: old_max_num_threads
Set Maximum Number of Threads for omp_alloc Allocator
12.8.11.1: old_reciprocal.cpp
Old Atomic Operation Reciprocal: Example and Test
12.8.11.4: old_tan.cpp
Old Tan and Tanh as User Atomic Operations: Example and Test
12.8.11.2: old_usead_1.cpp
Using AD to Compute Atomic Function Derivatives
12.8.11.3: old_usead_2.cpp
Using AD to Compute Atomic Function Derivatives
12.8.6: omp_alloc
A Quick OpenMP Memory Allocator Used by CppAD
12.8.6.13: omp_alloc.cpp
OpenMP Memory Allocator: Example and Test
12.8.6.8: omp_available
Amount of Memory Available for Quick Use by a Thread
12.8.6.9: omp_create_array
Allocate Memory and Create A Raw Array
12.8.6.10: omp_delete_array
Return A Raw Array to The Available Memory for a Thread
12.8.6.11: omp_efficient
Check If A Memory Allocation is Efficient for Another Use
12.8.6.6: omp_free_available
Free Memory Currently Available for Quick Use by a Thread
12.8.6.4: omp_get_memory
Get At Least A Specified Amount of Memory
12.8.6.3: omp_get_thread_num
Get the Current OpenMP Thread Number
12.8.6.2: omp_in_parallel
Is The Current Execution in OpenMP Parallel Mode
12.8.6.7: omp_inuse
Amount of Memory a Thread is Currently Using
12.8.6.1: omp_max_num_threads
Set and Get Maximum Number of Threads for omp_alloc Allocator
12.8.4: omp_max_thread
OpenMP Parallel Setup
12.8.6.5: omp_return_memory
Return Memory to omp_alloc
12.10.2: opt_val_hes
Jacobian and Hessian of Optimal Values
12.10.2.1: opt_val_hes.cpp
opt_val_hes: Example and Test
5.7: optimize
Optimize an ADFun Object Tape
5.7.3: optimize_compare_op.cpp
Example Optimization and Comparison Operators
5.7.5: optimize_conditional_skip.cpp
Example Optimization and Conditional Expressions
5.7.7: optimize_cumulative_sum.cpp
Example Optimization and Cumulative Sum Operations
5.7.1: optimize_forward_active.cpp
Example Optimization and Forward Activity Analysis
5.7.6: optimize_nest_conditional.cpp
Example Optimization and Nested Conditional Expressions
5.7.4: optimize_print_for.cpp
Example Optimization and Print Forward Operators
5.7.2: optimize_reverse_active.cpp
Example Optimization and Reverse Activity Analysis
P
4.5.4.1: par_var.cpp
AD Parameter and Variable Functions: Example and Test
7.1: parallel_ad
Enable AD Calculations During Parallel Mode
4.5.4: ParVar
Is an AD Object a Parameter or Variable
2.4: pkgconfig
CppAD pkg-config Files
8.13: Poly
Evaluate a Polynomial or its Derivative
8.13.1: poly.cpp
Polynomial Evaluation: Example and Test
8.13.2: poly.hpp
Source: Poly
4.4.3.2: pow
The AD Power Function
4.4.3.2.1: pow.cpp
The AD Power Function: Example and Test
8.12: pow_int
The Integer Power Function
8.12.1: pow_int.cpp
The Pow Integer Exponent: Example and Test
6: preprocessor
CppAD API Preprocessor Symbols
4.3.6.1: print_for_cout.cpp
Printing During Forward Mode: Example and Test
4.3.6.2: print_for_string.cpp
Print During Zero Order Forward Mode: Example and Test
4.3.6: PrintFor
Printing AD Values During Forward Mode
Q
5.8.9: qp_box
abs_normal: Solve a Quadratic Program With Box Constraints
5.8.9.1: qp_box.cpp
abs_normal qp_box: Example and Test
5.8.9.2: qp_box.hpp
qp_box Source Code
5.8.8: qp_interior
Solve a Quadratic Program Using Interior Point Method
5.8.8.1: qp_interior.cpp
abs_normal qp_interior: Example and Test
5.8.8.2: qp_interior.hpp
qp_interior Source Code
R
5.5.10: rc_sparsity.cpp
Preferred Sparsity Patterns: Row and Column Indices: Example and Test
5.1: record_adfun
Create an ADFun Object (Record an Operation Sequence)
12.9.5: repeat_det_by_minor_c
Repeat det_by_minor Routine A Specified Number of Times
5.5.5: rev_hes_sparsity
Reverse Mode Hessian Sparsity Patterns
5.5.5.1: rev_hes_sparsity.cpp
Reverse Mode Hessian Sparsity: Example and Test
5.5.3: rev_jac_sparsity
Reverse Mode Jacobian Sparsity Patterns
5.5.3.1: rev_jac_sparsity.cpp
Reverse Mode Jacobian Sparsity: Example and Test
5.2.4.1: rev_one.cpp
First Order Derivative Driver: Example and Test
5.5.6.1: rev_sparse_hes.cpp
Reverse Mode Hessian Sparsity: Example and Test
5.5.4.1: rev_sparse_jac.cpp
Reverse Mode Jacobian Sparsity: Example and Test
5.2.6.1: rev_two.cpp
Second Partials Reverse Driver: Example and Test
5.4: Reverse
Reverse Mode
5.4.3: reverse_any
Any Order Reverse Mode
5.4.3.2: reverse_checkpoint.cpp
Reverse Mode General Case (Checkpointing): Example and Test
12.3.3: reverse_identity
An Important Reverse Mode Identity
5.4.1: reverse_one
First Order Reverse Mode
5.4.1.1: reverse_one.cpp
First Order Reverse Mode: Example and Test
5.4.3.1: reverse_three.cpp
Third Order Reverse Mode: Example and Test
5.4.2: reverse_two
Second Order Reverse Mode
5.4.2.1: reverse_two.cpp
Second Order Reverse Mode: Example and Test
12.3.2: ReverseTheory
The Theory of Reverse Mode
5.2.4: RevOne
First Order Derivative: Driver Routine
5.5.6: RevSparseHes
Hessian Sparsity Pattern: Reverse Mode
5.5.4: RevSparseJac
Jacobian Sparsity Pattern: Reverse Mode
5.2.6: RevTwo
Reverse Mode Second Partial Derivative Driver
8.15.1: romberg_one.cpp
One Dimensional Romberg Integration: Example and Test
8.16: RombergMul
Multi-dimensional Romberg Integration
8.16.1: Rombergmul.cpp
Multi-dimensional Romberg Integration: Example and Test
8.15: RombergOne
One Dimensional Romberg Integration
8.18: Rosen34
A 3rd and 4th Order Rosenbrock ODE Solver
8.18.1: rosen_34.cpp
Rosen34: Example and Test
8.17: Runge45
An Embedded 4th and 5th Order Runge-Kutta ODE Solver
8.17.1: runge45_1.cpp
Runge45: Example and Test
8.17.2: runge45_2.cpp
Runge45: Example and Test
S
11.7.2: sacado_det_lu.cpp
Sacado Speed: Gradient of Determinant Using Lu Factorization
11.7.1: sacado_det_minor.cpp
Sacado Speed: Gradient of Determinant by Minor Expansion
11.7.3: sacado_mat_mul.cpp
Sacado Speed: Matrix Multiplication
11.7.4: sacado_ode.cpp
Sacado Speed: Gradient of Ode Solution
11.7.5: sacado_poly.cpp
Sacado Speed: Second Derivative of a Polynomial
2.2.6: sacado_prefix
Including the Sacado Speed Tests
11.7.6: sacado_sparse_hessian.cpp
Sacado Speed: Sparse Hessian
11.7.7: sacado_sparse_jacobian.cpp
Sacado Speed: Sparse Jacobian
5.1.5: seq_property
ADFun Sequence Properties
5.1.5.1: seq_property.cpp
ADFun Sequence Properties: Example and Test
8.26: set_union
Union of Standard Sets
8.26.1: set_union.cpp
Set Union: Example and Test
4.4.2.21: sign
The Sign Function: sign
4.4.2.21.1: sign.cpp
Sign Function: Example and Test
7.2.5: simple_ad_bthread.cpp
A Simple Boost Threading AD: Example and Test
7.2.4: simple_ad_openmp.cpp
A Simple OpenMP AD: Example and Test
7.2.6: simple_ad_pthread.cpp
A Simple pthread AD: Example and Test
8.9.1: simple_vector.cpp
Simple Vector Template Class: Example and Test
8.9: SimpleVector
Definition of a Simple Vector
5.8.4: simplex_method
abs_normal: Solve a Linear Program Using Simplex Method
5.8.4.1: simplex_method.cpp
abs_normal simplex_method: Example and Test
5.8.4.2: simplex_method.hpp
simplex_method Source Code
4.4.2.9: sin
The Sine Function: sin
4.4.2.9.1: sin.cpp
The AD sin Function: Example and Test
12.3.1.4: sin_cos_forward
Trigonometric and Hyperbolic Sine and Cosine Forward Theory
12.3.2.4: sin_cos_reverse
Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
4.4.2.10: sinh
The Hyperbolic Sine Function: sinh
4.4.2.10.1: sinh.cpp
The AD sinh Function: Example and Test
5.3.6: size_order
Number of Taylor Coefficient Orders Currently Stored
5.6: sparse_derivative
Calculating Sparse Derivatives
5.6.3: sparse_hes
Computing Sparse Hessians
5.6.3.1: sparse_hes.cpp
Computing Sparse Hessian: Example and Test
11.2.9: sparse_hes_fun
Evaluate a Function That Has a Sparse Hessian
11.2.9.1: sparse_hes_fun.cpp
sparse_hes_fun: Example and Test
11.2.9.2: sparse_hes_fun.hpp
Source: sparse_hes_fun
5.6.4: sparse_hessian
Sparse Hessian
5.6.4.1: sparse_hessian.cpp
Sparse Hessian: Example and Test
5.6.1: sparse_jac
Computing Sparse Jacobians
5.6.1.1: sparse_jac_for.cpp
Computing Sparse Jacobian Using Forward Mode: Example and Test
11.2.8: sparse_jac_fun
Evaluate a Function That Has a Sparse Jacobian
11.2.8.1: sparse_jac_fun.cpp
sparse_jac_fun: Example and Test
11.2.8.2: sparse_jac_fun.hpp
Source: sparse_jac_fun
5.6.1.2: sparse_jac_rev.cpp
Computing Sparse Jacobian Using Reverse Mode: Example and Test
5.6.2: sparse_jacobian
Sparse Jacobian
5.6.2.1: sparse_jacobian.cpp
Sparse Jacobian: Example and Test
8.27: sparse_rc
Row and Column Index Sparsity Patterns
8.27.1: sparse_rc.cpp
sparse_rc: Example and Test
8.28: sparse_rcv
Sparse Matrix Row, Column, Value Representation
8.28.1: sparse_rcv.cpp
sparse_rcv: Example and Test
5.6.4.3: sparse_sub_hes.cpp
Subset of a Sparse Hessian: Example and Test
5.5: sparsity_pattern
Calculating Sparsity Patterns
5.5.6.2: sparsity_sub.cpp
Sparsity Patterns For a Subset of Variables: Example and Test
11: speed
Speed Test an Operator Overloading AD Package
11.4: speed_adolc
Speed Test of Derivatives Using Adolc
11.5: speed_cppad
Speed Test Derivatives Using CppAD
11.3: speed_double
Speed Test of Functions in Double
10.3.2: speed_example.cpp
Run the Speed Examples
11.6: speed_fadbad
Speed Test Derivatives Using Fadbad
11.1: speed_main
Running the Speed Test Program
8.4.1: speed_program.cpp
Example Use of SpeedTest
11.7: speed_sacado
Speed Test Derivatives Using Sacado
8.3: speed_test
Run One Speed Test and Return Results
8.3.1: speed_test.cpp
speed_test: Example and Test
11.2: speed_utility
Speed Testing Utilities
8.4: SpeedTest
Run One Speed Test and Print Results
4.4.2.11: sqrt
The Square Root Function: sqrt
4.4.2.11.1: sqrt.cpp
The AD sqrt Function: Example and Test
12.3.1.3: sqrt_forward
Square Root Function Forward Mode Theory
12.3.2.3: sqrt_reverse
Square Root Function Reverse Mode Theory
10.2.15: stack_machine.cpp
Example Differentiating a Stack Machine Interpreter
4.4.1.3.2: sub.cpp
AD Binary Subtraction: Example and Test
4.4.1.4.2: sub_eq.cpp
AD Compound Assignment Subtraction: Example and Test
5.6.4.2: sub_sparse_hes.cpp
Computing Sparse Hessian for a Subset of Variables
5.6.5.2: subgraph_hes2jac.cpp
Sparse Hessian Using Subgraphs and Jacobian: Example and Test
5.6.5: subgraph_jac_rev
Compute Sparse Jacobians Using Subgraphs
5.6.5.1: subgraph_jac_rev.cpp
Computing Sparse Jacobian Using Reverse Mode: Example and Test
5.4.4: subgraph_reverse
Reverse Mode Using Subgraphs
5.4.4.1: subgraph_reverse.cpp
Computing Reverse Mode on Subgraphs: Example and Test
5.5.11: subgraph_sparsity
Subgraph Dependency Sparsity Patterns
5.5.11.1: subgraph_sparsity.cpp
Subgraph Dependency Sparsity Patterns: Example and Test
T
8.23.11: ta_available
Amount of Memory Available for Quick Use by a Thread
8.23.12: ta_create_array
Allocate An Array and Call Default Constructor for its Elements
8.23.13: ta_delete_array
Deallocate An Array and Call Destructor for its Elements
8.23.14: ta_free_all
Free All Memory That Was Allocated for Use by thread_alloc
8.23.8: ta_free_available
Free Memory Currently Available for Quick Use by a Thread
8.23.6: ta_get_memory
Get At Least A Specified Amount of Memory
8.23.9: ta_hold_memory
Control When Thread Alloc Retains Memory For Future Use
8.23.4: ta_in_parallel
Is The Current Execution in Parallel Mode
8.23.10: ta_inuse
Amount of Memory a Thread is Currently Using
8.23.3: ta_num_threads
Get Number of Threads
8.23.2: ta_parallel_setup
Setup thread_alloc For Use in Multi-Threading Environment
8.23.7: ta_return_memory
Return Memory to thread_alloc
8.23.5: ta_thread_num
Get the Current Thread Number
4.4.2.12: tan
The Tangent Function: tan
4.4.2.12.1: tan.cpp
The AD tan Function: Example and Test
12.3.1.8: tan_forward
Tangent and Hyperbolic Tangent Forward Taylor Polynomial Theory
12.3.2.8: tan_reverse
Tangent and Hyperbolic Tangent Reverse Mode Theory
4.4.2.13: tanh
The Hyperbolic Tangent Function: tanh
4.4.2.13.1: tanh.cpp
The AD tanh Function: Example and Test
4.4.5.1: tape_index.cpp
Taping Array Index Operation: Example and Test
7.2.11.2: team_bthread.cpp
Boost Thread Implementation of a Team of AD Threads
7.2.7: team_example.cpp
Using a Team of AD Threads: Example and Test
7.2.11.1: team_openmp.cpp
OpenMP Implementation of a Team of AD Threads
7.2.11.3: team_pthread.cpp
Pthread Implementation of a Team of AD Threads
7.2.11: team_thread.hpp
Specifications for A Team of AD Threads
8.6: test_boolofvoid
Object that Runs a Group of Tests
12.8.9: test_vector
Choosing The Vector Testing Template Class
10.5: testvector
Using The CppAD Test Vector Template Class
12.3: Theory
The Theory of Derivative Calculations
8.23: thread_alloc
A Fast Multi-Threading Memory Allocator
8.23.1: thread_alloc.cpp
Fast Multi-Threading Memory Allocator: Example and Test
7.2: thread_test.cpp
Run Multi-Threading Examples and Speed Tests
12.9.7: time_det_by_minor_c
Determine Amount of Time to Execute det_by_minor
8.5: time_test
Determine Amount of Time to Execute a Test
8.5.2: time_test.cpp
time_test: Example and Test
8.25: to_string
Convert Certain Types to a String
8.25.1: to_string.cpp
to_string: Example and Test
12.8.5: TrackNewDel
Routines That Track Use of New and Delete
12.8.5.1: TrackNewDel.cpp
Tracking Use of New and Delete: Example and Test
U
4.4.1.2.1: unary_minus.cpp
AD Unary Minus Operator: Example and Test
4.4.1.1.1: unary_plus.cpp
AD Unary Plus Operator: Example and Test
4.4.2: unary_standard_math
The Unary Standard Math Functions
4.4.1.2: UnaryMinus
AD Unary Minus Operator
4.4.1.1: UnaryPlus
AD Unary Plus Operator
11.2.10: uniform_01
Simulate a [0,1] Uniform Random Variate
11.2.10.1: uniform_01.hpp
Source: uniform_01
12.9.3: uniform_01_c
Simulate a [0,1] Uniform Random Variate
8: utility
Some General Purpose Utilities
V
4.3.1: Value
Convert From an AD Type to its Base Type
4.3.1.1: value.cpp
Convert From AD to its Base Type: Example and Test
4.3.7: Var2Par
Convert an AD Variable to a Parameter
4.3.7.1: var2par.cpp
Convert an AD Variable to a Parameter: Example and Test
4.6.1: vec_ad.cpp
AD Vectors that Record Index Operations: Example and Test
4.6: VecAD
AD Vectors that Record Index Operations
8.22.2: vector_bool.cpp
CppAD::vectorBool Class: Example and Test
W
12.7: whats_new
Changes and Additions to CppAD
12.7.15: whats_new_03
Changes and Additions to CppAD During 2003
12.7.14: whats_new_04
Changes and Additions to CppAD During 2004
12.7.13: whats_new_05
Changes and Additions to CppAD During 2005
12.7.12: whats_new_06
Changes and Additions to CppAD During 2006
12.7.11: whats_new_07
Changes and Additions to CppAD During 2007
12.7.10: whats_new_08
Changes and Additions to CppAD During 2008
12.7.9: whats_new_09
Changes and Additions to CppAD During 2009
12.7.8: whats_new_10
Changes and Additions to CppAD During 2010
12.7.7: whats_new_11
Changes and Additions to CppAD During 2011
12.7.6: whats_new_12
CppAD Changes and Additions During 2012
12.7.5: whats_new_13
CppAD Changes and Additions During 2013
12.7.4: whats_new_14
CppAD Changes and Additions During 2014
12.7.3: whats_new_15
CppAD Changes and Additions During 2015
12.7.2: whats_new_16
Changes and Additions to CppAD During 2016
12.7.1: whats_new_17
Changes and Additions to CppAD During 2017
12.6: wish_list
The CppAD Wish List
10.6: wno_conversion
Suppress Suspect Implicit Conversion Warnings
Z
12.8.12: zdouble
zdouble: An AD Base Type With Absolute Zero
12.8.12.1: zdouble.cpp
zdouble: Example and Test

14: Keyword Index
!= 4.5.1: AD Binary Comparison Operators
   4.5.1.1: AD Binary Comparison Operators: Example and Test
(checkpointing): 5.4.3.2: Reverse Mode General Case (Checkpointing): Example and Test
(double 11.3.3: CppAD Speed: Matrix Multiplication (Double Version)
(except 10.4: List All (Except Deprecated) CppAD Examples
(record 5.1: Create an ADFun Object (Record an Operation Sequence)
* 4.4.1.3: AD Binary Arithmetic Operators
  4.4.1.4: AD Compound Assignment Operators
  4.4.1.3.3: AD Binary Multiplication: Example and Test
*= 4.4.1.4.3: AD Compound Assignment Multiplication: Example and Test
+ 4.4.1.4: AD Compound Assignment Operators
  4.4.1.3.1: AD Binary Addition: Example and Test
  4.4.1.1: AD Unary Plus Operator
  4.4.1.3: AD Binary Arithmetic Operators
+= 4.4.1.4.1: AD Compound Assignment Addition: Example and Test
- 4.4.1.4: AD Compound Assignment Operators
  4.4.1.3.2: AD Binary Subtraction: Example and Test
  4.4.1.3: AD Binary Arithmetic Operators
  4.4.1.2: AD Unary Minus Operator
--with-documentation 12.8.13.h: Autotools Unix Test and Installation: --with-Documentation
--with-testvector 12.8.13.i: Autotools Unix Test and Installation: --with-testvector
-= 4.4.1.4.2: AD Compound Assignment Subtraction: Example and Test
/ 7.2.10: Multi-Threaded Newton Method Example / Test
  7.2.9: Multi-Threading User Atomic Example / Test
  7.2.8: Multi-Threading Harmonic Summation Example / Test
  4.4.1.4: AD Compound Assignment Operators
  4.4.1.3.4: AD Binary Division: Example and Test
  4.4.1.3: AD Binary Arithmetic Operators
/= 4.4.1.4.4: AD Compound Assignment Division: Example and Test
0 11.2.7.f.a: Evaluate a Function Defined in Terms of an ODE: p.p == 0
  4.4.7.2.18.1.f.b: AD Theory for Cholesky Factorization: Reverse Mode.Case k > 0
  4.4.7.2.18.1.f.a: AD Theory for Cholesky Factorization: Reverse Mode.Case k = 0
01-02 12.7.12.dd: Changes and Additions to CppAD During 2006: 01-02
      12.7.6.ck: CppAD Changes and Additions During 2012: 01-02
      12.7.3.bx: CppAD Changes and Additions During 2015: 01-02
01-04 12.7.8.ab: Changes and Additions to CppAD During 2010: 01-04
01-05 12.7.12.de: Changes and Additions to CppAD During 2006: 01-05
01-06 12.7.9.az: Changes and Additions to CppAD During 2009: 01-06
01-07 12.7.12.dc: Changes and Additions to CppAD During 2006: 01-07
      12.7.6.cj: CppAD Changes and Additions During 2012: 01-07
      12.7.3.bw: CppAD Changes and Additions During 2015: 01-07
01-08 12.7.13.ca: Changes and Additions to CppAD During 2005: 01-08
      12.7.12.db: Changes and Additions to CppAD During 2006: 01-08
      12.7.10.av: Changes and Additions to CppAD During 2008: 01-08
01-09 12.7.7.bo: Changes and Additions to CppAD During 2011: 01-09
      12.7.3.bv: CppAD Changes and Additions During 2015: 01-09
01-10 12.7.4.ap: CppAD Changes and Additions During 2014: 01-10
01-11 12.7.10.au: Changes and Additions to CppAD During 2008: 01-11
01-12 12.7.6.ci: CppAD Changes and Additions During 2012: 01-12
01-15 12.7.6.ch: CppAD Changes and Additions During 2012: 01-15
01-16 12.7.7.bn: Changes and Additions to CppAD During 2011: 01-16
      12.7.6.cg: CppAD Changes and Additions During 2012: 01-16
01-17 12.7.1.bo: Changes and Additions to CppAD During 2017: 01-17
01-18 12.7.12.da: Changes and Additions to CppAD During 2006: 01-18
      12.7.9.ay: Changes and Additions to CppAD During 2009: 01-18
      12.7.8.aa: Changes and Additions to CppAD During 2010: 01-18
      12.7.2.ax: Changes and Additions to CppAD During 2016: 01-18
      12.7.1.bn: Changes and Additions to CppAD During 2017: 01-18
01-19 12.7.7.bm: Changes and Additions to CppAD During 2011: 01-19
      12.7.6.cf: CppAD Changes and Additions During 2012: 01-19
      12.7.1.bm: Changes and Additions to CppAD During 2017: 01-19
01-20 12.7.12.cz: Changes and Additions to CppAD During 2006: 01-20
      12.7.10.at: Changes and Additions to CppAD During 2008: 01-20
      12.7.8.z: Changes and Additions to CppAD During 2010: 01-20
      12.7.6.ce: CppAD Changes and Additions During 2012: 01-20
      12.7.3.bu: CppAD Changes and Additions During 2015: 01-20
      12.7.2.aw: Changes and Additions to CppAD During 2016: 01-20
01-21 12.7.10.as: Changes and Additions to CppAD During 2008: 01-21
      12.7.4.ao: CppAD Changes and Additions During 2014: 01-21
      12.7.3.bt: CppAD Changes and Additions During 2015: 01-21
      12.7.2.av: Changes and Additions to CppAD During 2016: 01-21
01-22 12.7.14.cs: Changes and Additions to CppAD During 2004: 01-22
01-23 12.7.8.y: Changes and Additions to CppAD During 2010: 01-23
      12.7.6.cd: CppAD Changes and Additions During 2012: 01-23
      12.7.3.bs: CppAD Changes and Additions During 2015: 01-23
01-24 12.7.10.ar: Changes and Additions to CppAD During 2008: 01-24
      12.7.8.x: Changes and Additions to CppAD During 2010: 01-24
      12.7.6.cc: CppAD Changes and Additions During 2012: 01-24
01-26 12.7.10.aq: Changes and Additions to CppAD During 2008: 01-26
      12.7.8.w: Changes and Additions to CppAD During 2010: 01-26
      12.7.4.an: CppAD Changes and Additions During 2014: 01-26
      12.7.3.br: CppAD Changes and Additions During 2015: 01-26
01-27 12.7.6.cb: CppAD Changes and Additions During 2012: 01-27
      12.7.1.bl: Changes and Additions to CppAD During 2017: 01-27
01-28 12.7.14.cr: Changes and Additions to CppAD During 2004: 01-28
01-29 12.7.14.cq: Changes and Additions to CppAD During 2004: 01-29
      12.7.11.bm: Changes and Additions to CppAD During 2007: 01-29
      12.7.3.bq: CppAD Changes and Additions During 2015: 01-29
      12.7.1.bk: Changes and Additions to CppAD During 2017: 01-29
01-30 12.7.6.ca: CppAD Changes and Additions During 2012: 01-30
      12.7.3.bp: CppAD Changes and Additions During 2015: 01-30
      12.7.1.bj: Changes and Additions to CppAD During 2017: 01-30
01-31 12.7.9.ax: Changes and Additions to CppAD During 2009: 01-31
02-01 12.7.14.cp: Changes and Additions to CppAD During 2004: 02-01
      12.7.11.bl: Changes and Additions to CppAD During 2007: 02-01
      12.7.9.aw: Changes and Additions to CppAD During 2009: 02-01
      12.7.7.bl: Changes and Additions to CppAD During 2011: 02-01
      12.7.1.bi: Changes and Additions to CppAD During 2017: 02-01
02-02 12.7.11.bk: Changes and Additions to CppAD During 2007: 02-02
      12.7.7.bk: Changes and Additions to CppAD During 2011: 02-02
      12.7.3.bo: CppAD Changes and Additions During 2015: 02-02
      12.7.1.bh: Changes and Additions to CppAD During 2017: 02-02
02-03 12.7.11.bj: Changes and Additions to CppAD During 2007: 02-03
      12.7.10.ap: Changes and Additions to CppAD During 2008: 02-03
      12.7.8.v: Changes and Additions to CppAD During 2010: 02-03
      12.7.3.bn: CppAD Changes and Additions During 2015: 02-03
      12.7.1.bg: Changes and Additions to CppAD During 2017: 02-03
02-04 12.7.12.cy: Changes and Additions to CppAD During 2006: 02-04
      12.7.11.bi: Changes and Additions to CppAD During 2007: 02-04
      12.7.3.bm: CppAD Changes and Additions During 2015: 02-04
02-05 12.7.10.ao: Changes and Additions to CppAD During 2008: 02-05
      12.7.8.u: Changes and Additions to CppAD During 2010: 02-05
      12.7.1.bf: Changes and Additions to CppAD During 2017: 02-05
02-06 12.7.11.bh: Changes and Additions to CppAD During 2007: 02-06
      12.7.8.t: Changes and Additions to CppAD During 2010: 02-06
      12.7.7.bj: Changes and Additions to CppAD During 2011: 02-06
      12.7.3.bl: CppAD Changes and Additions During 2015: 02-06
02-07 12.7.3.bk: CppAD Changes and Additions During 2015: 02-07
02-08 12.7.8.s: Changes and Additions to CppAD During 2010: 02-08
      12.7.1.be: Changes and Additions to CppAD During 2017: 02-08
02-09 12.7.7.bi: Changes and Additions to CppAD During 2011: 02-09
      12.7.6.bz: CppAD Changes and Additions During 2012: 02-09
      12.7.3.bj: CppAD Changes and Additions During 2015: 02-09
      12.7.1.bd: Changes and Additions to CppAD During 2017: 02-09
02-10 12.7.12.cx: Changes and Additions to CppAD During 2006: 02-10
      12.7.6.by: CppAD Changes and Additions During 2012: 02-10
      12.7.3.bi: CppAD Changes and Additions During 2015: 02-10
      12.7.1.bc: Changes and Additions to CppAD During 2017: 02-10
02-11 12.7.12.cw: Changes and Additions to CppAD During 2006: 02-11
      12.7.8.r: Changes and Additions to CppAD During 2010: 02-11
      12.7.6.bx: CppAD Changes and Additions During 2012: 02-11
      12.7.3.bh: CppAD Changes and Additions During 2015: 02-11
      12.7.1.bb: Changes and Additions to CppAD During 2017: 02-11
02-12 12.7.14.co: Changes and Additions to CppAD During 2004: 02-12
02-13 12.7.12.cv: Changes and Additions to CppAD During 2006: 02-13
      12.7.1.ba: Changes and Additions to CppAD During 2017: 02-13
02-14 12.7.12.cu: Changes and Additions to CppAD During 2006: 02-14
      12.7.3.bg: CppAD Changes and Additions During 2015: 02-14
02-15 12.7.14.cn: Changes and Additions to CppAD During 2004: 02-15
      12.7.12.ct: Changes and Additions to CppAD During 2006: 02-15
      12.7.11.bg.f: Changes and Additions to CppAD During 2007: 03-09.02-15
      12.7.9.av: Changes and Additions to CppAD During 2009: 02-15
      12.7.7.bh: Changes and Additions to CppAD During 2011: 02-15
      12.7.4.am: CppAD Changes and Additions During 2014: 02-15
      12.7.1.az: Changes and Additions to CppAD During 2017: 02-15
02-16 12.7.14.cm: Changes and Additions to CppAD During 2004: 02-16
      12.7.11.bg.e: Changes and Additions to CppAD During 2007: 03-09.02-16
      12.7.3.bf: CppAD Changes and Additions During 2015: 02-16
02-17 12.7.14.cl: Changes and Additions to CppAD During 2004: 02-17
      12.7.11.bg.d: Changes and Additions to CppAD During 2007: 03-09.02-17
      12.7.7.bg: Changes and Additions to CppAD During 2011: 02-17
      12.7.4.al: CppAD Changes and Additions During 2014: 02-17
02-18 12.7.3.be: CppAD Changes and Additions During 2015: 02-18
02-19 12.7.7.bf: Changes and Additions to CppAD During 2011: 02-19
      12.7.1.ay: Changes and Additions to CppAD During 2017: 02-19
02-20 12.7.14.ck: Changes and Additions to CppAD During 2004: 02-20
      12.7.9.au: Changes and Additions to CppAD During 2009: 02-20
02-21 12.7.14.cj: Changes and Additions to CppAD During 2004: 02-21
      12.7.12.cs: Changes and Additions to CppAD During 2006: 02-21
      12.7.1.ax: Changes and Additions to CppAD During 2017: 02-21
02-22 12.7.11.bg.c: Changes and Additions to CppAD During 2007: 03-09.02-22
      12.7.7.be: Changes and Additions to CppAD During 2011: 02-22
      12.7.4.ak: CppAD Changes and Additions During 2014: 02-22
02-23 12.7.12.cr: Changes and Additions to CppAD During 2006: 02-23
      12.7.4.aj: CppAD Changes and Additions During 2014: 02-23
      12.7.2.au: Changes and Additions to CppAD During 2016: 02-23
02-24 12.7.13.bz: Changes and Additions to CppAD During 2005: 02-24
      12.7.12.cq: Changes and Additions to CppAD During 2006: 02-24
02-25 12.7.12.cp: Changes and Additions to CppAD During 2006: 02-25
      12.7.2.at: Changes and Additions to CppAD During 2016: 02-25
02-26 12.7.4.ai: CppAD Changes and Additions During 2014: 02-26
      12.7.2.as: Changes and Additions to CppAD During 2016: 02-26
      12.7.1.aw: Changes and Additions to CppAD During 2017: 02-26
02-27 12.7.11.bg.b: Changes and Additions to CppAD During 2007: 03-09.02-27
      12.7.4.ah: CppAD Changes and Additions During 2014: 02-27
      12.7.2.ar: Changes and Additions to CppAD During 2016: 02-27
02-28 12.7.14.ci: Changes and Additions to CppAD During 2004: 02-28
      12.7.12.co: Changes and Additions to CppAD During 2006: 02-28
      12.7.4.ag: CppAD Changes and Additions During 2014: 02-28
      12.7.3.bd: CppAD Changes and Additions During 2015: 02-28
      12.7.2.aq: Changes and Additions to CppAD During 2016: 02-28
02-29 12.7.14.ch: Changes and Additions to CppAD During 2004: 02-29
      12.7.2.ap: Changes and Additions to CppAD During 2016: 02-29
03-01 12.7.14.cg: Changes and Additions to CppAD During 2004: 03-01
      12.7.13.by: Changes and Additions to CppAD During 2005: 03-01
      12.7.4.af: CppAD Changes and Additions During 2014: 03-01
      12.7.2.ao: Changes and Additions to CppAD During 2016: 03-01
03-02 12.7.6.bw: CppAD Changes and Additions During 2012: 03-02
      12.7.4.ae: CppAD Changes and Additions During 2014: 03-02
03-03 12.7.14.cf: Changes and Additions to CppAD During 2004: 03-03
      12.7.8.q: Changes and Additions to CppAD During 2010: 03-03
      12.7.6.bv: CppAD Changes and Additions During 2012: 03-03
03-04 12.7.14.ce: Changes and Additions to CppAD During 2004: 03-04
      12.7.13.bx: Changes and Additions to CppAD During 2005: 03-04
      12.7.1.av: Changes and Additions to CppAD During 2017: 03-04
03-05 12.7.14.cd: Changes and Additions to CppAD During 2004: 03-05
      12.7.12.cn: Changes and Additions to CppAD During 2006: 03-05
      12.7.7.bd: Changes and Additions to CppAD During 2011: 03-05
      12.7.4.ad: CppAD Changes and Additions During 2014: 03-05
      12.7.2.an: Changes and Additions to CppAD During 2016: 03-05
03-06 12.7.14.cc: Changes and Additions to CppAD During 2004: 03-06
      12.7.3.bc: CppAD Changes and Additions During 2015: 03-06
      12.7.1.au: Changes and Additions to CppAD During 2017: 03-06
03-07 12.7.14.cb: Changes and Additions to CppAD During 2004: 03-07
      12.7.12.cm: Changes and Additions to CppAD During 2006: 03-07
03-09 12.7.14.ca: Changes and Additions to CppAD During 2004: 03-09
      12.7.13.bw: Changes and Additions to CppAD During 2005: 03-09
      12.7.12.cl: Changes and Additions to CppAD During 2006: 03-09
      12.7.11.bg: Changes and Additions to CppAD During 2007: 03-09
      12.7.8.p: Changes and Additions to CppAD During 2010: 03-09
      12.7.4.ac: CppAD Changes and Additions During 2014: 03-09
03-10 12.7.12.ck: Changes and Additions to CppAD During 2006: 03-10
      12.7.8.o: Changes and Additions to CppAD During 2010: 03-10
      12.7.1.at: Changes and Additions to CppAD During 2017: 03-10
03-11 12.7.14.bz: Changes and Additions to CppAD During 2004: 03-11
      12.7.12.cj: Changes and Additions to CppAD During 2006: 03-11
      12.7.8.n: Changes and Additions to CppAD During 2010: 03-11
      12.7.7.bc: Changes and Additions to CppAD During 2011: 03-11
      12.7.6.bu: CppAD Changes and Additions During 2012: 03-11
      12.7.1.as: Changes and Additions to CppAD During 2017: 03-11
03-12 12.7.14.by: Changes and Additions to CppAD During 2004: 03-12
      12.7.2.am: Changes and Additions to CppAD During 2016: 03-12
03-13 12.7.11.bf.c: Changes and Additions to CppAD During 2007: 03-15.03-13
      12.7.3.bb: CppAD Changes and Additions During 2015: 03-13
      12.7.1.ar: Changes and Additions to CppAD During 2017: 03-13
03-14 12.7.11.bf.b: Changes and Additions to CppAD During 2007: 03-15.03-14
03-15 12.7.14.bx: Changes and Additions to CppAD During 2004: 03-15
      12.7.12.ci: Changes and Additions to CppAD During 2006: 03-15
      12.7.11.bf.a: Changes and Additions to CppAD During 2007: 03-15.03-15
      12.7.11.bf: Changes and Additions to CppAD During 2007: 03-15
03-16 12.7.12.ch: Changes and Additions to CppAD During 2006: 03-16
03-17 12.7.14.bw: Changes and Additions to CppAD During 2004: 03-17
      12.7.12.cg: Changes and Additions to CppAD During 2006: 03-17
      12.7.6.bt: CppAD Changes and Additions During 2012: 03-17
      12.7.4.ab: CppAD Changes and Additions During 2014: 03-17
      12.7.2.al: Changes and Additions to CppAD During 2016: 03-17
03-18 12.7.14.bv: Changes and Additions to CppAD During 2004: 03-18
      12.7.12.cf: Changes and Additions to CppAD During 2006: 03-18
      12.7.4.aa: CppAD Changes and Additions During 2014: 03-18
03-19 12.7.7.bb: Changes and Additions to CppAD During 2011: 03-19
      12.7.2.ak: Changes and Additions to CppAD During 2016: 03-19
03-20 12.7.11.be: Changes and Additions to CppAD During 2007: 03-20
      12.7.2.aj: Changes and Additions to CppAD During 2016: 03-20
      12.7.1.aq: Changes and Additions to CppAD During 2017: 03-20
03-21 12.7.6.bs: CppAD Changes and Additions During 2012: 03-21
      12.7.2.ai: Changes and Additions to CppAD During 2016: 03-21
03-22 12.7.13.bv: Changes and Additions to CppAD During 2005: 03-22
      12.7.12.ce: Changes and Additions to CppAD During 2006: 03-22
      12.7.2.ah: Changes and Additions to CppAD During 2016: 03-22
03-23 12.7.13.bu: Changes and Additions to CppAD During 2005: 03-23
      12.7.12.cd: Changes and Additions to CppAD During 2006: 03-23
      12.7.6.br: CppAD Changes and Additions During 2012: 03-23
      12.7.2.ag: Changes and Additions to CppAD During 2016: 03-23
03-24 12.7.12.cc: Changes and Additions to CppAD During 2006: 03-24
      12.7.9.at: Changes and Additions to CppAD During 2009: 03-24
      12.7.2.af: Changes and Additions to CppAD During 2016: 03-24
03-25 12.7.14.bu: Changes and Additions to CppAD During 2004: 03-25
      12.7.2.ae: Changes and Additions to CppAD During 2016: 03-25
      12.7.1.ap: Changes and Additions to CppAD During 2017: 03-25
03-26 12.7.13.bt: Changes and Additions to CppAD During 2005: 03-26
      12.7.12.cb: Changes and Additions to CppAD During 2006: 03-26
      12.7.6.bq: CppAD Changes and Additions During 2012: 03-26
      12.7.2.ad: Changes and Additions to CppAD During 2016: 03-26
03-27 12.7.12.ca: Changes and Additions to CppAD During 2006: 03-27
      12.7.10.an: Changes and Additions to CppAD During 2008: 03-27
      12.7.6.bp: CppAD Changes and Additions During 2012: 03-27
      12.7.2.ac: Changes and Additions to CppAD During 2016: 03-27
03-28 12.7.14.bt: Changes and Additions to CppAD During 2004: 03-28
      12.7.12.bz: Changes and Additions to CppAD During 2006: 03-28
      12.7.11.bg.a: Changes and Additions to CppAD During 2007: 03-09.03-28
03-29 12.7.12.by: Changes and Additions to CppAD During 2006: 03-29
      12.7.11.bd: Changes and Additions to CppAD During 2007: 03-29
      12.7.1.ao: Changes and Additions to CppAD During 2017: 03-29
03-30 12.7.14.bs: Changes and Additions to CppAD During 2004: 03-30
      12.7.12.bx: Changes and Additions to CppAD During 2006: 03-30
      12.7.11.bc.b: Changes and Additions to CppAD During 2007: 03-31.03-30
03-31 12.7.12.bw: Changes and Additions to CppAD During 2006: 03-31
      12.7.11.bc.a: Changes and Additions to CppAD During 2007: 03-31.03-31
      12.7.11.bc: Changes and Additions to CppAD During 2007: 03-31
      12.7.1.an: Changes and Additions to CppAD During 2017: 03-31
04-01 12.7.14.br: Changes and Additions to CppAD During 2004: 04-01
      12.7.12.bv: Changes and Additions to CppAD During 2006: 04-01
      12.7.8.m: Changes and Additions to CppAD During 2010: 04-01
      12.7.6.bo: CppAD Changes and Additions During 2012: 04-01
04-02 12.7.14.bq: Changes and Additions to CppAD During 2004: 04-02
      12.7.12.bu: Changes and Additions to CppAD During 2006: 04-02
      12.7.1.am: Changes and Additions to CppAD During 2017: 04-02
04-03 12.7.14.bp: Changes and Additions to CppAD During 2004: 04-03
      12.7.12.bt: Changes and Additions to CppAD During 2006: 04-03
04-04 12.7.12.bs: Changes and Additions to CppAD During 2006: 04-04
      12.7.10.am: Changes and Additions to CppAD During 2008: 04-04
04-05 12.7.12.br: Changes and Additions to CppAD During 2006: 04-05
      12.7.11.bb.e: Changes and Additions to CppAD During 2007: 04-11.04-05
      12.7.6.bn: CppAD Changes and Additions During 2012: 04-05
04-06 12.7.12.bq: Changes and Additions to CppAD During 2006: 04-06
      12.7.11.bb.d: Changes and Additions to CppAD During 2007: 04-11.04-06
      12.7.10.al: Changes and Additions to CppAD During 2008: 04-06
      12.7.6.bm: CppAD Changes and Additions During 2012: 04-06
04-07 12.7.14.bo: Changes and Additions to CppAD During 2004: 04-07
      12.7.11.bb.c: Changes and Additions to CppAD During 2007: 04-11.04-07
04-08 12.7.14.bn: Changes and Additions to CppAD During 2004: 04-08
      12.7.12.bp: Changes and Additions to CppAD During 2006: 04-08
      12.7.1.al: Changes and Additions to CppAD During 2017: 04-08
04-09 12.7.14.bm: Changes and Additions to CppAD During 2004: 04-09
04-10 12.7.11.bb.b: Changes and Additions to CppAD During 2007: 04-11.04-10
      12.7.10.ak: Changes and Additions to CppAD During 2008: 04-10
      12.7.6.bl: CppAD Changes and Additions During 2012: 04-10
04-11 12.7.11.bb.a: Changes and Additions to CppAD During 2007: 04-11.04-11
      12.7.11.bb: Changes and Additions to CppAD During 2007: 04-11
04-13 12.7.12.bo: Changes and Additions to CppAD During 2006: 04-13
04-14 12.7.12.bn: Changes and Additions to CppAD During 2006: 04-14
      12.7.11.ba: Changes and Additions to CppAD During 2007: 04-14
04-15 12.7.12.bm: Changes and Additions to CppAD During 2006: 04-15
04-17 12.7.12.bl: Changes and Additions to CppAD During 2006: 04-17
      12.7.11.az: Changes and Additions to CppAD During 2007: 04-17
      12.7.6.bk: CppAD Changes and Additions During 2012: 04-17
      12.7.2.ab: Changes and Additions to CppAD During 2016: 04-17
04-18 12.7.12.bk: Changes and Additions to CppAD During 2006: 04-18
      12.7.10.aj: Changes and Additions to CppAD During 2008: 04-18
      12.7.6.bj: CppAD Changes and Additions During 2012: 04-18
      12.7.3.ba: CppAD Changes and Additions During 2015: 04-18
04-19 12.7.14.bl: Changes and Additions to CppAD During 2004: 04-19
      12.7.13.bs: Changes and Additions to CppAD During 2005: 04-19
      12.7.12.bj: Changes and Additions to CppAD During 2006: 04-19
      12.7.11.ay: Changes and Additions to CppAD During 2007: 04-19
      12.7.7.ba: Changes and Additions to CppAD During 2011: 04-19
      12.7.6.bi: CppAD Changes and Additions During 2012: 04-19
04-20 12.7.14.bk: Changes and Additions to CppAD During 2004: 04-20
      12.7.13.br: Changes and Additions to CppAD During 2005: 04-20
      12.7.10.ai: Changes and Additions to CppAD During 2008: 04-20
      12.7.7.az: Changes and Additions to CppAD During 2011: 04-20
04-21 12.7.14.bj: Changes and Additions to CppAD During 2004: 04-21
      12.7.13.bq: Changes and Additions to CppAD During 2005: 04-21
04-22 12.7.14.bi: Changes and Additions to CppAD During 2004: 04-22
04-24 12.7.14.bh: Changes and Additions to CppAD During 2004: 04-24
      12.7.8.l: Changes and Additions to CppAD During 2010: 04-24
04-25 12.7.14.bg: Changes and Additions to CppAD During 2004: 04-25
      12.7.12.bi: Changes and Additions to CppAD During 2006: 04-25
04-26 12.7.12.bh: Changes and Additions to CppAD During 2006: 04-26
      12.7.8.k: Changes and Additions to CppAD During 2010: 04-26
      12.7.5.ai: CppAD Changes and Additions During 2013: 04-26
04-27 12.7.5.ah: CppAD Changes and Additions During 2013: 04-27
04-28 12.7.14.bf: Changes and Additions to CppAD During 2004: 04-28
      12.7.12.bg: Changes and Additions to CppAD During 2006: 04-28
      12.7.8.j: Changes and Additions to CppAD During 2010: 04-28
      12.7.5.ag: CppAD Changes and Additions During 2013: 04-28
04-29 12.7.14.be: Changes and Additions to CppAD During 2004: 04-29
      12.7.12.bf: Changes and Additions to CppAD During 2006: 04-29
      12.7.7.ay: Changes and Additions to CppAD During 2011: 04-29
05-01 12.7.13.bp: Changes and Additions to CppAD During 2005: 05-01
05-03 12.7.14.bd: Changes and Additions to CppAD During 2004: 05-03
      12.7.12.be: Changes and Additions to CppAD During 2006: 05-03
      12.7.10.ah: Changes and Additions to CppAD During 2008: 05-03
      12.7.7.ax: Changes and Additions to CppAD During 2011: 05-03
05-04 12.7.14.bc: Changes and Additions to CppAD During 2004: 05-04
      12.7.5.af: CppAD Changes and Additions During 2013: 05-04
05-05 12.7.11.ax: Changes and Additions to CppAD During 2007: 05-05
      12.7.3.az: CppAD Changes and Additions During 2015: 05-05
      12.7.2.aa: Changes and Additions to CppAD During 2016: 05-05
05-06 12.7.13.bo: Changes and Additions to CppAD During 2005: 05-06
05-07 12.7.14.bb: Changes and Additions to CppAD During 2004: 05-07
      12.7.3.ay: CppAD Changes and Additions During 2015: 05-07
05-08 12.7.11.aw: Changes and Additions to CppAD During 2007: 05-08
      12.7.10.ag: Changes and Additions to CppAD During 2008: 05-08
      12.7.3.ax: CppAD Changes and Additions During 2015: 05-08
05-09 12.7.14.ba: Changes and Additions to CppAD During 2004: 05-09
      12.7.3.aw: CppAD Changes and Additions During 2015: 05-09
05-10 12.7.3.av: CppAD Changes and Additions During 2015: 05-10
05-11 12.7.7.aw: Changes and Additions to CppAD During 2011: 05-11
      12.7.5.ae: CppAD Changes and Additions During 2013: 05-11
      12.7.3.au: CppAD Changes and Additions During 2015: 05-11
05-12 12.7.14.az: Changes and Additions to CppAD During 2004: 05-12
      12.7.13.bn: Changes and Additions to CppAD During 2005: 05-12
      12.7.5.ad: CppAD Changes and Additions During 2013: 05-12
      12.7.1.ak: Changes and Additions to CppAD During 2017: 05-12
05-14 12.7.14.ay: Changes and Additions to CppAD During 2004: 05-14
      12.7.5.ac: CppAD Changes and Additions During 2013: 05-14
      12.7.4.z: CppAD Changes and Additions During 2014: 05-14
      12.7.1.aj: Changes and Additions to CppAD During 2017: 05-14
05-15 12.7.5.ab: CppAD Changes and Additions During 2013: 05-15
05-16 12.7.13.bm: Changes and Additions to CppAD During 2005: 05-16
      12.7.4.y: CppAD Changes and Additions During 2014: 05-16
05-17 12.7.5.aa: CppAD Changes and Additions During 2013: 05-17
05-18 12.7.13.bl: Changes and Additions to CppAD During 2005: 05-18
05-19 12.7.13.bk: Changes and Additions to CppAD During 2005: 05-19
      12.7.4.x: CppAD Changes and Additions During 2014: 05-19
      12.7.1.ai: Changes and Additions to CppAD During 2017: 05-19
05-20 12.7.9.as: Changes and Additions to CppAD During 2009: 05-20
      12.7.4.w: CppAD Changes and Additions During 2014: 05-20
05-21 12.7.5.z: CppAD Changes and Additions During 2013: 05-21
05-22 12.7.11.av: Changes and Additions to CppAD During 2007: 05-22
      12.7.7.av: Changes and Additions to CppAD During 2011: 05-22
      12.7.4.v: CppAD Changes and Additions During 2014: 05-22
05-23 12.7.4.u: CppAD Changes and Additions During 2014: 05-23
05-24 12.7.11.au: Changes and Additions to CppAD During 2007: 05-24
      12.7.6.bh: CppAD Changes and Additions During 2012: 05-24
05-25 12.7.14.ax: Changes and Additions to CppAD During 2004: 05-25
      12.7.11.at: Changes and Additions to CppAD During 2007: 05-25
05-26 12.7.14.aw: Changes and Additions to CppAD During 2004: 05-26
      12.7.11.as: Changes and Additions to CppAD During 2007: 05-26
      12.7.7.au: Changes and Additions to CppAD During 2011: 05-26
      12.7.3.at: CppAD Changes and Additions During 2015: 05-26
05-27 12.7.12.bd: Changes and Additions to CppAD During 2006: 05-27
      12.7.6.bg: CppAD Changes and Additions During 2012: 05-27
      12.7.4.t: CppAD Changes and Additions During 2014: 05-27
05-28 12.7.7.at: Changes and Additions to CppAD During 2011: 05-28
      12.7.5.y: CppAD Changes and Additions During 2013: 05-28
      12.7.4.s: CppAD Changes and Additions During 2014: 05-28
05-29 12.7.14.av: Changes and Additions to CppAD During 2004: 05-29
      12.7.12.bc: Changes and Additions to CppAD During 2006: 05-29
      12.7.7.as: Changes and Additions to CppAD During 2011: 05-29
      12.7.6.bf: CppAD Changes and Additions During 2012: 05-29
      12.7.1.ah: Changes and Additions to CppAD During 2017: 05-29
05-30 12.7.14.au: Changes and Additions to CppAD During 2004: 05-30
      12.7.6.be: CppAD Changes and Additions During 2012: 05-30
05-31 12.7.12.bb: Changes and Additions to CppAD During 2006: 05-31
      12.7.6.bd: CppAD Changes and Additions During 2012: 05-31
06-01 12.7.14.at: Changes and Additions to CppAD During 2004: 06-01
      12.7.8.i: Changes and Additions to CppAD During 2010: 06-01
      12.7.6.bc: CppAD Changes and Additions During 2012: 06-01
      12.7.1.ag: Changes and Additions to CppAD During 2017: 06-01
06-02 12.7.12.ba: Changes and Additions to CppAD During 2006: 06-02
      12.7.6.bb: CppAD Changes and Additions During 2012: 06-02
06-03 12.7.14.as: Changes and Additions to CppAD During 2004: 06-03
      12.7.6.ba: CppAD Changes and Additions During 2012: 06-03
      12.7.1.af: Changes and Additions to CppAD During 2017: 06-03
06-04 12.7.14.ar: Changes and Additions to CppAD During 2004: 06-04
      12.7.6.az: CppAD Changes and Additions During 2012: 06-04
      12.7.1.ae: Changes and Additions to CppAD During 2017: 06-04
06-05 12.7.12.az: Changes and Additions to CppAD During 2006: 06-05
      12.7.6.ay: CppAD Changes and Additions During 2012: 06-05
06-06 12.7.13.bj: Changes and Additions to CppAD During 2005: 06-06
      12.7.9.ar: Changes and Additions to CppAD During 2009: 06-06
06-07 12.7.12.ay: Changes and Additions to CppAD During 2006: 06-07
      12.7.6.ax: CppAD Changes and Additions During 2012: 06-07
      12.7.3.as: CppAD Changes and Additions During 2015: 06-07
      12.7.1.ad: Changes and Additions to CppAD During 2017: 06-07
06-08 12.7.6.aw: CppAD Changes and Additions During 2012: 06-08
06-09 12.7.12.ax: Changes and Additions to CppAD During 2006: 06-09
      12.7.6.av: CppAD Changes and Additions During 2012: 06-09
      12.7.3.ar: CppAD Changes and Additions During 2015: 06-09
06-10 12.7.10.af: Changes and Additions to CppAD During 2008: 06-10
      12.7.6.au: CppAD Changes and Additions During 2012: 06-10
      12.7.2.z: Changes and Additions to CppAD During 2016: 06-10
      12.7.1.ac: Changes and Additions to CppAD During 2017: 06-10
06-11 12.7.10.ae: Changes and Additions to CppAD During 2008: 06-11
      12.7.3.aq: CppAD Changes and Additions During 2015: 06-11
      12.7.1.ab: Changes and Additions to CppAD During 2017: 06-11
06-12 12.7.14.aq: Changes and Additions to CppAD During 2004: 06-12
      12.7.6.at: CppAD Changes and Additions During 2012: 06-12
06-13 12.7.13.bi: Changes and Additions to CppAD During 2005: 06-13
06-14 12.7.13.bh: Changes and Additions to CppAD During 2005: 06-14
      12.7.11.ar: Changes and Additions to CppAD During 2007: 06-14
06-15 12.7.12.aw: Changes and Additions to CppAD During 2006: 06-15
      12.7.10.ad: Changes and Additions to CppAD During 2008: 06-15
      12.7.6.as: CppAD Changes and Additions During 2012: 06-15
06-16 12.7.6.ar: CppAD Changes and Additions During 2012: 06-16
      12.7.3.ap: CppAD Changes and Additions During 2015: 06-16
06-17 12.7.12.av: Changes and Additions to CppAD During 2006: 06-17
      12.7.12.au.a: Changes and Additions to CppAD During 2006: 06-19.06-17
      12.7.6.aq: CppAD Changes and Additions During 2012: 06-17
06-18 12.7.13.bg: Changes and Additions to CppAD During 2005: 06-18
      12.7.12.au.b: Changes and Additions to CppAD During 2006: 06-19.06-18
      12.7.10.ac: Changes and Additions to CppAD During 2008: 06-18
      12.7.7.ar: Changes and Additions to CppAD During 2011: 06-18
06-19 12.7.12.au: Changes and Additions to CppAD During 2006: 06-19
06-20 12.7.11.aq: Changes and Additions to CppAD During 2007: 06-20
      12.7.9.aq: Changes and Additions to CppAD During 2009: 06-20
06-21 12.7.9.ap: Changes and Additions to CppAD During 2009: 06-21
      12.7.7.aq: Changes and Additions to CppAD During 2011: 06-21
06-22 12.7.12.at: Changes and Additions to CppAD During 2006: 06-22
      12.7.11.ap: Changes and Additions to CppAD During 2007: 06-22
      12.7.9.ao: Changes and Additions to CppAD During 2009: 06-22
06-23 12.7.7.ap: Changes and Additions to CppAD During 2011: 06-23
06-24 12.7.13.bf: Changes and Additions to CppAD During 2005: 06-24
06-25 12.7.14.ap: Changes and Additions to CppAD During 2004: 06-25
      12.7.13.be: Changes and Additions to CppAD During 2005: 06-25
      12.7.9.an: Changes and Additions to CppAD During 2009: 06-25
      12.7.2.y: Changes and Additions to CppAD During 2016: 06-25
06-27 12.7.2.x: Changes and Additions to CppAD During 2016: 06-27
06-28 12.7.9.am.g: Changes and Additions to CppAD During 2009: 07-04.06-28
      12.7.1.aa: Changes and Additions to CppAD During 2017: 06-28
06-29 12.7.14.ao: Changes and Additions to CppAD During 2004: 06-29
      12.7.12.as: Changes and Additions to CppAD During 2006: 06-29
      12.7.9.am.f: Changes and Additions to CppAD During 2009: 07-04.06-29
      12.7.2.w: Changes and Additions to CppAD During 2016: 06-29
06-30 12.7.9.am.e: Changes and Additions to CppAD During 2009: 07-04.06-30
      12.7.2.v: Changes and Additions to CppAD During 2016: 06-30
07-01 12.7.13.bd: Changes and Additions to CppAD During 2005: 07-01
      12.7.9.am.d: Changes and Additions to CppAD During 2009: 07-04.07-01
      12.7.6.ap: CppAD Changes and Additions During 2012: 07-01
      12.7.1.z: Changes and Additions to CppAD During 2017: 07-01
07-02 12.7.14.an: Changes and Additions to CppAD During 2004: 07-02
      12.7.13.bc: Changes and Additions to CppAD During 2005: 07-02
      12.7.10.ab: Changes and Additions to CppAD During 2008: 07-02
      12.7.9.am.c: Changes and Additions to CppAD During 2009: 07-04.07-02
      12.7.6.ao: CppAD Changes and Additions During 2012: 07-02
07-03 12.7.14.am: Changes and Additions to CppAD During 2004: 07-03
      12.7.13.bb: Changes and Additions to CppAD During 2005: 07-03
      12.7.9.am.b: Changes and Additions to CppAD During 2009: 07-04.07-03
      12.7.6.an: CppAD Changes and Additions During 2012: 07-03
      12.7.1.y: Changes and Additions to CppAD During 2017: 07-03
07-04 12.7.13.ba: Changes and Additions to CppAD During 2005: 07-04
      12.7.9.am.a: Changes and Additions to CppAD During 2009: 07-04.07-04
      12.7.9.am: Changes and Additions to CppAD During 2009: 07-04
      12.7.6.am: CppAD Changes and Additions During 2012: 07-04
07-05 12.7.13.az: Changes and Additions to CppAD During 2005: 07-05
      12.7.6.al: CppAD Changes and Additions During 2012: 07-05
07-06 12.7.9.al.a: Changes and Additions to CppAD During 2009: 07-23.07-06
07-07 12.7.14.al: Changes and Additions to CppAD During 2004: 07-07
      12.7.7.ao: Changes and Additions to CppAD During 2011: 07-07
      12.7.6.ak: CppAD Changes and Additions During 2012: 07-07
07-08 12.7.14.ak: Changes and Additions to CppAD During 2004: 07-08
      12.7.13.ay: Changes and Additions to CppAD During 2005: 07-08
      12.7.6.aj: CppAD Changes and Additions During 2012: 07-08
07-09 12.7.7.an: Changes and Additions to CppAD During 2011: 07-09
07-10 12.7.7.am: Changes and Additions to CppAD During 2011: 07-10
07-11 12.7.13.ax: Changes and Additions to CppAD During 2005: 07-11
      12.7.8.h: Changes and Additions to CppAD During 2010: 07-11
      12.7.7.al: Changes and Additions to CppAD During 2011: 07-11
07-12 12.7.12.ar: Changes and Additions to CppAD During 2006: 07-12
07-13 12.7.11.ao: Changes and Additions to CppAD During 2007: 07-13
      12.7.7.ak: Changes and Additions to CppAD During 2011: 07-13
07-14 12.7.15.be: Changes and Additions to CppAD During 2003: 07-14
      12.7.12.aq: Changes and Additions to CppAD During 2006: 07-14
      12.7.11.an: Changes and Additions to CppAD During 2007: 07-14
      12.7.8.g: Changes and Additions to CppAD During 2010: 07-14
      12.7.7.aj: Changes and Additions to CppAD During 2011: 07-14
      12.7.2.u: Changes and Additions to CppAD During 2016: 07-14
07-15 12.7.13.aw: Changes and Additions to CppAD During 2005: 07-15
07-16 12.7.15.bd: Changes and Additions to CppAD During 2003: 07-16
07-17 12.7.7.ai: Changes and Additions to CppAD During 2011: 07-17
      12.7.2.t: Changes and Additions to CppAD During 2016: 07-17
07-18 12.7.15.bc: Changes and Additions to CppAD During 2003: 07-18
      12.7.11.am: Changes and Additions to CppAD During 2007: 07-18
      12.7.7.ah: Changes and Additions to CppAD During 2011: 07-18
07-19 12.7.13.av: Changes and Additions to CppAD During 2005: 07-19
      12.7.11.al: Changes and Additions to CppAD During 2007: 07-19
07-20 12.7.15.bb: Changes and Additions to CppAD During 2003: 07-20
      12.7.11.ak: Changes and Additions to CppAD During 2007: 07-20
07-21 12.7.13.au: Changes and Additions to CppAD During 2005: 07-21
      12.7.11.aj: Changes and Additions to CppAD During 2007: 07-21
07-22 12.7.15.ba: Changes and Additions to CppAD During 2003: 07-22
      12.7.11.ai: Changes and Additions to CppAD During 2007: 07-22
07-23 12.7.11.ah: Changes and Additions to CppAD During 2007: 07-23
      12.7.9.al: Changes and Additions to CppAD During 2009: 07-23
07-24 12.7.9.ak: Changes and Additions to CppAD During 2009: 07-24
07-25 12.7.11.ag.b: Changes and Additions to CppAD During 2007: 07-26.07-25
      12.7.9.aj: Changes and Additions to CppAD During 2009: 07-25
      12.7.7.ag: Changes and Additions to CppAD During 2011: 07-25
      12.7.1.x: Changes and Additions to CppAD During 2017: 07-25
07-26 12.7.15.az: Changes and Additions to CppAD During 2003: 07-26
      12.7.11.ag.a: Changes and Additions to CppAD During 2007: 07-26.07-26
      12.7.11.ag: Changes and Additions to CppAD During 2007: 07-26
      12.7.9.ai: Changes and Additions to CppAD During 2009: 07-26
      12.7.5.x: CppAD Changes and Additions During 2013: 07-26
07-27 12.7.7.af: Changes and Additions to CppAD During 2011: 07-27
07-28 12.7.11.af: Changes and Additions to CppAD During 2007: 07-28
      12.7.7.ae: Changes and Additions to CppAD During 2011: 07-28
07-29 12.7.15.ay: Changes and Additions to CppAD During 2003: 07-29
      12.7.11.ae: Changes and Additions to CppAD During 2007: 07-29
      12.7.7.ad: Changes and Additions to CppAD During 2011: 07-29
07-30 12.7.15.ax: Changes and Additions to CppAD During 2003: 07-30
      12.7.11.ad: Changes and Additions to CppAD During 2007: 07-30
      12.7.6.ai: CppAD Changes and Additions During 2012: 07-30
07-31 12.7.14.aj: Changes and Additions to CppAD During 2004: 07-31
      12.7.9.ah: Changes and Additions to CppAD During 2009: 07-31
      12.7.7.ac: Changes and Additions to CppAD During 2011: 07-31
      12.7.3.ao: CppAD Changes and Additions During 2015: 07-31
08-01 12.7.15.aw: Changes and Additions to CppAD During 2003: 08-01
      12.7.9.ag: Changes and Additions to CppAD During 2009: 08-01
08-02 12.7.9.af: Changes and Additions to CppAD During 2009: 08-02
      12.7.7.ab: Changes and Additions to CppAD During 2011: 08-02
08-03 12.7.15.av: Changes and Additions to CppAD During 2003: 08-03
      12.7.7.aa: Changes and Additions to CppAD During 2011: 08-03
08-04 12.7.15.au: Changes and Additions to CppAD During 2003: 08-04
      12.7.7.z.f: Changes and Additions to CppAD During 2011: 08-11.08-04
08-05 12.7.15.at: Changes and Additions to CppAD During 2003: 08-05
      12.7.6.ah: CppAD Changes and Additions During 2012: 08-05
08-06 12.7.15.as: Changes and Additions to CppAD During 2003: 08-06
      12.7.9.ae: Changes and Additions to CppAD During 2009: 08-06
      12.7.7.z.e: Changes and Additions to CppAD During 2011: 08-11.08-06
      12.7.5.w: CppAD Changes and Additions During 2013: 08-06
      12.7.3.an: CppAD Changes and Additions During 2015: 08-06
08-07 12.7.15.ar: Changes and Additions to CppAD During 2003: 08-07
      12.7.13.at: Changes and Additions to CppAD During 2005: 08-07
      12.7.11.ac: Changes and Additions to CppAD During 2007: 08-07
      12.7.7.z.d: Changes and Additions to CppAD During 2011: 08-11.08-07
08-08 12.7.10.aa: Changes and Additions to CppAD During 2008: 08-08
      12.7.7.z.c: Changes and Additions to CppAD During 2011: 08-11.08-08
      12.7.1.w: Changes and Additions to CppAD During 2017: 08-08
08-09 12.7.11.ab: Changes and Additions to CppAD During 2007: 08-09
      12.7.9.ad.d: Changes and Additions to CppAD During 2009: 08-13.08-09
      12.7.7.z.b: Changes and Additions to CppAD During 2011: 08-11.08-09
      12.7.3.am: CppAD Changes and Additions During 2015: 08-09
      12.7.1.v: Changes and Additions to CppAD During 2017: 08-09
08-10 12.7.15.aq: Changes and Additions to CppAD During 2003: 08-10
      12.7.9.ad.c: Changes and Additions to CppAD During 2009: 08-13.08-10
      12.7.7.z.a: Changes and Additions to CppAD During 2011: 08-11.08-10
08-11 12.7.15.ap: Changes and Additions to CppAD During 2003: 08-11
      12.7.9.ad.b: Changes and Additions to CppAD During 2009: 08-13.08-11
      12.7.7.z: Changes and Additions to CppAD During 2011: 08-11
      12.7.5.v: CppAD Changes and Additions During 2013: 08-11
08-12 12.7.14.ai: Changes and Additions to CppAD During 2004: 08-12
      12.7.5.u: CppAD Changes and Additions During 2013: 08-12
08-13 12.7.13.as: Changes and Additions to CppAD During 2005: 08-13
      12.7.9.ad.a: Changes and Additions to CppAD During 2009: 08-13.08-13
      12.7.9.ad: Changes and Additions to CppAD During 2009: 08-13
08-14 12.7.13.ar: Changes and Additions to CppAD During 2005: 08-14
      12.7.9.ac: Changes and Additions to CppAD During 2009: 08-14
08-15 12.7.13.aq: Changes and Additions to CppAD During 2005: 08-15
08-16 12.7.15.ao: Changes and Additions to CppAD During 2003: 08-16
      12.7.3.al: CppAD Changes and Additions During 2015: 08-16
08-17 12.7.15.an: Changes and Additions to CppAD During 2003: 08-17
      12.7.12.ap: Changes and Additions to CppAD During 2006: 08-17
      12.7.3.ak: CppAD Changes and Additions During 2015: 08-17
08-19 12.7.15.am: Changes and Additions to CppAD During 2003: 08-19
      12.7.13.ap: Changes and Additions to CppAD During 2005: 08-19
      12.7.10.z: Changes and Additions to CppAD During 2008: 08-19
08-20 12.7.13.ao: Changes and Additions to CppAD During 2005: 08-20
      12.7.3.aj: CppAD Changes and Additions During 2015: 08-20
08-21 12.7.8.f: Changes and Additions to CppAD During 2010: 08-21
      12.7.7.y.e: Changes and Additions to CppAD During 2011: 09-01.08-21
08-22 12.7.15.al: Changes and Additions to CppAD During 2003: 08-22
08-23 12.7.15.ak: Changes and Additions to CppAD During 2003: 08-23
      12.7.7.y.d: Changes and Additions to CppAD During 2011: 09-01.08-23
08-24 12.7.14.ah: Changes and Additions to CppAD During 2004: 08-24
      12.7.13.an: Changes and Additions to CppAD During 2005: 08-24
08-25 12.7.14.ag: Changes and Additions to CppAD During 2004: 08-25
      12.7.9.ab: Changes and Additions to CppAD During 2009: 08-25
      12.7.7.y.c: Changes and Additions to CppAD During 2011: 09-01.08-25
      12.7.3.ai: CppAD Changes and Additions During 2015: 08-25
      12.7.2.s: Changes and Additions to CppAD During 2016: 08-25
08-26 12.7.3.ah: CppAD Changes and Additions During 2015: 08-26
08-27 12.7.14.af: Changes and Additions to CppAD During 2004: 08-27
08-28 12.7.3.ag: CppAD Changes and Additions During 2015: 08-28
08-29 12.7.10.y: Changes and Additions to CppAD During 2008: 08-29
      12.7.3.af: CppAD Changes and Additions During 2015: 08-29
      12.7.1.u: Changes and Additions to CppAD During 2017: 08-29
08-30 12.7.13.am: Changes and Additions to CppAD During 2005: 08-30
      12.7.7.y.b: Changes and Additions to CppAD During 2011: 09-01.08-30
      12.7.3.ae: CppAD Changes and Additions During 2015: 08-30
      12.7.2.r: Changes and Additions to CppAD During 2016: 08-30
      12.7.1.t: Changes and Additions to CppAD During 2017: 08-30
08-31 12.7.7.y.a: Changes and Additions to CppAD During 2011: 09-01.08-31
      12.7.3.ad: CppAD Changes and Additions During 2015: 08-31
09-01 12.7.10.x: Changes and Additions to CppAD During 2008: 09-01
      12.7.7.y: Changes and Additions to CppAD During 2011: 09-01
09-02 12.7.14.ae: Changes and Additions to CppAD During 2004: 09-02
      12.7.7.x: Changes and Additions to CppAD During 2011: 09-02
      12.7.3.ac: CppAD Changes and Additions During 2015: 09-02
09-03 12.7.15.aj: Changes and Additions to CppAD During 2003: 09-03
      12.7.10.w: Changes and Additions to CppAD During 2008: 09-03
      12.7.3.ab: CppAD Changes and Additions During 2015: 09-03
09-04 12.7.15.ai: Changes and Additions to CppAD During 2003: 09-04
      12.7.14.ad: Changes and Additions to CppAD During 2004: 09-04
      12.7.10.v: Changes and Additions to CppAD During 2008: 09-04
09-05 12.7.15.ah: Changes and Additions to CppAD During 2003: 09-05
      12.7.10.u: Changes and Additions to CppAD During 2008: 09-05
      12.7.7.w: Changes and Additions to CppAD During 2011: 09-05
09-06 12.7.15.ag: Changes and Additions to CppAD During 2003: 09-06
      12.7.11.aa: Changes and Additions to CppAD During 2007: 09-06
      12.7.10.t: Changes and Additions to CppAD During 2008: 09-06
      12.7.7.v: Changes and Additions to CppAD During 2011: 09-06
09-07 12.7.14.ac: Changes and Additions to CppAD During 2004: 09-07
      12.7.13.al: Changes and Additions to CppAD During 2005: 09-07
      12.7.10.s: Changes and Additions to CppAD During 2008: 09-07
      12.7.5.t: CppAD Changes and Additions During 2013: 09-07
09-09 12.7.14.ab: Changes and Additions to CppAD During 2004: 09-09
      12.7.13.ak: Changes and Additions to CppAD During 2005: 09-09
      12.7.10.r: Changes and Additions to CppAD During 2008: 09-09
09-10 12.7.14.aa: Changes and Additions to CppAD During 2004: 09-10
      12.7.10.q: Changes and Additions to CppAD During 2008: 09-10
09-11 12.7.6.ag: CppAD Changes and Additions During 2012: 09-11
09-12 12.7.10.p: Changes and Additions to CppAD During 2008: 09-12
09-13 12.7.15.af: Changes and Additions to CppAD During 2003: 09-13
      12.7.14.z: Changes and Additions to CppAD During 2004: 09-13
      12.7.2.q: Changes and Additions to CppAD During 2016: 09-13
09-14 12.7.15.ae: Changes and Additions to CppAD During 2003: 09-14
      12.7.13.aj: Changes and Additions to CppAD During 2005: 09-14
09-15 12.7.15.ad: Changes and Additions to CppAD During 2003: 09-15
09-16 12.7.10.o: Changes and Additions to CppAD During 2008: 09-16
      12.7.3.aa: CppAD Changes and Additions During 2015: 09-16
      12.7.2.p: Changes and Additions to CppAD During 2016: 09-16
      12.7.1.s: Changes and Additions to CppAD During 2017: 09-16
09-17 12.7.10.n: Changes and Additions to CppAD During 2008: 09-17
09-18 12.7.15.ac: Changes and Additions to CppAD During 2003: 09-18
      12.7.10.m: Changes and Additions to CppAD During 2008: 09-18
      12.7.9.aa: Changes and Additions to CppAD During 2009: 09-18
      12.7.5.s: CppAD Changes and Additions During 2013: 09-18
09-19 12.7.15.ab: Changes and Additions to CppAD During 2003: 09-19
      12.7.9.z: Changes and Additions to CppAD During 2009: 09-19
      12.7.5.r: CppAD Changes and Additions During 2013: 09-19
      12.7.3.z: CppAD Changes and Additions During 2015: 09-19
09-20 12.7.15.aa: Changes and Additions to CppAD During 2003: 09-20
      12.7.13.ai: Changes and Additions to CppAD During 2005: 09-20
      12.7.9.y: Changes and Additions to CppAD During 2009: 09-20
      12.7.5.q: CppAD Changes and Additions During 2013: 09-20
      12.7.3.y: CppAD Changes and Additions During 2015: 09-20
09-21 12.7.14.y: Changes and Additions to CppAD During 2004: 09-21
      12.7.4.r: CppAD Changes and Additions During 2014: 09-21
      12.7.3.x: CppAD Changes and Additions During 2015: 09-21
09-22 12.7.8.e: Changes and Additions to CppAD During 2010: 09-22
09-23 12.7.14.x: Changes and Additions to CppAD During 2004: 09-23
      12.7.3.w: CppAD Changes and Additions During 2015: 09-23
09-24 12.7.13.ah: Changes and Additions to CppAD During 2005: 09-24
      12.7.6.af: CppAD Changes and Additions During 2012: 09-24
      12.7.3.v: CppAD Changes and Additions During 2015: 09-24
09-25 12.7.4.q: CppAD Changes and Additions During 2014: 09-25
      12.7.3.u: CppAD Changes and Additions During 2015: 09-25
09-26 12.7.14.w: Changes and Additions to CppAD During 2004: 09-26
      12.7.10.l: Changes and Additions to CppAD During 2008: 09-26
      12.7.9.x: Changes and Additions to CppAD During 2009: 09-26
      12.7.8.d: Changes and Additions to CppAD During 2010: 09-26
      12.7.2.o: Changes and Additions to CppAD During 2016: 09-26
09-27 12.7.13.ag: Changes and Additions to CppAD During 2005: 09-27
      12.7.4.p: CppAD Changes and Additions During 2014: 09-27
      12.7.3.t: CppAD Changes and Additions During 2015: 09-27
      12.7.2.n: Changes and Additions to CppAD During 2016: 09-27
09-28 12.7.9.w: Changes and Additions to CppAD During 2009: 09-28
      12.7.4.o: CppAD Changes and Additions During 2014: 09-28
      12.7.3.s: CppAD Changes and Additions During 2015: 09-28
09-29 12.7.14.v: Changes and Additions to CppAD During 2004: 09-29
      12.7.13.af: Changes and Additions to CppAD During 2005: 09-29
      12.7.9.v: Changes and Additions to CppAD During 2009: 09-29
      12.7.2.m: Changes and Additions to CppAD During 2016: 09-29
09-30 12.7.15.z: Changes and Additions to CppAD During 2003: 09-30
      12.7.12.ao: Changes and Additions to CppAD During 2006: 09-30
      12.7.10.k: Changes and Additions to CppAD During 2008: 09-30
      12.7.9.u: Changes and Additions to CppAD During 2009: 09-30
1 11.2.7.f.b: Evaluate a Function Defined in Terms of an ODE: p.p = 1
  4.4.7.2.18.1.d: AD Theory for Cholesky Factorization: Lemma 1
1/i 7.2.8.6: Timing Test of Multi-Threaded Summation of 1/i
    7.2.8.5: Multi-Threaded Implementation of Summation of 1/i
    7.2.8.4: Take Down Multi-threading Sum of 1/i
    7.2.8.3: Do One Thread's Work for Sum of 1/i
    7.2.8.2: Set Up Multi-threading Sum of 1/i
    7.2.8.1: Common Variables Used by Multi-threading Sum of 1/i
10 4.4.2.8: The Base 10 Logarithm Function: log10
10-02 12.7.11.z: Changes and Additions to CppAD During 2007: 10-02
      12.7.6.ae: CppAD Changes and Additions During 2012: 10-02
      12.7.3.r: CppAD Changes and Additions During 2015: 10-02
10-03 12.7.9.t: Changes and Additions to CppAD During 2009: 10-03
      12.7.6.ad: CppAD Changes and Additions During 2012: 10-03
      12.7.3.q: CppAD Changes and Additions During 2015: 10-03
10-04 12.7.6.ac: CppAD Changes and Additions During 2012: 10-04
      12.7.3.p: CppAD Changes and Additions During 2015: 10-04
10-05 12.7.15.y: Changes and Additions to CppAD During 2003: 10-05
      12.7.11.y: Changes and Additions to CppAD During 2007: 10-05
10-06 12.7.15.x: Changes and Additions to CppAD During 2003: 10-06
      12.7.14.u: Changes and Additions to CppAD During 2004: 10-06
      12.7.13.ae: Changes and Additions to CppAD During 2005: 10-06
      12.7.3.o: CppAD Changes and Additions During 2015: 10-06
10-10 12.7.15.w: Changes and Additions to CppAD During 2003: 10-10
      12.7.12.an: Changes and Additions to CppAD During 2006: 10-10
10-12 12.7.13.ad: Changes and Additions to CppAD During 2005: 10-12
      12.7.7.u: Changes and Additions to CppAD During 2011: 10-12
      12.7.6.ab: CppAD Changes and Additions During 2012: 10-12
      12.7.5.p: CppAD Changes and Additions During 2013: 10-12
      12.7.2.l: Changes and Additions to CppAD During 2016: 10-12
10-13 12.7.11.x: Changes and Additions to CppAD During 2007: 10-13
      12.7.5.o: CppAD Changes and Additions During 2013: 10-13
10-14 12.7.15.v: Changes and Additions to CppAD During 2003: 10-14
      12.7.13.ac: Changes and Additions to CppAD During 2005: 10-14
      12.7.9.s: Changes and Additions to CppAD During 2009: 10-14
      12.7.7.t: Changes and Additions to CppAD During 2011: 10-14
      12.7.5.n: CppAD Changes and Additions During 2013: 10-14
10-15 12.7.5.m: CppAD Changes and Additions During 2013: 10-15
10-16 12.7.15.u: Changes and Additions to CppAD During 2003: 10-16
      12.7.14.t: Changes and Additions to CppAD During 2004: 10-16
      12.7.12.am: Changes and Additions to CppAD During 2006: 10-16
      12.7.11.w: Changes and Additions to CppAD During 2007: 10-16
      12.7.10.j: Changes and Additions to CppAD During 2008: 10-16
      12.7.9.r: Changes and Additions to CppAD During 2009: 10-16
      12.7.5.l: CppAD Changes and Additions During 2013: 10-16
      12.7.3.n: CppAD Changes and Additions During 2015: 10-16
10-17 12.7.10.i: Changes and Additions to CppAD During 2008: 10-17
10-18 12.7.13.ab: Changes and Additions to CppAD During 2005: 10-18
      12.7.12.al: Changes and Additions to CppAD During 2006: 10-18
10-19 12.7.14.s: Changes and Additions to CppAD During 2004: 10-19
10-20 12.7.13.aa: Changes and Additions to CppAD During 2005: 10-20
10-21 12.7.15.t: Changes and Additions to CppAD During 2003: 10-21
      12.7.14.r: Changes and Additions to CppAD During 2004: 10-21
      12.7.9.q: Changes and Additions to CppAD During 2009: 10-21
      12.7.3.m: CppAD Changes and Additions During 2015: 10-21
10-22 12.7.11.v: Changes and Additions to CppAD During 2007: 10-22
      12.7.5.k: CppAD Changes and Additions During 2013: 10-22
10-23 12.7.11.u: Changes and Additions to CppAD During 2007: 10-23
      12.7.9.p: Changes and Additions to CppAD During 2009: 10-23
      12.7.5.j: CppAD Changes and Additions During 2013: 10-23
      12.7.1.r: Changes and Additions to CppAD During 2017: 10-23
10-24 12.7.9.o: Changes and Additions to CppAD During 2009: 10-24
      12.7.6.aa: CppAD Changes and Additions During 2012: 10-24
10-25 12.7.12.ak: Changes and Additions to CppAD During 2006: 10-25
      12.7.6.z: CppAD Changes and Additions During 2012: 10-25
10-26 12.7.12.aj: Changes and Additions to CppAD During 2006: 10-26
10-27 12.7.14.q: Changes and Additions to CppAD During 2004: 10-27
      12.7.12.ai: Changes and Additions to CppAD During 2006: 10-27
      12.7.11.t: Changes and Additions to CppAD During 2007: 10-27
      12.7.10.h: Changes and Additions to CppAD During 2008: 10-27
      12.7.9.n: Changes and Additions to CppAD During 2009: 10-27
      12.7.2.k: Changes and Additions to CppAD During 2016: 10-27
10-28 12.7.14.p: Changes and Additions to CppAD During 2004: 10-28
      12.7.12.ah: Changes and Additions to CppAD During 2006: 10-28
      12.7.9.m: Changes and Additions to CppAD During 2009: 10-28
10-29 12.7.14.o: Changes and Additions to CppAD During 2004: 10-29
      12.7.12.ag: Changes and Additions to CppAD During 2006: 10-29
      12.7.9.l: Changes and Additions to CppAD During 2009: 10-29
      12.7.7.s: Changes and Additions to CppAD During 2011: 10-29
      12.7.5.i: CppAD Changes and Additions During 2013: 10-29
10-30 12.7.11.s: Changes and Additions to CppAD During 2007: 10-30
      12.7.9.k: Changes and Additions to CppAD During 2009: 10-30
      12.7.7.r: Changes and Additions to CppAD During 2011: 10-30
      12.7.6.y: CppAD Changes and Additions During 2012: 10-30
10-31 12.7.12.af: Changes and Additions to CppAD During 2006: 10-31
      12.7.6.x: CppAD Changes and Additions During 2012: 10-31
11-01 12.7.14.n: Changes and Additions to CppAD During 2004: 11-01
      12.7.13.z: Changes and Additions to CppAD During 2005: 11-01
      12.7.12.ae: Changes and Additions to CppAD During 2006: 11-01
      12.7.11.r: Changes and Additions to CppAD During 2007: 11-01
      12.7.7.q: Changes and Additions to CppAD During 2011: 11-01
11-02 12.7.15.s: Changes and Additions to CppAD During 2003: 11-02
      12.7.14.m: Changes and Additions to CppAD During 2004: 11-02
      12.7.12.ad: Changes and Additions to CppAD During 2006: 11-02
      12.7.11.q: Changes and Additions to CppAD During 2007: 11-02
11-03 12.7.11.p: Changes and Additions to CppAD During 2007: 11-03
11-04 12.7.15.r: Changes and Additions to CppAD During 2003: 11-04
      12.7.14.l: Changes and Additions to CppAD During 2004: 11-04
      12.7.12.ac: Changes and Additions to CppAD During 2006: 11-04
      12.7.11.o: Changes and Additions to CppAD During 2007: 11-04
      12.7.7.p: Changes and Additions to CppAD During 2011: 11-04
      12.7.6.w: CppAD Changes and Additions During 2012: 11-04
      12.7.1.q: Changes and Additions to CppAD During 2017: 11-04
11-05 12.7.12.ab: Changes and Additions to CppAD During 2006: 11-05
      12.7.11.n: Changes and Additions to CppAD During 2007: 11-05
11-06 12.7.15.q: Changes and Additions to CppAD During 2003: 11-06
      12.7.13.y: Changes and Additions to CppAD During 2005: 11-06
      12.7.12.aa: Changes and Additions to CppAD During 2006: 11-06
      12.7.11.m: Changes and Additions to CppAD During 2007: 11-06
      12.7.7.o: Changes and Additions to CppAD During 2011: 11-06
      12.7.6.v: CppAD Changes and Additions During 2012: 11-06
      12.7.3.l: CppAD Changes and Additions During 2015: 11-06
      12.7.1.p: Changes and Additions to CppAD During 2017: 11-06
11-07 12.7.13.x: Changes and Additions to CppAD During 2005: 11-07
      12.7.7.n: Changes and Additions to CppAD During 2011: 11-07
11-08 12.7.12.z: Changes and Additions to CppAD During 2006: 11-08
      12.7.3.k: CppAD Changes and Additions During 2015: 11-08
      12.7.1.o: Changes and Additions to CppAD During 2017: 11-08
11-09 12.7.13.w: Changes and Additions to CppAD During 2005: 11-09
      12.7.7.m: Changes and Additions to CppAD During 2011: 11-09
      12.7.6.u: CppAD Changes and Additions During 2012: 11-09
11-10 12.7.14.k: Changes and Additions to CppAD During 2004: 11-10
11-11 12.7.15.p: Changes and Additions to CppAD During 2003: 11-11
11-12 12.7.15.o: Changes and Additions to CppAD During 2003: 11-12
      12.7.14.j: Changes and Additions to CppAD During 2004: 11-12
      12.7.13.v: Changes and Additions to CppAD During 2005: 11-12
      12.7.12.y: Changes and Additions to CppAD During 2006: 11-12
      12.7.5.h: CppAD Changes and Additions During 2013: 11-12
      12.7.1.n: Changes and Additions to CppAD During 2017: 11-12
11-13 12.7.14.i: Changes and Additions to CppAD During 2004: 11-13
      12.7.5.g: CppAD Changes and Additions During 2013: 11-13
      12.7.2.j: Changes and Additions to CppAD During 2016: 11-13
      12.7.1.m: Changes and Additions to CppAD During 2017: 11-13
11-14 12.7.15.n: Changes and Additions to CppAD During 2003: 11-14
      12.7.14.h: Changes and Additions to CppAD During 2004: 11-14
      12.7.6.t: CppAD Changes and Additions During 2012: 11-14
      12.7.3.j: CppAD Changes and Additions During 2015: 11-14
      12.7.2.i: Changes and Additions to CppAD During 2016: 11-14
11-15 12.7.15.m: Changes and Additions to CppAD During 2003: 11-15
      12.7.14.g: Changes and Additions to CppAD During 2004: 11-15
      12.7.13.u: Changes and Additions to CppAD During 2005: 11-15
      12.7.1.l: Changes and Additions to CppAD During 2017: 11-15
11-16 12.7.15.l: Changes and Additions to CppAD During 2003: 11-16
      12.7.14.f: Changes and Additions to CppAD During 2004: 11-16
      12.7.6.s: CppAD Changes and Additions During 2012: 11-16
11-17 12.7.14.e: Changes and Additions to CppAD During 2004: 11-17
      12.7.7.l: Changes and Additions to CppAD During 2011: 11-17
      12.7.6.r: CppAD Changes and Additions During 2012: 11-17
11-18 12.7.12.x: Changes and Additions to CppAD During 2006: 11-18
      12.7.11.l: Changes and Additions to CppAD During 2007: 11-18
      12.7.7.k: Changes and Additions to CppAD During 2011: 11-18
      12.7.2.h: Changes and Additions to CppAD During 2016: 11-18
11-19 12.7.13.t: Changes and Additions to CppAD During 2005: 11-19
      12.7.1.k: Changes and Additions to CppAD During 2017: 11-19
11-20 12.7.15.k: Changes and Additions to CppAD During 2003: 11-20
      12.7.13.s: Changes and Additions to CppAD During 2005: 11-20
      12.7.10.g: Changes and Additions to CppAD During 2008: 11-20
      12.7.7.j: Changes and Additions to CppAD During 2011: 11-20
      12.7.6.q: CppAD Changes and Additions During 2012: 11-20
      12.7.1.j: Changes and Additions to CppAD During 2017: 11-20
11-21 12.7.15.j: Changes and Additions to CppAD During 2003: 11-21
      12.7.10.f: Changes and Additions to CppAD During 2008: 11-21
      12.7.7.i: Changes and Additions to CppAD During 2011: 11-21
      12.7.6.p: CppAD Changes and Additions During 2012: 11-21
11-22 12.7.13.r: Changes and Additions to CppAD During 2005: 11-22
      12.7.10.e: Changes and Additions to CppAD During 2008: 11-22
11-23 12.7.13.q: Changes and Additions to CppAD During 2005: 11-23
      12.7.12.w: Changes and Additions to CppAD During 2006: 11-23
      12.7.11.k: Changes and Additions to CppAD During 2007: 11-23
      12.7.1.i: Changes and Additions to CppAD During 2017: 11-23
11-24 12.7.7.h: Changes and Additions to CppAD During 2011: 11-24
      12.7.3.i: CppAD Changes and Additions During 2015: 11-24
11-25 12.7.3.h: CppAD Changes and Additions During 2015: 11-25
11-26 12.7.9.j: Changes and Additions to CppAD During 2009: 11-26
11-27 12.7.9.i: Changes and Additions to CppAD During 2009: 11-27
      12.7.8.c: Changes and Additions to CppAD During 2010: 11-27
      12.7.7.g: Changes and Additions to CppAD During 2011: 11-27
      12.7.5.f: CppAD Changes and Additions During 2013: 11-27
      12.7.4.n: CppAD Changes and Additions During 2014: 11-27
11-28 12.7.12.v: Changes and Additions to CppAD During 2006: 11-28
      12.7.9.h: Changes and Additions to CppAD During 2009: 11-28
      12.7.6.o: CppAD Changes and Additions During 2012: 11-28
      12.7.4.m: CppAD Changes and Additions During 2014: 11-28
11-29 12.7.12.u: Changes and Additions to CppAD During 2006: 11-29
      12.7.11.j: Changes and Additions to CppAD During 2007: 11-29
      12.7.7.f: Changes and Additions to CppAD During 2011: 11-29
11-30 12.7.12.t: Changes and Additions to CppAD During 2006: 11-30
      12.7.3.g: CppAD Changes and Additions During 2015: 11-30
      12.7.1.h: Changes and Additions to CppAD During 2017: 11-30
12-01 12.7.15.i: Changes and Additions to CppAD During 2003: 12-01
      12.7.13.p: Changes and Additions to CppAD During 2005: 12-01
      12.7.12.s: Changes and Additions to CppAD During 2006: 12-01
      12.7.3.f: CppAD Changes and Additions During 2015: 12-01
      12.7.1.g: Changes and Additions to CppAD During 2017: 12-01
12-02 12.7.13.o: Changes and Additions to CppAD During 2005: 12-02
      12.7.12.r: Changes and Additions to CppAD During 2006: 12-02
      12.7.11.i: Changes and Additions to CppAD During 2007: 12-02
      12.7.9.g: Changes and Additions to CppAD During 2009: 12-02
12-03 12.7.14.d: Changes and Additions to CppAD During 2004: 12-03
      12.7.13.n: Changes and Additions to CppAD During 2005: 12-03
      12.7.12.q: Changes and Additions to CppAD During 2006: 12-03
      12.7.11.h: Changes and Additions to CppAD During 2007: 12-03
12-04 12.7.11.g: Changes and Additions to CppAD During 2007: 12-04
      12.7.10.d: Changes and Additions to CppAD During 2008: 12-04
      12.7.9.f: Changes and Additions to CppAD During 2009: 12-04
      12.7.1.f: Changes and Additions to CppAD During 2017: 12-04
12-05 12.7.15.h: Changes and Additions to CppAD During 2003: 12-05
      12.7.13.m: Changes and Additions to CppAD During 2005: 12-05
      12.7.12.p: Changes and Additions to CppAD During 2006: 12-05
      12.7.11.f: Changes and Additions to CppAD During 2007: 12-05
      12.7.1.e: Changes and Additions to CppAD During 2017: 12-05
12-06 12.7.13.l: Changes and Additions to CppAD During 2005: 12-06
      12.7.1.d: Changes and Additions to CppAD During 2017: 12-06
12-07 12.7.13.k: Changes and Additions to CppAD During 2005: 12-07
      12.7.12.o: Changes and Additions to CppAD During 2006: 12-07
12-08 12.7.13.j: Changes and Additions to CppAD During 2005: 12-08
      12.7.11.e: Changes and Additions to CppAD During 2007: 12-08
      12.7.3.e: CppAD Changes and Additions During 2015: 12-08
      12.7.1.c: Changes and Additions to CppAD During 2017: 12-08
12-09 12.7.14.c: Changes and Additions to CppAD During 2004: 12-09
      12.7.12.n: Changes and Additions to CppAD During 2006: 12-09
      12.7.2.g: Changes and Additions to CppAD During 2016: 12-09
12-10 12.7.15.g: Changes and Additions to CppAD During 2003: 12-10
      12.7.12.m: Changes and Additions to CppAD During 2006: 12-10
12-11 12.7.14.b: Changes and Additions to CppAD During 2004: 12-11
      12.7.13.i: Changes and Additions to CppAD During 2005: 12-11
      12.7.12.l: Changes and Additions to CppAD During 2006: 12-11
      12.7.2.f: Changes and Additions to CppAD During 2016: 12-11
12-12 12.7.15.f: Changes and Additions to CppAD During 2003: 12-12
      12.7.12.k: Changes and Additions to CppAD During 2006: 12-12
      12.7.9.e: Changes and Additions to CppAD During 2009: 12-12
12-13 12.7.15.e: Changes and Additions to CppAD During 2003: 12-13
      12.7.12.j: Changes and Additions to CppAD During 2006: 12-13
      12.7.6.n: CppAD Changes and Additions During 2012: 12-13
      12.7.2.e: Changes and Additions to CppAD During 2016: 12-13
12-14 12.7.15.d: Changes and Additions to CppAD During 2003: 12-14
      12.7.13.h: Changes and Additions to CppAD During 2005: 12-14
      12.7.10.c: Changes and Additions to CppAD During 2008: 12-14
      12.7.6.m: CppAD Changes and Additions During 2012: 12-14
      12.7.1.b: Changes and Additions to CppAD During 2017: 12-14
12-15 12.7.13.g: Changes and Additions to CppAD During 2005: 12-15
      12.7.12.i: Changes and Additions to CppAD During 2006: 12-15
      12.7.6.l: CppAD Changes and Additions During 2012: 12-15
      12.7.4.l: CppAD Changes and Additions During 2014: 12-15
12-16 12.7.13.f: Changes and Additions to CppAD During 2005: 12-16
      12.7.4.k: CppAD Changes and Additions During 2014: 12-16
12-17 12.7.12.h: Changes and Additions to CppAD During 2006: 12-17
      12.7.6.k: CppAD Changes and Additions During 2012: 12-17
      12.7.4.j: CppAD Changes and Additions During 2014: 12-17
12-18 12.7.12.g: Changes and Additions to CppAD During 2006: 12-18
      12.7.9.d: Changes and Additions to CppAD During 2009: 12-18
      12.7.2.d: Changes and Additions to CppAD During 2016: 12-18
12-19 12.7.13.e: Changes and Additions to CppAD During 2005: 12-19
      12.7.12.f: Changes and Additions to CppAD During 2006: 12-19
      12.7.10.b: Changes and Additions to CppAD During 2008: 12-19
      12.7.6.j: CppAD Changes and Additions During 2012: 12-19
12-20 12.7.13.d: Changes and Additions to CppAD During 2005: 12-20
      12.7.7.e: Changes and Additions to CppAD During 2011: 12-20
      12.7.6.i: CppAD Changes and Additions During 2012: 12-20
      12.7.2.c: Changes and Additions to CppAD During 2016: 12-20
12-21 12.7.12.e: Changes and Additions to CppAD During 2006: 12-21
      12.7.11.d: Changes and Additions to CppAD During 2007: 12-21
      12.7.7.d: Changes and Additions to CppAD During 2011: 12-21
12-22 12.7.15.c: Changes and Additions to CppAD During 2003: 12-22
      12.7.13.c: Changes and Additions to CppAD During 2005: 12-22
      12.7.12.d: Changes and Additions to CppAD During 2006: 12-22
      12.7.9.c: Changes and Additions to CppAD During 2009: 12-22
      12.7.6.h: CppAD Changes and Additions During 2012: 12-22
      12.7.4.i: CppAD Changes and Additions During 2014: 12-22
12-23 12.7.13.b: Changes and Additions to CppAD During 2005: 12-23
      12.7.12.c: Changes and Additions to CppAD During 2006: 12-23
      12.7.9.b: Changes and Additions to CppAD During 2009: 12-23
      12.7.6.g: CppAD Changes and Additions During 2012: 12-23
      12.7.4.h: CppAD Changes and Additions During 2014: 12-23
      12.7.2.b: Changes and Additions to CppAD During 2016: 12-23
12-24 12.7.15.b: Changes and Additions to CppAD During 2003: 12-24
      12.7.13.a: Changes and Additions to CppAD During 2005: 12-24
      12.7.12.b: Changes and Additions to CppAD During 2006: 12-24
      12.7.5.e: CppAD Changes and Additions During 2013: 12-24
12-25 12.7.11.c: Changes and Additions to CppAD During 2007: 12-25
      12.7.4.g: CppAD Changes and Additions During 2014: 12-25
12-26 12.7.6.f: CppAD Changes and Additions During 2012: 12-26
      12.7.5.d: CppAD Changes and Additions During 2013: 12-26
      12.7.4.f: CppAD Changes and Additions During 2014: 12-26
12-27 12.7.6.e: CppAD Changes and Additions During 2012: 12-27
      12.7.5.c: CppAD Changes and Additions During 2013: 12-27
      12.7.4.e: CppAD Changes and Additions During 2014: 12-27
12-28 12.7.7.c: Changes and Additions to CppAD During 2011: 12-28
      12.7.6.d: CppAD Changes and Additions During 2012: 12-28
      12.7.4.d: CppAD Changes and Additions During 2014: 12-28
      12.7.3.d: CppAD Changes and Additions During 2015: 12-28
12-29 12.7.11.b: Changes and Additions to CppAD During 2007: 12-29
      12.7.6.c: CppAD Changes and Additions During 2012: 12-29
      12.7.5.b: CppAD Changes and Additions During 2013: 12-29
      12.7.4.c: CppAD Changes and Additions During 2014: 12-29
      12.7.3.c: CppAD Changes and Additions During 2015: 12-29
12-30 12.7.7.b: Changes and Additions to CppAD During 2011: 12-30
      12.7.6.b: CppAD Changes and Additions During 2012: 12-30
      12.7.4.b: CppAD Changes and Additions During 2014: 12-30
12-31 12.7.8.b: Changes and Additions to CppAD During 2010: 12-31
      12.7.3.b: CppAD Changes and Additions During 2015: 12-31
1: 2.a.a: CppAD Download, Test, and Install Instructions: Instructions.Step 1: Download
2 4.4.7.2.18.1.e: AD Theory for Cholesky Factorization: Lemma 2
2003 12.7.15: Changes and Additions to CppAD During 2003
2004 12.7.14: Changes and Additions to CppAD During 2004
2005 12.7.13: Changes and Additions to CppAD During 2005
2005-08-07 4.4.4.k: AD Conditional Expressions: Deprecate 2005-08-07
2006 12.7.12: Changes and Additions to CppAD During 2006
2006-03-31 12.8.2.e.a: ADFun Object Deprecated Member Functions: Memory.Deprecated 2006-03-31
           12.8.2.d.a: ADFun Object Deprecated Member Functions: Order.Deprecated 2006-03-31
2006-04-03 12.8.2.f.a: ADFun Object Deprecated Member Functions: Size.Deprecated 2006-04-03
2006-04-08 12.8.2.h.a: ADFun Object Deprecated Member Functions: use_VecAD.Deprecated 2006-04-08
2006-06-17 12.8.2.g.a: ADFun Object Deprecated Member Functions: taylor_size.Deprecated 2006-06-17
2006-12-17 12.8.1.b: Deprecated Include Files: Deprecated 2006-12-17
2007 12.7.11: Changes and Additions to CppAD During 2007
2007-07-23 12.8.5.a: Routines That Track Use of New and Delete: Deprecated 2007-07-23
2007-07-28 12.8.9.h: Choosing The Vector Testing Template Class: CppADvector Deprecated 2007-07-28
           4.4.5.n: Discrete AD Functions: CppADCreateDiscrete Deprecated 2007-07-28
2007-07-31 4.5.3.n: AD Boolean Functions: Deprecated 2007-07-31
2007-08-07 12.8.2.c.a: ADFun Object Deprecated Member Functions: Dependent.Deprecated 2007-08-07
2008 12.7.10: Changes and Additions to CppAD During 2008
2009 12.7.9: Changes and Additions to CppAD During 2009
2010 12.7.8: Changes and Additions to CppAD During 2010
2011 12.7.7: Changes and Additions to CppAD During 2011
2011-06-23 12.8.4.a: OpenMP Parallel Setup: Deprecated 2011-06-23
2011-08-23 12.8.6.d: A Quick OpenMP Memory Allocator Used by CppAD: Deprecated 2011-08-23
2011-08-31 12.8.6.13.a: OpenMP Memory Allocator: Example and Test: Deprecated 2011-08-31
           12.8.6.10.a: Return A Raw Array to The Available Memory for a Thread: Deprecated 2011-08-31
           12.8.6.9.a: Allocate Memory and Create A Raw Array: Deprecated 2011-08-31
           12.8.6.8.a: Amount of Memory Available for Quick Use by a Thread: Deprecated 2011-08-31
           12.8.6.7.a: Amount of Memory a Thread is Currently Using: Deprecated 2011-08-31
           12.8.6.6.a: Free Memory Currently Available for Quick Use by a Thread: Deprecated 2011-08-31
           12.8.6.5.a: Return Memory to omp_alloc: Deprecated 2011-08-31
           12.8.6.4.a: Get At Least A Specified Amount of Memory: Deprecated 2011-08-31
           12.8.6.3.a: Get the Current OpenMP Thread Number: Deprecated 2011-08-31
           12.8.6.2.a: Is The Current Execution in OpenMP Parallel Mode: Deprecated 2011-08-31
           12.8.6.1.a: Set and Get Maximum Number of Threads for omp_alloc Allocator: Deprecated 2011-08-31
2012 12.7.6: CppAD Changes and Additions During 2012
2012-04-06 12.8.7.a: Memory Leak Detection: Deprecated 2012-04-06
2012-06-17 12.8.8.a: Machine Epsilon For AD Types: Deprecated 2012-06-17
2012-07-03 12.8.9.a: Choosing The Vector Testing Template Class: Deprecated 2012-07-03
2012-11-28 12.8.10.a: Nonlinear Programming Using the CppAD Interface to Ipopt: Deprecated 2012-11-28
2012-12-26 12.8.13.a: Autotools Unix Test and Installation: Deprecated 2012-12-26
2013 12.7.5: CppAD Changes and Additions During 2013
2013-05-27 12.8.11.5.a: Old Matrix Multiply as a User Atomic Operation: Example and Test: Deprecated 2013-05-27
           12.8.11.4.a: Old Tan and Tanh as User Atomic Operations: Example and Test: Deprecated 2013-05-27
           12.8.11.3.a: Using AD to Compute Atomic Function Derivatives: Deprecated 2013-05-27
           12.8.11.2.a: Using AD to Compute Atomic Function Derivatives: Deprecated 2013-05-27
           12.8.11.1.a: Old Atomic Operation Reciprocal: Example and Test: Deprecated 2013-05-27
           12.8.11.a: User Defined Atomic AD Functions: Deprecated 2013-05-27
2014 12.7.4: CppAD Changes and Additions During 2014
2014-03-18 12.8.2.j.a: ADFun Object Deprecated Member Functions: capacity_taylor.Deprecated 2014-03-18
           12.8.2.i.a: ADFun Object Deprecated Member Functions: size_taylor.Deprecated 2014-03-18
2015 12.7.3: CppAD Changes and Additions During 2015
2015-01-20 12.8.3.b: Comparison Changes During Zero Order Forward Mode: Deprecated 2015-01-20
2015-09-26 12.8.12.a: zdouble: An AD Base Type With Absolute Zero: Deprecated 2015-09-26
2015-10-04 8.11.f.a: Obtain Nan or Determine if a Value is Nan: nan(zero).Deprecated 2015-10-04
2015-11-30 12.8.1.a: Deprecated Include Files: Deprecated 2015-11-30
2016 12.7.2: Changes and Additions to CppAD During 2016
2016-06-27 4.4.7.2.9.b: Atomic Reverse Hessian Sparsity Patterns: Deprecated 2016-06-27
           4.4.7.2.8.b: Atomic Forward Hessian Sparsity Patterns: Deprecated 2016-06-27
           4.4.7.2.7.b: Atomic Reverse Jacobian Sparsity Patterns: Deprecated 2016-06-27
           4.4.7.2.6.b: Atomic Forward Jacobian Sparsity Patterns: Deprecated 2016-06-27
2017 12.7.1: Changes and Additions to CppAD During 2017
2017-06-01 5.6.4.i.b: Sparse Hessian: work.colpack.star Deprecated 2017-06-01
           5.6.3.j.e: Computing Sparse Hessians: coloring.colpack.star Deprecated 2017-06-01
2: 3.2.7.j: exp_eps: Second Order Reverse Sweep: Index 2: f_1
   3.2.5.j: exp_eps: First Order Reverse Sweep: Index 2: f_1
   3.1.7.g: exp_2: Second Order Reverse Mode: Index 2: f_1
   3.1.5.g: exp_2: First Order Reverse Mode: Index 2: f_1
   2.a.b: CppAD Download, Test, and Install Instructions: Instructions.Step 2: Cmake
3 11.2.5: Check Gradient of Determinant of 3 by 3 matrix
  11.2.4: Check Determinant of 3 by 3 matrix
3: 3.2.7.i: exp_eps: Second Order Reverse Sweep: Index 3: f_2
   3.2.5.i: exp_eps: First Order Reverse Sweep: Index 3: f_2
   3.1.7.f: exp_2: Second Order Reverse Mode: Index 3: f_2
   3.1.5.f: exp_2: First Order Reverse Mode: Index 3: f_2
   2.a.c: CppAD Download, Test, and Install Instructions: Instructions.Step 3: Check
3rd 8.18: A 3rd and 4th Order Rosenbrock ODE Solver
4: 3.2.7.h: exp_eps: Second Order Reverse Sweep: Index 4: f_3
   3.2.5.h: exp_eps: First Order Reverse Sweep: Index 4: f_3
   3.1.7.e: exp_2: Second Order Reverse Mode: Index 4: f_3
   3.1.5.e: exp_2: First Order Reverse Mode: Index 4: f_3
   2.a.d: CppAD Download, Test, and Install Instructions: Instructions.Step 4: Installation
4th 8.18: A 3rd and 4th Order Rosenbrock ODE Solver
    8.17: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
5: 3.2.7.g: exp_eps: Second Order Reverse Sweep: Index 5: f_4
   3.2.5.g: exp_eps: First Order Reverse Sweep: Index 5: f_4
   3.1.7.d: exp_2: Second Order Reverse Mode: Index 5: f_4
   3.1.5.d: exp_2: First Order Reverse Mode: Index 5: f_4
5th 8.17: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
6: 3.2.7.f: exp_eps: Second Order Reverse Sweep: Index 6: f_5
   3.2.5.f: exp_eps: First Order Reverse Sweep: Index 6: f_5
7: 3.2.7.e: exp_eps: Second Order Reverse Sweep: Index 7: f_6
   3.2.5.e: exp_eps: First Order Reverse Sweep: Index 7: f_6
< 4.5.1.1: AD Binary Comparison Operators: Example and Test
  4.5.1: AD Binary Comparison Operators
<< 4.3.5.1: AD Output Operator: Example and Test
   4.3.4.1: AD Input Operator: Example and Test
   4.3.5: AD Output Stream Operator
<= 4.5.1.1: AD Binary Comparison Operators: Example and Test
   4.5.1: AD Binary Comparison Operators
= 11.2.7.f.b: Evaluate a Function Defined in Terms of an ODE: p.p = 1
  4.4.7.2.18.1.f.a: AD Theory for Cholesky Factorization: Reverse Mode.Case k = 0
== 11.2.7.f.a: Evaluate a Function Defined in Terms of an ODE: p.p == 0
   4.5.1.1: AD Binary Comparison Operators: Example and Test
   4.5.1: AD Binary Comparison Operators
> 10.2.10.c.d: Using Multiple Levels of AD: Procedure.Second Start AD< AD<double> >
  10.2.10.c.b: Using Multiple Levels of AD: Procedure.Start AD< AD<double> > Recording
  4.5.1.1: AD Binary Comparison Operators: Example and Test
  4.5.1: AD Binary Comparison Operators
  4.4.7.2.18.1.f.b: AD Theory for Cholesky Factorization: Reverse Mode.Case k > 0
>= 4.5.1.1: AD Binary Comparison Operators: Example and Test
   4.5.1: AD Binary Comparison Operators
>> 4.3.4: AD Input Stream Operator
[0,1] 12.9.3: Simulate a [0,1] Uniform Random Variate
      11.2.10: Simulate a [0,1] Uniform Random Variate
[] 8.22: The CppAD::vector Template Class
   8.9: Definition of a Simple Vector
A
A.1.1c 7.2.3: A Simple Parallel Pthread Example and Test
       7.2.2: A Simple Boost Thread Example and Test
       7.2.1: A Simple OpenMP Example and Test
AD 12.8.10.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
   12.8.10: Nonlinear Programming Using the CppAD Interface to Ipopt
   10.2.10.2: Computing a Jacobian With Constants that Change
   10.2.2: Example and Test Linking CppAD to Languages Other than C++
   9.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
   7.2.6: A Simple pthread AD: Example and Test
   7.2.5: A Simple Boost Threading AD: Example and Test
   7.2.4: A Simple OpenMP AD: Example and Test
   3: An Introduction by Example to Algorithmic Differentiation
   : cppad-20171217: A Package for Differentiation of C++ Algorithms
ADFun 12.8.3: Comparison Changes During Zero Order Forward Mode
      5.3.7: Comparison Changes Between Taping and Zero Order Forward
      5.1.3: Stop Recording and Store Operation Sequence
Automatic 3: An Introduction by Example to Algorithmic Differentiation
a(x) 5.8.1.f.a: Create An Abs-normal Representation of a Function: Abs-normal Approximation.Approximating a(x)
     5.8.1.c.b: Create An Abs-normal Representation of a Function: a.a(x)
a11c 7.2.f: Run Multi-Threading Examples and Speed Tests: a11c
abort 5.1.4.1: Abort Current Recording: Example and Test
      5.1.4: Abort Recording of an Operation Sequence
abort_op_index 5.1.1.f: Declare Independent Variables and Start Recording: abort_op_index
above 12.4.c: Glossary: AD Type Above Base
      12.3.1.c.c: The Theory of Forward Mode: Standard Math Functions.Cases that Apply Recursion Above
abramowitz 12.5.a: Bibliography: Abramowitz and Stegun
abs 4.7.9.3.n: Enable use of AD<Base> where Base is Adolc's adouble Type: abs
    4.4.2.14.1: AD Absolute Value Function: Example and Test
    4.4.2.14: AD Absolute Value Functions: abs, fabs
abs-normal 5.8.11: Non-Smooth Optimization Using Abs-normal Quadratic Approximations
           5.8.10: abs_normal: Minimize a Quadratic Abs-normal Approximation
           5.8.7: Non-Smooth Optimization Using Abs-normal Linear Approximations
           5.8.6: abs_normal: Minimize a Linear Abs-normal Approximation
           5.8.1.f: Create An Abs-normal Representation of a Function: Abs-normal Approximation
           5.8.1: Create An Abs-normal Representation of a Function
           5.8: Abs-normal Representation of Non-Smooth Functions
abs_eval 5.8.3.2: abs_eval Source Code
abs_eval: 5.8.3.1: abs_eval: Example and Test
abs_min_linear 5.8.6.2: abs_min_linear Source Code
abs_min_linear: 5.8.6.1: abs_min_linear: Example and Test
abs_min_quad 5.8.10.2: abs_min_quad Source Code
abs_min_quad: 5.8.10.1: abs_min_quad: Example and Test
abs_normal 5.8.11.1: abs_normal min_nso_quad: Example and Test
           5.8.9.1: abs_normal qp_box: Example and Test
           5.8.8.1: abs_normal qp_interior: Example and Test
           5.8.7.1: abs_normal min_nso_linear: Example and Test
           5.8.5.1: abs_normal lp_box: Example and Test
           5.8.4.1: abs_normal simplex_method: Example and Test
           5.8.1.1: abs_normal Getting Started: Example and Test
abs_normal: 5.8.10: abs_normal: Minimize a Quadratic Abs-normal Approximation
            5.8.9: abs_normal: Solve a Quadratic Program With Box Constraints
            5.8.6: abs_normal: Minimize a Linear Abs-normal Approximation
            5.8.5: abs_normal: Solve a Linear Program With Box Constraints
            5.8.4: abs_normal: Solve a Linear Program Using Simplex Method
            5.8.3: abs_normal: Evaluate First Order Approximation
            5.8.2: abs_normal: Print a Vector or Matrix
absgeq 8.14.2.l: LU Factorization of A Square Matrix: AbsGeq
       8.14.1.p: Compute Determinant and Solve Linear Equations: AbsGeq
absolute 12.8.12.b: zdouble: An AD Base Type With Absolute Zero: Absolute Zero
         12.8.12: zdouble: An AD Base Type With Absolute Zero
         8.2: Determine if Two Values Are Nearly Equal
         4.7.i: AD<Base> Requirements for a CppAD Base Type: Absolute Zero, azmul
         4.4.3.3.1: AD Absolute Zero Multiplication: Example and Test
         4.4.3.3: Absolute Zero Multiplication
         4.4.2.14.1: AD Absolute Value Function: Example and Test
         4.4.2.14: AD Absolute Value Functions: abs, fabs
access 8.22.f: The CppAD::vector Template Class: Element Access
       8.9.k: Definition of a Simple Vector: Element Access
accurate 3.2: An Epsilon Accurate Exponential Approximation
aclocal 12.7.12: Changes and Additions to CppAD During 2006
acos 12.3.2.7: Inverse Cosine and Hyperbolic Cosine Reverse Mode Theory
     12.3.1.7: Inverse Cosine and Hyperbolic Cosine Forward Mode Theory
     4.4.2.1.1: The AD acos Function: Example and Test
     4.4.2.1: Inverse Cosine Function: acos
acosh 12.3.2.7: Inverse Cosine and Hyperbolic Cosine Reverse Mode Theory
      12.3.1.7: Inverse Cosine and Hyperbolic Cosine Forward Mode Theory
      4.7.9.3.l: Enable use of AD<Base> where Base is Adolc's adouble Type: erf, asinh, acosh, atanh, expm1, log1p
      4.7.9.1.p: Example AD<Base> Where Base Constructor Allocates Memory: erf, asinh, acosh, atanh, expm1, log1p
      4.7.5.d: Base Type Requirements for Standard Math Functions: erf, asinh, acosh, atanh, expm1, log1p
      4.4.2.15.1: The AD acosh Function: Example and Test
      4.4.2.15: The Inverse Hyperbolic Cosine Function: acosh
active 12.4.k.a: Glossary: Tape.Active
activity 5.7.2: Example Optimization and Reverse Activity Analysis
         5.7.1: Example Optimization and Forward Activity Analysis
ad 12.10: Some Numerical AD Utilities
   12.8.12: zdouble: An AD Base Type With Absolute Zero
   12.8.11.3: Using AD to Compute Atomic Function Derivatives
   12.8.11.2: Using AD to Compute Atomic Function Derivatives
   12.8.11.t.b: User Defined Atomic AD Functions: Example.Use AD
   12.8.11: User Defined Atomic AD Functions
   12.8.8: Machine Epsilon For AD Types
   12.4.c: Glossary: AD Type Above Base
   12.4.b: Glossary: AD of Base
   12.4.a: Glossary: AD Function
   11.1.c.a: Running the Speed Test Program: package.AD Package
   11: Speed Test an Operator Overloading AD Package
   10.2.13.e: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: Taylor's Method Using AD
   10.2.12.e: Taylor's Ode Solver: A Multi-Level AD Example and Test: Taylor's Method Using AD
   10.2.12: Taylor's Ode Solver: A Multi-Level AD Example and Test
   10.2.10: Using Multiple Levels of AD
   7.2.11.3: Pthread Implementation of a Team of AD Threads
   7.2.11.2: Boost Thread Implementation of a Team of AD Threads
   7.2.11.1: OpenMP Implementation of a Team of AD Threads
   7.2.11: Specifications for A Team of AD Threads
   7.2.7: Using a Team of AD Threads: Example and Test
   7.1: Enable AD Calculations During Parallel Mode
   7.e: Using CppAD in a Multi-Threading Environment: Parallel AD
   4.7.9.2: Using a User Defined AD Base Type: Example and Test
   4.7.9: Example AD Base Types That are not AD<OtherBase>
   4.6.1: AD Vectors that Record Index Operations: Example and Test
   4.6.i: AD Vectors that Record Index Operations: AD Indexing
   4.6: AD Vectors that Record Index Operations
   4.5.4.1: AD Parameter and Variable Functions: Example and Test
   4.5.4: Is an AD Object a Parameter or Variable
   4.5.3.1: AD Boolean Functions: Example and Test
   4.5.3: AD Boolean Functions
   4.5.2.1: Compare AD with Base Objects: Example and Test
   4.5.2: Compare AD and Base Objects for Nearly Equal
   4.5.1.1: AD Binary Comparison Operators: Example and Test
   4.5.1: AD Binary Comparison Operators
   4.5: Bool Valued Operations and Functions with AD Arguments
   4.4.7.2.18.1: AD Theory for Cholesky Factorization
   4.4.7.2.3: Using AD Version of Atomic Function
   4.4.7.2: User Defined Atomic AD Functions
   4.4.7.1.c.e: Checkpointing Functions: Purpose.Multiple Level AD
   4.4.7: Atomic AD Functions
   4.4.6: Numeric Limits For an AD and Base Types
   4.4.5.i: Discrete AD Functions: Create AD Version
   4.4.5: Discrete AD Functions
   4.4.4: AD Conditional Expressions
   4.4.3.3.1: AD Absolute Zero Multiplication: Example and Test
   4.4.3.2.1: The AD Power Function: Example and Test
   4.4.3.2: The AD Power Function
   4.4.3.1.1: The AD atan2 Function: Example and Test
   4.4.3.1: AD Two Argument Inverse Tangent Function
   4.4.2.20.1: The AD log1p Function: Example and Test
   4.4.2.19.1: The AD exp Function: Example and Test
   4.4.2.18.1: The AD erf Function: Example and Test
   4.4.2.17.1: The AD atanh Function: Example and Test
   4.4.2.16.1: The AD asinh Function: Example and Test
   4.4.2.15.1: The AD acosh Function: Example and Test
   4.4.2.14.1: AD Absolute Value Function: Example and Test
   4.4.2.14: AD Absolute Value Functions: abs, fabs
   4.4.2.13.1: The AD tanh Function: Example and Test
   4.4.2.12.1: The AD tan Function: Example and Test
   4.4.2.11.1: The AD sqrt Function: Example and Test
   4.4.2.10.1: The AD sinh Function: Example and Test
   4.4.2.9.1: The AD sin Function: Example and Test
   4.4.2.8.1: The AD log10 Function: Example and Test
   4.4.2.7.1: The AD log Function: Example and Test
   4.4.2.6.1: The AD exp Function: Example and Test
   4.4.2.5.1: The AD cosh Function: Example and Test
   4.4.2.4.1: The AD cos Function: Example and Test
   4.4.2.3.1: The AD atan Function: Example and Test
   4.4.2.2.1: The AD asin Function: Example and Test
   4.4.2.1.1: The AD acos Function: Example and Test
   4.4.1.4.4: AD Compound Assignment Division: Example and Test
   4.4.1.4.3: AD Compound Assignment Multiplication: Example and Test
   4.4.1.4.2: AD Compound Assignment Subtraction: Example and Test
   4.4.1.4.1: AD Compound Assignment Addition: Example and Test
   4.4.1.4: AD Compound Assignment Operators
   4.4.1.3.4: AD Binary Division: Example and Test
   4.4.1.3.3: AD Binary Multiplication: Example and Test
   4.4.1.3.2: AD Binary Subtraction: Example and Test
   4.4.1.3.1: AD Binary Addition: Example and Test
   4.4.1.3: AD Binary Arithmetic Operators
   4.4.1.2.1: AD Unary Minus Operator: Example and Test
   4.4.1.2: AD Unary Minus Operator
   4.4.1.1.1: AD Unary Plus Operator: Example and Test
   4.4.1.1: AD Unary Plus Operator
   4.4.1: AD Arithmetic Operators and Compound Assignments
   4.4: AD Valued Operations and Functions
   4.3.7.1: Convert an AD Variable to a Parameter: Example and Test
   4.3.7: Convert an AD Variable to a Parameter
   4.3.6: Printing AD Values During Forward Mode
   4.3.5.1: AD Output Operator: Example and Test
   4.3.4.1: AD Input Operator: Example and Test
   4.3.5: AD Output Stream Operator
   4.3.4: AD Input Stream Operator
   4.3.3: Convert An AD or Base Type to String
   4.3.2.1: Convert From AD to Integer: Example and Test
   4.3.2.d.c: Convert From AD to Integer: x.AD Types
   4.3.2: Convert From AD to Integer
   4.3.1.1: Convert From AD to its Base Type: Example and Test
   4.3.1: Convert From an AD Type to its Base Type
   4.3: Conversion and I/O of AD Objects
   4.2.1: AD Assignment: Example and Test
   4.2: AD Assignment Operator
   4.1.1: AD Constructors: Example and Test
   4.1: AD Constructors
   4: AD Objects
ad: 10.2.10.1: Multiple Levels of AD: Example and Test
    7.2.6: A Simple pthread AD: Example and Test
    7.2.5: A Simple Boost Threading AD: Example and Test
    7.2.4: A Simple OpenMP AD: Example and Test
    4.4.7.1.2: Atomic Operations and Multiple-Levels of AD: Example and Test
ad< 10.2.10.c.d: Using Multiple Levels of AD: Procedure.Second Start AD< AD<double> >
    10.2.10.c.b: Using Multiple Levels of AD: Procedure.Start AD< AD<double> > Recording
ad<base> 4.7.9.6: Enable use of AD<Base> where Base is std::complex<double>
         4.7.9.5: Enable use of AD<Base> where Base is double
         4.7.9.4: Enable use of AD<Base> where Base is float
         4.7.9.3: Enable use of AD<Base> where Base is Adolc's adouble Type
         4.7.9.1: Example AD<Base> Where Base Constructor Allocates Memory
         4.7: AD<Base> Requirements for a CppAD Base Type
         4.4.3.3.d: Absolute Zero Multiplication: AD<Base>
         4.4.2.c.b: The Unary Standard Math Functions: Possible Types.AD<Base>
ad<double> 10.2.10.c.d: Using Multiple Levels of AD: Procedure.Second Start AD< AD<double> >
           10.2.10.c.b: Using Multiple Levels of AD: Procedure.Start AD< AD<double> > Recording
           10.2.10.c.a: Using Multiple Levels of AD: Procedure.First Start AD<double>
ad<otherbase> 4.7.9: Example AD Base Types That are not AD<OtherBase>
add 4.4.1.4.4: AD Compound Assignment Division: Example and Test
    4.4.1.4.3: AD Compound Assignment Multiplication: Example and Test
    4.4.1.4.2: AD Compound Assignment Subtraction: Example and Test
    4.4.1.4.1: AD Compound Assignment Addition: Example and Test
    4.4.1.4: AD Compound Assignment Operators
    4.4.1.3.1: AD Binary Addition: Example and Test
    4.4.1.3: AD Binary Arithmetic Operators
add_static 12.8.7.e: Memory Leak Detection: add_static
addition 12.3.2.b.a: The Theory of Reverse Mode: Binary Operators.Addition
         12.3.1.b.a: The Theory of Forward Mode: Binary Operators.Addition
         4.4.1.4.j.a: AD Compound Assignment Operators: Derivative.Addition
         4.4.1.3.j.a: AD Binary Arithmetic Operators: Derivative.Addition
addition: 4.4.1.4.1: AD Compound Assignment Addition: Example and Test
          4.4.1.3.1: AD Binary Addition: Example and Test
additions 12.7.15: Changes and Additions to CppAD During 2003
          12.7.14: Changes and Additions to CppAD During 2004
          12.7.13: Changes and Additions to CppAD During 2005
          12.7.12: Changes and Additions to CppAD During 2006
          12.7.11: Changes and Additions to CppAD During 2007
          12.7.10: Changes and Additions to CppAD During 2008
          12.7.9: Changes and Additions to CppAD During 2009
          12.7.8: Changes and Additions to CppAD During 2010
          12.7.7: Changes and Additions to CppAD During 2011
          12.7.6: CppAD Changes and Additions During 2012
          12.7.5: CppAD Changes and Additions During 2013
          12.7.4: CppAD Changes and Additions During 2014
          12.7.3: CppAD Changes and Additions During 2015
          12.7.2: Changes and Additions to CppAD During 2016
          12.7.1: Changes and Additions to CppAD During 2017
          12.7: Changes and Additions to CppAD
addons 12.11: CppAD Addons
adfun 12.8.2: ADFun Object Deprecated Member Functions
      10.2.1: Creating Your Own Interface to an ADFun Object
      5.10.1: ADFun Checking For Nan: Example and Test
      5.10: Check an ADFun Object For Nan Results
      5.9.1: ADFun Check and Re-Tape: Example and Test
      5.9: Check an ADFun Sequence of Operations
      5.7: Optimize an ADFun Object Tape
      5.2.2.2: Hessian of Lagrangian and ADFun Default Constructor: Example and Test
      5.1.5.1: ADFun Sequence Properties: Example and Test
      5.1.5: ADFun Sequence Properties
      5.1.2.1: ADFun Assignment: Example and Test
      5.1.2: Construct an ADFun Object and Stop Recording
      5.1.1.1: Independent and ADFun Constructor: Example and Test
      5.1: Create an ADFun Object (Record an Operation Sequence)
      5: ADFun Objects
adnumber 12.8.10.j: Nonlinear Programming Using the CppAD Interface to Ipopt: ADNumber
adol-c 2.2.1: Including the ADOL-C Examples and Tests
adolc 12.6.k: The CppAD Wish List: Adolc
      11.4.8: Adolc Test Utility: Allocate and Free Memory For a Matrix
      11.4.7: adolc Speed: Sparse Jacobian
      11.4.6: Adolc Speed: Sparse Hessian
      11.4.5: Adolc Speed: Second Derivative of a Polynomial
      11.4.4: Adolc Speed: Ode
      11.4.3: Adolc Speed: Matrix Multiplication
      11.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization
      11.4.1: Adolc Speed: Gradient of Determinant by Minor Expansion
      11.4: Speed Test of Derivatives Using Adolc
      10.2.13: Taylor's Ode Solver: A Multi-Level Adolc Example and Test
      4.7.9.3.1: Using Adolc with Multiple Levels of Taping: Example and Test
      2.2.1.1: Download and Install Adolc in Build Directory
      2.2.1: Including the ADOL-C Examples and Tests
adolc's 4.7.9.3: Enable use of AD<Base> where Base is Adolc's adouble Type
adolc_alloc_mat 11.4.8: Adolc Test Utility: Allocate and Free Memory For a Matrix
adolc_dir 12.8.13.n: Autotools Unix Test and Installation: adolc_dir
adolc_prefix 11.4.b: Speed Test of Derivatives Using Adolc: adolc_prefix
             2.2.1.b: Including the ADOL-C Examples and Tests: adolc_prefix
adouble 4.7.9.3: Enable use of AD<Base> where Base is Adolc's adouble Type
advector 12.10.3.k: LU Factorization of A Square Matrix and Stability Calculation: ADvector
         12.10.1.l: Computing Jacobian and Hessian of Bender's Reduced Objective: ADvector
         12.8.10.k: Nonlinear Programming Using the CppAD Interface to Ipopt: ADVector
         9.l.a: Use Ipopt to Solve a Nonlinear Programming Problem: fg_eval.ADvector
         5.1.3.f: Stop Recording and Store Operation Sequence: ADvector
         4.4.7.2.3.c: Using AD Version of Atomic Function: ADVector
         4.4.7.1.g: Checkpointing Functions: ADVector
affine 5.8.1.e: Create An Abs-normal Representation of a Function: Affine Approximation
after 4.3.6.g: Printing AD Values During Forward Mode: after
afun 12.8.11.m: User Defined Atomic AD Functions: afun
     4.4.7.2.3.d: Using AD Version of Atomic Function: afun
     4.4.7.2.1.b.b: Atomic Function Constructor: atomic_user.afun
algebra 10.2.4: Enable Use of Eigen Linear Algebra Package with CppAD
algo 4.4.7.1.o: Checkpointing Functions: algo
algorithm 10.2.3.b: Differentiate Conjugate Gradient Algorithm: Example and Test: Algorithm
          5.5.8.i: Hessian Sparsity Pattern: Forward Mode: Algorithm
          5.5.7.l: Forward Mode Hessian Sparsity Patterns: Algorithm
          3.2: An Epsilon Accurate Exponential Approximation
          3.1: Second Order Exponential Approximation
          : cppad-20171217: A Package for Differentiation of C++ Algorithms
algorithm: 10.2.3: Differentiate Conjugate Gradient Algorithm: Example and Test
algorithmic 10.2.2: Example and Test Linking CppAD to Languages Other than C++
            3.b.a: An Introduction by Example to Algorithmic Differentiation: Preface.Algorithmic Differentiation
            3: An Introduction by Example to Algorithmic Differentiation
            : cppad-20171217: A Package for Differentiation of C++ Algorithms
algorithms : cppad-20171217: A Package for Differentiation of C++ Algorithms
alignment 8.23.12.h: Allocate An Array and Call Default Constructor for its Elements: Alignment
          8.23.6.g: Get At Least A Specified Amount of Memory: Alignment
all 10.4: List All (Except Deprecated) CppAD Examples
    8.23.14: Free All Memory That Was Allocated for Use by thread_alloc
    2.3.b: Checking the CppAD Examples and Tests: Check All
alloc 11.4.8: Adolc Test Utility: Allocate and Free Memory For a Matrix
      8.23.9: Control When Thread Alloc Retains Memory For Future Use
allocate 12.8.6.9: Allocate Memory and Create A Raw Array
         11.4.8: Adolc Test Utility: Allocate and Free Memory For a Matrix
         8.23.12: Allocate An Array and Call Default Constructor for its Elements
         8.23.6: Get At Least A Specified Amount of Memory
allocated 8.23.14: Free All Memory That Was Allocated for Use by thread_alloc
allocates 4.7.9.1: Example AD<Base> Where Base Constructor Allocates Memory
allocation 12.8.6.13: OpenMP Memory Allocator: Example and Test
           12.8.6.11: Check If A Memory Allocation is Efficient for Another Use
           12.8.6.4.g: Get At Least A Specified Amount of Memory: Allocation Speed
           12.8.6: A Quick OpenMP Memory Allocator Used by CppAD
           12.1.j.c: Frequently Asked Questions and Answers: Speed.Memory Allocation
           8.23.6.f: Get At Least A Specified Amount of Memory: Allocation Speed
           8.23.1: Fast Multi-Threading Memory Allocator: Example and Test
           8.23: A Fast Multi-Threading Memory Allocator
           8.d.c: Some General Purpose Utilities: Miscellaneous.Multi-Threading Memory Allocation
           5.3.8: Controlling Taylor Coefficients Memory Allocation
allocation: 5.3.8.1: Controlling Taylor Coefficient Memory Allocation: Example and Test
allocator 12.8.6.12: Set Maximum Number of Threads for omp_alloc Allocator
          12.8.6.1: Set and Get Maximum Number of Threads for omp_alloc Allocator
          12.8.6: A Quick OpenMP Memory Allocator Used by CppAD
          8.23: A Fast Multi-Threading Memory Allocator
allocator: 12.8.6.13: OpenMP Memory Allocator: Example and Test
           8.23.1: Fast Multi-Threading Memory Allocator: Example and Test
also 12.10.2.b: Jacobian and Hessian of Optimal Values: See Also
     12.10.1.b: Computing Jacobian and Hessian of Bender's Reduced Objective: See Also
     8.25.b: Convert Certain Types to a String: See Also
     8.12.b: The Integer Power Function: See Also
     5.7.6.a: Example Optimization and Nested Conditional Expressions: See Also
     5.7.5.a: Example Optimization and Conditional Expressions: See Also
     5.7.3.a: Example Optimization and Comparison Operators: See Also
     5.6.4.3.b: Subset of a Sparse Hessian: Example and Test: See Also
     5.6.4.2.b: Computing Sparse Hessian for a Subset of Variables: See Also
     5.5.6.2.a: Sparsity Patterns For a Subset of Variables: Example and Test: See Also
     5.4.3.2.a: Reverse Mode General Case (Checkpointing): Example and Test: See Also
     5.3.9.a.a: Number of Variables that Can be Skipped: Syntax.See Also
     5.3.8.a.a: Controlling Taylor Coefficients Memory Allocation: Syntax.See Also
     5.3.6.a.a: Number Taylor Coefficient Orders Currently Stored: Syntax.See Also
     5.1.5.a.a: ADFun Sequence Properties: Syntax.See Also
     4.4.7.2.19.1.a: Matrix Multiply as an Atomic Operation: See Also
     4.4.7.2.19.a: User Atomic Matrix Multiply: Example and Test: See Also
     4.4.7.2.16.1.a: Atomic Eigen Matrix Multiply Class: See Also
     4.4.7.1.4.a: Checkpointing an Extended ODE Solver: Example and Test: See Also
     4.4.7.1.3.a: Checkpointing an ODE Solver: Example and Test: See Also
     4.4.7.1.b: Checkpointing Functions: See Also
     4.4.5.3.a: Interpolation With Retaping: Example and Test: See Also
     4.4.5.2.a: Interpolation Without Retaping: Example and Test: See Also
     4.4.4.1.a: Conditional Expressions: Example and Test: See Also
     4.4.3.2.b: The AD Power Function: See Also
     4.3.7.b: Convert an AD Variable to a Parameter: See Also
     4.3.3.b: Convert An AD or Base Type to String: See Also
     4.3.1.b: Convert From an AD Type to its Base Type: See Also
alternative 4.3.6.j: Printing AD Values During Forward Mode: Alternative
alternatives 4.6.c: AD Vectors that Record Index Operations: Alternatives
amount 12.9.7: Determine Amount of Time to Execute det_by_minor
       12.8.6.8: Amount of Memory Available for Quick Use by a Thread
       12.8.6.7: Amount of Memory a Thread is Currently Using
       12.8.6.4: Get At Least A Specified Amount of Memory
       8.23.11: Amount of Memory Available for Quick Use by a Thread
       8.23.10: Amount of Memory a Thread is Currently Using
       8.23.6: Get At Least A Specified Amount of Memory
       8.5: Determine Amount of Time to Execute a Test
analysis 5.7.2: Example Optimization and Reverse Activity Analysis
         5.7.1: Example Optimization and Forward Activity Analysis
analytic 12.8.10.2.1.c.a: An ODE Inverse Problem Example: Measurements.Simulation Analytic Solution
         9.3.c.a: ODE Inverse Problem Definitions: Source Code: Measurements.Simulation Analytic Solution
another 12.8.6.11: Check If A Memory Allocation is Efficient for Another Use
        4.7.7: Extending to_string To Another Floating Point Type
answers 12.1: Frequently Asked Questions and Answers
any 5.4.3: Any Order Reverse Mode
    5.3.4: Multiple Order Forward Mode
api 12.8: CppAD Deprecated API Features
    12.7.1.a: Changes and Additions to CppAD During 2017: API Changes
    12.6.b.b: The CppAD Wish List: Atomic.New API
    6: CppAD API Preprocessor Symbols
    4.7.c: AD<Base> Requirements for a CppAD Base Type: API Warning
appendix 12: Appendix
apply 12.3.1.c.c: The Theory of Forward Mode: Standard Math Functions.Cases that Apply Recursion Above
approximating 5.8.1.f.b: Create An Abs-normal Representation of a Function: Abs-normal Approximation.Approximating f(x)
              5.8.1.f.a: Create An Abs-normal Representation of a Function: Abs-normal Approximation.Approximating a(x)
approximation 12.8.10.2.3.d: ODE Fitting Using Fast Representation: Trapezoidal Approximation
              12.8.10.2.2.e: ODE Fitting Using Simple Representation: Trapezoidal Approximation Constraint
              12.8.10.2.1.e: An ODE Inverse Problem Example: Trapezoidal Approximation
              9.3.e: ODE Inverse Problem Definitions: Source Code: Trapezoidal Approximation
              5.8.10: abs_normal: Minimize a Linear Abs-normal Approximation
              5.8.6: abs_normal: Minimize a Linear Abs-normal Approximation
              5.8.3: abs_normal: Evaluate First Order Approximation
              5.8.1.f: Create An Abs-normal Representation of a Function: Abs-normal Approximation
              5.8.1.e: Create An Abs-normal Representation of a Function: Affine Approximation
              3.3: Correctness Tests For Exponential Approximation in Introduction
              3.2: An Epsilon Accurate Exponential Approximation
              3.1: Second Order Exponential Approximation
approximations 5.8.11: Non-Smooth Optimization Using Abs-normal Quadratic Approximations
               5.8.7: Non-Smooth Optimization Using Abs-normal Linear Approximations
arbitrary 8.20: An Arbitrary Order Gear Method
archives 2.1.f: Download The CppAD Source Code: Compressed Archives
are 8.2: Determine if Two Values Are Nearly Equal
    4.7.9: Example AD Base Types That are not AD<OtherBase>
    4.5.5: Check if Two Values are Identically Equal
argument 12.8.10.2.2.b: ODE Fitting Using Simple Representation: Argument Vector
         8.19.2: OdeErrControl: Example and Test Using Maxabs Argument
         4.4.3.1: AD Two Argument Inverse Tangent Function
argument: 4.4.2.20: The Logarithm of One Plus Argument: log1p
arguments 4.5: Bool Valued Operations and Functions with AD Arguments
arguments: 8.14.1.1: LuSolve With Complex Arguments: Example and Test
arithmetic 12.8.12.c.c: zdouble: An AD Base Type With Absolute Zero: Syntax.Arithmetic Operators
           4.4.1.3: AD Binary Arithmetic Operators
           4.4.1: AD Arithmetic Operators and Compound Assignments
array 12.8.6.10.e: Return A Raw Array to The Available Memory for a Thread: array
      12.8.6.10: Return A Raw Array to The Available Memory for a Thread
      12.8.6.9.g: Allocate Memory and Create A Raw Array: array
      12.8.6.9: Allocate Memory and Create A Raw Array
      10.2.4.2: Using Eigen Arrays: Example and Test
      8.23.13.d: Deallocate An Array and Call Destructor for its Elements: array
      8.23.13: Deallocate An Array and Call Destructor for its Elements
      8.23.12.f: Allocate An Array and Call Default Constructor for its Elements: array
      8.23.12: Allocate An Array and Call Default Constructor for its Elements
      4.4.5.1: Taping Array Index Operation: Example and Test
arrays: 10.2.4.2: Using Eigen Arrays: Example and Test
asin 12.3.2.6: Inverse Sine and Hyperbolic Sine Reverse Mode Theory
     12.3.1.6: Inverse Sine and Hyperbolic Sine Forward Mode Theory
     4.4.2.2.1: The AD asin Function: Example and Test
     4.4.2.2: Inverse Sine Function: asin
asinh 12.3.2.6: Inverse Sine and Hyperbolic Sine Reverse Mode Theory
      12.3.1.6: Inverse Sine and Hyperbolic Sine Forward Mode Theory
      4.7.9.3.l: Enable use of AD<Base> where Base is Adolc's adouble Type: erf, asinh, acosh, atanh, expm1, log1p
      4.7.9.1.p: Example AD<Base> Where Base Constructor Allocates Memory: erf, asinh, acosh, atanh, expm1, log1p
      4.7.5.d: Base Type Requirements for Standard Math Functions: erf, asinh, acosh, atanh, expm1, log1p
      4.4.2.16.1: The AD asinh Function: Example and Test
      4.4.2.16: The Inverse Hyperbolic Sine Function: asinh
asked 12.1: Frequently Asked Questions and Answers
assert 8.1.2: CppAD Assertions During Execution
       8.1: Replacing the CppAD Error Handler
assertions 8.1.2: CppAD Assertions During Execution
assign 4.4.4: AD Conditional Expressions
       4.4.1.4.4: AD Compound Assignment Division: Example and Test
       4.4.1.4.3: AD Compound Assignment Multiplication: Example and Test
       4.4.1.4.2: AD Compound Assignment Subtraction: Example and Test
       4.4.1.4.1: AD Compound Assignment Addition: Example and Test
       4.2.1: AD Assignment: Example and Test
       4.2: AD Assignment Operator
assignment 12.8.12.c.a: zdouble: An AD Base Type With Absolute Zero: Syntax.Constructor and Assignment
           12.1.a: Frequently Asked Questions and Answers: Assignment and Independent
           8.22.e: The CppAD::vector Template Class: Assignment
           8.9.k.b: Definition of a Simple Vector: Element Access.Assignment
           8.9.g: Definition of a Simple Vector: Assignment
           8.7.e: Definition of a Numeric Type: Assignment
           5.1.2.1: ADFun Assignment: Example and Test
           5.1.2.k.c: Construct an ADFun Object and Stop Recording: Example.Assignment Operator
           5.1.2.i: Construct an ADFun Object and Stop Recording: Assignment Operator
           4.7.9.1.c: Example AD<Base> Where Base Constructor Allocates Memory: Compound Assignment Macro
           4.7.1.f: Required Base Class Member Functions: Assignment Operators
           4.4.1.4.4: AD Compound Assignment Division: Example and Test
           4.4.1.4.3: AD Compound Assignment Multiplication: Example and Test
           4.4.1.4.2: AD Compound Assignment Subtraction: Example and Test
           4.4.1.4.1: AD Compound Assignment Addition: Example and Test
           4.4.1.4: AD Compound Assignment Operators
           4.2: AD Assignment Operator
assignment: 5.1.2.1: ADFun Assignment: Example and Test
            4.2.1: AD Assignment: Example and Test
assignments 4.4.1: AD Arithmetic Operators and Compound Assignments
assumption 4.3.5.c: AD Output Stream Operator: Assumption
assumptions 4.5.1.h: AD Binary Comparison Operators: Assumptions
atan 12.3.1.5: Inverse Tangent and Hyperbolic Tangent Forward Mode Theory
     4.4.2.3.1: The AD atan Function: Example and Test
     4.4.2.3: Inverse Tangent Function: atan
atan2 12.6.q: The CppAD Wish List: atan2
      4.4.4.o: AD Conditional Expressions: Atan2
      4.4.3.1.1: The AD atan2 Function: Example and Test
      4.4.3.1: AD Two Argument Inverse Tangent Function
atanh 12.3.1.5: Inverse Tangent and Hyperbolic Tangent Forward Mode Theory
      4.7.9.3.l: Enable use of AD<Base> where Base is Adolc's adouble Type: erf, asinh, acosh, atanh, expm1, log1p
      4.7.9.1.p: Example AD<Base> Where Base Constructor Allocates Memory: erf, asinh, acosh, atanh, expm1, log1p
      4.7.5.d: Base Type Requirements for Standard Math Functions: erf, asinh, acosh, atanh, expm1, log1p
      4.4.2.17.1: The AD atanh Function: Example and Test
      4.4.2.17: The Inverse Hyperbolic Tangent Function: atanh
atom_fun 4.4.7.1.p: Checkpointing Functions: atom_fun
atomic 12.8.11.5.1.i: Define Matrix Multiply as a User Atomic Operation: CppAD User Atomic Callback Functions
       12.8.11.5.1: Define Matrix Multiply as a User Atomic Operation
       12.8.11.5: Old Matrix Multiply as a User Atomic Operation: Example and Test
       12.8.11.4: Old Tan and Tanh as User Atomic Operations: Example and Test
       12.8.11.3: Using AD to Compute Atomic Function Derivatives
       12.8.11.2: Using AD to Compute Atomic Function Derivatives
       12.8.11.1: Old Atomic Operation Reciprocal: Example and Test
       12.8.11: User Defined Atomic AD Functions
       12.8.c: CppAD Deprecated API Features: Atomic Functions
       12.6.b: The CppAD Wish List: Atomic
       12.4.g.a: Glossary: Operation.Atomic
       12.1.g.b: Frequently Asked Questions and Answers: Matrix Inverse.Atomic Operation
       11.1.f.d: Running the Speed Test Program: Global Options.atomic
       7.2.9.7: Timing Test for Multi-Threaded User Atomic Calculation
       7.2.9.6: Run Multi-Threaded User Atomic Calculation
       7.2.9.5: Multi-Threaded User Atomic Take Down
       7.2.9.4: Multi-Threaded User Atomic Worker
       7.2.9.3: Multi-Threaded User Atomic Set Up
       7.2.9.2: Multi-Threaded User Atomic Common Information
       7.2.9.1: Defines a User Atomic Operation that Computes Square Root
       7.2.9: Multi-Threading User Atomic Example / Test
       5.7.h: Optimize an ADFun Object Tape: Atomic Functions
       5.5.11.d: Subgraph Dependency Sparsity Patterns: Atomic Function
       4.4.7.2.19.1: Matrix Multiply as an Atomic Operation
       4.4.7.2.19.c: User Atomic Matrix Multiply: Example and Test: Use Atomic Function
       4.4.7.2.19: User Atomic Matrix Multiply: Example and Test
       4.4.7.2.18.2: Atomic Eigen Cholesky Factorization Class
       4.4.7.2.18.c: Atomic Eigen Cholesky Factorization: Example and Test: Use Atomic Function
       4.4.7.2.18: Atomic Eigen Cholesky Factorization: Example and Test
       4.4.7.2.17.1: Atomic Eigen Matrix Inversion Class
       4.4.7.2.17.c: Atomic Eigen Matrix Inverse: Example and Test: Use Atomic Function
       4.4.7.2.17: Atomic Eigen Matrix Inverse: Example and Test
       4.4.7.2.16.1: Atomic Eigen Matrix Multiply Class
       4.4.7.2.16.c: Atomic Eigen Matrix Multiply: Example and Test: Use Atomic Function
       4.4.7.2.16: Atomic Eigen Matrix Multiply: Example and Test
       4.4.7.2.15.k: Tan and Tanh as User Atomic Operations: Example and Test: Use Atomic Function
       4.4.7.2.15: Tan and Tanh as User Atomic Operations: Example and Test
       4.4.7.2.14.k: Atomic Sparsity with Set Patterns: Example and Test: Test Atomic Function
       4.4.7.2.14: Atomic Sparsity with Set Patterns: Example and Test
       4.4.7.2.13.k: Reciprocal as an Atomic Operation: Example and Test: Use Atomic Function
       4.4.7.2.13: Reciprocal as an Atomic Operation: Example and Test
       4.4.7.2.12.k: Atomic Euclidean Norm Squared: Example and Test: Use Atomic Function
       4.4.7.2.12: Atomic Euclidean Norm Squared: Example and Test
       4.4.7.2.11.f: Getting Started with Atomic Operations: Example and Test: Use Atomic Function
       4.4.7.2.11: Getting Started with Atomic Operations: Example and Test
       4.4.7.2.9.1.i: Atomic Reverse Hessian Sparsity: Example and Test: Use Atomic Function
       4.4.7.2.9.1: Atomic Reverse Hessian Sparsity: Example and Test
       4.4.7.2.8.1.i: Atomic Forward Hessian Sparsity: Example and Test: Use Atomic Function
       4.4.7.2.8.1: Atomic Forward Hessian Sparsity: Example and Test
       4.4.7.2.7.1.g: Atomic Reverse Jacobian Sparsity: Example and Test: Use Atomic Function
       4.4.7.2.7.1: Atomic Reverse Jacobian Sparsity: Example and Test
       4.4.7.2.6.1.g: Atomic Forward Jacobian Sparsity: Example and Test: Use Atomic Function
       4.4.7.2.6.1: Atomic Forward Jacobian Sparsity: Example and Test
       4.4.7.2.5.1.g: Atomic Reverse: Example and Test: Use Atomic Function
       4.4.7.2.5.1: Atomic Reverse: Example and Test
       4.4.7.2.4.1.f: Atomic Forward: Example and Test: Use Atomic Function
       4.4.7.2.4.1: Atomic Forward: Example and Test
       4.4.7.2.9: Atomic Reverse Hessian Sparsity Patterns
       4.4.7.2.8: Atomic Forward Hessian Sparsity Patterns
       4.4.7.2.7: Atomic Reverse Jacobian Sparsity Patterns
       4.4.7.2.6: Atomic Forward Jacobian Sparsity Patterns
       4.4.7.2.5: Atomic Reverse Mode
       4.4.7.2.4: Atomic Forward Mode
       4.4.7.2.3: Using AD Version of Atomic Function
       4.4.7.2.2: Set Atomic Function Options
       4.4.7.2.1: Atomic Function Constructor
       4.4.7.2: User Defined Atomic AD Functions
       4.4.7.1.2: Atomic Operations and Multiple-Levels of AD: Example and Test
       4.4.7: Atomic AD Functions
       4.4.2.21.d: The Sign: sign: Atomic
       4.4.2.14.c: AD Absolute Value Functions: abs, fabs: Atomic
       4.4.2.13.c: The Hyperbolic Tangent Function: tanh: Atomic
       4.4.2.12.c: The Tangent Function: tan: Atomic
       4.4.2.11.c: The Square Root Function: sqrt: Atomic
       4.4.2.10.c: The Hyperbolic Sine Function: sinh: Atomic
       4.4.2.9.c: The Sine Function: sin: Atomic
        4.4.2.7.c: The Natural Logarithm Function: log: Atomic
       4.4.2.6.c: The Exponential Function: exp: Atomic
       4.4.2.5.c: The Hyperbolic Cosine Function: cosh: Atomic
       4.4.2.4.c: The Cosine Function: cos: Atomic
       4.4.2.3.c: Inverse Tangent Function: atan: Atomic
       4.4.2.2.c: Inverse Sine Function: asin: Atomic
        4.4.2.1.c: Inverse Cosine Function: acos: Atomic
atomic_base 4.4.7.2.1.c: Atomic Function Constructor: atomic_base
atomic_sparsity 4.4.7.2.2.b: Set Atomic Function Options: atomic_sparsity
atomic_user 4.4.7.2.1.b: Atomic Function Constructor: atomic_user
au 7.2.9.1.c: Defines a User Atomic Operation that Computes Square Root: au
automatic 10.2.2: Example and Test Linking CppAD to Languages Other than C++
          : cppad-20171217: A Package for Differentiation of C++ Algorithms
autotools 12.8.13: Autotools Unix Test and Installation
available 12.8.7.h: Memory Leak Detection: available
          12.8.6.10: Return A Raw Array to The Available Memory for a Thread
          12.8.6.8: Amount of Memory Available for Quick Use by a Thread
          12.8.6.6: Free Memory Currently Available for Quick Use by a Thread
          8.23.11: Amount of Memory Available for Quick Use by a Thread
          8.23.8: Free Memory Currently Available for Quick Use by a Thread
          8.23.7: Return Memory to thread_alloc
ax 12.8.11.m.a: User Defined Atomic AD Functions: afun.ax
   4.4.7.2.3.e: Using AD Version of Atomic Function: ax
   4.4.7.1.i: Checkpointing Functions: ax
   4.4.5.g: Discrete AD Functions: ax
ay 12.8.11.m.b: User Defined Atomic AD Functions: afun.ay
   7.2.9.1.d: Defines a User Atomic Operation that Computes Square Root: ay
   4.4.7.2.3.f: Using AD Version of Atomic Function: ay
   4.4.7.1.j: Checkpointing Functions: ay
   4.4.5.h: Discrete AD Functions: ay
azmul 4.7.9.6.i: Enable use of AD<Base> where Base is std::complex<double>: azmul
      4.7.9.5.f: Enable use of AD<Base> where Base is double: azmul
      4.7.9.4.f: Enable use of AD<Base> where Base is float: azmul
      4.7.9.3.i: Enable use of AD<Base> where Base is Adolc's adouble Type: azmul
      4.7.9.1.m: Example AD<Base> Where Base Constructor Allocates Memory: azmul
      4.7.i: AD<Base> Requirements for a CppAD Base Type: Absolute Zero, azmul
B
Base 4.2: AD Assignment Operator
     4.1: AD Constructors
BenderQuad 12.10.1.1: BenderQuad: Example and Test
           12.10.1: Computing Jacobian and Hessian of Bender's Reduced Objective
b_in 5.8.11.m: Non-Smooth Optimization Using Abs-normal Quadratic Approximations: b_in
     5.8.7.m: Non-Smooth Optimization Using Abs-normal Linear Approximations: b_in
background 10.2.10.a: Using Multiple Levels of AD: Background
base 12.8.12.e: zdouble: An AD Base Type With Absolute Zero: Base Type Requirements
     12.8.12: zdouble: An AD Base Type With Absolute Zero
     12.8.11.e.b: User Defined Atomic AD Functions: CPPAD_USER_ATOMIC.Base
     12.6.j: The CppAD Wish List: Base Requirements
     12.4.e: Glossary: Base Type
     12.4.d: Glossary: Base Function
     12.4.c: Glossary: AD Type Above Base
     12.4.b: Glossary: AD of Base
     4.7.9.6: Enable use of AD<Base> where Base is std::complex<double>
     4.7.9.5: Enable use of AD<Base> where Base is double
     4.7.9.4: Enable use of AD<Base> where Base is float
     4.7.9.3: Enable use of AD<Base> where Base is Adolc's adouble Type
     4.7.9.2: Using a User Defined AD Base Type: Example and Test
     4.7.9.1: Example AD<Base> Where Base Constructor Allocates Memory
     4.7.9: Example AD Base Types That are not AD<OtherBase>
     4.7.8: Base Type Requirements for Hash Coding Values
     4.7.7.a: Extending to_string To Another Floating Point Type: Base Requirement
     4.7.6: Base Type Requirements for Numeric Limits
     4.7.5: Base Type Requirements for Standard Math Functions
     4.7.4: Base Type Requirements for Ordered Comparisons
     4.7.3: Base Type Requirements for Identically Equal Comparisons
     4.7.2: Base Type Requirements for Conditional Expressions
     4.7.1: Required Base Class Member Functions
     4.7.d: AD<Base> Requirements for a CppAD Base Type: Standard Base Types
     4.7: AD<Base> Requirements for a CppAD Base Type
     4.5.2.1: Compare AD with Base Objects: Example and Test
     4.5.2: Compare AD and Base Objects for Nearly Equal
     4.4.7.2.1.c.b: Atomic Function Constructor: atomic_base.Base
     4.4.7.1.f: Checkpointing Functions: Base
     4.4.6: Numeric Limits For an AD and Base Types
     4.4.5.c: Discrete AD Functions: Base
     4.4.3.3.c: Absolute Zero Multiplication: Base
     4.4.2.8: The Base 10 Logarithm Function: log10
     4.4.2.c.a: The Unary Standard Math Functions: Possible Types.Base
     4.4.1.4.d: AD Compound Assignment Operators: Base
     4.4.1.3.d: AD Binary Arithmetic Operators: Base
     4.4.1.2.c: AD Unary Minus Operator: Base
     4.3.3: Convert An AD or Base Type to String
     4.3.1.1: Convert From AD to its Base Type: Example and Test
     4.3.1: Convert From an AD Type to its Base Type
     4.b: AD Objects: Base Type Requirements
base_adolc.hpp 10.2.13.f: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: base_adolc.hpp
basevector 12.10.2.e: Jacobian and Hessian of Optimal Values: BaseVector
           5.6.5.d: Compute Sparse Jacobians Using Subgraphs: BaseVector
           5.6.3.d: Computing Sparse Hessians: BaseVector
           5.6.1.d: Computing Sparse Jacobians: BaseVector
           5.4.4.d: Reverse Mode Using Subgraphs: BaseVector
bavector 12.10.1.k: Computing Jacobian and Hessian of Bender's Reduced Objective: BAvector
be 5.3.9.1: Number of Variables That Can be Skipped: Example and Test
   5.3.9: Number of Variables that Can be Skipped
before 4.3.6.e: Printing AD Values During Forward Mode: before
begin 12.8.11.5.1.c: Define Matrix Multiply as a User Atomic Operation: Begin Source
bender's 12.10.1: Computing Jacobian and Hessian of Bender's Reduced Objective
benderquad 12.6.r: The CppAD Wish List: BenderQuad
benderquad: 12.10.1.1: BenderQuad: Example and Test
between 5.3.7: Comparison Changes Between Taping and Zero Order Forward
bibliography 12.5: Bibliography
binary 12.3.2.b: The Theory of Reverse Mode: Binary Operators
       12.3.1.b: The Theory of Forward Mode: Binary Operators
       4.7.9.1.d: Example AD<Base> Where Base Constructor Allocates Memory: Binary Operator Macro
       4.7.1.g: Required Base Class Member Functions: Binary Operators
       4.5.3.k: AD Boolean Functions: Create Binary
       4.5.1.1: AD Binary Comparison Operators: Example and Test
       4.5.1: AD Binary Comparison Operators
       4.4.3: The Binary Math Functions
       4.4.1.3.4: AD Binary Division: Example and Test
       4.4.1.3.3: AD Binary Multiplication: Example and Test
       4.4.1.3.2: AD Binary Subtraction: Example and Test
       4.4.1.3.1: AD Binary Addition: Example and Test
       4.4.1.3: AD Binary Arithmetic Operators
binary_name 4.5.3.h: AD Boolean Functions: binary_name
bit_per_unit 8.22.m.b: The CppAD::vector Template Class: vectorBool.bit_per_unit
black 12.8.10.2.1.f: An ODE Inverse Problem Example: Black Box Method
bool 8.22.2: CppAD::vectorBool Class: Example and Test
     4.7.1.h: Required Base Class Member Functions: Bool Operators
     4.5.3.1: AD Boolean Functions: Example and Test
     4.5.3: AD Boolean Functions
     4.5: Bool Valued Operations and Functions with AD Arguments
bool_sparsity_enum 4.4.7.2.12: Atomic Euclidean Norm Squared: Example and Test
                   4.4.7.2.2.b.b: Set Atomic Function Options: atomic_sparsity.bool_sparsity_enum
boolean 12.4.j.b: Glossary: Sparsity Pattern.Boolean Vector
        4.7.9.1.e: Example AD<Base> Where Base Constructor Allocates Memory: Boolean Operator Macro
        4.5.3.1: AD Boolean Functions: Example and Test
        4.5.3: AD Boolean Functions
boolsparsity 11.1.g.a: Running the Speed Test Program: Sparsity Options.boolsparsity
boolvector 5.6.5.f: Compute Sparse Jacobians Using Subgraphs: BoolVector
           5.5.11.e: Subgraph Dependency Sparsity Patterns: BoolVector
           5.5.7.d: Forward Mode Hessian Sparsity Patterns: BoolVector
           5.5.5.d: Reverse Mode Hessian Sparsity Patterns: BoolVector
           5.4.4.e: Reverse Mode Using Subgraphs: BoolVector
boost 8.23.1: Fast Multi-Threading Memory Allocator: Example and Test
      7.2.11.2: Boost Thread Implementation of a Team of AD Threads
      7.2.5: A Simple Boost Threading AD: Example and Test
      7.2.2: A Simple Boost Thread Example and Test
      2.2.7.d: Choosing the CppAD Test Vector Template Class: boost
boost::numeric::ublas::vector 12.8.9.g: Choosing The Vector Testing Template Class: boost::numeric::ublas::vector
                              10.5.f: Using The CppAD Test Vector Template Class: boost::numeric::ublas::vector
boost_dir 12.8.13.o: Autotools Unix Test and Installation: boost_dir
both 12.8.10.3: Speed Test for Both Simple and Fast Representations
     12.8.10.2.5: Correctness Check for Both Simple and Fast Representations
     4.4.7.2.9.1.j: Atomic Reverse Hessian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
     4.4.7.2.8.1.j: Atomic Forward Hessian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
     4.4.7.2.7.1.h: Atomic Reverse Jacobian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
     4.4.7.2.6.1.h: Atomic Forward Jacobian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
bound 5.8.10.p: abs_normal: Minimize a Linear Abs-normal Approximation: bound
      5.8.6.o: abs_normal: Minimize a Linear Abs-normal Approximation: bound
box 12.8.10.2.1.f: An ODE Inverse Problem Example: Black Box Method
    5.8.9: abs_normal: Solve a Quadratic Program With Box Constraints
    5.8.5: abs_normal: Solve a Linear Program With Box Constraints
bthread 7.2.11.2: Boost Thread Implementation of a Team of AD Threads
bug 7.2.11.3.a: Pthread Implementation of a Team of AD Threads: Bug in Cygwin
bugs 12.1.b: Frequently Asked Questions and Answers: Bugs
build 12.8.13.c: Autotools Unix Test and Installation: Build Directory
      7.2.b: Run Multi-Threading Examples and Speed Tests: build
      2.2.6.1: Download and Install Sacado in Build Directory
      2.2.5.1: Download and Install Ipopt in Build Directory
      2.2.4.1: Download and Install Fadbad in Build Directory
      2.2.3.1: Download and Install Eigen in Build Directory
      2.2.2.5: Download and Install ColPack in Build Directory
      2.2.1.1: Download and Install Adolc in Build Directory
      2.2.b.a: Using CMake to Configure CppAD: CMake Command.Build Directory
building 2.1.k: Download The CppAD Source Code: Building Documentation
bvector 9.d: Use Ipopt to Solve a Nonlinear Programming Problem: Bvector
C
C 10.2.7: Interfacing to C: Example and Test
  10.2.2: Example and Test Linking CppAD to Languages Other than C++
     compare speed with C++ 12.9: Compare Speed of C and C++
C++ : cppad-20171217: A Package for Differentiation of C++ Algorithms
     compare speed with C 12.9: Compare Speed of C and C++
CheckNumericType 8.8: Check NumericType Class Concept
CheckSimpleVector 8.10: Check Simple Vector Concept
CompareChange 12.8.3: Comparison Changes During Zero Order Forward Mode
CondExp 4.7.2: Base Type Requirements for Conditional Expressions
CPPAD_ 6: CppAD API Preprocessor Symbols
CPPAD_ASSERT_KNOWN 8.1.2: CppAD Assertions During Execution
CPPAD_ASSERT_UNKNOWN 8.1.2: CppAD Assertions During Execution
CPPAD_BOOL_BINARY 4.5.3: AD Boolean Functions
CPPAD_BOOL_UNARY 4.5.3: AD Boolean Functions
CPPAD_COND_EXP_REL 4.7.2: Base Type Requirements for Conditional Expressions
CPPAD_DISCRETE_FUNCTION 4.4.5: Discrete AD Functions
CPPAD_TEST_VECTOR 12.8.9: Choosing The Vector Testing Template Class
CPPAD_TESTVECTOR 10.5: Using The CppAD Test Vector Template Class
CPPAD_TRACK_COUNT 12.8.5: Routines That Track Use of New and Delete
CPPAD_TRACK_DEL_VEC 12.8.5: Routines That Track Use of New and Delete
CPPAD_TRACK_EXTEND 12.8.5: Routines That Track Use of New and Delete
CPPAD_TRACK_NEW_VEC 12.8.5: Routines That Track Use of New and Delete
CppAD 8.22.2: CppAD::vectorBool Class: Example and Test
      8.22.1: CppAD::vector Template Class: Example and Test
      8.22: The CppAD::vector Template Class
      : cppad-20171217: A Package for Differentiation of C++ Algorithms
CppADTrackDelVec 12.8.5: Routines That Track Use of New and Delete
CppADTrackExtend 12.8.5: Routines That Track Use of New and Delete
CppADTrackNewVec 12.8.5: Routines That Track Use of New and Delete
c 12.9.8: Main Program For Comparing C and C++ Speed
  12.9.1.h: Determinant of a Minor: c
  12.9: Compare Speed of C and C++
  12.8.3.e: Comparison Changes During Zero Order Forward Mode: c
  11.2.2.i: Determinant of a Minor: c
  8.27.j.c: Row and Column Index Sparsity Patterns: set.c
  5.8.9.k: abs_normal: Solve a Quadratic Program With Box Constraints: C
  5.8.9.j: abs_normal: Solve a Quadratic Program With Box Constraints: c
  5.8.8.i: Solve a Quadratic Program Using Interior Point Method: C
  5.8.8.h: Solve a Quadratic Program Using Interior Point Method: c
  5.8.5.i: abs_normal: Solve a Linear Program With Box Constraints: c
  5.8.4.i: abs_normal: Solve a Linear Program Using Simplex Method: c
  5.3.8.d: Controlling Taylor Coefficients Memory Allocation: c
c++ 12.9.8: Main Program For Comparing C and C++ Speed
    12.9: Compare Speed of C and C++
    12.5.b: Bibliography: The C++ Programming Language
    10.2.2: Example and Test Linking CppAD to Languages Other than C++
    8.b: Some General Purpose Utilities: C++ Concepts
    : cppad-20171217: A Package for Differentiation of C++ Algorithms
c++11 2.2.m.a: Using CMake to Configure CppAD: cppad_cxx_flags.C++11
c: 10.2.7: Interfacing to C: Example and Test
calculating 5.6: Calculating Sparse Derivatives
            5.5: Calculating Sparsity Patterns
calculation 12.10.3: LU Factorization of A Square Matrix and Stability Calculation
            7.2.9.7: Timing Test for Multi-Threaded User Atomic Calculation
            7.2.9.6: Run Multi-Threaded User Atomic Calculation
calculations 12.3: The Theory of Derivative Calculations
             7.1: Enable AD Calculations During Parallel Mode
             2.2.2: Including the ColPack Sparsity Calculations
call 12.8.11.5.1.d: Define Matrix Multiply as a User Atomic Operation: Extra Call Information
     8.23.13: Deallocate An Array and Call Destructor for its Elements
     8.23.12: Allocate An Array and Call Default Constructor for its Elements
     8.1.c: Replacing the CppAD Error Handler: Call
callback 12.8.11.5.1.i: Define Matrix Multiply as a User Atomic Operation: CppAD User Atomic Callback Functions
         12.8.11.b.b: User Defined Atomic AD Functions: Syntax Function.Callback Routines
         4.4.7.2.4: Atomic Forward Mode
can 5.3.9.1: Number of Variables That Can be Skipped: Example and Test
    5.3.9: Number of Variables that Can be Skipped
cap_bytes 12.8.6.4.e: Get At Least A Specified Amount of Memory: cap_bytes
          8.23.6.d: Get At Least A Specified Amount of Memory: cap_bytes
capacity 8.22.d: The CppAD::vector Template Class: capacity
capacity_order 5.3.8.1: Controlling Taylor Coefficient Memory Allocation: Example and Test
               5.3.8: Controlling Taylor Coefficients Memory Allocation
               5.3.6.g: Number Taylor Coefficient Orders Currently Stored: capacity_order
capacity_taylor 12.8.2.j: ADFun Object Deprecated Member Functions: capacity_taylor
case 12.8.11.2.c: Using AD to Compute Atomic Function Derivatives: Simple Case
     5.4.3.2: Reverse Mode General Case (Checkpointing): Example and Test
     5.3.3.j: Second Order Forward Mode: Derivative Values: Special Case
     5.3.2.h: First Order Forward Mode: Derivative Values: Special Case
     5.3.1.i: Zero Order Forward Mode: Function Values: Special Case
     4.7.3.a.a: Base Type Requirements for Identically Equal Comparisons: EqualOpSeq.The Simple Case
     4.4.7.2.18.1.f.b: AD Theory for Cholesky Factorization: Reverse Mode.Case k > 0
     4.4.7.2.18.1.f.a: AD Theory for Cholesky Factorization: Reverse Mode.Case k = 0
     4.4.7.2.f: User Defined Atomic AD Functions: General Case
cases 12.3.1.c.d: The Theory of Forward Mode: Standard Math Functions.Special Cases
      12.3.1.c.c: The Theory of Forward Mode: Standard Math Functions.Cases that Apply Recursion Above
      4.7.3.a.b: Base Type Requirements for Identically Equal Comparisons: EqualOpSeq.More Complicated Cases
central 10.2.7: Interfacing to C: Example and Test
certain 8.25: Convert Certain Types to a String
change 10.2.10.2: Computing a Jacobian With Constants that Change
       5.3.7.1: CompareChange and Re-Tape: Example and Test
changes 12.8.3: Comparison Changes During Zero Order Forward Mode
        12.8.b: CppAD Deprecated API Features: Name Changes
        12.7.15: Changes and Additions to CppAD During 2003
        12.7.14: Changes and Additions to CppAD During 2004
        12.7.13: Changes and Additions to CppAD During 2005
        12.7.12: Changes and Additions to CppAD During 2006
        12.7.11: Changes and Additions to CppAD During 2007
        12.7.10: Changes and Additions to CppAD During 2008
        12.7.9: Changes and Additions to CppAD During 2009
        12.7.8: Changes and Additions to CppAD During 2010
        12.7.7: Changes and Additions to CppAD During 2011
        12.7.6: CppAD Changes and Additions During 2012
        12.7.5: CppAD Changes and Additions During 2013
        12.7.4: CppAD Changes and Additions During 2014
        12.7.3: CppAD Changes and Additions During 2015
        12.7.2: Changes and Additions to CppAD During 2016
        12.7.1.a: Changes and Additions to CppAD During 2017: API Changes
        12.7.1: Changes and Additions to CppAD During 2017
        12.7: Changes and Additions to CppAD
        5.3.7: Comparison Changes Between Taping and Zero Order Forward
check 12.8.10.2.5: Correctness Check for Both Simple and Fast Representations
      12.8.7: Memory Leak Detection
      12.8.6.11: Check If A Memory Allocation is Efficient for Another Use
      11.2.5: Check Gradient of Determinant of 3 by 3 matrix
      11.2.4: Check Determinant of 3 by 3 matrix
      8.22.e.a: The CppAD::vector Template Class: Assignment.Check Size
      8.10.1: The CheckSimpleVector Function: Example and Test
      8.10: Check Simple Vector Concept
      8.8.1: The CheckNumericType Function: Example and Test
      8.8: Check NumericType Class Concept
      5.10: Check an ADFun Object For Nan Results
      5.9.1: ADFun Check and Re-Tape: Example and Test
      5.9: Check an ADFun Sequence of Operations
      4.5.5: Check if Two Values are Identically Equal
      2.3.c: Checking the CppAD Examples and Tests: Subsets of make check
      2.3.b: Checking the CppAD Examples and Tests: Check All
      2.2.c: Using CMake to Configure CppAD: make check
      2.a.c: CppAD Download, Test, and Install Instructions: Instructions.Step 3: Check
check_finite 12.6.c: The CppAD Wish List: check_finite
checking 5.10.1: ADFun Checking For Nan: Example and Test
         5.7.i: Optimize an ADFun Object Tape: Checking Optimization
         2.3: Checking the CppAD Examples and Tests
checknumerictype 8.8.1: The CheckNumericType Function: Example and Test
checkpoint 12.8.11.3: Using AD to Compute Atomic Function Derivatives
           12.8.11.2: Using AD to Compute Atomic Function Derivatives
           12.6.h: The CppAD Wish List: checkpoint
checkpointing 4.4.7.1.4: Checkpointing an Extended ODE Solver: Example and Test
              4.4.7.1.3: Checkpointing an ODE Solver: Example and Test
              4.4.7.1: Checkpointing Functions
checkpointing: 4.4.7.1.1: Simple Checkpointing: Example and Test
checksimplevector 8.10.1: The CheckSimpleVector Function: Example and Test
                  7.1.d: Enable AD Calculations During Parallel Mode: CheckSimpleVector
choice 10.5.c: Using The CppAD Test Vector Template Class: Choice
cholesky 4.4.7.2.18.2: Atomic Eigen Cholesky Factorization Class
         4.4.7.2.18.1.b.a: AD Theory for Cholesky Factorization: Notation.Cholesky Factor
         4.4.7.2.18.1: AD Theory for Cholesky Factorization
         4.4.7.2.18: Atomic Eigen Cholesky Factorization: Example and Test
choosing 12.8.9: Choosing The Vector Testing Template Class
         2.2.7: Choosing the CppAD Test Vector Template Class
class 12.8.9: Choosing The Vector Testing Template Class
      10.5: Using The CppAD Test Vector Template Class
      10.d: Examples: The CppAD Test Vector Template Class
      8.22: The CppAD::vector Template Class
      8.9.a: Definition of a Simple Vector: Template Class Requirements
      8.8: Check NumericType Class Concept
      8.d.b: Some General Purpose Utilities: Miscellaneous.The CppAD Vector Template Class
      4.7.9.1.f: Example AD<Base> Where Base Constructor Allocates Memory: Class Definition
      4.7.1: Required Base Class Member Functions
      4.4.7.2.19.1.o: Matrix Multiply as an Atomic Operation: End Class Definition
      4.4.7.2.19.1.c: Matrix Multiply as an Atomic Operation: Start Class Definition
      4.4.7.2.19.b: User Atomic Matrix Multiply: Example and Test: Class Definition
      4.4.7.2.18.2.e: Atomic Eigen Cholesky Factorization Class: End Class Definition
      4.4.7.2.18.2.b: Atomic Eigen Cholesky Factorization Class: Start Class Definition
      4.4.7.2.18.2: Atomic Eigen Cholesky Factorization Class
      4.4.7.2.17.1.g: Atomic Eigen Matrix Inversion Class: End Class Definition
      4.4.7.2.17.1.d: Atomic Eigen Matrix Inversion Class: Start Class Definition
      4.4.7.2.17.1: Atomic Eigen Matrix Inversion Class
      4.4.7.2.17.b: Atomic Eigen Matrix Inverse: Example and Test: Class Definition
      4.4.7.2.16.1.h: Atomic Eigen Matrix Multiply Class: End Class Definition
      4.4.7.2.16.1.e: Atomic Eigen Matrix Multiply Class: Start Class Definition
      4.4.7.2.16.1: Atomic Eigen Matrix Multiply Class
      4.4.7.2.16.b: Atomic Eigen Matrix Multiply: Example and Test: Class Definition
      4.4.7.2.15.j: Tan and Tanh as User Atomic Operations: Example and Test: End Class Definition
      4.4.7.2.15.c: Tan and Tanh as User Atomic Operations: Example and Test: Start Class Definition
      4.4.7.2.14.j: Atomic Sparsity with Set Patterns: Example and Test: End Class Definition
      4.4.7.2.14.c: Atomic Sparsity with Set Patterns: Example and Test: Start Class Definition
      4.4.7.2.13.j: Reciprocal as an Atomic Operation: Example and Test: End Class Definition
      4.4.7.2.13.c: Reciprocal as an Atomic Operation: Example and Test: Start Class Definition
      4.4.7.2.12.j: Atomic Euclidean Norm Squared: Example and Test: End Class Definition
      4.4.7.2.12.c: Atomic Euclidean Norm Squared: Example and Test: Start Class Definition
      4.4.7.2.11.e: Getting Started with Atomic Operations: Example and Test: End Class Definition
      4.4.7.2.11.b: Getting Started with Atomic Operations: Example and Test: Start Class Definition
      4.4.7.2.9.1.c: Atomic Reverse Hessian Sparsity: Example and Test: Start Class Definition
      4.4.7.2.8.1.c: Atomic Forward Hessian Sparsity: Example and Test: Start Class Definition
      4.4.7.2.7.1.c: Atomic Reverse Jacobian Sparsity: Example and Test: Start Class Definition
      4.4.7.2.6.1.c: Atomic Forward Jacobian Sparsity: Example and Test: Start Class Definition
      4.4.7.2.5.1.c: Atomic Reverse: Example and Test: Start Class Definition
      4.4.7.2.4.1.c: Atomic Forward: Example and Test: Start Class Definition
      2.2.7: Choosing the CppAD Test Vector Template Class
class: 8.22.2: CppAD::vectorBool Class: Example and Test
       8.22.1: CppAD::vector Template Class: Example and Test
       8.9.1: Simple Vector Template Class: Example and Test
clear 12.8.11.s: User Defined Atomic AD Functions: clear
      8.22.k: The CppAD::vector Template Class: clear
      4.4.7.2.10: Free Static Variables
      4.4.7.1.q: Checkpointing Functions: clear
cmake 2.2.b: Using CMake to Configure CppAD: CMake Command
      2.2.a: Using CMake to Configure CppAD: The CMake Program
      2.2: Using CMake to Configure CppAD
      2.a.b: CppAD Download, Test, and Install Instructions: Instructions.Step 2: Cmake
cmake_install_datadir 2.2.j: Using CMake to Configure CppAD: cmake_install_datadir
cmake_install_docdir 2.2.k: Using CMake to Configure CppAD: cmake_install_docdir
cmake_install_includedirs 2.2.h: Using CMake to Configure CppAD: cmake_install_includedirs
cmake_install_libdirs 2.2.i: Using CMake to Configure CppAD: cmake_install_libdirs
cmake_verbose_makefile 2.2.d: Using CMake to Configure CppAD: cmake_verbose_makefile
code 12.9.8.a: Main Program For Comparing C and C++ Speed: Source Code
     12.9.7.e: Determine Amount of Time to Execute det_by_minor: Source Code
     12.9.6.d: Returns Elapsed Number of Seconds: Source Code
     12.9.5.d: Repeat det_by_minor Routine A Specified Number of Times: Source Code
     12.9.4.c: Correctness Test of det_by_minor Routine: Source Code
     12.9.3.f: Simulate a [0,1] Uniform Random Variate: Source Code
     12.9.2.e: Compute Determinant using Expansion by Minors: Source Code
     12.9.1.j: Determinant of a Minor: Source Code
     12.8.10.2.1.1: ODE Inverse Problem Definitions: Source Code
     11.2.10.h: Simulate a [0,1] Uniform Random Variate: Source Code
     11.2.9.m: Evaluate a Function That Has a Sparse Hessian: Source Code
     11.2.8.n: Evaluate a Function That Has a Sparse Jacobian: Source Code
     11.2.7.i: Evaluate a Function Defined in Terms of an ODE: Source Code
     11.2.6.j: Sum Elements of a Matrix Times Itself: Source Code
     11.2.5.h: Check Gradient of Determinant of 3 by 3 matrix: Source Code
     11.2.4.h: Check Determinant of 3 by 3 matrix: Source Code
     11.2.3.i: Determinant Using Expansion by Minors: Source Code
     11.2.2.m: Determinant of a Minor: Source Code
     11.2.1.i: Determinant Using Expansion by Lu Factorization: Source Code
     11.2.d: Speed Testing Utilities: Source Code
     10.2.4.1: Source Code for eigen_plugin.hpp
     9.3: ODE Inverse Problem Definitions: Source Code
     8.21.x: An Error Controller for Gear's Ode Solvers: Source Code
     8.20.m: An Arbitrary Order Gear Method: Source Code
     8.19.w: An Error Controller for ODE Solvers: Source Code
     8.18.o: A 3rd and 4th Order Rosenbrock ODE Solver: Source Code
     8.17.p: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: Source Code
     8.16.o: Multi-dimensional Romberg Integration: Source Code
      8.15.m: One Dimensional Romberg Integration: Source Code
     7.2.7.c: Using a Team of AD Threads: Example and Test: Source Code
     7.2.6.b: A Simple pthread AD: Example and Test: Source Code
     7.2.5.b: A Simple Boost Threading AD: Example and Test: Source Code
     7.2.4.b: A Simple OpenMP AD: Example and Test: Source Code
     7.2.3.b: A Simple Parallel Pthread Example and Test: Source Code
     7.2.2.b: A Simple Boost Thread Example and Test: Source Code
     7.2.1.b: A Simple OpenMP Example and Test: Source Code
     5.8.11.2: min_nso_quad Source Code
     5.8.10.2: abs_min_quad Source Code
     5.8.9.2: qp_box Source Code
     5.8.8.2: qp_interior Source Code
     5.8.7.2: min_nso_linear Source Code
     5.8.6.2: abs_min_linear Source Code
     5.8.5.2: lp_box Source Code
     5.8.4.2: simplex_method Source Code
     5.8.3.2: abs_eval Source Code
     4.7.8.e: Base Type Requirements for Hash Coding Values: code
     4.3.6.1.b: Printing During Forward Mode: Example and Test: Source Code
     3.2.3.b.d: exp_eps: Operation Sequence and Zero Order Forward Sweep: Operation Sequence.Code
     3.1.3.c.b: exp_2: Operation Sequence and Zero Order Forward Mode: Operation Sequence.Code
     2.1.g: Download The CppAD Source Code: Source Code Control
     2.1: Download The CppAD Source Code
coding 4.7.8: Base Type Requirements for Hash Coding Values
coefficient 12.4.l: Glossary: Taylor Coefficient
            5.3.8.1: Controlling Taylor Coefficient Memory Allocation: Example and Test
            5.3.6: Number Taylor Coefficient Orders Currently Stored
            4.4.7.2.18.1.b.b: AD Theory for Cholesky Factorization: Notation.Taylor Coefficient
coefficients 12.3.1.9.b: Error Function Forward Taylor Polynomial Theory: Taylor Coefficients Recursion
             12.3.1.8.b: Tangent and Hyperbolic Tangent Forward Taylor Polynomial Theory: Taylor Coefficients Recursion
             12.3.1.7.b: Inverse Cosine and Hyperbolic Cosine Forward Mode Theory: Taylor Coefficients Recursion
             12.3.1.6.b: Inverse Sine and Hyperbolic Sine Forward Mode Theory: Taylor Coefficients Recursion
             12.3.1.5.b: Inverse Tangent and Hyperbolic Tangent Forward Mode Theory: Taylor Coefficients Recursion
             12.3.1.2.b: Logarithm Function Forward Mode Theory: Taylor Coefficients Recursion
             12.3.1.1.b: Exponential Function Forward Mode Theory: Taylor Coefficients Recursion
             12.3.1.c.b: The Theory of Forward Mode: Standard Math Functions.Taylor Coefficients Recursion Formula
             5.4.3.1.a: Third Order Reverse Mode: Example and Test: Taylor Coefficients
             5.3.8: Controlling Taylor Coefficients Memory Allocation
             5.1.2.i.a: Construct an ADFun Object and Stop Recording: Assignment Operator.Taylor Coefficients
coin 2.1.f.a: Download The CppAD Source Code: Compressed Archives.Coin
col 11.2.9.i: Evaluate a Function That Has a Sparse Hessian: col
    11.2.8.j: Evaluate a Function That Has a Sparse Jacobian: col
    11.1.7.g: Speed Testing Sparse Jacobian: col
    11.1.6.g: Speed Testing Sparse Hessian: col
    8.28.m: Sparse Matrix Row, Column, Value Representation: col
    8.27.l: Row and Column Index Sparsity Patterns: col
    5.6.4.g: Sparse Hessian: row, col
    5.6.2.f: Sparse Jacobian: row, col
    5.4.4.j: Reverse Mode Using Subgraphs: col
col_major 8.28.p: Sparse Matrix Row, Column, Value Representation: col_major
          8.27.n: Row and Column Index Sparsity Patterns: col_major
color_method 5.6.4.i.a: Sparse Hessian: work.color_method
             5.6.2.h.a: Sparse Jacobian: work.color_method
coloring 5.6.3.j: Computing Sparse Hessians: coloring
         5.6.1.l: Computing Sparse Jacobians: coloring
colpack 11.1.g.d: Running the Speed Test Program: Sparsity Options.colpack
        5.6.1.l.b: Computing Sparse Jacobians: coloring.colpack
        2.2.2.5: Download and Install ColPack in Build Directory
        2.2.2: Including the ColPack Sparsity Calculations
colpack.general 5.6.3.j.d: Computing Sparse Hessians: coloring.colpack.general
colpack.star 5.6.4.i.b: Sparse Hessian: work.colpack.star Deprecated 2017-06-01
             5.6.3.j.e: Computing Sparse Hessians: coloring.colpack.star Deprecated 2017-06-01
colpack.symmetric 5.6.3.j.c: Computing Sparse Hessians: coloring.colpack.symmetric
colpack: 2.2.2.4: ColPack: Sparse Hessian Example and Test
         2.2.2.3: ColPack: Sparse Hessian Example and Test
         2.2.2.2: ColPack: Sparse Jacobian Example and Test
         2.2.2.1: ColPack: Sparse Jacobian Example and Test
colpack_prefix 2.2.2.b: Including the ColPack Sparsity Calculations: colpack_prefix
column 12.4.j.a: Glossary: Sparsity Pattern.Row and Column Index Vectors
       8.28: Sparse Matrix Row, Column, Value Representation
       8.27: Row and Column Index Sparsity Patterns
       5.6.4.f.c: Sparse Hessian: p.Column Subset
       5.5.10: Preferred Sparsity Patterns: Row and Column Indices: Example and Test
command 2.2.b: Using CMake to Configure CppAD: CMake Command
common 7.2.10.1: Common Variables Used by Multi-Threaded Newton Method
       7.2.9.2: Multi-Threaded User Atomic Common Information
       7.2.8.1: Common Variables Used by Multi-threading Sum of 1/i
compare 12.9: Compare Speed of C and C++
        5.3.7.1: CompareChange and Re-Tape: Example and Test
        4.5.2.1: Compare AD with Base Objects: Example and Test
        4.5.2: Compare AD and Base Objects for Nearly Equal
        4.5.1.1: AD Binary Comparison Operators: Example and Test
        4.5.1: AD Binary Comparison Operators
     speed C and C++ 12.9: Compare Speed of C and C++
compare_change 5.3.7: Comparison Changes Between Taping and Zero Order Forward
comparechange 12.1.c: Frequently Asked Questions and Answers: CompareChange
              5.3.7.1: CompareChange and Re-Tape: Example and Test
compareop 4.7.2.b: Base Type Requirements for Conditional Expressions: CompareOp
comparing 12.9.8: Main Program For Comparing C and C++ Speed
comparison 12.8.12.c.b: zdouble: An AD Base Type With Absolute Zero: Syntax.Comparison Operators
           12.8.3: Comparison Changes During Zero Order Forward Mode
           5.7.3: Example Optimization and Comparison Operators
           5.3.7: Comparison Changes Between Taping and Zero Order Forward
           4.5.1.1: AD Binary Comparison Operators: Example and Test
           4.5.1: AD Binary Comparison Operators
comparisons 4.7.4: Base Type Requirements for Ordered Comparisons
            4.7.3: Base Type Requirements for Identically Equal Comparisons
            3.2.3.d: exp_eps: Operation Sequence and Zero Order Forward Sweep: Comparisons
compilation 12.6.i: The CppAD Wish List: Compilation Speed
compile 2.2: Using CMake to Configure CppAD
complex 12.1.d: Frequently Asked Questions and Answers: Complex Types
        8.14.1.1: LuSolve With Complex Arguments: Example and Test
        4.7.9.6.1: Complex Polynomial: Example and Test
        4.4.2.14.d: AD Absolute Value Functions: abs, fabs: Complex Types
        4.3.2.d.b: Convert From AD to Integer: x.Complex Types
complicated 4.7.3.a.b: Base Type Requirements for Identically Equal Comparisons: EqualOpSeq.More Complicated Cases
compound 4.7.9.1.c: Example AD<Base> Where Base Constructor Allocates Memory: Compound Assignment Macro
         4.4.1.4.4: AD Compound Assignment Division: Example and Test
         4.4.1.4.3: AD Compound Assignment Multiplication: Example and Test
         4.4.1.4.2: AD Compound Assignment Subtraction: Example and Test
         4.4.1.4.1: AD Compound Assignment Addition: Example and Test
         4.4.1.4: AD Compound Assignment Operators
         4.4.1: AD Arithmetic Operators and Compound Assignments
compressed 2.1.f: Download The CppAD Source Code: Compressed Archives
computation 5.5.9.c: Computing Dependency: Example and Test: Computation
compute 12.9.2: Compute Determinant using Expansion by Minors
        12.8.11.3: Using AD to Compute Atomic Function Derivatives
        12.8.11.2: Using AD to Compute Atomic Function Derivatives
        10.2.4.3: Using Eigen To Compute Determinant: Example and Test
        10.1: Getting Started Using CppAD to Compute Derivatives
        8.14.1: Compute Determinant and Solve Linear Equations
        8.14: Compute Determinants and Solve Equations by LU Factorization
        5.6.5: Compute Sparse Jacobians Using Subgraphs
computes 7.2.9.1: Defines a User Atomic Operation that Computes Square Root
computing 12.10.1: Computing Jacobian and Hessian of Bender's Reduced Objective
          10.2.10.2: Computing a Jacobian With Constants that Change
          5.6.5.1: Computing Sparse Jacobian Using Reverse Mode: Example and Test
          5.6.4.2: Computing Sparse Hessian for a Subset of Variables
          5.6.3.1: Computing Sparse Hessian: Example and Test
          5.6.3: Computing Sparse Hessians
          5.6.1.2: Computing Sparse Jacobian Using Reverse Mode: Example and Test
          5.6.1.1: Computing Sparse Jacobian Using Forward Mode: Example and Test
          5.6.1: Computing Sparse Jacobians
          5.5.9: Computing Dependency: Example and Test
          5.4.4.1: Computing Reverse Mode on Subgraphs: Example and Test
concept 8.10: Check Simple Vector Concept
        8.8: Check NumericType Class Concept
concepts 8.b: Some General Purpose Utilities: C++ Concepts
cond_exp_1 12.7.3.at.a: CppAD Changes and Additions During 2015: 05-26.cond_exp_1
cond_exp_2 12.7.3.at.b: CppAD Changes and Additions During 2015: 05-26.cond_exp_2
condexpop 4.7.9.6.c: Enable use of AD<Base> where Base is std::complex<double>: CondExpOp
          4.7.9.5.a: Enable use of AD<Base> where Base is double: CondExpOp
          4.7.9.4.a: Enable use of AD<Base> where Base is float: CondExpOp
          4.7.9.3.d: Enable use of AD<Base> where Base is Adolc's adouble Type: CondExpOp
          4.7.9.1.g: Example AD<Base> Where Base Constructor Allocates Memory: CondExpOp
condexprel 4.7.9.6.d: Enable use of AD<Base> where Base is std::complex<double>: CondExpRel
           4.7.9.5.b: Enable use of AD<Base> where Base is double: CondExpRel
           4.7.9.4.b: Enable use of AD<Base> where Base is float: CondExpRel
           4.7.9.3.e: Enable use of AD<Base> where Base is Adolc's adouble Type: CondExpRel
           4.7.9.1.h: Example AD<Base> Where Base Constructor Allocates Memory: CondExpRel
           4.7.2.d: Base Type Requirements for Conditional Expressions: CondExpRel
condexptemplate 4.7.2.c: Base Type Requirements for Conditional Expressions: CondExpTemplate
condition 12.8.10.2.3.c: ODE Fitting Using Fast Representation: Initial Condition
          12.8.10.2.2.d: ODE Fitting Using Simple Representation: Initial Condition Constraint
          5.3.9.1: Number of Variables That Can be Skipped: Example and Test
conditional 5.7.6: Example Optimization and Nested Conditional Expressions
            5.7.5: Example Optimization and Conditional Expressions
            5.3.9.1: Number of Variables That Can be Skipped: Example and Test
            4.7.2: Base Type Requirements for Conditional Expressions
            4.4.4.1: Conditional Expressions: Example and Test
            4.4.4: AD Conditional Expressions
conditions 5.8.9.s: abs_normal: Solve a Quadratic Program With Box Constraints: KKT Conditions
           5.8.8.s: Solve a Quadratic Program Using Interior Point Method: KKT Conditions
configuration 12.8.10.1.b: Nonlinear Programming Using CppAD and Ipopt: Example and Test: Configuration Requirement
              10.2.13.h: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: Configuration Requirement
              9.1.b: Nonlinear Programming Using CppAD and Ipopt: Example and Test: Configuration Requirement
              4.7.9.3.1.c: Using Adolc with Multiple Levels of Taping: Example and Test: Configuration Requirement
              2.4.d: CppAD pkg-config Files: CppAD Configuration Files
configure 12.8.13.d: Autotools Unix Test and Installation: Configure
          2.2: Using CMake to Configure CppAD
conjugate 10.2.3: Differentiate Conjugate Gradient Algorithm: Example and Test
constants 10.2.10.2: Computing a Jacobian With Constants that Change
constraint 12.8.10.2.2.e: ODE Fitting Using Simple Representation: Trapezoidal Approximation Constraint
           12.8.10.2.2.d: ODE Fitting Using Simple Representation: Initial Condition Constraint
constraints 5.8.9: abs_normal: Solve a Quadratic Program With Box Constraints
            5.8.5: abs_normal: Solve a Linear Program With Box Constraints
construct 5.1.2: Construct an ADFun Object and Stop Recording
constructor 12.8.12.c.a: zdouble: An AD Base Type With Absolute Zero: Syntax.Constructor and Assignment
            11.2.3.c: Determinant Using Expansion by Minors: Constructor
            11.2.1.c: Determinant Using Expansion by Lu Factorization: Constructor
            8.23.12: Allocate An Array and Call Default Constructor for its Elements
            8.9.f: Definition of a Simple Vector: Element Constructor and Destructor
            8.9.e: Definition of a Simple Vector: Copy Constructor
            8.9.d: Definition of a Simple Vector: Sizing Constructor
            8.9.c: Definition of a Simple Vector: Default Constructor
            8.7.d: Definition of a Numeric Type: Copy Constructor
            8.7.c: Definition of a Numeric Type: Constructor From Integer
            8.7.b: Definition of a Numeric Type: Default Constructor
            8.1.b: Replacing the CppAD Error Handler: Constructor
            5.3.6.e: Number Taylor Coefficient Orders Currently Stored: Constructor
            5.1.2.k.b: Construct an ADFun Object and Stop Recording: Example.Default Constructor
            5.1.2.k.a: Construct an ADFun Object and Stop Recording: Example.Sequence Constructor
            5.1.2.h: Construct an ADFun Object and Stop Recording: Copy Constructor
            5.1.2.g: Construct an ADFun Object and Stop Recording: Sequence Constructor
            5.1.2.f: Construct an ADFun Object and Stop Recording: Default Constructor
            4.7.9.1: Example AD<Base> Where Base Constructor Allocates Memory
            4.7.1.d: Required Base Class Member Functions: Copy Constructor
            4.7.1.c: Required Base Class Member Functions: Double Constructor
            4.7.1.b: Required Base Class Member Functions: Default Constructor
            4.6.e: AD Vectors that Record Index Operations: Constructor
            4.4.7.2.19.1.d: Matrix Multiply as an Atomic Operation: Constructor
            4.4.7.2.19.c.a: User Atomic Matrix Multiply: Example and Test: Use Atomic Function.Constructor
            4.4.7.2.18.2.c.b: Atomic Eigen Cholesky Factorization Class: Public.Constructor
            4.4.7.2.18.c.a: Atomic Eigen Cholesky Factorization: Example and Test: Use Atomic Function.Constructor
            4.4.7.2.17.1.e.b: Atomic Eigen Matrix Inversion Class: Public.Constructor
            4.4.7.2.17.c.a: Atomic Eigen Matrix Inverse: Example and Test: Use Atomic Function.Constructor
            4.4.7.2.16.1.f.b: Atomic Eigen Matrix Multiply Class: Public.Constructor
            4.4.7.2.16.c.a: Atomic Eigen Matrix Multiply: Example and Test: Use Atomic Function.Constructor
            4.4.7.2.15.k.a: Tan and Tanh as User Atomic Operations: Example and Test: Use Atomic Function.Constructor
            4.4.7.2.15.d: Tan and Tanh as User Atomic Operations: Example and Test: Constructor
            4.4.7.2.14.k.a: Atomic Sparsity with Set Patterns: Example and Test: Test Atomic Function.Constructor
            4.4.7.2.14.d: Atomic Sparsity with Set Patterns: Example and Test: Constructor
            4.4.7.2.13.k.a: Reciprocal as an Atomic Operation: Example and Test: Use Atomic Function.Constructor
            4.4.7.2.13.d: Reciprocal as an Atomic Operation: Example and Test: Constructor
            4.4.7.2.12.k.a: Atomic Euclidean Norm Squared: Example and Test: Use Atomic Function.Constructor
            4.4.7.2.12.d: Atomic Euclidean Norm Squared: Example and Test: Constructor
            4.4.7.2.11.f.a: Getting Started with Atomic Operations: Example and Test: Use Atomic Function.Constructor
            4.4.7.2.11.c: Getting Started with Atomic Operations: Example and Test: Constructor
            4.4.7.2.9.1.d: Atomic Reverse Hessian Sparsity: Example and Test: Constructor
            4.4.7.2.8.1.d: Atomic Forward Hessian Sparsity: Example and Test: Constructor
            4.4.7.2.7.1.d: Atomic Reverse Jacobian Sparsity: Example and Test: Constructor
            4.4.7.2.6.1.d: Atomic Forward Jacobian Sparsity: Example and Test: Constructor
            4.4.7.2.5.1.d: Atomic Reverse: Example and Test: Constructor
            4.4.7.2.4.1.d: Atomic Forward: Example and Test: Constructor
            4.4.7.2.1.d.b: Atomic Function Constructor: Example.Use Constructor
            4.4.7.2.1.d.a: Atomic Function Constructor: Example.Define Constructor
            4.4.7.2.1: Atomic Function Constructor
            4.4.7.1.e: Checkpointing Functions: constructor
            4.1.1: AD Constructors: Example and Test
constructor: 5.2.2.2: Hessian of Lagrangian and ADFun Default Constructor: Example and Test
             5.1.1.1: Independent and ADFun Constructor: Example and Test
constructors 4.1: AD Constructors
constructors: 4.1.1: AD Constructors: Example and Test
control 8.23.9: Control When Thread Alloc Retains Memory For Future Use
        5.3.8: Controlling Taylor Coefficients Memory Allocation
        2.1.g: Download The CppAD Source Code: Source Code Control
controller 8.21: An Error Controller for Gear's Ode Solvers
           8.19: An Error Controller for ODE Solvers
controlling 5.3.8.1: Controlling Taylor Coefficient Memory Allocation: Example and Test
            5.3.8: Controlling Taylor Coefficients Memory Allocation
convention 10.3.3.c: Lu Factor and Solve with Recorded Pivoting: Storage Convention
conversion 10.6: Suppress Suspect Implicit Conversion Warnings
           4.3: Conversion and I/O of AD Objects
convert 8.25: Convert Certain Types to a String
        4.3.7.1: Convert an AD Variable to a Parameter: Example and Test
        4.3.7: Convert an AD Variable to a Parameter
        4.3.3: Convert An AD or Base Type to String
        4.3.2.1: Convert From AD to Integer: Example and Test
        4.3.2: Convert From AD to Integer
        4.3.1.1: Convert From AD to its Base Type: Example and Test
        4.3.1: Convert From an AD Type to its Base Type
        4.3: Conversion and I/O of AD Objects
        4.1: AD Constructors
copy 8.9.e: Definition of a Simple Vector: Copy Constructor
     8.7.d: Definition of a Numeric Type: Copy Constructor
     5.1.2.h: Construct an ADFun Object and Stop Recording: Copy Constructor
     4.7.1.d: Required Base Class Member Functions: Copy Constructor
correct 11.2.5: Check Gradient of Determinant of 3 by 3 matrix
        11.2.4: Check Determinant of 3 by 3 matrix
        11.1.d.a: Running the Speed Test Program: test.correct
correctness 12.9.4: Correctness Test of det_by_minor Routine
            12.8.10.2.5: Correctness Check for Both Simple and Fast Representations
            11.1.h: Running the Speed Test Program: Correctness Results
            3.3: Correctness Tests For Exponential Approximation in Introduction
correspondence 5.8.1.g: Create An Abs-normal Representation of a Function: Correspondence to Literature
cos 12.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
    12.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
    4.4.2.4.1: The AD cos Function: Example and Test
    4.4.2.4: The Cosine Function: cos
cosh 12.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
     12.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
     4.4.2.5.1: The AD cosh Function: Example and Test
     4.4.2.5: The Hyperbolic Cosine Function: cosh
cosine 12.3.2.7: Inverse Cosine and Hyperbolic Cosine Reverse Mode Theory
       12.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
       12.3.1.7: Inverse Cosine and Hyperbolic Cosine Forward Mode Theory
       12.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
       4.4.2.15: The Inverse Hyperbolic Cosine Function: acosh
       4.4.2.5: The Hyperbolic Cosine Function: cosh
       4.4.2.4: The Cosine Function: cos
count 5.3.7.d: Comparison Changes Between Taping and Zero Order Forward: count
      3.b.d: An Introduction by Example to Algorithmic Differentiation: Preface.Operation Count
cppad 12.12: Your License for the CppAD Software
      12.11: CppAD Addons
      12.8.13.f: Autotools Unix Test and Installation: Profiling CppAD
      12.8.12.d.b: zdouble: An AD Base Type With Absolute Zero: Motivation.CppAD
      12.8.11.5.1.i: Define Matrix Multiply as a User Atomic Operation: CppAD User Atomic Callback Functions
      12.8.10.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
      12.8.10: Nonlinear Programming Using the CppAD Interface to Ipopt
      12.8.6: A Quick OpenMP Memory Allocator Used by CppAD
      12.8: CppAD Deprecated API Features
      12.7.15: Changes and Additions to CppAD During 2003
      12.7.14: Changes and Additions to CppAD During 2004
      12.7.13: Changes and Additions to CppAD During 2005
      12.7.12: Changes and Additions to CppAD During 2006
      12.7.11: Changes and Additions to CppAD During 2007
      12.7.10: Changes and Additions to CppAD During 2008
      12.7.9: Changes and Additions to CppAD During 2009
      12.7.8: Changes and Additions to CppAD During 2010
      12.7.7: Changes and Additions to CppAD During 2011
      12.7.6: CppAD Changes and Additions During 2012
      12.7.5: CppAD Changes and Additions During 2013
      12.7.4: CppAD Changes and Additions During 2014
      12.7.3: CppAD Changes and Additions During 2015
      12.7.2: Changes and Additions to CppAD During 2016
      12.7.1: Changes and Additions to CppAD During 2017
      12.7: Changes and Additions to CppAD
      12.6: The CppAD Wish List
      11.5.7: CppAD Speed: Sparse Jacobian
      11.5.6: CppAD Speed: Sparse Hessian
      11.5.5: CppAD Speed: Second Derivative of a Polynomial
      11.5.4: CppAD Speed: Gradient of Ode Solution
      11.5.3: CppAD Speed, Matrix Multiplication
      11.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization
      11.5.1: CppAD Speed: Gradient of Determinant by Minor Expansion
      11.5: Speed Test Derivatives Using CppAD
      11.3.3: CppAD Speed: Matrix Multiplication (Double Version)
      11.1: Running the Speed Test Program
      10.5: Using The CppAD Test Vector Template Class
      10.4: List All (Except Deprecated) CppAD Examples
      10.3.1: CppAD Examples and Tests
      10.3: Utility Routines used by CppAD Examples
      10.2.4.f: Enable Use of Eigen Linear Algebra Package with CppAD: CppAD Namespace
      10.2.4: Enable Use of Eigen Linear Algebra Package with CppAD
      10.2.2: Example and Test Linking CppAD to Languages Other than C++
      10.1: Getting Started Using CppAD to Compute Derivatives
      10.d: Examples: The CppAD Test Vector Template Class
      9.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
      8.1.2: CppAD Assertions During Execution
      8.1.1: Replacing The CppAD Error Handler: Example and Test
      8.1: Replacing the CppAD Error Handler
      8.d.b: Some General Purpose Utilities: Miscellaneous.The CppAD Vector Template Class
      7: Using CppAD in a Multi-Threading Environment
      6: CppAD API Preprocessor Symbols
      5.6.1.l.a: Computing Sparse Jacobians: coloring.cppad
      4.7: AD<Base> Requirements for a CppAD Base Type
      3.2.8: exp_eps: CppAD Forward and Reverse Sweeps
      3.1.8: exp_2: CppAD Forward and Reverse Sweeps
      2.4.d: CppAD pkg-config Files: CppAD Configuration Files
      2.4: CppAD pkg-config Files
      2.3: Checking the CppAD Examples and Tests
      2.2.7.c: Choosing the CppAD Test Vector Template Class: cppad
      2.2.7: Choosing the CppAD Test Vector Template Class
      2.2.5: Including the cppad_ipopt Library and Tests
      2.2: Using CMake to Configure CppAD
      2.1: Download The CppAD Source Code
      2: CppAD Download, Test, and Install Instructions
cppad-20171217: : cppad-20171217: A Package for Differentiation of C++ Algorithms
cppad.general 5.6.3.j.b: Computing Sparse Hessians: coloring.cppad.general
cppad.hpp : cppad-20171217: A Package for Differentiation of C++ Algorithms
cppad.symmetric 5.6.3.j.a: Computing Sparse Hessians: coloring.cppad.symmetric
cppad::numeric_limits 4.7.6.a: Base Type Requirements for Numeric Limits: CppAD::numeric_limits
                      4.4.6.b: Numeric Limits For an AD and Base Types: CppAD::numeric_limits
cppad::vector 12.8.9.e: Choosing The Vector Testing Template Class: CppAD::vector
              10.5.d: Using The CppAD Test Vector Template Class: CppAD::vector
              8.22.1: CppAD::vector Template Class: Example and Test
              8.22: The CppAD::vector Template Class
cppad::vectorbool 8.22.2: CppAD::vectorBool Class: Example and Test
cppad_cxx_flags 2.2.m: Using CMake to Configure CppAD: cppad_cxx_flags
cppad_debug_and_release 6.b.a: CppAD API Preprocessor Symbols: Documented Here.CPPAD_DEBUG_AND_RELEASE
cppad_debug_which 2.2.s: Using CMake to Configure CppAD: cppad_debug_which
cppad_deprecated 2.2.t: Using CMake to Configure CppAD: cppad_deprecated
cppad_ipopt 12.8.10.d: Nonlinear Programming Using the CppAD Interface to Ipopt: cppad_ipopt namespace
            2.2.5: Including the cppad_ipopt Library and Tests
cppad_ipopt_nlp 12.8.10.2.3.1: ODE Fitting Using Fast Representation
                12.8.10.2.2.1: ODE Fitting Using Simple Representation
                12.8.10.2.2: ODE Fitting Using Simple Representation
cppad_lib 2.2.2.c: Including the ColPack Sparsity Calculations: cppad_lib
cppad_max_num_threads 7.b: Using CppAD in a Multi-Threading Environment: CPPAD_MAX_NUM_THREADS
                      2.2.p: Using CMake to Configure CppAD: cppad_max_num_threads
cppad_null 6.b.b: CppAD API Preprocessor Symbols: Documented Here.CPPAD_NULL
cppad_numeric_limits 4.7.6.b: Base Type Requirements for Numeric Limits: CPPAD_NUMERIC_LIMITS
cppad_package_string 6.b.c: CppAD API Preprocessor Symbols: Documented Here.CPPAD_PACKAGE_STRING
cppad_postfix 2.2.g: Using CMake to Configure CppAD: cppad_postfix
cppad_prefix 2.2.f: Using CMake to Configure CppAD: cppad_prefix
cppad_profile_flag 2.2.n: Using CMake to Configure CppAD: cppad_profile_flag
cppad_standard_math_unary 4.7.5.c: Base Type Requirements for Standard Math Functions: CPPAD_STANDARD_MATH_UNARY
cppad_tape_addr_type 2.2.r: Using CMake to Configure CppAD: cppad_tape_addr_type
cppad_tape_id_type 2.2.q: Using CMake to Configure CppAD: cppad_tape_id_type
cppad_testvector 2.2.o: Using CMake to Configure CppAD: cppad_testvector
cppad_to_string 4.7.7.b: Extending to_string To Another Floating Point Type: CPPAD_TO_STRING
cppad_use_cplusplus_2011 6.b.d: CppAD API Preprocessor Symbols: Documented Here.CPPAD_USE_CPLUSPLUS_2011
                         4.4.2.20.d: The Logarithm of One Plus Argument: log1p: CPPAD_USE_CPLUSPLUS_2011
                         4.4.2.19.d: The Exponential Function Minus One: expm1: CPPAD_USE_CPLUSPLUS_2011
                         4.4.2.18.d: The Error Function: CPPAD_USE_CPLUSPLUS_2011
                         4.4.2.17.d: The Inverse Hyperbolic Tangent Function: atanh: CPPAD_USE_CPLUSPLUS_2011
                         4.4.2.16.d: The Inverse Hyperbolic Sine Function: asinh: CPPAD_USE_CPLUSPLUS_2011
                         4.4.2.15.d: The Inverse Hyperbolic Cosine Function: acosh: CPPAD_USE_CPLUSPLUS_2011
cppad_user_atomic 12.8.11.e: User Defined Atomic AD Functions: CPPAD_USER_ATOMIC
cppadcreatediscrete 4.4.5.n: Discrete AD Functions: CppADCreateDiscrete Deprecated 2007-07-28
cppadvector 12.8.9.h: Choosing The Vector Testing Template Class: CppADvector Deprecated 2007-07-28
create 12.8.6.9: Allocate Memory and Create A Raw Array
       5.8.1: Create An Abs-normal Representation of a Function
       5.1: Create an ADFun Object (Record an Operation Sequence)
       4.5.3.k: AD Boolean Functions: Create Binary
       4.5.3.g: AD Boolean Functions: Create Unary
       4.4.5.i: Discrete AD Functions: Create AD Version
create_array 12.8.6.9: Allocate Memory and Create A Raw Array
             8.23.12: Allocate An Array and Call Default Constructor for its Elements
creating 10.2.1: Creating Your Own Interface to an ADFun Object
criteria 8.21.s: An Error Controller for Gear's Ode Solvers: Error Criteria Discussion
         8.19.r: An Error Controller for ODE Solvers: Error Criteria Discussion
cstdint 2.2.r.a: Using CMake to Configure CppAD: cppad_tape_addr_type.cstdint
        2.2.q.a: Using CMake to Configure CppAD: cppad_tape_id_type.cstdint
ctor 2.2: Using CMake to Configure CppAD
ctor_arg_list 4.4.7.2.1.b.a: Atomic Function Constructor: atomic_user.ctor_arg_list
cumulative 5.7.7: Example Optimization and Cumulative Sum Operations
current 12.8.6.3: Get the Current OpenMP Thread Number
        12.8.6.2: Is The Current Execution in OpenMP Parallel Mode
        8.23.5: Get the Current Thread Number
        8.23.4: Is The Current Execution in Parallel Mode
        5.1.4.1: Abort Current Recording: Example and Test
currently 12.8.6.7: Amount of Memory a Thread is Currently Using
          12.8.6.6: Free Memory Currently Available for Quick Use by a Thread
          8.23.10: Amount of Memory a Thread is Currently Using
          8.23.8: Free Memory Currently Available for Quick Use by a Thread
          5.3.6: Number Taylor Coefficient Orders Currently Stored
cutting 5.8.10.t.b: abs_normal: Minimize a Linear Abs-normal Approximation: Method.Cutting Planes
        5.8.6.s.b: abs_normal: Minimize a Linear Abs-normal Approximation: Method.Cutting Planes
cxx_flags 12.8.13.k: Autotools Unix Test and Installation: cxx_flags
cygwin 12.8.13.n.b: Autotools Unix Test and Installation: adolc_dir.Cygwin
       7.2.11.3.a: Pthread Implementation of a Team of AD Threads: Bug in Cygwin
       2.2.1.f: Including the ADOL-C Examples and Tests: Cygwin
D
Dependent 5.9.1: ADFun Check and Re-Tape: Example and Test
          5.1.3: Stop Recording and Store Operation Sequence
Domain 5.1.5.1: ADFun Sequence Properties: Example and Test
data 8.22.m.c: The CppAD::vector Template Class: vectorBool.data
     8.22.l: The CppAD::vector Template Class: data
datadir 2.2: Using CMake to Configure CppAD
dblvector 5.8.11.e: Non-Smooth Optimization Using Abs-normal Quadratic Approximations: DblVector
          5.8.10.e: abs_normal: Minimize a Linear Abs-normal Approximation: DblVector
          5.8.7.e: Non-Smooth Optimization Using Abs-normal Linear Approximations: DblVector
          5.8.6.e: abs_normal: Minimize a Linear Abs-normal Approximation: DblVector
ddp 11.1.5.i: Speed Testing Second Derivative of a Polynomial: ddp
ddw 5.2.6.g: Reverse Mode Second Partial Derivative Driver: ddw
ddy 5.2.5.g: Forward Mode Second Partial Derivative Driver: ddy
deallocate 8.23.13: Deallocate An Array and Call Destructor for its Elements
debug 4.3.6: Printing AD Values During Forward Mode
      2.2.m.b: Using CMake to Configure CppAD: cppad_cxx_flags.debug and release
debug_which 11.b: Speed Test an Operator Overloading AD Package: debug_which
debugging 5.10.b: Check an ADFun Object For Nan Results: Debugging
declare 12.8.11.5.1.j: Define Matrix Multiply as a User Atomic Operation: Declare mat_mul Function
        5.1.1: Declare Independent Variables and Start Recording
default 8.23.12: Allocate An Array and Call Default Constructor for its Elements
        8.9.c: Definition of a Simple Vector: Default Constructor
        8.7.b: Definition of a Numeric Type: Default Constructor
        5.10.e: Check an ADFun Object For Nan Results: Default
        5.2.2.2: Hessian of Lagrangian and ADFun Default Constructor: Example and Test
        5.1.2.k.b: Construct an ADFun Object and Stop Recording: Example.Default Constructor
        5.1.2.f: Construct an ADFun Object and Stop Recording: Default Constructor
        4.7.8.c: Base Type Requirements for Hash Coding Values: Default
        4.7.1.b: Required Base Class Member Functions: Default Constructor
define 12.8.11.5.1: Define Matrix Multiply as a User Atomic Operation
       4.4.7.2.1.d.a: Atomic Function Constructor: Example.Define Constructor
defined 12.8.11: User Defined Atomic AD Functions
        11.2.7: Evaluate a Function Defined in Terms of an ODE
        4.7.9.2: Using a User Defined AD Base Type: Example and Test
        4.4.7.2: User Defined Atomic AD Functions
        2.4.c: CppAD pkg-config Files: Defined Fields
defines 7.2.9.1: Defines a User Atomic Operation that Computes Square Root
definition 8.9: Definition of a Simple Vector
           8.7: Definition of a Numeric Type
           4.7.9.1.f: Example AD<Base> Where Base Constructor Allocates Memory: Class Definition
           4.4.7.2.19.1.o: Matrix Multiply as an Atomic Operation: End Class Definition
           4.4.7.2.19.1.c: Matrix Multiply as an Atomic Operation: Start Class Definition
           4.4.7.2.19.b: User Atomic Matrix Multiply: Example and Test: Class Definition
           4.4.7.2.18.2.e: Atomic Eigen Cholesky Factorization Class: End Class Definition
           4.4.7.2.18.2.b: Atomic Eigen Cholesky Factorization Class: Start Class Definition
           4.4.7.2.17.1.g: Atomic Eigen Matrix Inversion Class: End Class Definition
           4.4.7.2.17.1.d: Atomic Eigen Matrix Inversion Class: Start Class Definition
           4.4.7.2.17.b: Atomic Eigen Matrix Inverse: Example and Test: Class Definition
           4.4.7.2.16.1.h: Atomic Eigen Matrix Multiply Class: End Class Definition
           4.4.7.2.16.1.e: Atomic Eigen Matrix Multiply Class: Start Class Definition
           4.4.7.2.16.b: Atomic Eigen Matrix Multiply: Example and Test: Class Definition
           4.4.7.2.15.j: Tan and Tanh as User Atomic Operations: Example and Test: End Class Definition
           4.4.7.2.15.c: Tan and Tanh as User Atomic Operations: Example and Test: Start Class Definition
           4.4.7.2.14.j: Atomic Sparsity with Set Patterns: Example and Test: End Class Definition
           4.4.7.2.14.c: Atomic Sparsity with Set Patterns: Example and Test: Start Class Definition
           4.4.7.2.13.j: Reciprocal as an Atomic Operation: Example and Test: End Class Definition
           4.4.7.2.13.c: Reciprocal as an Atomic Operation: Example and Test: Start Class Definition
           4.4.7.2.12.j: Atomic Euclidean Norm Squared: Example and Test: End Class Definition
           4.4.7.2.12.c: Atomic Euclidean Norm Squared: Example and Test: Start Class Definition
           4.4.7.2.11.e: Getting Started with Atomic Operations: Example and Test: End Class Definition
           4.4.7.2.11.b: Getting Started with Atomic Operations: Example and Test: Start Class Definition
           4.4.7.2.9.1.c: Atomic Reverse Hessian Sparsity: Example and Test: Start Class Definition
           4.4.7.2.8.1.c: Atomic Forward Hessian Sparsity: Example and Test: Start Class Definition
           4.4.7.2.7.1.c: Atomic Reverse Jacobian Sparsity: Example and Test: Start Class Definition
           4.4.7.2.6.1.c: Atomic Forward Jacobian Sparsity: Example and Test: Start Class Definition
           4.4.7.2.5.1.c: Atomic Reverse: Example and Test: Start Class Definition
           4.4.7.2.4.1.c: Atomic Forward: Example and Test: Start Class Definition
definitions: 12.8.10.2.1.1: ODE Inverse Problem Definitions: Source Code
             9.3: ODE Inverse Problem Definitions: Source Code
delete 12.8.5.1: Tracking Use of New and Delete: Example and Test
       12.8.5: Routines That Track Use of New and Delete
delete: 12.8.5.1: Tracking Use of New and Delete: Example and Test
delete_array 12.8.6.10: Return A Raw Array to The Available Memory for a Thread
             8.23.13: Deallocate An Array and Call Destructor for its Elements
delta 12.8.6.10.g: Return A Raw Array to The Available Memory for a Thread: Delta
      12.8.6.9.h: Allocate Memory and Create A Raw Array: Delta
      8.23.13.f: Deallocate An Array and Call Destructor for its Elements: Delta
      8.23.12.g: Allocate An Array and Call Default Constructor for its Elements: Delta
delta_x 5.8.10.s: abs_normal: Minimize a Linear Abs-normal Approximation: delta_x
        5.8.6.r: abs_normal: Minimize a Linear Abs-normal Approximation: delta_x
        5.8.3.m: abs_normal: Evaluate First Order Approximation: delta_x
dependency 5.5.11.1: Subgraph Dependency Sparsity Patterns: Example and Test
           5.5.11: Subgraph Dependency Sparsity Patterns
           5.5.9.b: Computing Dependency: Example and Test: Dependency Pattern
           5.5.4.g: Jacobian Sparsity Pattern: Reverse Mode: dependency
           5.5.3.h: Reverse Mode Jacobian Sparsity Patterns: dependency
           5.5.2.g: Jacobian Sparsity Pattern: Forward Mode: dependency
           5.5.1.h: Forward Mode Jacobian Sparsity Patterns: dependency
dependency: 5.5.9: Computing Dependency: Example and Test
dependent 12.8.2.c: ADFun Object Deprecated Member Functions: Dependent
          12.4.k.d: Glossary: Tape.Dependent Variables
          12.4.g.c: Glossary: Operation.Dependent
deprecate 4.4.4.k: AD Conditional Expressions: Deprecate 2005-08-07
deprecated 12.8.13.a: Autotools Unix Test and Installation: Deprecated 2012-12-26
           12.8.12.a: zdouble: An AD Base Type With Absolute Zero: Deprecated 2015-09-26
           12.8.11.5.a: Old Matrix Multiply as a User Atomic Operation: Example and Test: Deprecated 2013-05-27
           12.8.11.4.a: Old Tan and Tanh as User Atomic Operations: Example and Test: Deprecated 2013-05-27
           12.8.11.3.a: Using AD to Compute Atomic Function Derivatives: Deprecated 2013-05-27
           12.8.11.2.a: Using AD to Compute Atomic Function Derivatives: Deprecated 2013-05-27
           12.8.11.1.a: Old Atomic Operation Reciprocal: Example and Test: Deprecated 2013-05-27
           12.8.11.a: User Defined Atomic AD Functions: Deprecated 2013-05-27
           12.8.10.a: Nonlinear Programming Using the CppAD Interface to Ipopt: Deprecated 2012-11-28
           12.8.9.h: Choosing The Vector Testing Template Class: CppADvector Deprecated 2007-07-28
           12.8.9.a: Choosing The Vector Testing Template Class: Deprecated 2012-07-03
           12.8.8.a: Machine Epsilon For AD Types: Deprecated 2012-06-17
           12.8.7.a: Memory Leak Detection: Deprecated 2012-04-06
           12.8.6.13.a: OpenMP Memory Allocator: Example and Test: Deprecated 2011-08-31
           12.8.6.10.a: Return A Raw Array to The Available Memory for a Thread: Deprecated 2011-08-31
           12.8.6.9.a: Allocate Memory and Create A Raw Array: Deprecated 2011-08-31
           12.8.6.8.a: Amount of Memory Available for Quick Use by a Thread: Deprecated 2011-08-31
           12.8.6.7.a: Amount of Memory a Thread is Currently Using: Deprecated 2011-08-31
           12.8.6.6.a: Free Memory Currently Available for Quick Use by a Thread: Deprecated 2011-08-31
           12.8.6.5.a: Return Memory to omp_alloc: Deprecated 2011-08-31
           12.8.6.4.a: Get At Least A Specified Amount of Memory: Deprecated 2011-08-31
           12.8.6.3.a: Get the Current OpenMP Thread Number: Deprecated 2011-08-31
           12.8.6.2.a: Is The Current Execution in OpenMP Parallel Mode: Deprecated 2011-08-31
           12.8.6.1.a: Set and Get Maximum Number of Threads for omp_alloc Allocator: Deprecated 2011-08-31
           12.8.6.d: A Quick OpenMP Memory Allocator Used by CppAD: Deprecated 2011-08-23
           12.8.5.n.b: Routines That Track Use of New and Delete: TrackCount.Previously Deprecated
           12.8.5.m.b: Routines That Track Use of New and Delete: TrackExtend.Previously Deprecated
           12.8.5.l.b: Routines That Track Use of New and Delete: TrackDelVec.Previously Deprecated
           12.8.5.k.b: Routines That Track Use of New and Delete: TrackNewVec.Previously Deprecated
           12.8.5.a: Routines That Track Use of New and Delete: Deprecated 2007-07-23
           12.8.4.a: OpenMP Parallel Setup: Deprecated 2011-06-23
           12.8.3.b: Comparison Changes During Zero Order Forward Mode: Deprecated 2015-01-20
           12.8.2.j.a: ADFun Object Deprecated Member Functions: capacity_taylor.Deprecated 2014-03-18
           12.8.2.i.a: ADFun Object Deprecated Member Functions: size_taylor.Deprecated 2014-03-18
           12.8.2.h.a: ADFun Object Deprecated Member Functions: use_VecAD.Deprecated 2006-04-08
           12.8.2.g.a: ADFun Object Deprecated Member Functions: taylor_size.Deprecated 2006-06-17
           12.8.2.f.a: ADFun Object Deprecated Member Functions: Size.Deprecated 2006-04-03
           12.8.2.e.a: ADFun Object Deprecated Member Functions: Memory.Deprecated 2006-03-31
           12.8.2.d.a: ADFun Object Deprecated Member Functions: Order.Deprecated 2006-03-31
           12.8.2.c.a: ADFun Object Deprecated Member Functions: Dependent.Deprecated 2007-08-07
           12.8.2: ADFun Object Deprecated Member Functions
           12.8.1.b: Deprecated Include Files: Deprecated 2006-12-17
           12.8.1.a: Deprecated Include Files: Deprecated 2015-11-30
           12.8.1: Deprecated Include Files
           12.8: CppAD Deprecated API Features
           8.11.f.a: Obtain Nan or Determine if a Value is Nan: nan(zero).Deprecated 2015-10-04
           6.d: CppAD API Preprocessor Symbols: Deprecated
           5.6.4.i.b: Sparse Hessian: work.colpack.star Deprecated 2017-06-01
           5.6.3.j.e: Computing Sparse Hessians: coloring.colpack.star Deprecated 2017-06-01
           4.5.3.n: AD Boolean Functions: Deprecated 2007-07-31
           4.4.7.2.9.e.b: Atomic Reverse Hessian Sparsity Patterns: u.x
           4.4.7.2.9.b: Atomic Reverse Hessian Sparsity Patterns: Deprecated 2016-06-27
           4.4.7.2.8.d.e: Atomic Forward Hessian Sparsity Patterns: Implementation.x
           4.4.7.2.8.b: Atomic Forward Hessian Sparsity Patterns: Deprecated 2016-06-27
           4.4.7.2.7.d.d: Atomic Reverse Jacobian Sparsity Patterns: Implementation.x
           4.4.7.2.7.b: Atomic Reverse Jacobian Sparsity Patterns: Deprecated 2016-06-27
           4.4.7.2.6.d.d: Atomic Forward Jacobian Sparsity Patterns: Implementation.x
           4.4.7.2.6.b: Atomic Forward Jacobian Sparsity Patterns: Deprecated 2016-06-27
           4.4.5.n: Discrete AD Functions: CppADCreateDiscrete Deprecated 2007-07-28
           2.c: CppAD Download, Test, and Install Instructions: Deprecated
deprecated) 10.4: List All (Except Deprecated) CppAD Examples
derivative 12.3: The Theory of Derivative Calculations
           11.7.5: Sacado Speed: Second Derivative of a Polynomial
           11.6.5: Fadbad Speed: Second Derivative of a Polynomial
           11.5.5: CppAD Speed: Second Derivative of a Polynomial
           11.4.5: Adolc Speed: Second Derivative of a Polynomial
           11.1.5: Speed Testing Second Derivative of a Polynomial
           11.1.3: Speed Testing Derivative of Matrix Multiply
           10.2.13.d: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: Derivative of ODE Solution
           10.2.12.d: Taylor's Ode Solver: A Multi-Level AD Example and Test: Derivative of ODE Solution
           10.1.c: Getting Started Using CppAD to Compute Derivatives: Derivative
           8.13: Evaluate a Polynomial or its Derivative
           5.4.3: Any Order Reverse Mode
           5.4.2: Second Order Reverse Mode
           5.4.1: First Order Reverse Mode
           5.3.4.b.b: Multiple Order Forward Mode: Purpose.Derivative Values
           5.3.3: Second Order Forward Mode: Derivative Values
           5.3.2: First Order Forward Mode: Derivative Values
           5.2.6: Reverse Mode Second Partial Derivative Driver
           5.2.5: Forward Mode Second Partial Derivative Driver
           5.2.4.1: First Order Derivative Driver: Example and Test
           5.2.4: First Order Derivative: Driver Routine
           5.2.2: Hessian: Easy Driver
           5.2.1: Jacobian: Driver Routine
           4.4.2.21.e: The Sign: sign: Derivative
           4.4.2.14.e: AD Absolute Value Functions: abs, fabs: Derivative
           4.4.2.13.d: The Hyperbolic Tangent Function: tanh: Derivative
           4.4.2.12.d: The Tangent Function: tan: Derivative
           4.4.2.11.d: The Square Root Function: sqrt: Derivative
           4.4.2.10.d: The Hyperbolic Sine Function: sinh: Derivative
           4.4.2.9.d: The Sine Function: sin: Derivative
            4.4.2.7.d: The Logarithm Function: log: Derivative
           4.4.2.6.d: The Exponential Function: exp: Derivative
           4.4.2.5.d: The Hyperbolic Cosine Function: cosh: Derivative
           4.4.2.4.d: The Cosine Function: cos: Derivative
           4.4.2.3.d: Inverse Tangent Function: atan: Derivative
           4.4.2.2.d: Inverse Sine Function: asin: Derivative
            4.4.2.1.d: Inverse Cosine Function: acos: Derivative
           4.4.1.4.j: AD Compound Assignment Operators: Derivative
           4.4.1.3.j: AD Binary Arithmetic Operators: Derivative
           4.4.1.2.g: AD Unary Minus Operator: Derivative
           4.4.1.1.f: AD Unary Plus Operator: Derivative
           3.2.6.d.e: exp_eps: Second Order Forward Mode: Operation Sequence.Derivative
           3.2.4.c.d: exp_eps: First Order Forward Sweep: Operation Sequence.Derivative
           3.1.6.d.e: exp_2: Second Order Forward Mode: Operation Sequence.Derivative
           3.1.4.d.d: exp_2: First Order Forward Mode: Operation Sequence.Derivative
           : cppad-20171217: A Package for Differentiation of C++ Algorithms
derivative: 5.2.4: First Order Derivative: Driver Routine
            5.2.3: First Order Partial Derivative: Driver Routine
derivatives 12.8.11.3: Using AD to Compute Atomic Function Derivatives
            12.8.11.2: Using AD to Compute Atomic Function Derivatives
            12.8.10.2.1.f.b: An ODE Inverse Problem Example: Black Box Method.Derivatives
            12.5.c: Bibliography: Evaluating Derivatives
            12.3.1.9.a: Error Function Forward Taylor Polynomial Theory: Derivatives
            12.3.1.8.a: Tangent and Hyperbolic Tangent Forward Taylor Polynomial Theory: Derivatives
            12.3.1.7.a: Inverse Cosine and Hyperbolic Cosine Forward Mode Theory: Derivatives
            12.3.1.6.a: Inverse Sine and Hyperbolic Sine Forward Mode Theory: Derivatives
            12.3.1.5.a: Inverse Tangent and Hyperbolic Tangent Forward Mode Theory: Derivatives
            12.3.1.2.a: Logarithm Function Forward Mode Theory: Derivatives
            12.3.1.1.a: Exponential Function Forward Mode Theory: Derivatives
            11.7: Speed Test Derivatives Using Sacado
            11.6: Speed Test Derivatives Using Fadbad
            11.5: Speed Test Derivatives Using CppAD
            11.4: Speed Test of Derivatives Using Adolc
            10.2.10.c.f: Using Multiple Levels of AD: Procedure.Derivatives of Outer Function
            10.1: Getting Started Using CppAD to Compute Derivatives
            5.6: Calculating Sparse Derivatives
            4.4.5.k: Discrete AD Functions: Derivatives
derivatives: 5.2: First and Second Order Derivatives: Easy Drivers
description 12.10.3.b: LU Factorization of A Square Matrix and Stability Calculation: Description
            10.2.a: General Examples: Description
            8.22.b: The CppAD::vector Template Class: Description
            8.19.b: An Error Controller for ODE Solvers: Description
            8.18.b: A 3rd and 4th Order Rosenbrock ODE Solver: Description
            8.16.b: Multi-dimensional Romberg Integration: Description
            8.15.b: One Dimensional Romberg Integration: Description
            8.14.3.b: Invert an LU Factored Equation: Description
            8.14.2.b: LU Factorization of A Square Matrix: Description
            8.14.1.b: Compute Determinant and Solve Linear Equations: Description
            8.13.b: Evaluate a Polynomial or its Derivative: Description
            4.4.7.2.18.a: Atomic Eigen Cholesky Factorization: Example and Test: Description
            4.4.7.2.17.a: Atomic Eigen Matrix Inverse: Example and Test: Description
            4.4.7.2.16.a: Atomic Eigen Matrix Multiply: Example and Test: Description
            4.4.4.1.b: Conditional Expressions: Example and Test: Description
            4.4.2.21.b: The Sign: sign: Description
            4.4.2.20.b: The Logarithm of One Plus Argument: log1p: Description
            4.4.2.19.b: The Exponential Function Minus One: expm1: Description
            4.4.2.18.b: The Error Function: Description
            4.4.2.17.b: The Inverse Hyperbolic Tangent Function: atanh: Description
            4.4.2.16.b: The Inverse Hyperbolic Sine Function: asinh: Description
            4.4.2.15.b: The Inverse Hyperbolic Cosine Function: acosh: Description
destructor 8.23.13: Deallocate An Array and Call Destructor for its Elements
           8.9.f: Definition of a Simple Vector: Element Constructor and Destructor
det 11.2.3.f: Determinant Using Expansion by Minors: det
    11.2.1.f: Determinant Using Expansion by Lu Factorization: det
det_33 11.2.4.1: Source: det_33
       11.2.4: Check Determinant of 3 by 3 matrix
det_by_lu 11.2.1.2: Source: det_by_lu
          11.2.1: Determinant Using Expansion by Lu Factorization
det_by_minor 12.9.7: Determine Amount of Time to Execute det_by_minor
             12.9.5: Repeat det_by_minor Routine A Specified Number of Times
             12.9.4: Correctness Test of det_by_minor Routine
             11.2.3.2: Source: det_by_minor
det_grad_33 11.2.5.1: Source: det_grad_33
            11.2.5: Check Gradient of Determinant of 3 by 3 matrix
det_of_minor 11.2.2.2: Source: det_of_minor
             11.2.2.1: Determinant of a Minor: Example and Test
             11.2.2: Determinant of a Minor
detection 12.8.7: Memory Leak Detection
determinant 12.10.3.h.f: LU Factorization of A Square Matrix and Stability Calculation: LU.Determinant
            12.9.2: Compute Determinant using Expansion by Minors
            12.9.1.c: Determinant of a Minor: Determinant of A
            12.9.1: Determinant of a Minor
            11.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization
            11.7.1: Sacado Speed: Gradient of Determinant by Minor Expansion
            11.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization
            11.6.1: Fadbad Speed: Gradient of Determinant by Minor Expansion
            11.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization
            11.5.1: CppAD Speed: Gradient of Determinant by Minor Expansion
            11.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization
            11.4.1: Adolc Speed: Gradient of Determinant by Minor Expansion
            11.3.2: Double Speed: Determinant Using Lu Factorization
            11.3.1: Double Speed: Determinant by Minor Expansion
            11.2.5: Check Gradient of Determinant of 3 by 3 matrix
            11.2.4: Check Determinant of 3 by 3 matrix
            11.2.3.1: Determinant Using Expansion by Minors: Example and Test
            11.2.3: Determinant Using Expansion by Minors
            11.2.2.1: Determinant of a Minor: Example and Test
            11.2.2.d: Determinant of a Minor: Determinant of A
            11.2.2: Determinant of a Minor
            11.2.1.1: Determinant Using Lu Factorization: Example and Test
            11.2.1: Determinant Using Expansion by Lu Factorization
            11.1.2: Speed Testing Gradient of Determinant by Minor Expansion
            11.1.1: Speed Testing Gradient of Determinant Using Lu Factorization
            10.3.3: Lu Factor and Solve with Recorded Pivoting
            10.2.9: Gradient of Determinant Using Lu Factorization: Example and Test
            10.2.8: Gradient of Determinant Using Expansion by Minors: Example and Test
            10.2.6: Gradient of Determinant Using LU Factorization: Example and Test
            10.2.5: Gradient of Determinant Using Expansion by Minors: Example and Test
            10.2.4.3: Using Eigen To Compute Determinant: Example and Test
            8.14.2.h.f: LU Factorization of A Square Matrix: LU.Determinant
            8.14.1: Compute Determinant and Solve Linear Equations
determinant: 10.2.4.3: Using Eigen To Compute Determinant: Example and Test
determinants 8.14: Compute Determinants and Solve Equations by LU Factorization
determine 12.9.7: Determine Amount of Time to Execute det_by_minor
          8.11: Obtain Nan or Determine if a Value is Nan
          8.5: Determine Amount of Time to Execute a Test
          8.2: Determine if Two Values Are Nearly Equal
difference 10.2.7: Interfacing to C: Example and Test
           8.2: Determine if Two Values Are Nearly Equal
differential 12.3.1.4.a: Trigonometric and Hyperbolic Sine and Cosine Forward Theory: Differential Equation
             12.3.1.c.a: The Theory of Forward Mode: Standard Math Functions.Differential Equation
             8.21: An Error Controller for Gear's Ode Solvers
             8.20: An Arbitrary Order Gear Method
             8.19: An Error Controller for ODE Solvers
             8.18: A 3rd and 4th Order Rosenbrock ODE Solver
             8.17: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
differentiate 10.2.3: Differentiate Conjugate Gradient Algorithm: Example and Test
differentiating 10.2.15: Example Differentiating a Stack Machine Interpreter
differentiation 10.2.2: Example and Test Linking CppAD to Languages Other than C++
                3.b.a: An Introduction by Example to Algorithmic Differentiation: Preface.Algorithmic Differentiation
                3: An Introduction by Example to Algorithmic Differentiation
                : cppad-20171217: A Package for Differentiation of C++ Algorithms
digits10 4.4.6.i: Numeric Limits For an AD and Base Types: digits10
dimension 8.16: Multi-dimensional Romberg Integration
dimensional 8.16.1: One Dimensional Romberg Integration: Example and Test
            8.16: Multi-dimensional Romberg Integration
            8.15.1: One Dimensional Romberg Integration: Example and Test
            8.15: One Dimensional Romberg Integration
dimensions 4.4.7.2.19.1.b: Matrix Multiply as an Atomic Operation: Matrix Dimensions
           4.4.7.2.17.1.b: Atomic Eigen Matrix Inversion Class: Matrix Dimensions
           4.4.7.2.16.1.c: Atomic Eigen Matrix Multiply Class: Matrix Dimensions
direction 5.4.2.2: Hessian Times Direction: Example and Test
          5.4.2.i: Second Order Reverse Mode: Hessian Times Direction
direction: 5.4.2.2: Hessian Times Direction: Example and Test
directions 5.4.a: Reverse Mode: Multiple Directions
           5.3.5.1: Forward Mode: Example and Test of Multiple Directions
           5.3.5: Multiple Directions Forward Mode
directories 2.2: Using CMake to Configure CppAD
directory 12.8.13.c: Autotools Unix Test and Installation: Build Directory
          12.8.13.b: Autotools Unix Test and Installation: Distribution Directory
          12.2.b: Directory Structure: Example Directory
          12.2.a: Directory Structure: Distribution Directory
          12.2: Directory Structure
          2.2.6.1.e: Download and Install Sacado in Build Directory: Prefix Directory
          2.2.6.1.d: Download and Install Sacado in Build Directory: External Directory
          2.2.6.1.c: Download and Install Sacado in Build Directory: Distribution Directory
          2.2.6.1: Download and Install Sacado in Build Directory
          2.2.5.1.e: Download and Install Ipopt in Build Directory: Prefix Directory
          2.2.5.1.d: Download and Install Ipopt in Build Directory: External Directory
          2.2.5.1.c: Download and Install Ipopt in Build Directory: Distribution Directory
          2.2.5.1: Download and Install Ipopt in Build Directory
          2.2.4.1.e: Download and Install Fadbad in Build Directory: Prefix Directory
          2.2.4.1.d: Download and Install Fadbad in Build Directory: External Directory
          2.2.4.1.c: Download and Install Fadbad in Build Directory: Distribution Directory
          2.2.4.1: Download and Install Fadbad in Build Directory
          2.2.3.1.e: Download and Install Eigen in Build Directory: Prefix Directory
          2.2.3.1.d: Download and Install Eigen in Build Directory: External Directory
          2.2.3.1.c: Download and Install Eigen in Build Directory: Distribution Directory
          2.2.3.1: Download and Install Eigen in Build Directory
          2.2.2.5.e: Download and Install ColPack in Build Directory: Prefix Directory
          2.2.2.5.d: Download and Install ColPack in Build Directory: External Directory
          2.2.2.5.c: Download and Install ColPack in Build Directory: Distribution Directory
          2.2.2.5: Download and Install ColPack in Build Directory
          2.2.1.1.f: Download and Install Adolc in Build Directory: Prefix Directory
          2.2.1.1.e: Download and Install Adolc in Build Directory: External Directory
          2.2.1.1.d: Download and Install Adolc in Build Directory: Distribution Directory
          2.2.1.1: Download and Install Adolc in Build Directory
          2.2.b.a: Using CMake to Configure CppAD: CMake Command.Build Directory
          2.1.b: Download The CppAD Source Code: Distribution Directory
discrete 4.4.5: Discrete AD Functions
discussion 12.8.3.f: Comparison Changes During Zero Order Forward Mode: Discussion
           8.21.s: An Error Controller for Gear's Ode Solvers: Error Criteria Discussion
           8.19.r: An Error Controller for ODE Solvers: Error Criteria Discussion
           7.1.c: Enable AD Calculations During Parallel Mode: Discussion
           5.9.l: Check an ADFun Sequence of Operations: Discussion
           5.8.11.1.b: abs_normal min_nso_quad: Example and Test: Discussion
           5.8.7.1.b: abs_normal min_nso_linear: Example and Test: Discussion
           5.5.9.a: Computing Dependency: Example and Test: Discussion
           5.3.7.e.a: Comparison Changes Between Taping and Zero Order Forward: number.Discussion
           4.4.7.2.4.k: Atomic Forward Mode: Discussion
           4.4.7.1.4.b: Checkpointing an Extended ODE Solver: Example and Test: Discussion
           4.4.7.1.2.a: Atomic Operations and Multiple-Levels of AD: Example and Test: Discussion
           4.3.6.i: Printing AD Values During Forward Mode: Discussion
disk 12.1.k: Frequently Asked Questions and Answers: Tape Storage: Disk or Memory
distribution 12.8.13.b: Autotools Unix Test and Installation: Distribution Directory
             12.2.a: Directory Structure: Distribution Directory
             2.2.6.1.c: Download and Install Sacado in Build Directory: Distribution Directory
             2.2.5.1.c: Download and Install Ipopt in Build Directory: Distribution Directory
             2.2.4.1.c: Download and Install Fadbad in Build Directory: Distribution Directory
             2.2.3.1.c: Download and Install Eigen in Build Directory: Distribution Directory
             2.2.2.5.c: Download and Install ColPack in Build Directory: Distribution Directory
             2.2.1.1.d: Download and Install Adolc in Build Directory: Distribution Directory
             2.1.b: Download The CppAD Source Code: Distribution Directory
divide 4.4.1.4.4: AD Compound Assignment Division: Example and Test
       4.4.1.4: AD Compound Assignment Operators
       4.4.1.3.4: AD Binary Division: Example and Test
       4.4.1.3: AD Binary Arithmetic Operators
division 12.3.2.b.d: The Theory of Reverse Mode: Binary Operators.Division
         12.3.1.b.d: The Theory of Forward Mode: Binary Operators.Division
         4.4.1.4.j.d: AD Compound Assignment Operators: Derivative.Division
         4.4.1.3.j.d: AD Binary Arithmetic Operators: Derivative.Division
division: 4.4.1.4.4: AD Compound Assignment Division: Example and Test
          4.4.1.3.4: AD Binary Division: Example and Test
do 7.2.10.3: Do One Thread's Work for Multi-Threaded Newton Method
   7.2.8.3: Do One Thread's Work for Sum of 1/i
documentation 2.2: Using CMake to Configure CppAD
              2.1.k: Download The CppAD Source Code: Building Documentation
documented 6.c: CppAD API Preprocessor Symbols: Documented Elsewhere
           6.b: CppAD API Preprocessor Symbols: Documented Here
domain 12.8.10.2.3.d.b: ODE Fitting Using Fast Representation: Trapezoidal Approximation.Domain Indices J(k,0)
       12.8.10.2.3.c.b: ODE Fitting Using Fast Representation: Initial Condition.Domain Indices J(k,0)
       12.8.10.2.3.b.b: ODE Fitting Using Fast Representation: Objective Function.Domain Indices J(k,0)
       5.1.5.d: ADFun Sequence Properties: Domain
double 11.3.7: Double Speed: Sparse Jacobian
       11.3.6: Double Speed: Sparse Hessian
       11.3.5: Double Speed: Evaluate a Polynomial
       11.3.4: Double Speed: Ode Solution
       11.3.2: Double Speed: Determinant Using Lu Factorization
       11.3.1: Double Speed: Determinant by Minor Expansion
       11.3: Speed Test of Functions in Double
       11.1.7.j.a: Speed Testing Sparse Jacobian: n_sweep.double
       11.1.6.i.a: Speed Testing Sparse Hessian: n_sweep.double
       11.1.5.i.a: Speed Testing Second Derivative of a Polynomial: ddp.double
       11.1.4.i.a: Speed Testing the Jacobian of Ode Solution: jacobian.double
       11.1.2.h.a: Speed Testing Gradient of Determinant by Minor Expansion: gradient.double
       11.1.1.h.a: Speed Testing Gradient of Determinant Using Lu Factorization: gradient.double
       11.1.c.b: Running the Speed Test Program: package.double
       4.7.9.5: Enable use of AD<Base> where Base is double
       4.7.1.c: Required Base Class Member Functions: Double Constructor
down 7.2.10.4: Take Down Multi-threaded Newton Method
     7.2.9.5: Multi-Threaded User Atomic Take Down
     7.2.8.4: Take Down Multi-threading Sum of 1/i
download 2.2.6.1: Download and Install Sacado in Build Directory
         2.2.5.1: Download and Install Ipopt in Build Directory
         2.2.4.1: Download and Install Fadbad in Build Directory
         2.2.3.1: Download and Install Eigen in Build Directory
         2.2.2.5: Download and Install ColPack in Build Directory
         2.2.1.1: Download and Install Adolc in Build Directory
         2.1: Download The CppAD Source Code
         2.a.a: CppAD Download, Test, and Install Instructions: Instructions.Step 1: Download
         2: CppAD Download, Test, and Install Instructions
     install fadbad 2.2.4.1: Download and Install Fadbad in Build Directory
     install ipopt 2.2.5.1: Download and Install Ipopt in Build Directory
     install sacado 2.2.6.1: Download and Install Sacado in Build Directory
driver 12.8.10.2.4: Driver for Running the Ipopt ODE Example
       5.2.6: Reverse Mode Second Partial Derivative Driver
       5.2.5: Forward Mode Second Partial Derivative Driver
       5.2.4: First Order Derivative: Driver Routine
       5.2.3: First Order Partial Derivative: Driver Routine
       5.2.2: Hessian: Easy Driver
       5.2.1: Jacobian: Driver Routine
driver: 5.2.6.1: Second Partials Reverse Driver: Example and Test
        5.2.4.1: First Order Derivative Driver: Example and Test
        5.2.3.1: First Order Partial Driver: Example and Test
drivers 5.2: First and Second Order Derivatives: Easy Drivers
during 12.8.3: Comparison Changes During Zero Order Forward Mode
       12.7.15: Changes and Additions to CppAD During 2003
       12.7.14: Changes and Additions to CppAD During 2004
       12.7.13: Changes and Additions to CppAD During 2005
       12.7.12: Changes and Additions to CppAD During 2006
       12.7.11: Changes and Additions to CppAD During 2007
       12.7.10: Changes and Additions to CppAD During 2008
       12.7.9: Changes and Additions to CppAD During 2009
       12.7.8: Changes and Additions to CppAD During 2010
       12.7.7: Changes and Additions to CppAD During 2011
       12.7.6: CppAD Changes and Additions During 2012
       12.7.5: CppAD Changes and Additions During 2013
       12.7.4: CppAD Changes and Additions During 2014
       12.7.3: CppAD Changes and Additions During 2015
       12.7.2: Changes and Additions to CppAD During 2016
       12.7.1: Changes and Additions to CppAD During 2017
       8.1.2: CppAD Assertions During Execution
       7.1: Enable AD Calculations During Parallel Mode
       4.3.7.1: Convert an AD Variable to a Parameter: Example and Test
       4.3.7: Convert an AD Variable to a Parameter
       4.3.6.2: Print During Zero Order Forward Mode: Example and Test
       4.3.6.1: Printing During Forward Mode: Example and Test
       4.3.6: Printing AD Values During Forward Mode
dvector 9.e: Use Ipopt to Solve a Nonlinear Programming Problem: Dvector
dw 5.4.4.k: Reverse Mode Using Subgraphs: dw
   5.4.3.g: Any Order Reverse Mode: dw
   5.4.2.g: Second Order Reverse Mode: dw
   5.4.1.f: First Order Reverse Mode: dw
   5.2.4.f: First Order Derivative: Driver Routine: dw
dy 5.2.3.f: First Order Partial Derivative: Driver Routine: dy
dz 11.1.3.h: Speed Testing Derivative of Matrix Multiply: dz
E
EqualOpSeq 4.5.5.1: EqualOpSeq: Example and Test
           4.5.5: Check if Two Values are Identically Equal
ErrorHandler 8.1: Replacing the CppAD Error Handler
e 8.20.i: An Arbitrary Order Gear Method: e
  8.18.j: A 3rd and 4th Order Rosenbrock ODE Solver: e
  8.17.k: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: e
  8.16.k: Multi-dimensional Romberg Integration: e
  8.15.j: One Dimensional Romberg Integration: e
eabs 8.21.n: An Error Controller for Gear's Ode Solvers: eabs
     8.19.m: An Error Controller for ODE Solvers: eabs
easy 5.2.6: Reverse Mode Second Partial Derivative Driver
     5.2.5: Forward Mode Second Partial Derivative Driver
     5.2.4: First Order Derivative: Driver Routine
     5.2.3: First Order Partial Derivative: Driver Routine
     5.2.2: Hessian: Easy Driver
     5.2: First and Second Order Derivatives: Easy Drivers
ef 8.21.p: An Error Controller for Gear's Ode Solvers: ef
   8.19.o: An Error Controller for ODE Solvers: ef
efficiency 5.7.f: Optimize an ADFun Object Tape: Efficiency
           3.b.e: An Introduction by Example to Algorithmic Differentiation: Preface.Efficiency
efficient 12.8.6.11: Check If A Memory Allocation is Efficient for Another Use
          12.4: Glossary
eigen 12.6.e: The CppAD Wish List: Eigen
      10.5.g: Using The CppAD Test Vector Template Class: Eigen Vectors
      10.2.4.3: Using Eigen To Compute Determinant: Example and Test
      10.2.4.2: Using Eigen Arrays: Example and Test
      10.2.4.e: Enable Use of Eigen Linear Algebra Package with CppAD: Eigen NumTraits
      10.2.4: Enable Use of Eigen Linear Algebra Package with CppAD
      4.4.7.2.18.2: Atomic Eigen Cholesky Factorization Class
      4.4.7.2.18: Atomic Eigen Cholesky Factorization: Example and Test
      4.4.7.2.17.1: Atomic Eigen Matrix Inversion Class
      4.4.7.2.17: Atomic Eigen Matrix Inverse: Example and Test
      4.4.7.2.16.1: Atomic Eigen Matrix Multiply Class
      4.4.7.2.16: Atomic Eigen Matrix Multiply: Example and Test
      2.2.7.e: Choosing the CppAD Test Vector Template Class: eigen
      2.2.3.1: Download and Install Eigen in Build Directory
      2.2.3: Including the Eigen Examples and Tests
      2.2.n.a: Using CMake to Configure CppAD: cppad_profile_flag.Eigen and Fadbad
eigen_dir 12.8.13.p: Autotools Unix Test and Installation: eigen_dir
eigen_plugin.hpp 10.2.4.1: Source Code for eigen_plugin.hpp
eigen_prefix 2.2.3.b: Including the Eigen Examples and Tests: eigen_prefix
elapsed 12.9.6: Returns Elapsed Number of Seconds
        11.1.8: Microsoft Version of Elapsed Number of Seconds
        8.5.1.1: Elapsed Seconds: Example and Test
        8.5.1: Returns Elapsed Number of Seconds
elapsed_seconds 8.5.1: Returns Elapsed Number of Seconds
element 8.26.c: Union of Standard Sets: Element
        8.22.m.e: The CppAD::vector Template Class: vectorBool.Element Type
        8.22.f: The CppAD::vector Template Class: Element Access
        8.9.k: Definition of a Simple Vector: Element Access
        8.9.f: Definition of a Simple Vector: Element Constructor and Destructor
        4.4.7.2.19.1.g: Matrix Multiply as an Atomic Operation: Result Element Index
        4.4.7.2.19.1.f: Matrix Multiply as an Atomic Operation: Right Operand Element Index
        4.4.7.2.19.1.e: Matrix Multiply as an Atomic Operation: Left Operand Element Index
element-wise 12.6.b.d: The CppAD Wish List: Atomic.Element-wise Operations
elementary 12.4.f: Glossary: Elementary Vector
           5.8.8.t.a: Solve a Quadratic Program Using Interior Point Method: Newton Step.Elementary Row Reduction
elements 11.2.6.1: Sum of the Elements of the Square of a Matrix: Example and Test
         11.2.6: Sum Elements of a Matrix Times Itself
         8.23.13: Deallocate An Array and Call Destructor for its Elements
         8.23.12: Allocate An Array and Call Default Constructor for its Elements
         8.9.b: Definition of a Simple Vector: Elements of Specified Type
eliminating 12.3.2.8.b: Tangent and Hyperbolic Tangent Reverse Mode Theory: Eliminating Y(t)
ell 5.4.4.i: Reverse Mode Using Subgraphs: ell
elsewhere 6.c: CppAD API Preprocessor Symbols: Documented Elsewhere
embedded 8.17: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
empty 8.28.d: Sparse Matrix Row, Column, Value Representation: empty
      8.27.c: Row and Column Index Sparsity Patterns: empty
enable 10.2.4: Enable Use of Eigen Linear Algebra Package with CppAD
       7.1: Enable AD Calculations During Parallel Mode
       4.7.9.6: Enable use of AD<Base> where Base is std::complex<double>
       4.7.9.5: Enable use of AD<Base> where Base is double
       4.7.9.4: Enable use of AD<Base> where Base is float
       4.7.9.3: Enable use of AD<Base> where Base is Adolc's adouble Type
end 7.2.8.3.d: Do One Thread's Work for Sum of 1/i: end
    4.4.7.2.19.1.o: Matrix Multiply as an Atomic Operation: End Class Definition
    4.4.7.2.18.2.e: Atomic Eigen Cholesky Factorization Class: End Class Definition
    4.4.7.2.17.1.g: Atomic Eigen Matrix Inversion Class: End Class Definition
    4.4.7.2.16.1.h: Atomic Eigen Matrix Multiply Class: End Class Definition
    4.4.7.2.15.j: Tan and Tanh as User Atomic Operations: Example and Test: End Class Definition
    4.4.7.2.14.j: Atomic Sparsity with Set Patterns: Example and Test: End Class Definition
    4.4.7.2.13.j: Reciprocal as an Atomic Operation: Example and Test: End Class Definition
    4.4.7.2.12.j: Atomic Euclidean Norm Squared: Example and Test: End Class Definition
    4.4.7.2.11.e: Getting Started with Atomic Operations: Example and Test: End Class Definition
entire 5.5.7.k: Forward Mode Hessian Sparsity Patterns: Sparsity for Entire Hessian
       5.5.6.k: Hessian Sparsity Pattern: Reverse Mode: Entire Sparsity Pattern
       5.5.5.l: Reverse Mode Hessian Sparsity Patterns: Sparsity for Entire Hessian
       5.5.4.k: Jacobian Sparsity Pattern: Reverse Mode: Entire Sparsity Pattern
       5.5.3.k: Reverse Mode Jacobian Sparsity Patterns: Sparsity for Entire Jacobian
       5.5.2.k: Jacobian Sparsity Pattern: Forward Mode: Entire Sparsity Pattern
       5.5.1.k: Forward Mode Jacobian Sparsity Patterns: Sparsity for Entire Jacobian
environment 8.23.2: Setup thread_alloc For Use in Multi-Threading Environment
            7: Using CppAD in a Multi-Threading Environment
eps 12.8.8.e: Machine Epsilon For AD Types: eps
epsilon 12.8.8: Machine Epsilon For AD Types
        7.2.10.5.k: A Multi-Threaded Newton's Method: epsilon
        7.2.10.2.g: Set Up Multi-Threaded Newton Method: epsilon
        5.8.11.k: Non-Smooth Optimization Using Abs-normal Quadratic Approximations: epsilon
        5.8.10.q: abs_normal: Minimize a Linear Abs-normal Approximation: epsilon
        5.8.9.n: abs_normal: Solve a Quadratic Program With Box Constraints: epsilon
        5.8.8.l: Solve a Quadratic Program Using Interior Point Method: epsilon
        5.8.7.k: Non-Smooth Optimization Using Abs-normal Linear Approximations: epsilon
        5.8.6.p: abs_normal: Minimize a Linear Abs-normal Approximation: epsilon
        4.4.6.e: Numeric Limits For an AD and Base Types: epsilon
        3.2.7.c: exp_eps: Second Order Reverse Sweep: epsilon
        3.2.5.c: exp_eps: First Order Reverse Sweep: epsilon
        3.2.f: An Epsilon Accurate Exponential Approximation: epsilon
        3.2: An Epsilon Accurate Exponential Approximation
equal 8.2: Determine if Two Values Are Nearly Equal
      4.7.3: Base Type Requirements for Identically Equal Comparisons
      4.5.5: Check if Two Values are Identically Equal
      4.5.2: Compare AD and Base Objects for Nearly Equal
equalopseq 4.7.9.6.e: Enable use of AD<Base> where Base is std::complex<double>: EqualOpSeq
           4.7.9.5.c: Enable use of AD<Base> where Base is double: EqualOpSeq
           4.7.9.4.c: Enable use of AD<Base> where Base is float: EqualOpSeq
           4.7.9.3.f: Enable use of AD<Base> where Base is Adolc's adouble Type: EqualOpSeq
           4.7.9.1.i: Example AD<Base> Where Base Constructor Allocates Memory: EqualOpSeq
           4.7.3.a: Base Type Requirements for Identically Equal Comparisons: EqualOpSeq
equalopseq: 4.5.5.1: EqualOpSeq: Example and Test
equation 12.10.3: LU Factorization of A Square Matrix and Stability Calculation
         12.3.1.4.a: Trigonometric and Hyperbolic Sine and Cosine Forward Theory: Differential Equation
         12.3.1.c.a: The Theory of Forward Mode: Standard Math Functions.Differential Equation
         10.3.3: Lu Factor and Solve with Recorded Pivoting
         8.21: An Error Controller for Gear's Ode Solvers
         8.20: An Arbitrary Order Gear Method
         8.19: An Error Controller for ODE Solvers
         8.18: A 3rd and 4th Order Rosenbrock ODE Solver
         8.17: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
         8.14.3: Invert an LU Factored Equation
         8.14.2: LU Factorization of A Square Matrix
equations 8.14.1: Compute Determinant and Solve Linear Equations
          8.14: Compute Determinants and Solve Equations by LU Factorization
erel 8.21.o: An Error Controller for Gear's Ode Solvers: erel
     8.19.n: An Error Controller for ODE Solvers: erel
erf 12.7.10: Changes and Additions to CppAD During 2008
    12.3.2.9: Error Function Reverse Mode Theory
    12.3.1.9: Error Function Forward Taylor Polynomial Theory
    4.7.9.3.l: Enable use of AD<Base> where Base is Adolc's adouble Type: erf, asinh, acosh, atanh, expm1, log1p
    4.7.9.1.p: Example AD<Base> Where Base Constructor Allocates Memory: erf, asinh, acosh, atanh, expm1, log1p
    4.7.5.d: Base Type Requirements for Standard Math Functions: erf, asinh, acosh, atanh, expm1, log1p
    4.4.2.18.1: The AD erf Function: Example and Test
error 12.8.7.j: Memory Leak Detection: Error Message
      12.3.2.9: Error Function Reverse Mode Theory
      12.3.1.9: Error Function Forward Taylor Polynomial Theory
      8.21.s: An Error Controller for Gear's Ode Solvers: Error Criteria Discussion
      8.21: An Error Controller for Gear's Ode Solvers
      8.19.r: An Error Controller for ODE Solvers: Error Criteria Discussion
      8.19: An Error Controller for ODE Solvers
      8.1.2.i: CppAD Assertions During Execution: Error Handler
      8.1.1: Replacing The CppAD Error Handler: Example and Test
      8.1: Replacing the CppAD Error Handler
      8.d.a: Some General Purpose Utilities: Miscellaneous.Error Handler
      5.10.f: Check an ADFun Object For Nan Results: Error Message
      4.4.2.18: The Error Function
errors 8.4.j: Run One Speed Test and Print Results: Errors
euclidean 4.4.7.2.12: Atomic Euclidean Norm Squared: Example and Test
evaluate 11.3.5: Double Speed: Evaluate a Polynomial
         11.2.9: Evaluate a Function That Has a Sparse Hessian
         11.2.8: Evaluate a Function That Has a Sparse Jacobian
         11.2.7: Evaluate a Function Defined in Terms of an ODE
         8.13: Evaluate a Polynomial or its Derivative
         5.8.3: abs_normal: Evaluate First Order Approximation
evaluating 12.5.c: Bibliography: Evaluating Derivatives
evaluation: 8.13.1: Polynomial Evaluation: Example and Test
example 12.10.3.1: LuRatio: Example and Test
        12.10.3.l: LU Factorization of A Square Matrix and Stability Calculation: Example
        12.10.2.1: opt_val_hes: Example and Test
        12.10.2.l: Jacobian and Hessian of Optimal Values: Example
        12.10.1.1: BenderQuad: Example and Test
        12.10.1.m: Computing Jacobian and Hessian of Bender's Reduced Objective: Example
        12.8.12.1: zdouble: Example and Test
        12.8.12.f: zdouble: An AD Base Type With Absolute Zero: Example
        12.8.11.5.1.b: Define Matrix Multiply as a User Atomic Operation: Example
        12.8.11.5: Old Matrix Multiply as a User Atomic Operation: Example and Test
        12.8.11.4: Old Tan and Tanh as User Atomic Operations: Example and Test
        12.8.11.1: Old Atomic Operation Reciprocal: Example and Test
        12.8.11.t: User Defined Atomic AD Functions: Example
        12.8.10.2.4: Driver for Running the Ipopt ODE Example
        12.8.10.2.3.1: ODE Fitting Using Fast Representation
        12.8.10.2.2.1: ODE Fitting Using Simple Representation
        12.8.10.2.1.1: ODE Inverse Problem Definitions: Source Code
        12.8.10.2.1: An ODE Inverse Problem Example
        12.8.10.2: Example Simultaneous Solution of Forward and Inverse Problem
        12.8.10.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
        12.8.10.u: Nonlinear Programming Using the CppAD Interface to Ipopt: Example
        12.8.6.13: OpenMP Memory Allocator: Example and Test
        12.8.6.10.h: Return A Raw Array to The Available Memory for a Thread: Example
        12.8.6.9.i: Allocate Memory and Create A Raw Array: Example
        12.8.6.8.f: Amount of Memory Available for Quick Use by a Thread: Example
        12.8.6.7.f: Amount of Memory a Thread is Currently Using: Example
        12.8.6.6.e: Free Memory Currently Available for Quick Use by a Thread: Example
        12.8.6.5.g: Return Memory to omp_alloc: Example
        12.8.6.4.h: Get At Least A Specified Amount of Memory: Example
        12.8.6.3.e: Get the Current OpenMP Thread Number: Example
        12.8.6.2.e: Is The Current Execution in OpenMP Parallel Mode: Example
        12.8.5.1: Tracking Use of New and Delete: Example and Test
        12.8.5.p: Routines That Track Use of New and Delete: Example
        12.6.f: The CppAD Wish List: Example
        12.2.b: Directory Structure: Example Directory
        11.2.9.1: sparse_hes_fun: Example and test
        11.2.9.l: Evaluate a Function That Has a Sparse Hessian: Example
        11.2.8.1: sparse_jac_fun: Example and test
        11.2.8.m: Evaluate a Function That Has a Sparse Jacobian: Example
        11.2.7.1: ode_evaluate: Example and test
        11.2.7.h: Evaluate a Function Defined in Terms of an ODE: Example
        11.2.6.1: Sum of the Elements of the Square of a Matrix: Example and Test
        11.2.6.i: Sum Elements of a Matrix Times Itself: Example
        11.2.3.1: Determinant Using Expansion by Minors: Example and Test
        11.2.3.h: Determinant Using Expansion by Minors: Example
        11.2.2.1: Determinant of a Minor: Example and Test
        11.2.2.l: Determinant of a Minor: Example
        11.2.1.1: Determinant Using Lu Factorization: Example and Test
        11.2.1.h: Determinant Using Expansion by Lu Factorization: Example
        10.3.3.1: Lu Factor and Solve With Recorded Pivoting: Example and Test
        10.3.3.j: Lu Factor and Solve with Recorded Pivoting: Example
        10.2.15: Example Differentiating a Stack Machine Interpreter
        10.2.14: Taylor's Ode Solver: An Example and Test
        10.2.13: Taylor's Ode Solver: A Multi-Level Adolc Example and Test
        10.2.12: Taylor's Ode Solver: A Multi-Level AD Example and Test
        10.2.11: A Stiff Ode: Example and Test
        10.2.10.1: Multiple Level of AD: Example and Test
        10.2.10.d: Using Multiple Levels of AD: Example
        10.2.9: Gradient of Determinant Using Lu Factorization: Example and Test
        10.2.8: Gradient of Determinant Using Expansion by Minors: Example and Test
        10.2.7: Interfacing to C: Example and Test
        10.2.6: Gradient of Determinant Using LU Factorization: Example and Test
        10.2.5: Gradient of Determinant Using Expansion by Minors: Example and Test
        10.2.4.3: Using Eigen To Compute Determinant: Example and Test
        10.2.4.2: Using Eigen Arrays: Example and Test
        10.2.4.c: Enable Use of Eigen Linear Algebra Package with CppAD: Example
        10.2.3: Differentiate Conjugate Gradient Algorithm: Example and Test
        10.2.2: Example and Test Linking CppAD to Languages Other than C++
        10.2.1: Creating Your Own Interface to an ADFun Object
        10.1: Getting Started Using CppAD to Compute Derivatives
        9.3: ODE Inverse Problem Definitions: Source Code
        9.2: Nonlinear Programming Retaping: Example and Test
        9.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
        9.n: Use Ipopt to Solve a Nonlinear Programming Problem: Example
        8.28.1: sparse_rcv: Example and Test
        8.28.q: Sparse Matrix Row, Column, Value Representation: Example
        8.27.1: sparse_rc: Example and Test
        8.27.o: Row and Column Index Sparsity Patterns: Example
        8.26.1: Set Union: Example and Test
        8.26.g: Union of Standard Sets: Example
        8.25.1: to_string: Example and Test
        8.25.f: Convert Certain Types to a String: Example
        8.24.1: Index Sort: Example and Test
        8.24.d: Returns Indices that Sort a Vector: Example
        8.23.14.e: Free All Memory That Was Allocated for Use by thread_alloc: Example
        8.23.13.g: Deallocate An Array and Call Destructor for its Elements: Example
        8.23.12.i: Allocate An Array and Call Default Constructor for its Elements: Example
        8.23.11.e: Amount of Memory Available for Quick Use by a Thread: Example
        8.23.10.e: Amount of Memory a Thread is Currently Using: Example
        8.23.8.d: Free Memory Currently Available for Quick Use by a Thread: Example
        8.23.7.f: Return Memory to thread_alloc: Example
        8.23.6.h: Get At Least A Specified Amount of Memory: Example
        8.23.5.d: Get the Current Thread Number: Example
        8.23.4.d: Is The Current Execution in Parallel Mode: Example
        8.23.3.d: Get Number of Threads: Example
        8.23.2.h: Setup thread_alloc For Use in Multi-Threading Environment: Example
        8.23.1: Fast Multi-Threading Memory Allocator: Example and Test
        8.22.2: CppAD::vectorBool Class: Example and Test
        8.22.1: CppAD::vector Template Class: Example and Test
        8.22.o: The CppAD::vector Template Class: Example
        8.21.1: OdeGearControl: Example and Test
        8.21.v: An Error Controller for Gear's Ode Solvers: Example
        8.20.1: OdeGear: Example and Test
        8.20.l: An Arbitrary Order Gear Method: Example
        8.19.2: OdeErrControl: Example and Test Using Maxabs Argument
        8.19.1: OdeErrControl: Example and Test
        8.19.u: An Error Controller for ODE Solvers: Example
        8.18.1: Rosen34: Example and Test
        8.18.n: A 3rd and 4th Order Rosenbrock ODE Solver: Example
        8.17.2: Runge45: Example and Test
        8.17.1: Runge45: Example and Test
        8.17.o: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: Example
        8.16.1: One Dimensional Romberg Integration: Example and Test
        8.16.n: Multi-dimensional Romberg Integration: Example
        8.15.1: One Dimensional Romberg Integration: Example and Test
         8.15.l: One Dimensional Romberg Integration: Example
        8.14.3.1: LuInvert: Example and Test
        8.14.3.i: Invert an LU Factored Equation: Example
        8.14.2.1: LuFactor: Example and Test
        8.14.2.m: LU Factorization of A Square Matrix: Example
        8.14.1.1: LuSolve With Complex Arguments: Example and Test
        8.14.1.q: Compute Determinant and Solve Linear Equations: Example
        8.13.1: Polynomial Evaluation: Example and Test
        8.13.k: Evaluate a Polynomial or its Derivative: Example
        8.12.1: The Pow Integer Exponent: Example and Test
        8.12.j: The Integer Power Function: Example
        8.11.1: nan: Example and Test
        8.11.i: Obtain Nan or Determine if a Value is Nan: Example
        8.10.1: The CheckSimpleVector Function: Example and Test
        8.10.g: Check Simple Vector Concept: Example
        8.9.1: Simple Vector Template Class: Example and Test
        8.9.l: Definition of a Simple Vector: Example
        8.8.1: The CheckNumericType Function: Example and Test
        8.8.e: Check NumericType Class Concept: Example
        8.7.1: The NumericType: Example and Test
        8.7.g: Definition of a Numeric Type: Example
        8.6.i: Object that Runs a Group of Tests: Example
        8.5.2: time_test: Example and test
        8.5.1.1: Elapsed Seconds: Example and Test
        8.5.1.e: Returns Elapsed Number of Seconds: Example
        8.5.j: Determine Amount of Time to Execute a Test: Example
        8.4.1: Example Use of SpeedTest
        8.3.1: speed_test: Example and test
        8.4.k: Run One Speed Test and Print Results: Example
        8.3.k: Run One Speed Test and Return Results: Example
        8.2.1: NearEqual Function: Example and Test
        8.2.j: Determine if Two Values Are Nearly Equal: Example
        8.1.1: Replacing The CppAD Error Handler: Example and Test
        8.1.k: Replacing the CppAD Error Handler: Example
        7.2.11.j: Specifications for A Team of AD Threads: Example Implementation
        7.2.11.i: Specifications for A Team of AD Threads: Example Use
        7.2.10: Multi-Threaded Newton Method Example / Test
        7.2.9: Multi-Threading User Atomic Example / Test
        7.2.8: Multi-Threading Harmonic Summation Example / Test
        7.2.7: Using a Team of AD Threads: Example and Test
        7.2.6: A Simple pthread AD: Example and Test
        7.2.5: A Simple Boost Threading AD: Example and Test
        7.2.4: A Simple OpenMP AD: Example and Test
        7.2.3: A Simple Parallel Pthread Example and Test
        7.2.2: A Simple Boost Thread Example and Test
        7.2.1: A Simple OpenMP Example and Test
        7.1.e: Enable AD Calculations During Parallel Mode: Example
        5.10.1: ADFun Checking For Nan: Example and Test
        5.10.h: Check an ADFun Object For Nan Results: Example
        5.9.1: ADFun Check and Re-Tape: Example and Test
        5.9.m: Check an ADFun Sequence of Operations: Example
        5.8.11.1: abs_normal min_nso_quad: Example and Test
        5.8.11.p: Non-Smooth Optimization Using Abs-normal Quadratic Approximations: Example
        5.8.10.1: abs_min_quad: Example and Test
        5.8.10.u: abs_normal: Minimize a Linear Abs-normal Approximation: Example
        5.8.9.1: abs_normal qp_box: Example and Test
        5.8.9.t: abs_normal: Solve a Quadratic Program With Box Constraints: Example
        5.8.8.1: abs_normal qp_interior: Example and Test
        5.8.8.v: Solve a Quadratic Program Using Interior Point Method: Example
        5.8.7.1: abs_normal min_nso_linear: Example and Test
        5.8.7.p: Non-Smooth Optimization Using Abs-normal Linear Approximations: Example
        5.8.6.1: abs_min_linear: Example and Test
        5.8.6.t: abs_normal: Minimize a Linear Abs-normal Approximation: Example
        5.8.5.1: abs_normal lp_box: Example and Test
        5.8.5.n: abs_normal: Solve a Linear Program With Box Constraints: Example
        5.8.4.1: abs_normal simplex_method: Example and Test
        5.8.4.m: abs_normal: Solve a Linear Program Using Simplex Method: Example
        5.8.3.1: abs_eval: Example and Test
        5.8.3.o: abs_normal: Evaluate First Order Approximation: Example
        5.8.1.1: abs_normal Getting Started: Example and Test
        5.8.1.h: Create An Abs-normal Representation of a Function: Example
        5.7.7: Example Optimization and Cumulative Sum Operations
        5.7.6: Example Optimization and Nested Conditional Expressions
        5.7.5: Example Optimization and Conditional Expressions
        5.7.4: Example Optimization and Print Forward Operators
        5.7.3: Example Optimization and Comparison Operators
        5.7.2: Example Optimization and Reverse Activity Analysis
        5.7.1: Example Optimization and Forward Activity Analysis
        5.6.5.2: Sparse Hessian Using Subgraphs and Jacobian: Example and Test
        5.6.5.1: Computing Sparse Jacobian Using Reverse Mode: Example and Test
        5.6.5.n: Compute Sparse Jacobians Using Subgraphs: Example
        5.6.4.3: Subset of a Sparse Hessian: Example and Test
        5.6.4.2.e: Computing Sparse Hessian for a Subset of Variables: Example
        5.6.4.1: Sparse Hessian: Example and Test
        5.6.4.o: Sparse Hessian: Example
        5.6.3.1: Computing Sparse Hessian: Example and Test
        5.6.3.n: Computing Sparse Hessians: Example
        5.6.2.1: Sparse Jacobian: Example and Test
        5.6.2.n: Sparse Jacobian: Example
        5.6.1.2: Computing Sparse Jacobian Using Reverse Mode: Example and Test
        5.6.1.1: Computing Sparse Jacobian Using Forward Mode: Example and Test
        5.6.1.p: Computing Sparse Jacobians: Example
        5.5.11.1: Subgraph Dependency Sparsity Patterns: Example and Test
        5.5.11.l: Subgraph Dependency Sparsity Patterns: Example
        5.5.10: Preferred Sparsity Patterns: Row and Column Indices: Example and Test
        5.5.9: Computing Dependency: Example and Test
        5.5.8.1: Forward Mode Hessian Sparsity: Example and Test
        5.5.8.j: Hessian Sparsity Pattern: Forward Mode: Example
        5.5.7.1: Forward Mode Hessian Sparsity: Example and Test
        5.5.7.m: Forward Mode Hessian Sparsity Patterns: Example
        5.5.6.2: Sparsity Patterns For a Subset of Variables: Example and Test
        5.5.6.1: Reverse Mode Hessian Sparsity: Example and Test
        5.5.6.l: Hessian Sparsity Pattern: Reverse Mode: Example
        5.5.5.1: Reverse Mode Hessian Sparsity: Example and Test
        5.5.5.m: Reverse Mode Hessian Sparsity Patterns: Example
        5.5.4.1: Reverse Mode Jacobian Sparsity: Example and Test
        5.5.4.l: Jacobian Sparsity Pattern: Reverse Mode: Example
        5.5.3.1: Reverse Mode Jacobian Sparsity: Example and Test
        5.5.3.l: Reverse Mode Jacobian Sparsity Patterns: Example
        5.5.2.1: Forward Mode Jacobian Sparsity: Example and Test
        5.5.2.l: Jacobian Sparsity Pattern: Forward Mode: Example
        5.5.1.1: Forward Mode Jacobian Sparsity: Example and Test
        5.5.1.l: Forward Mode Jacobian Sparsity Patterns: Example
        5.4.4.1: Computing Reverse Mode on Subgraphs: Example and Test
        5.4.4.l: Reverse Mode Using Subgraphs: Example
        5.4.3.2: Reverse Mode General Case (Checkpointing): Example and Test
        5.4.3.1: Third Order Reverse Mode: Example and Test
        5.4.3.k: Any Order Reverse Mode: Example
        5.4.2.2: Hessian Times Direction: Example and Test
         5.4.2.1: Second Order Reverse Mode: Example and Test
        5.4.2.j: Second Order Reverse Mode: Example
        5.4.1.1: First Order Reverse Mode: Example and Test
        5.4.1.h: First Order Reverse Mode: Example
        5.3.9.1: Number of Variables That Can be Skipped: Example and Test
        5.3.9.e: Number of Variables that Can be Skipped: Example
        5.3.8.1: Controlling Taylor Coefficient Memory Allocation: Example and Test
        5.3.8.f: Controlling Taylor Coefficients Memory Allocation: Example
        5.3.7.1: CompareChange and Re-Tape: Example and Test
        5.3.7.g: Comparison Changes Between Taping and Zero Order Forward: Example
        5.3.6.h: Number Taylor Coefficient Orders Currently Stored: Example
        5.3.5.1: Forward Mode: Example and Test of Multiple Directions
        5.3.5.o: Multiple Directions Forward Mode: Example
        5.3.4.2: Forward Mode: Example and Test of Multiple Orders
        5.3.4.1: Forward Mode: Example and Test
        5.3.4.p: Multiple Order Forward Mode: Example
        5.3.3.i: Second Order Forward Mode: Derivative Values: Example
        5.3.2.g: First Order Forward Mode: Derivative Values: Example
        5.3.1.h: Zero Order Forward Mode: Function Values: Example
        5.2.6.1: Second Partials Reverse Driver: Example and Test
        5.2.5.1: Subset of Second Order Partials: Example and Test
        5.2.4.1: First Order Derivative Driver: Example and Test
        5.2.4.i: First Order Derivative: Driver Routine: Example
        5.2.3.1: First Order Partial Driver: Example and Test
        5.2.3.i: First Order Partial Derivative: Driver Routine: Example
        5.2.2.2: Hessian of Lagrangian and ADFun Default Constructor: Example and Test
        5.2.2.1: Hessian: Example and Test
        5.2.2.j: Hessian: Easy Driver: Example
        5.2.1.1: Jacobian: Example and Test
        5.2.1.h: Jacobian: Driver Routine: Example
        5.1.5.1: ADFun Sequence Properties: Example and Test
        5.1.5.n: ADFun Sequence Properties: Example
        5.1.4.1: Abort Current Recording: Example and Test
        5.1.4.c: Abort Recording of an Operation Sequence: Example
        5.1.3.j: Stop Recording and Store Operation Sequence: Example
        5.1.2.1: ADFun Assignment: Example and Test
        5.1.2.k: Construct an ADFun Object and Stop Recording: Example
        5.1.1.1: Independent and ADFun Constructor: Example and Test
        5.1.1.i: Declare Independent Variables and Start Recording: Example
        4.7.9.6.1: Complex Polynomial: Example and Test
        4.7.9.6.a: Enable use of AD<Base> where Base is std::complex<double>: Example
        4.7.9.3.1: Using Adolc with Multiple Levels of Taping: Example and Test
        4.7.9.3.b: Enable use of AD<Base> where Base is Adolc's adouble Type: Example
        4.7.9.2: Using a User Defined AD Base Type: Example and Test
        4.7.9.1: Example AD<Base> Where Base Constructor Allocates Memory
        4.7.9: Example AD Base Types That are not AD<OtherBase>
        4.7.8.g: Base Type Requirements for Hash Coding Values: Example
        4.7.1.i: Required Base Class Member Functions: Example
        4.6.1: AD Vectors that Record Index Operations: Example and Test
        4.6.j: AD Vectors that Record Index Operations: Example
        4.5.5.1: EqualOpSeq: Example and Test
         4.5.5.g: Check if Two Values are Identically Equal: Example
        4.5.4.1: AD Parameter and Variable Functions: Example and Test
        4.5.4.f: Is an AD Object a Parameter or Variable: Example
        4.5.3.1: AD Boolean Functions: Example and Test
        4.5.3.m: AD Boolean Functions: Example
        4.5.2.1: Compare AD with Base Objects: Example and Test
        4.5.2.j: Compare AD and Base Objects for Nearly Equal: Example
        4.5.1.1: AD Binary Comparison Operators: Example and Test
        4.5.1.i: AD Binary Comparison Operators: Example
        4.4.7.2.19: User Atomic Matrix Multiply: Example and Test
        4.4.7.2.18: Atomic Eigen Cholesky Factorization: Example and Test
        4.4.7.2.17: Atomic Eigen Matrix Inverse: Example and Test
        4.4.7.2.16: Atomic Eigen Matrix Multiply: Example and Test
        4.4.7.2.15: Tan and Tanh as User Atomic Operations: Example and Test
        4.4.7.2.14: Atomic Sparsity with Set Patterns: Example and Test
        4.4.7.2.13: Reciprocal as an Atomic Operation: Example and Test
        4.4.7.2.12: Atomic Euclidean Norm Squared: Example and Test
        4.4.7.2.11: Getting Started with Atomic Operations: Example and Test
        4.4.7.2.9.1: Atomic Reverse Hessian Sparsity: Example and Test
        4.4.7.2.8.1: Atomic Forward Hessian Sparsity: Example and Test
        4.4.7.2.7.1: Atomic Reverse Jacobian Sparsity: Example and Test
        4.4.7.2.6.1: Atomic Forward Jacobian Sparsity: Example and Test
        4.4.7.2.5.1: Atomic Reverse: Example and Test
        4.4.7.2.4.1: Atomic Forward: Example and Test
        4.4.7.2.1.d: Atomic Function Constructor: Example
        4.4.7.1.4: Checkpointing an Extended ODE Solver: Example and Test
        4.4.7.1.3: Checkpointing an ODE Solver: Example and Test
        4.4.7.1.2: Atomic Operations and Multiple-Levels of AD: Example and Test
        4.4.7.1.1: Simple Checkpointing: Example and Test
        4.4.7.1.r: Checkpointing Functions: Example
        4.4.6.1: Numeric Limits: Example and Test
        4.4.6.j: Numeric Limits For an AD and Base Types: Example
        4.4.5.3: Interpolation With Retaping: Example and Test
         4.4.5.2: Interpolation Without Retaping: Example and Test
        4.4.5.1: Taping Array Index Operation: Example and Test
        4.4.5.m: Discrete AD Functions: Example
        4.4.4.1: Conditional Expressions: Example and Test
        4.4.4.m: AD Conditional Expressions: Example
        4.4.3.3.1: AD Absolute Zero Multiplication: Example and Test
        4.4.3.3.f: Absolute Zero Multiplication: Example
        4.4.3.2.1: The AD Power Function: Example and Test
        4.4.3.2.h: The AD Power Function: Example
        4.4.3.1.1: The AD atan2 Function: Example and Test
        4.4.3.1.g: AD Two Argument Inverse Tangent Function: Example
        4.4.2.21.1: Sign Function: Example and Test
        4.4.2.21.f: The Sign: sign: Example
        4.4.2.20.1: The AD log1p Function: Example and Test
        4.4.2.20.e: The Logarithm of One Plus Argument: log1p: Example
         4.4.2.19.1: The AD expm1 Function: Example and Test
        4.4.2.19.e: The Exponential Function Minus One: expm1: Example
        4.4.2.18.1: The AD erf Function: Example and Test
        4.4.2.18.e: The Error Function: Example
        4.4.2.17.1: The AD atanh Function: Example and Test
        4.4.2.17.e: The Inverse Hyperbolic Tangent Function: atanh: Example
        4.4.2.16.1: The AD asinh Function: Example and Test
        4.4.2.16.e: The Inverse Hyperbolic Sine Function: asinh: Example
        4.4.2.15.1: The AD acosh Function: Example and Test
        4.4.2.15.e: The Inverse Hyperbolic Cosine Function: acosh: Example
        4.4.2.14.1: AD Absolute Value Function: Example and Test
        4.4.2.14.f: AD Absolute Value Functions: abs, fabs: Example
        4.4.2.13.1: The AD tanh Function: Example and Test
        4.4.2.12.1: The AD tan Function: Example and Test
        4.4.2.11.1: The AD sqrt Function: Example and Test
        4.4.2.10.1: The AD sinh Function: Example and Test
        4.4.2.9.1: The AD sin Function: Example and Test
        4.4.2.8.1: The AD log10 Function: Example and Test
        4.4.2.7.1: The AD log Function: Example and Test
        4.4.2.6.1: The AD exp Function: Example and Test
        4.4.2.5.1: The AD cosh Function: Example and Test
        4.4.2.4.1: The AD cos Function: Example and Test
        4.4.2.3.1: The AD atan Function: Example and Test
        4.4.2.2.1: The AD asin Function: Example and Test
        4.4.2.1.1: The AD acos Function: Example and Test
        4.4.2.13.e: The Hyperbolic Tangent Function: tanh: Example
        4.4.2.12.e: The Tangent Function: tan: Example
        4.4.2.11.e: The Square Root Function: sqrt: Example
        4.4.2.10.e: The Hyperbolic Sine Function: sinh: Example
        4.4.2.9.e: The Sine Function: sin: Example
        4.4.2.8.d: The Base 10 Logarithm Function: log10: Example
         4.4.2.7.e: The Logarithm Function: log: Example
        4.4.2.6.e: The Exponential Function: exp: Example
        4.4.2.5.e: The Hyperbolic Cosine Function: cosh: Example
        4.4.2.4.e: The Cosine Function: cos: Example
        4.4.2.3.e: Inverse Tangent Function: atan: Example
        4.4.2.2.e: Inverse Sine Function: asin: Example
         4.4.2.1.e: Inverse Cosine Function: acos: Example
        4.4.1.4.4: AD Compound Assignment Division: Example and Test
        4.4.1.4.3: AD Compound Assignment Multiplication: Example and Test
        4.4.1.4.2: AD Compound Assignment Subtraction: Example and Test
        4.4.1.4.1: AD Compound Assignment Addition: Example and Test
        4.4.1.4.i: AD Compound Assignment Operators: Example
        4.4.1.3.4: AD Binary Division: Example and Test
        4.4.1.3.3: AD Binary Multiplication: Example and Test
        4.4.1.3.2: AD Binary Subtraction: Example and Test
        4.4.1.3.1: AD Binary Addition: Example and Test
        4.4.1.3.i: AD Binary Arithmetic Operators: Example
        4.4.1.2.1: AD Unary Minus Operator: Example and Test
        4.4.1.2.h: AD Unary Minus Operator: Example
        4.4.1.1.1: AD Unary Plus Operator: Example and Test
        4.4.1.1.g: AD Unary Plus Operator: Example
        4.3.7.1: Convert an AD Variable to a Parameter: Example and Test
        4.3.7.f: Convert an AD Variable to a Parameter: Example
        4.3.6.2: Print During Zero Order Forward Mode: Example and Test
        4.3.6.1: Printing During Forward Mode: Example and Test
        4.3.6.k: Printing AD Values During Forward Mode: Example
        4.3.5.1: AD Output Operator: Example and Test
        4.3.4.1: AD Output Operator: Example and Test
        4.3.5.h: AD Output Stream Operator: Example
        4.3.4.g: AD Output Stream Operator: Example
        4.3.3.e: Convert An AD or Base Type to String: Example
        4.3.2.1: Convert From AD to Integer: Example and Test
        4.3.2.f: Convert From AD to Integer: Example
        4.3.1.1: Convert From AD to its Base Type: Example and Test
        4.3.1.h: Convert From an AD Type to its Base Type: Example
        4.2.1: AD Assignment: Example and Test
        4.2.e: AD Assignment Operator: Example
        4.1.1: AD Constructors: Example and Test
        4.1.e: AD Constructors: Example
        3.2.3: exp_eps: Operation Sequence and Zero Order Forward Sweep
        3.2: An Epsilon Accurate Exponential Approximation
        3.1.3: exp_2: Operation Sequence and Zero Order Forward Mode
        3.1: Second Order Exponential Approximation
        3: An Introduction by Example to Algorithmic Differentiation
        2.2.2.4: ColPack: Sparse Hessian Example and Test
        2.2.2.3: ColPack: Sparse Hessian Example and Test
        2.2.2.2: ColPack: Sparse Jacobian Example and Test
        2.2.2.1: ColPack: Sparse Jacobian Example and Test
        2.2.2.d: Including the ColPack Sparsity Calculations: Example
        c: cppad-20171217: A Package for Differentiation of C++ Algorithms: Example
examples 12.8.13.e.a: Autotools Unix Test and Installation: make.Examples and Tests
         10.3.2: Run the Speed Examples
         10.3.1: CppAD Examples and Tests
         10.4: List All (Except Deprecated) CppAD Examples
         10.3: Utility Routines used by CppAD Examples
         10.2: General Examples
         10.c: Examples: Running Examples
         10: Examples
         7.2: Run Multi-Threading Examples and Speed Tests
         5.7.e: Optimize an ADFun Object Tape: Examples
         5.2.6.k: Reverse Mode Second Partial Derivative Driver: Examples
         5.2.5.k: Forward Mode Second Partial Derivative Driver: Examples
         4.7.3.b.d: Base Type Requirements for Identically Equal Comparisons: Identical.Examples
         4.4.7.2.9.f: Atomic Reverse Hessian Sparsity Patterns: Examples
         4.4.7.2.8.e: Atomic Forward Hessian Sparsity Patterns: Examples
         4.4.7.2.7.f: Atomic Reverse Jacobian Sparsity Patterns: Examples
         4.4.7.2.6.f: Atomic Forward Jacobian Sparsity Patterns: Examples
         4.4.7.2.5.k: Atomic Reverse Mode: Examples
         4.4.7.2.4.l: Atomic Forward Mode: Examples
         4.4.7.2.3.g: Using AD Version of Atomic Function: Examples
         4.4.7.2.e: User Defined Atomic AD Functions: Examples
         2.3: Checking the CppAD Examples and Tests
         2.2.5.c: Including the cppad_ipopt Library and Tests: Examples and Tests
         2.2.3.c: Including the Eigen Examples and Tests: Examples
         2.2.3: Including the Eigen Examples and Tests
         2.2.1.c: Including the ADOL-C Examples and Tests: Examples
         2.2.1: Including the ADOL-C Examples and Tests
exception 8.1: Replacing the CppAD Error Handler
          2.2.s.a: Using CMake to Configure CppAD: cppad_debug_which.Exception
exceptions 12.1.e: Frequently Asked Questions and Answers: Exceptions
           4.6.d.a: AD Vectors that Record Index Operations: VecAD<Base>::reference.Exceptions
execute 12.9.7: Determine Amount of Time to Execute det_by_minor
        8.5: Determine Amount of Time to Execute a Test
execution 12.8.6.2: Is The Current Execution in OpenMP Parallel Mode
          8.23.4: Is The Current Execution in Parallel Mode
          8.1.2: CppAD Assertions During Execution
          7: Using CppAD in a Multi-Threading Environment
exercise 8.22.p: The CppAD::vector Template Class: Exercise
         8.9.m: Definition of a Simple Vector: Exercise
         8.7.h: Definition of a Numeric Type: Exercise
         8.2.k: Determine if Two Values Are Nearly Equal: Exercise
exercises 10.1.f: Getting Started Using CppAD to Compute Derivatives: Exercises
          3.2.8.b: exp_eps: CppAD Forward and Reverse Sweeps: Exercises
          3.2.7.l: exp_eps: Second Order Reverse Sweep: Exercises
          3.2.6.g: exp_eps: Second Order Forward Mode: Exercises
          3.2.5.l: exp_eps: First Order Reverse Sweep: Exercises
          3.2.4.f: exp_eps: First Order Forward Sweep: Exercises
          3.2.3.f: exp_eps: Operation Sequence and Zero Order Forward Sweep: Exercises
          3.2.k: An Epsilon Accurate Exponential Approximation: Exercises
          3.1.8.b: exp_2: CppAD Forward and Reverse Sweeps: Exercises
          3.1.7.i: exp_2: Second Order Reverse Mode: Exercises
          3.1.6.g: exp_2: Second Order Forward Mode: Exercises
          3.1.5.i: exp_2: First Order Reverse Mode: Exercises
          3.1.4.g: exp_2: First Order Forward Mode: Exercises
          3.1.3.f: exp_2: Operation Sequence and Zero Order Forward Mode: Exercises
          3.1.k: Second Order Exponential Approximation: Exercises
exp 12.3.2.1: Exponential Function Reverse Mode Theory
    12.3.1.1: Exponential Function Forward Mode Theory
    8.1.2.g: CppAD Assertions During Execution: Exp
    8.1.i: Replacing the CppAD Error Handler: exp
     4.4.2.19.1: The AD expm1 Function: Example and Test
    4.4.2.6.1: The AD exp Function: Example and Test
    4.4.2.6: The Exponential Function: exp
exp_2 3.2.6.1: exp_eps: Verify Second Order Forward Sweep
      3.2.4.1: exp_eps: Verify First Order Forward Sweep
      3.1.7.1: exp_2: Verify Second Order Reverse Sweep
      3.1.6.1: exp_2: Verify Second Order Forward Sweep
      3.1.5.1: exp_2: Verify First Order Reverse Sweep
      3.1.4.1: exp_2: Verify First Order Forward Sweep
      3.1.3.1: exp_2: Verify Zero Order Forward Sweep
      3.1.1: exp_2: Implementation
      3.1: Second Order Exponential Approximation
exp_2: 3.1.8: exp_2: CppAD Forward and Reverse Sweeps
       3.1.7.1: exp_2: Verify Second Order Reverse Sweep
       3.1.6.1: exp_2: Verify Second Order Forward Sweep
       3.1.5.1: exp_2: Verify First Order Reverse Sweep
       3.1.4.1: exp_2: Verify First Order Forward Sweep
       3.1.3.1: exp_2: Verify Zero Order Forward Sweep
       3.1.7: exp_2: Second Order Reverse Mode
       3.1.6: exp_2: Second Order Forward Mode
       3.1.5: exp_2: First Order Reverse Mode
       3.1.4: exp_2: First Order Forward Mode
       3.1.3: exp_2: Operation Sequence and Zero Order Forward Mode
       3.1.2: exp_2: Test
       3.1.1: exp_2: Implementation
exp_apx 3.3: Correctness Tests For Exponential Approximation in Introduction
exp_eps 3.2.7.1: exp_eps: Verify Second Order Reverse Sweep
        3.2.5.1: exp_eps: Verify First Order Reverse Sweep
        3.2.3.1: exp_eps: Verify Zero Order Forward Sweep
        3.2.2: exp_eps: Test of exp_eps
        3.2.1: exp_eps: Implementation
        3.2: An Epsilon Accurate Exponential Approximation
exp_eps: 3.2.8: exp_eps: CppAD Forward and Reverse Sweeps
         3.2.7.1: exp_eps: Verify Second Order Reverse Sweep
         3.2.6.1: exp_eps: Verify Second Order Forward Sweep
         3.2.5.1: exp_eps: Verify First Order Reverse Sweep
         3.2.4.1: exp_eps: Verify First Order Forward Sweep
         3.2.3.1: exp_eps: Verify Zero Order Forward Sweep
         3.2.7: exp_eps: Second Order Reverse Sweep
         3.2.6: exp_eps: Second Order Forward Mode
         3.2.5: exp_eps: First Order Reverse Sweep
         3.2.4: exp_eps: First Order Forward Sweep
         3.2.3: exp_eps: Operation Sequence and Zero Order Forward Sweep
         3.2.2: exp_eps: Test of exp_eps
         3.2.1: exp_eps: Implementation
expansion 12.9.2: Compute Determinant using Expansion by Minors
          11.7.1: Sacado Speed: Gradient of Determinant by Minor Expansion
          11.6.1: Fadbad Speed: Gradient of Determinant by Minor Expansion
          11.5.1: CppAD Speed: Gradient of Determinant by Minor Expansion
          11.4.1: Adolc Speed: Gradient of Determinant by Minor Expansion
          11.3.1: Double Speed: Determinant by Minor Expansion
          11.2.3.1: Determinant Using Expansion by Minors: Example and Test
          11.2.3: Determinant Using Expansion by Minors
          11.2.1: Determinant Using Expansion by Lu Factorization
          11.1.2: Speed Testing Gradient of Determinant by Minor Expansion
          10.2.8: Gradient of Determinant Using Expansion by Minors: Example and Test
          10.2.5: Gradient of Determinant Using Expansion by Minors: Example and Test
          3.2.6.a: exp_eps: Second Order Forward Mode: Second Order Expansion
          3.2.4.a: exp_eps: First Order Forward Sweep: First Order Expansion
          3.1.6.a: exp_2: Second Order Forward Mode: Second Order Expansion
          3.1.4.a: exp_2: First Order Forward Mode: First Order Expansion
          3.1.3.b: exp_2: Operation Sequence and Zero Order Forward Mode: Zero Order Expansion
explicit 4.1.c.b: AD Constructors: x.explicit
         2.2: Using CMake to Configure CppAD
expm1 12.3.2.1: Exponential Function Reverse Mode Theory
      12.3.1.1: Exponential Function Forward Mode Theory
      4.7.9.3.l: Enable use of AD<Base> where Base is Adolc's adouble Type: erf, asinh, acosh, atanh, expm1, log1p
      4.7.9.1.p: Example AD<Base> Where Base Constructor Allocates Memory: erf, asinh, acosh, atanh, expm1, log1p
      4.7.5.d: Base Type Requirements for Standard Math Functions: erf, asinh, acosh, atanh, expm1, log1p
      4.4.2.19: The Exponential Function Minus One: expm1
exponent 8.12: The Integer Power Function
         4.4.3.2: The AD Power Function
exponent: 8.12.1: The Pow Integer Exponent: Example and Test
exponential 12.3.2.1: Exponential Function Reverse Mode Theory
            12.3.1.1: Exponential Function Forward Mode Theory
            4.4.2.19: The Exponential Function Minus One: expm1
             4.4.2.7: The Logarithm Function: log
            4.4.2.6: The Exponential Function: exp
            3.3: Correctness Tests For Exponential Approximation in Introduction
            3.2: An Epsilon Accurate Exponential Approximation
            3.1: Second Order Exponential Approximation
expression 5.3.9.1: Number of Variables That Can be Skipped: Example and Test
expressions 5.7.6: Example Optimization and Nested Conditional Expressions
            5.7.5: Example Optimization and Conditional Expressions
            4.7.2: Base Type Requirements for Conditional Expressions
            4.4.4: AD Conditional Expressions
expressions: 4.4.4.1: Conditional Expressions: Example and Test
extended 4.4.7.1.4: Checkpointing an Extended ODE Solver: Example and Test
extending 4.7.7: Extending to_string To Another Floating Point Type
external 2.2.6.1.d: Download and Install Sacado in Build Directory: External Directory
         2.2.5.1.d: Download and Install Ipopt in Build Directory: External Directory
         2.2.4.1.d: Download and Install Fadbad in Build Directory: External Directory
         2.2.3.1.d: Download and Install Eigen in Build Directory: External Directory
         2.2.2.5.d: Download and Install ColPack in Build Directory: External Directory
         2.2.1.1.e: Download and Install Adolc in Build Directory: External Directory
extra 12.8.11.5.1.d: Define Matrix Multiply as a User Atomic Operation: Extra Call Information
      8.23.8.b.a: Free Memory Currently Available for Quick Use by a Thread: Purpose.Extra Memory
extraction 2.1.i: Download The CppAD Source Code: Windows File Extraction and Testing
F
ForSparseHes 5.5.8.1: Forward Mode Hessian Sparsity: Example and Test
ForSparseJac 5.5.2.1: Forward Mode Jacobian Sparsity: Example and Test
Forward 5.3.8: Controlling Taylor Coefficients Memory Allocation
FunCheck 5.9.1: ADFun Check and Re-Tape: Example and Test
f(x) 5.8.1.f.b: Create An Abs-normal Representation of a Function: Abs-normal Approximation.Approximating f(x)
f.forward(0, x) 4.3.6.c: Printing AD Values During Forward Mode: f.Forward(0, x)
f_1 3.2.7.j: exp_eps: Second Order Reverse Sweep: Index 2: f_1
    3.2.5.j: exp_eps: First Order Reverse Sweep: Index 2: f_1
    3.1.7.g: exp_2: Second Order Reverse Mode: Index 2: f_1
    3.1.5.g: exp_2: First Order Reverse Mode: Index 2: f_1
f_2 3.2.7.i: exp_eps: Second Order Reverse Sweep: Index 3: f_2
    3.2.5.i: exp_eps: First Order Reverse Sweep: Index 3: f_2
    3.1.7.f: exp_2: Second Order Reverse Mode: Index 3: f_2
    3.1.5.f: exp_2: First Order Reverse Mode: Index 3: f_2
f_3 3.2.7.h: exp_eps: Second Order Reverse Sweep: Index 4: f_3
    3.2.5.h: exp_eps: First Order Reverse Sweep: Index 4: f_3
    3.1.7.e: exp_2: Second Order Reverse Mode: Index 4: f_3
    3.1.5.e: exp_2: First Order Reverse Mode: Index 4: f_3
f_4 3.2.7.g: exp_eps: Second Order Reverse Sweep: Index 5: f_4
    3.2.5.g: exp_eps: First Order Reverse Sweep: Index 5: f_4
    3.1.7.d: exp_2: Second Order Reverse Mode: Index 5: f_4
    3.1.5.d: exp_2: First Order Reverse Mode: Index 5: f_4
f_5 3.2.7.f: exp_eps: Second Order Reverse Sweep: Index 6: f_5
    3.2.5.f: exp_eps: First Order Reverse Sweep: Index 6: f_5
    3.1.7.c: exp_2: Second Order Reverse Mode: f_5
    3.1.5.c: exp_2: First Order Reverse Mode: f_5
f_6 3.2.7.e: exp_eps: Second Order Reverse Sweep: Index 7: f_6
    3.2.5.e: exp_eps: First Order Reverse Sweep: Index 7: f_6
f_7 3.2.7.d: exp_eps: Second Order Reverse Sweep: f_7
    3.2.5.d: exp_eps: First Order Reverse Sweep: f_7
f_t 8.18.e.d: A 3rd and 4th Order Rosenbrock ODE Solver: Fun.f_t
f_x 8.21.f.d: An Error Controller for Gear's Ode Solvers: Fun.f_x
    8.20.d.d: An Arbitrary Order Gear Method: Fun.f_x
    8.18.e.e: A 3rd and 4th Order Rosenbrock ODE Solver: Fun.f_x
fabs 11.2.7.d.b: Evaluate a Function Defined in Terms of an ODE: Float.fabs
     8.17.l.a: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: Scalar.fabs
     4.4.2.14.1: AD Absolute Value Function: Example and Test
     4.4.2.14: AD Absolute Value Functions: abs, fabs
factor 12.10.3.h.e: LU Factorization of A Square Matrix and Stability Calculation: LU.Factor
       11.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization
       11.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization
       11.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization
       11.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization
       11.3.2: Double Speed: Determinant Using Lu Factorization
       11.2.1: Determinant Using Expansion by Lu Factorization
       10.3.3.1: Lu Factor and Solve With Recorded Pivoting: Example and Test
       10.3.3: Lu Factor and Solve with Recorded Pivoting
       8.14.2.h.e: LU Factorization of A Square Matrix: LU.Factor
       8.14.1.d: Compute Determinant and Solve Linear Equations: Factor and Invert
       8.14: Compute Determinants and Solve Equations by LU Factorization
       4.4.7.2.18.1.b.a: AD Theory for Cholesky Factorization: Notation.Cholesky Factor
factored 8.14.3: Invert an LU Factored Equation
factorization 12.10.3: LU Factorization of A Square Matrix and Stability Calculation
              11.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization
              11.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization
              11.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization
              11.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization
              11.3.2: Double Speed: Determinant Using Lu Factorization
              11.2.1: Determinant Using Expansion by Lu Factorization
              11.1.1: Speed Testing Gradient of Determinant Using Lu Factorization
              8.14.2: LU Factorization of A Square Matrix
              8.14: Compute Determinants and Solve Equations by LU Factorization
              4.4.7.2.18.2: Atomic Eigen Cholesky Factorization Class
              4.4.7.2.18.1: AD Theory for Cholesky Factorization
factorization: 11.2.1.1: Determinant Using Lu Factorization: Example and Test
               10.2.9: Gradient of Determinant Using Lu Factorization: Example and Test
               10.2.6: Gradient of Determinant Using LU Factorization: Example and Test
               4.4.7.2.18: Atomic Eigen Cholesky Factorization: Example and Test
fadbad 11.6.7: fadbad Speed: sparse_jacobian
       11.6.6: Fadbad Speed: Sparse Hessian
       11.6.5: Fadbad Speed: Second Derivative of a Polynomial
       11.6.4: Fadbad Speed: Ode
       11.6.3: Fadbad Speed: Matrix Multiplication
       11.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization
       11.6.1: Fadbad Speed: Gradient of Determinant by Minor Expansion
       11.6: Speed Test Derivatives Using Fadbad
       2.2.4.1: Download and Install Fadbad in Build Directory
       2.2.4: Including the FADBAD Speed Tests
       2.2.n.a: Using CMake to Configure CppAD: cppad_profile_flag.Eigen and Fadbad
     download and install 2.2.4.1: Download and Install Fadbad in Build Directory
fadbad_dir 12.8.13.q: Autotools Unix Test and Installation: fadbad_dir
fadbad_prefix 11.6.b: Speed Test Derivatives Using Fadbad: fadbad_prefix
              2.2.4.b: Including the FADBAD Speed Tests: fadbad_prefix
false 5.5.6.i.a: Hessian Sparsity Pattern: Reverse Mode: h.transpose false
      5.5.4.i.a: Jacobian Sparsity Pattern: Reverse Mode: s.transpose false
      5.5.4.h.a: Jacobian Sparsity Pattern: Reverse Mode: r.transpose false
      5.5.2.i.a: Jacobian Sparsity Pattern: Forward Mode: s.transpose false
      5.5.2.h.a: Jacobian Sparsity Pattern: Forward Mode: r.transpose false
      4.4.2.20.d.b: The Logarithm of One Plus Argument: log1p: CPPAD_USE_CPLUSPLUS_2011.false
      4.4.2.19.d.b: The Exponential Function Minus One: expm1: CPPAD_USE_CPLUSPLUS_2011.false
      4.4.2.18.d.b: The Error Function: CPPAD_USE_CPLUSPLUS_2011.false
      4.4.2.17.d.b: The Inverse Hyperbolic Tangent Function: atanh: CPPAD_USE_CPLUSPLUS_2011.false
      4.4.2.16.d.b: The Inverse Hyperbolic Sine Function: asinh: CPPAD_USE_CPLUSPLUS_2011.false
      4.4.2.15.d.b: The Inverse Hyperbolic Cosine Function: acosh: CPPAD_USE_CPLUSPLUS_2011.false
fast 12.8.10.3: Speed Test for Both Simple and Fast Representations
     12.8.10.2.5: Correctness Check for Both Simple and Fast Representations
     12.8.10.2.3.1: ODE Fitting Using Fast Representation
     12.8.10.2.3: ODE Fitting Using Fast Representation
     8.23.1: Fast Multi-Threading Memory Allocator: Example and Test
     8.23: A Fast Multi-Threading Memory Allocator
faster 4.4.7.1.c.b: Checkpointing Functions: Purpose.Faster Recording
features 12.8: CppAD Deprecated API Features
fg 9.l.c: Use Ipopt to Solve a Nonlinear Programming Problem: fg_eval.fg
fg(x) 12.8.10.f: Nonlinear Programming Using the CppAD Interface to Ipopt: fg(x)
fg_eval 9.l: Use Ipopt to Solve a Nonlinear Programming Problem: fg_eval
fg_info 12.8.10.s: Nonlinear Programming Using the CppAD Interface to Ipopt: fg_info
fg_info.domain_size 12.8.10.s.d: Nonlinear Programming Using the CppAD Interface to Ipopt: fg_info.fg_info.domain_size
fg_info.eval_r 12.8.10.s.b: Nonlinear Programming Using the CppAD Interface to Ipopt: fg_info.fg_info.eval_r
fg_info.index 12.8.10.s.g: Nonlinear Programming Using the CppAD Interface to Ipopt: fg_info.fg_info.index
fg_info.number_functions 12.8.10.s.a: Nonlinear Programming Using the CppAD Interface to Ipopt: fg_info.fg_info.number_functions
fg_info.number_terms 12.8.10.s.f: Nonlinear Programming Using the CppAD Interface to Ipopt: fg_info.fg_info.number_terms
fg_info.range_size 12.8.10.s.e: Nonlinear Programming Using the CppAD Interface to Ipopt: fg_info.fg_info.range_size
fg_info.retape 12.8.10.s.c: Nonlinear Programming Using the CppAD Interface to Ipopt: fg_info.fg_info.retape
fields 2.4.c: CppAD pkg-config Files: Defined Fields
file 12.8.11.5.b: Old Matrix Multiply as a User Atomic Operation: Example and Test: Include File
     12.8.5.e: Routines That Track Use of New and Delete: file
     9.c: Use Ipopt to Solve a Nonlinear Programming Problem: Include File
     8.2.1.a: NearEqual Function: Example and Test: File Name
     8.1.h: Replacing the CppAD Error Handler: file
     7.2.10.a: Multi-Threaded Newton Method Example / Test: Source File
     7.2.9.a: Multi-Threading User Atomic Example / Test: Source File
     7.2.8.a: Multi-Threading Harmonic Summation Example / Test: Source File
     5.10.g.b: Check an ADFun Object For Nan Results: get_check_for_nan.file
     4.7.9.1.b: Example AD<Base> Where Base Constructor Allocates Memory: Include File
     2.1.i: Download The CppAD Source Code: Windows File Extraction and Testing
     d: cppad-20171217: A Package for Differentiation of C++ Algorithms: Include File
file_name 5.10.f.b: Check an ADFun Object For Nan Results: Error Message.file_name
files 12.11.c: CppAD Addons: Library Files
      12.11.b: CppAD Addons: Include Files
      12.8.1: Deprecated Include Files
      10.2.4.d: Enable Use of Eigen Linear Algebra Package with CppAD: Include Files
      8.2.i: Determine if Two Values Are Nearly Equal: Include Files
      4.7.9.3.c: Enable use of AD<Base> where Base is Adolc's adouble Type: Include Files
      2.4.d: CppAD pkg-config Files: CppAD Configuration Files
      2.4: CppAD pkg-config Files
first 10.2.10.c.a: Using Multiple Levels of AD: Procedure.First Start AD<double>
      8.4.f: Run One Speed Test and Print Results: first
      5.8.3: abs_normal: Evaluate First Order Approximation
      5.4.3.h: Any Order Reverse Mode: First Order
      5.4.2.g.a: Second Order Reverse Mode: dw.First Order Partials
      5.4.1.1: First Order Reverse Mode: Example and Test
      5.4.1: First Order Reverse Mode
      5.3.4.n: Multiple Order Forward Mode: First Order
      5.3.2: First Order Forward Mode: Derivative Values
      5.2.4.1: First Order Derivative Driver: Example and Test
      5.2.4: First Order Derivative: Driver Routine
      5.2.3.1: First Order Partial Driver: Example and Test
      5.2.3: First Order Partial Derivative: Driver Routine
      5.2.1: Jacobian: Driver Routine
      5.2: First and Second Order Derivatives: Easy Drivers
      3.2.6.1: exp_eps: Verify Second Order Forward Sweep
      3.2.5.1: exp_eps: Verify First Order Reverse Sweep
      3.2.4.1: exp_eps: Verify First Order Forward Sweep
      3.2.6.d.d: exp_eps: Second Order Forward Mode: Operation Sequence.First
      3.2.5: exp_eps: First Order Reverse Sweep
      3.2.4.c.e: exp_eps: First Order Forward Sweep: Operation Sequence.First Order
      3.2.4.a: exp_eps: First Order Forward Sweep: First Order Expansion
      3.2.4: exp_eps: First Order Forward Sweep
      3.1.5.1: exp_2: Verify First Order Reverse Sweep
      3.1.4.1: exp_2: Verify First Order Forward Sweep
      3.1.6.d.d: exp_2: Second Order Forward Mode: Operation Sequence.First
      3.1.5: exp_2: First Order Reverse Mode
      3.1.4.d.e: exp_2: First Order Forward Mode: Operation Sequence.First Order
      3.1.4.a: exp_2: First Order Forward Mode: First Order Expansion
      3.1.4: exp_2: First Order Forward Mode
      2.3.d: Checking the CppAD Examples and Tests: First Level
fitting 12.8.10.2.3.1: ODE Fitting Using Fast Representation
        12.8.10.2.2.1: ODE Fitting Using Simple Representation
        12.8.10.2.3: ODE Fitting Using Fast Representation
        12.8.10.2.2: ODE Fitting Using Simple Representation
flag 12.9.4.b: Correctness Test of det_by_minor Routine: flag
     12.8.7.f: Memory Leak Detection: flag
     12.8.6.11.f: Check If A Memory Allocation is Efficient for Another Use: flag
     12.8.6.2.d: Is The Current Execution in OpenMP Parallel Mode: flag
     8.23.4.c: Is The Current Execution in Parallel Mode: flag
flags 2.2: Using CMake to Configure CppAD
float 12.8.8.d: Machine Epsilon For AD Types: Float
      11.2.9.d: Evaluate a Function That Has a Sparse Hessian: Float
      11.2.8.d: Evaluate a Function That Has a Sparse Jacobian: Float
      11.2.7.d: Evaluate a Function Defined in Terms of an ODE: Float
      8.25.e.b: Convert Certain Types to a String: s.Float
      8.25.d.b: Convert Certain Types to a String: value.Float
      8.16.l: Multi-dimensional Romberg Integration: Float
      8.15.k: One Dimensional Romberg Integration: Float
      8.14.2.k: LU Factorization of A Square Matrix: Float
      8.14.1.m: Compute Determinant and Solve Linear Equations: Float
      4.7.9.4: Enable use of AD<Base> where Base is float
      4.4.6.d: Numeric Limits For an AD and Base Types: Float
floating 4.7.7: Extending to_string To Another Floating Point Type
floatvector 11.2.9.e: Evaluate a Function That Has a Sparse Hessian: FloatVector
            11.2.8.e: Evaluate a Function That Has a Sparse Jacobian: FloatVector
            8.16.m: Multi-dimensional Romberg Integration: FloatVector
            8.14.2.j: LU Factorization of A Square Matrix: FloatVector
            8.14.1.n: Compute Determinant and Solve Linear Equations: FloatVector
for_jac_sparse 12.8.11.p: User Defined Atomic AD Functions: for_jac_sparse
for_sparse_hes 4.4.7.2.16.1.g.f: Atomic Eigen Matrix Multiply Class: Private.for_sparse_hes
               4.4.7.2.14.k.e: Atomic Sparsity with Set Patterns: Example and Test: Test Atomic Function.for_sparse_hes
               4.4.7.2.14.h: Atomic Sparsity with Set Patterns: Example and Test: for_sparse_hes
               4.4.7.2.8.1.h: Atomic Forward Hessian Sparsity: Example and Test: for_sparse_hes
for_sparse_jac 4.4.7.2.19.1.l: Matrix Multiply as an Atomic Operation: for_sparse_jac
               4.4.7.2.19.c.f: User Atomic Matrix Multiply: Example and Test: Use Atomic Function.for_sparse_jac
               4.4.7.2.16.1.g.d: Atomic Eigen Matrix Multiply Class: Private.for_sparse_jac
               4.4.7.2.15.k.e: Tan and Tanh as User Atomic Operations: Example and Test: Use Atomic Function.for_sparse_jac
               4.4.7.2.15.g: Tan and Tanh as User Atomic Operations: Example and Test: for_sparse_jac
               4.4.7.2.14.k.c: Atomic Sparsity with Set Patterns: Example and Test: Test Atomic Function.for_sparse_jac
               4.4.7.2.14.f: Atomic Sparsity with Set Patterns: Example and Test: for_sparse_jac
               4.4.7.2.13.k.e: Reciprocal as an Atomic Operation: Example and Test: Use Atomic Function.for_sparse_jac
               4.4.7.2.13.g: Reciprocal as an Atomic Operation: Example and Test: for_sparse_jac
               4.4.7.2.12.k.e: Atomic Euclidean Norm Squared: Example and Test: Use Atomic Function.for_sparse_jac
               4.4.7.2.12.g: Atomic Euclidean Norm Squared: Example and Test: for_sparse_jac
               4.4.7.2.9.1.f: Atomic Reverse Hessian Sparsity: Example and Test: for_sparse_jac
               4.4.7.2.8.1.f: Atomic Forward Hessian Sparsity: Example and Test: for_sparse_jac
               4.4.7.2.6.1.f: Atomic Forward Jacobian Sparsity: Example and Test: for_sparse_jac
form 3.2.7.b: exp_eps: Second Order Reverse Sweep: Mathematical Form
     3.2.6.c: exp_eps: Second Order Forward Mode: Mathematical Form
     3.2.5.b: exp_eps: First Order Reverse Sweep: Mathematical Form
     3.2.4.b: exp_eps: First Order Forward Sweep: Mathematical Form
     3.2.3.a: exp_eps: Operation Sequence and Zero Order Forward Sweep: Mathematical Form
     3.1.7.b: exp_2: Second Order Reverse Mode: Mathematical Form
     3.1.6.c: exp_2: Second Order Forward Mode: Mathematical Form
     3.1.5.b: exp_2: First Order Reverse Mode: Mathematical Form
     3.1.4.c: exp_2: First Order Forward Mode: Mathematical Form
     3.1.3.a: exp_2: Operation Sequence and Zero Order Forward Mode: Mathematical Form
     3.1.c: Second Order Exponential Approximation: Mathematical Form
formula 12.3.1.c.b: The Theory of Forward Mode: Standard Math Functions.Taylor Coefficients Recursion Formula
forone 5.2.3.h: First Order Partial Derivative: Driver Routine: ForOne Uses Forward
forsparsejac 5.5.6.2.b: Sparsity Patterns For a Subset of Variables: Example and Test: ForSparseJac
fortwo 5.2.5.j: Forward Mode Second Partial Derivative Driver: ForTwo Uses Forward
forward 12.8.11.n: User Defined Atomic AD Functions: forward
        12.8.11.l.a: User Defined Atomic AD Functions: ty.forward
        12.8.10.2.1.b: An ODE Inverse Problem Example: Forward Problem
        12.8.10.2: Example Simultaneous Solution of Forward and Inverse Problem
        12.8.3: Comparison Changes During Zero Order Forward Mode
        12.6.l: The CppAD Wish List: Forward Mode Recomputation
        12.3.1.9: Error Function Forward Taylor Polynomial Theory
        12.3.1.8: Tangent and Hyperbolic Tangent Forward Taylor Polynomial Theory
        12.3.1.7: Inverse Cosine and Hyperbolic Cosine Forward Mode Theory
        12.3.1.6: Inverse Sine and Hyperbolic Sine Forward Mode Theory
        12.3.1.5: Inverse Tangent and Hyperbolic Tangent Forward Mode Theory
        12.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
        12.3.1.3: Square Root Function Forward Mode Theory
        12.3.1.2: Logarithm Function Forward Mode Theory
        12.3.1.1: Exponential Function Forward Mode Theory
        12.3.1: The Theory of Forward Mode
        12.1.h: Frequently Asked Questions and Answers: Mode: Forward or Reverse
        10.2.14.d: Taylor's Ode Solver: An Example and Test: Forward Mode
        9.3.b: ODE Inverse Problem Definitions: Source Code: Forward Problem
        5.9.k: Check an ADFun Sequence of Operations: FunCheck Uses Forward
        5.7.4: Example Optimization and Print Forward Operators
        5.7.1: Example Optimization and Forward Activity Analysis
        5.6.5.i: Compute Sparse Jacobians Using Subgraphs: Uses Forward
        5.6.4.n: Sparse Hessian: Uses Forward
        5.6.3.m: Computing Sparse Hessians: Uses Forward
        5.6.2.m: Sparse Jacobian: Uses Forward
        5.6.1.1: Computing Sparse Jacobian Using Forward Mode: Example and Test
        5.6.1.o: Computing Sparse Jacobians: Uses Forward
        5.5.8.1: Forward Mode Hessian Sparsity: Example and Test
        5.5.8: Hessian Sparsity Pattern: Forward Mode
        5.5.7.1: Forward Mode Hessian Sparsity: Example and Test
        5.5.7: Forward Mode Hessian Sparsity Patterns
        5.5.2.1: Forward Mode Jacobian Sparsity: Example and Test
        5.5.2: Jacobian Sparsity Pattern: Forward Mode
        5.5.1.1: Forward Mode Jacobian Sparsity: Example and Test
        5.5.1: Forward Mode Jacobian Sparsity Patterns
        5.3.7: Comparison Changes Between Taping and Zero Order Forward
        5.3.6.f: Number Taylor Coefficient Orders Currently Stored: Forward
        5.3.5.1: Forward Mode: Example and Test of Multiple Directions
        5.3.5: Multiple Directions Forward Mode
        5.3.4.2: Forward Mode: Example and Test of Multiple Orders
        5.3.4.1: Forward Mode: Example and Test
        5.3.4: Multiple Order Forward Mode
        5.3.3: Second Order Forward Mode: Derivative Values
        5.3.2: First Order Forward Mode: Derivative Values
        5.3.1: Zero Order Forward Mode: Function Values
        5.2.6.j: Reverse Mode Second Partial Derivative Driver: RevTwo Uses Forward
        5.2.5.j: Forward Mode Second Partial Derivative Driver: ForTwo Uses Forward
        5.2.5: Forward Mode Second Partial Derivative Driver
        5.2.4.h: First Order Derivative: Driver Routine: RevOne Uses Forward
        5.2.3.h: First Order Partial Derivative: Driver Routine: ForOne Uses Forward
        5.2.2.i: Hessian: Easy Driver: Hessian Uses Forward
        5.2.1.g: Jacobian: Driver Routine: Forward or Reverse
        5.1.3.h: Stop Recording and Store Operation Sequence: Forward
        5.3: Forward Mode
        4.4.7.2.19.1.j: Matrix Multiply as an Atomic Operation: forward
        4.4.7.2.19.1.h: Matrix Multiply as an Atomic Operation: Forward Matrix Multiply
        4.4.7.2.19.c.c: User Atomic Matrix Multiply: Example and Test: Use Atomic Function.forward
        4.4.7.2.18.2.d.b: Atomic Eigen Cholesky Factorization Class: Private.forward
        4.4.7.2.18.1.c: AD Theory for Cholesky Factorization: Forward Mode
        4.4.7.2.17.1.f.b: Atomic Eigen Matrix Inversion Class: Private.forward
        4.4.7.2.17.1.c.a: Atomic Eigen Matrix Inversion Class: Theory.Forward
        4.4.7.2.16.1.g.b: Atomic Eigen Matrix Multiply Class: Private.forward
        4.4.7.2.16.1.d.a: Atomic Eigen Matrix Multiply Class: Theory.Forward
        4.4.7.2.15.k.c: Tan and Tanh as User Atomic Operations: Example and Test: Use Atomic Function.forward
        4.4.7.2.15.e: Tan and Tanh as User Atomic Operations: Example and Test: forward
        4.4.7.2.14.e: Atomic Sparsity with Set Patterns: Example and Test: forward
        4.4.7.2.13.k.c: Reciprocal as an Atomic Operation: Example and Test: Use Atomic Function.forward
        4.4.7.2.13.e: Reciprocal as an Atomic Operation: Example and Test: forward
        4.4.7.2.12.k.c: Atomic Euclidean Norm Squared: Example and Test: Use Atomic Function.forward
        4.4.7.2.12.e: Atomic Euclidean Norm Squared: Example and Test: forward
        4.4.7.2.11.f.c: Getting Started with Atomic Operations: Example and Test: Use Atomic Function.forward
        4.4.7.2.11.d: Getting Started with Atomic Operations: Example and Test: forward
        4.4.7.2.9.1.e: Atomic Reverse Hessian Sparsity: Example and Test: forward
        4.4.7.2.8.1.e: Atomic Forward Hessian Sparsity: Example and Test: forward
        4.4.7.2.8.1: Atomic Forward Hessian Sparsity: Example and Test
        4.4.7.2.7.1.e: Atomic Reverse Jacobian Sparsity: Example and Test: forward
        4.4.7.2.6.1.e: Atomic Forward Jacobian Sparsity: Example and Test: forward
        4.4.7.2.6.1: Atomic Forward Jacobian Sparsity: Example and Test
        4.4.7.2.5.1.e: Atomic Reverse: Example and Test: forward
        4.4.7.2.4.1.e: Atomic Forward: Example and Test: forward
        4.4.7.2.8: Atomic Forward Hessian Sparsity Patterns
        4.4.7.2.6: Atomic Forward Jacobian Sparsity Patterns
        4.4.7.2.4: Atomic Forward Mode
        4.4.7.1.c.c: Checkpointing Functions: Purpose.Repeating Forward
        4.3.6.2: Print During Zero Order Forward Mode: Example and Test
        4.3.6.1: Printing During Forward Mode: Example and Test
        4.3.6: Printing AD Values During Forward Mode
        3.2.8: exp_eps: CppAD Forward and Reverse Sweeps
        3.2.6.1: exp_eps: Verify Second Order Forward Sweep
        3.2.4.1: exp_eps: Verify First Order Forward Sweep
        3.2.3.1: exp_eps: Verify Zero Order Forward Sweep
        3.2.6: exp_eps: Second Order Forward Mode
        3.2.4: exp_eps: First Order Forward Sweep
        3.2.3: exp_eps: Operation Sequence and Zero Order Forward Sweep
        3.1.8: exp_2: CppAD Forward and Reverse Sweeps
        3.1.6.1: exp_2: Verify Second Order Forward Sweep
        3.1.4.1: exp_2: Verify First Order Forward Sweep
        3.1.3.1: exp_2: Verify Zero Order Forward Sweep
        3.1.6: exp_2: Second Order Forward Mode
        3.1.4: exp_2: First Order Forward Mode
        3.1.3: exp_2: Operation Sequence and Zero Order Forward Mode
        3.b.b: An Introduction by Example to Algorithmic Differentiation: Preface.Forward Mode
forward: 4.4.7.2.4.1: Atomic Forward: Example and Test
fp 11.2.9.k: Evaluate a Function That Has a Sparse Hessian: fp
   11.2.8.l: Evaluate a Function That Has a Sparse Jacobian: fp
   11.2.7.g: Evaluate a Function Defined in Terms of an ODE: fp
free 12.8.11.b.c: User Defined Atomic AD Functions: Syntax Function.Free Static Memory
     12.8.6.6: Free Memory Currently Available for Quick Use by a Thread
     11.4.8: Adolc Test Utility: Allocate and Free Memory For a Matrix
     8.23.14: Free All Memory That Was Allocated for Use by thread_alloc
     8.23.8: Free Memory Currently Available for Quick Use by a Thread
     4.4.7.2.10: Free Static Variables
free_available 12.8.6.6: Free Memory Currently Available for Quick Use by a Thread
               8.23.9.d: Control When Thread Alloc Retains Memory For Future Use: free_available
               8.23.8: Free Memory Currently Available for Quick Use by a Thread
freeing 5.3.8.d.b: Controlling Taylor Coefficients Memory Allocation: c.Freeing Memory
frequently 12.1: Frequently Asked Questions and Answers
from 8.7.c: Definition of a Numeric Type: Constructor From Integer
     4.3.7: Convert an AD Variable to a Parameter
     4.3.2.1: Convert From AD to Integer: Example and Test
     4.3.2: Convert From AD to Integer
     4.3.1.1: Convert From AD to its Base Type: Example and Test
     4.3.1: Convert From an AD Type to its Base Type
     4.3: Conversion and I/O of AD Objects
fun 12.10.2.h: Jacobian and Hessian of Optimal Values: Fun
    12.10.1.g: Computing Jacobian and Hessian of Bender's Reduced Objective: fun
    8.21.f: An Error Controller for Gear's Ode Solvers: Fun
    8.20.d: An Arbitrary Order Gear Method: Fun
    8.18.e: A 3rd and 4th Order Rosenbrock ODE Solver: Fun
    8.17.f: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: Fun
    7.2.10.5.g: A Multi-Threaded Newton's Method: fun
    4.4.2.d: The Unary Standard Math Functions: fun
fun.dy 12.10.1.g.c: Computing Jacobian and Hessian of Bender's Reduced Objective: fun.fun.dy
fun.ell 12.10.2.h.b: Jacobian and Hessian of Optimal Values: Fun.fun.ell
fun.f 12.10.1.g.a: Computing Jacobian and Hessian of Bender's Reduced Objective: fun.fun.f
fun.h 12.10.1.g.b: Computing Jacobian and Hessian of Bender's Reduced Objective: fun.fun.h
fun.s 12.10.2.h.c: Jacobian and Hessian of Optimal Values: Fun.fun.s
fun.sy 12.10.2.h.d: Jacobian and Hessian of Optimal Values: Fun.fun.sy
fun::ad_vector 12.10.2.h.a: Jacobian and Hessian of Optimal Values: Fun.Fun::ad_vector
funcheck 5.9.k: Check an ADFun Sequence of Operations: FunCheck Uses Forward
function 12.8.11.5.1.j: Define Matrix Multiply as a User Atomic Operation: Declare mat_mul Function
         12.8.11.3: Using AD to Compute Atomic Function Derivatives
         12.8.11.2: Using AD to Compute Atomic Function Derivatives
         12.8.11.t.c: User Defined Atomic AD Functions: Example.Tangent Function
         12.8.11.b.a: User Defined Atomic AD Functions: Syntax Function.Use Function
         12.8.11.b: User Defined Atomic AD Functions: Syntax Function
         12.8.10.2.3.b: ODE Fitting Using Fast Representation: Objective Function
         12.8.10.2.2.c: ODE Fitting Using Simple Representation: Objective Function
         12.4.d: Glossary: Base Function
         12.4.a: Glossary: AD Function
         12.3.2.9: Error Function Reverse Mode Theory
         12.3.2.3: Square Root Function Reverse Mode Theory
         12.3.2.2: Logarithm Function Reverse Mode Theory
         12.3.2.1: Exponential Function Reverse Mode Theory
         12.3.1.9: Error Function Forward Taylor Polynomial Theory
         12.3.1.3: Square Root Function Forward Mode Theory
         12.3.1.2: Logarithm Function Forward Mode Theory
         12.3.1.1: Exponential Function Forward Mode Theory
         11.2.9.k.a: Evaluate a Function That Has a Sparse Hessian: fp.Function
         11.2.9: Evaluate a Function That Has a Sparse Hessian
         11.2.8.l.a: Evaluate a Function That Has a Sparse Jacobian: fp.Function
         11.2.8: Evaluate a Function That Has a Sparse Jacobian
         11.2.7.g.a: Evaluate a Function Defined in Terms of an ODE: fp.Function
         11.2.7: Evaluate a Function Defined in Terms of an ODE
         10.2.10.c.f: Using Multiple Levels of AD: Procedure.Derivatives of Outer Function
         10.2.10.c.e: Using Multiple Levels of AD: Procedure.Outer Function
         10.2.10.c.c: Using Multiple Levels of AD: Procedure.Inner Function
         10.1.b: Getting Started Using CppAD to Compute Derivatives: Function
         8.12: The Integer Power Function
         5.8.1: Create An Abs-normal Representation of a Function
         5.6.4.2.c: Computing Sparse Hessian for a Subset of Variables: Function
         5.5.11.d: Subgraph Dependency Sparsity Patterns: Atomic Function
         5.3.4.b.a: Multiple Order Forward Mode: Purpose.Function Values
         5.3.1: Zero Order Forward Mode: Function Values
         4.4.7.2.19.c: User Atomic Matrix Multiply: Example and Test: Use Atomic Function
         4.4.7.2.18.c: Atomic Eigen Cholesky Factorization: Example and Test: Use Atomic Function
         4.4.7.2.17.c: Atomic Eigen Matrix Inverse: Example and Test: Use Atomic Function
         4.4.7.2.16.c: Atomic Eigen Matrix Multiply: Example and Test: Use Atomic Function
         4.4.7.2.15.k: Tan and Tanh as User Atomic Operations: Example and Test: Use Atomic Function
         4.4.7.2.14.k: Atomic Sparsity with Set Patterns: Example and Test: Test Atomic Function
         4.4.7.2.14.a: Atomic Sparsity with Set Patterns: Example and Test: function
         4.4.7.2.13.k: Reciprocal as an Atomic Operation: Example and Test: Use Atomic Function
         4.4.7.2.12.k: Atomic Euclidean Norm Squared: Example and Test: Use Atomic Function
         4.4.7.2.11.f: Getting Started with Atomic Operations: Example and Test: Use Atomic Function
         4.4.7.2.9.1.i: Atomic Reverse Hessian Sparsity: Example and Test: Use Atomic Function
         4.4.7.2.9.1.b: Atomic Reverse Hessian Sparsity: Example and Test: function
         4.4.7.2.8.1.i: Atomic Forward Hessian Sparsity: Example and Test: Use Atomic Function
         4.4.7.2.8.1.b: Atomic Forward Hessian Sparsity: Example and Test: function
         4.4.7.2.7.1.g: Atomic Reverse Jacobian Sparsity: Example and Test: Use Atomic Function
         4.4.7.2.7.1.b: Atomic Reverse Jacobian Sparsity: Example and Test: function
         4.4.7.2.6.1.g: Atomic Forward Jacobian Sparsity: Example and Test: Use Atomic Function
         4.4.7.2.6.1.b: Atomic Forward Jacobian Sparsity: Example and Test: function
         4.4.7.2.5.1.g: Atomic Reverse: Example and Test: Use Atomic Function
         4.4.7.2.5.1.b: Atomic Reverse: Example and Test: function
         4.4.7.2.4.1.f: Atomic Forward: Example and Test: Use Atomic Function
         4.4.7.2.4.1.b: Atomic Forward: Example and Test: function
         4.4.7.2.3: Using AD Version of Atomic Function
         4.4.7.2.2: Set Atomic Function Options
         4.4.7.2.1: Atomic Function Constructor
         4.4.7.2.e.b: User Defined Atomic AD Functions: Examples.Scalar Function
         4.4.3.2: The AD Power Function
         4.4.3.1: AD Two Argument Inverse Tangent Function
         4.4.2.19: The Exponential Function Minus One: expm1
         4.4.2.18: The Error Function
         3.2.c: An Epsilon Accurate Exponential Approximation: Mathematical Function
function: 8.10.1: The CheckSimpleVector Function: Example and Test
          8.8.1: The CheckNumericType Function: Example and Test
          8.2.1: NearEqual Function: Example and Test
          4.4.3.2.1: The AD Power Function: Example and Test
          4.4.3.1.1: The AD atan2 Function: Example and Test
          4.4.2.21.1: Sign Function: Example and Test
          4.4.2.20.1: The AD log1p Function: Example and Test
          4.4.2.19.1: The AD exp Function: Example and Test
          4.4.2.18.1: The AD erf Function: Example and Test
          4.4.2.17.1: The AD atanh Function: Example and Test
          4.4.2.17: The Inverse Hyperbolic Tangent Function: atanh
          4.4.2.16.1: The AD asinh Function: Example and Test
          4.4.2.16: The Inverse Hyperbolic Sine Function: asinh
          4.4.2.15.1: The AD acosh Function: Example and Test
          4.4.2.15: The Inverse Hyperbolic Cosine Function: acosh
          4.4.2.14.1: AD Absolute Value Function: Example and Test
          4.4.2.13.1: The AD tanh Function: Example and Test
          4.4.2.12.1: The AD tan Function: Example and Test
          4.4.2.11.1: The AD sqrt Function: Example and Test
          4.4.2.10.1: The AD sinh Function: Example and Test
          4.4.2.9.1: The AD sin Function: Example and Test
          4.4.2.8.1: The AD log10 Function: Example and Test
          4.4.2.7.1: The AD log Function: Example and Test
          4.4.2.6.1: The AD exp Function: Example and Test
          4.4.2.5.1: The AD cosh Function: Example and Test
          4.4.2.4.1: The AD cos Function: Example and Test
          4.4.2.3.1: The AD atan Function: Example and Test
          4.4.2.2.1: The AD asin Function: Example and Test
          4.4.2.1.1: The AD acos Function: Example and Test
          4.4.2.13: The Hyperbolic Tangent Function: tanh
          4.4.2.12: The Tangent Function: tan
          4.4.2.11: The Square Root Function: sqrt
          4.4.2.10: The Hyperbolic Sine Function: sinh
          4.4.2.9: The Sine Function: sin
          4.4.2.8: The Base 10 Logarithm Function: log10
          4.4.2.7: The Logarithm Function: log
          4.4.2.6: The Exponential Function: exp
          4.4.2.5: The Hyperbolic Cosine Function: cosh
          4.4.2.4: The Cosine Function: cos
          4.4.2.3: Inverse Tangent Function: atan
          4.4.2.2: Inverse Sine Function: asin
          4.4.2.1: Inverse Cosine Function: acos
functions 12.8.11.5.1.i: Define Matrix Multiply as a User Atomic Operation: CppAD User Atomic Callback Functions
          12.8.11: User Defined Atomic AD Functions
          12.8.2: ADFun Object Deprecated Member Functions
          12.8.c: CppAD Deprecated API Features: Atomic Functions
          12.3.2.c: The Theory of Reverse Mode: Standard Math Functions
          12.3.1.c: The Theory of Forward Mode: Standard Math Functions
          11.3: Speed Test of Functions in Double
          11.1.j: Running the Speed Test Program: Link Functions
          5.8: Abs-normal Representation of Non-Smooth Functions
          5.7.h: Optimize an ADFun Object Tape: Atomic Functions
          4.7.5: Base Type Requirements for Standard Math Functions
          4.7.3.b.c: Base Type Requirements for Identically Equal Comparisons: Identical.Identical Functions
          4.7.1: Required Base Class Member Functions
          4.5.3: AD Boolean Functions
          4.5: Bool Valued Operations and Functions with AD Arguments
          4.4.7.2.c: User Defined Atomic AD Functions: Virtual Functions
          4.4.7.2: User Defined Atomic AD Functions
          4.4.7.1: Checkpointing Functions
          4.4.7: Atomic AD Functions
          4.4.5: Discrete AD Functions
          4.4.3: The Binary Math Functions
          4.4.2: The Unary Standard Math Functions
          4.4: AD Valued Operations and Functions
functions: 4.5.4.1: AD Parameter and Variable Functions: Example and Test
           4.5.3.1: AD Boolean Functions: Example and Test
           4.4.2.14: AD Absolute Value Functions: abs, fabs
future 8.23.9: Control When Thread Alloc Retains Memory For Future Use
       4.4.7.2.10.c: Free Static Variables: Future Use
G
Gear 8.21: An Error Controller for Gear's Ode Solvers
g 12.10.1.h: Computing Jacobian and Hessian of Bender's Reduced Objective: g
  12.8.10.t.e: Nonlinear Programming Using the CppAD Interface to Ipopt: solution.g
  11.2.5.e: Check Gradient of Determinant of 3 by 3 matrix: g
  9.m.e: Use Ipopt to Solve a Nonlinear Programming Problem: solution.g
  5.9.d: Check an ADFun Sequence of Operations: g
  5.8.11.i: Non-Smooth Optimization Using Abs-normal Quadratic Approximations: g
  5.8.10.l: abs_normal: Minimize a Linear Abs-normal Approximation: g
  5.8.9.m: abs_normal: Solve a Quadratic Program With Box Constraints: G
  5.8.9.l: abs_normal: Solve a Quadratic Program With Box Constraints: g
  5.8.8.k: Solve a Quadratic Program Using Interior Point Method: G
  5.8.8.j: Solve a Quadratic Program Using Interior Point Method: g
  5.8.7.i: Non-Smooth Optimization Using Abs-normal Linear Approximations: g
  5.8.6.l: abs_normal: Minimize a Linear Abs-normal Approximation: g
  5.8.3.j: abs_normal: Evaluate First Order Approximation: g
  5.8.1.d: Create An Abs-normal Representation of a Function: g
  4.4.7.2.5.h: Atomic Reverse Mode: G, H
g_hat 5.8.10.m: abs_normal: Minimize a Linear Abs-normal Approximation: g_hat
      5.8.6.m: abs_normal: Minimize a Linear Abs-normal Approximation: g_hat
      5.8.3.k: abs_normal: Evaluate First Order Approximation: g_hat
g_jac 5.8.10.n: abs_normal: Minimize a Linear Abs-normal Approximation: g_jac
      5.8.6.n: abs_normal: Minimize a Linear Abs-normal Approximation: g_jac
      5.8.3.l: abs_normal: Evaluate First Order Approximation: g_jac
g_l 12.8.10.q: Nonlinear Programming Using the CppAD Interface to Ipopt: g_l
g_tilde 5.8.3.n: abs_normal: Evaluate First Order Approximation: g_tilde
g_u 12.8.10.r: Nonlinear Programming Using the CppAD Interface to Ipopt: g_u
gear 8.20: An Arbitrary Order Gear Method
gear's 8.21: An Error Controller for Gear's Ode Solvers
       8.20.o: An Arbitrary Order Gear Method: Gear's Method
general 12.8.12.d.a: zdouble: An AD Base Type With Absolute Zero: Motivation.General
        10.2: General Examples
        8.c: Some General Purpose Utilities: General Numerical Routines
        8: Some General Purpose Utilities
        5.4.3.2: Reverse Mode General Case (Checkpointing): Example and Test
        4.4.7.2.f: User Defined Atomic AD Functions: General Case
generator 2.2.e: Using CMake to Configure CppAD: generator
get 12.8.6.4: Get At Least A Specified Amount of Memory
    12.8.6.3: Get the Current OpenMP Thread Number
    12.8.6.1: Set and Get Maximum Number of Threads for omp_alloc Allocator
    8.23.6: Get At Least A Specified Amount of Memory
    8.23.5: Get the Current Thread Number
    8.23.3: Get Number of Threads
    2.2.6: Including the Sacado Speed Tests
    2.2.5: Including the cppad_ipopt Library and Tests
    2.2.4: Including the FADBAD Speed Tests
    2.2.3: Including the Eigen Examples and Tests
    2.2.2: Including the ColPack Sparsity Calculations
    2.2.1: Including the ADOL-C Examples and Tests
get_adolc 2.2.1.g: Including the ADOL-C Examples and Tests: get_adolc
get_check_for_nan 5.10.g: Check an ADFun Object For Nan Results: get_check_for_nan
get_colpack 2.2.2.e: Including the ColPack Sparsity Calculations: get_colpack
get_eigen 2.2.3.e: Including the Eigen Examples and Tests: get_eigen
get_fadbad 2.2.4.d: Including the FADBAD Speed Tests: get_fadbad
get_ipopt 2.2.5.d: Including the cppad_ipopt Library and Tests: get_ipopt
get_max_num_threads 12.8.6.1.f: Set and Get Maximum Number of Threads for omp_alloc Allocator: get_max_num_threads
get_sacado 2.2.6.d: Including the Sacado Speed Tests: get_sacado
get_started 10.b: Examples: get_started
            9.n.a: Use Ipopt to Solve a Nonlinear Programming Problem: Example.get_started
get_thread_num 12.8.6.3: Get the Current OpenMP Thread Number
getting 10.1: Getting Started Using CppAD to Compute Derivatives
        5.8.1.1: abs_normal Getting Started: Example and Test
        4.4.7.2.11: Getting Started with Atomic Operations: Example and Test
        4.4.7.2.e.a: User Defined Atomic AD Functions: Examples.Getting Started
git 2.1.g.a: Download The CppAD Source Code: Source Code Control.Git
github 2.1.f.b: Download The CppAD Source Code: Compressed Archives.Github
gl 9.j: Use Ipopt to Solve a Nonlinear Programming Problem: gl
global 11.1.f: Running the Speed Test Program: Global Options
gradient 11.7.4: Sacado Speed: Gradient of Ode Solution
         11.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization
         11.7.1: Sacado Speed: Gradient of Determinant by Minor Expansion
         11.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization
         11.6.1: Fadbad Speed: Gradient of Determinant by Minor Expansion
         11.5.4: CppAD Speed: Gradient of Ode Solution
         11.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization
         11.5.1: CppAD Speed: Gradient of Determinant by Minor Expansion
         11.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization
         11.4.1: Adolc Speed: Gradient of Determinant by Minor Expansion
         11.2.7.g.b: Evaluate a Function Defined in Terms of an ODE: fp.Gradient
         11.2.5: Check Gradient of Determinant of 3 by 3 matrix
         11.1.2.h: Speed Testing Gradient of Determinant by Minor Expansion: gradient
         11.1.2: Speed Testing Gradient of Determinant by Minor Expansion
         11.1.1.h: Speed Testing Gradient of Determinant Using Lu Factorization: gradient
         11.1.1: Speed Testing Gradient of Determinant Using Lu Factorization
         10.2.9: Gradient of Determinant Using Lu Factorization: Example and Test
         10.2.8: Gradient of Determinant Using Expansion by Minors: Example and Test
         10.2.6: Gradient of Determinant Using LU Factorization: Example and Test
         10.2.5: Gradient of Determinant Using Expansion by Minors: Example and Test
         10.2.3: Differentiate Conjugate Gradient Algorithm: Example and Test
grid 12.8.10.2.1.e.a: An ODE Inverse Problem Example: Trapezoidal Approximation.Trapezoidal Time Grid
group 8.6.c: Object that Runs a Group of Tests: group
      8.6: Object that Runs a Group of Tests
group_max 5.6.1.h: Computing Sparse Jacobians: group_max
gu 9.k: Use Ipopt to Solve a Nonlinear Programming Problem: gu
guidelines 12.6.o: The CppAD Wish List: Software Guidelines
gx 12.10.1.i: Computing Jacobian and Hessian of Bender's Reduced Objective: gx
gxx 12.10.1.j: Computing Jacobian and Hessian of Bender's Reduced Objective: gxx
H
Hessian 5.2.2.1: Hessian: Example and Test
h 5.5.8.g: Hessian Sparsity Pattern: Forward Mode: h
  5.5.6.i: Hessian Sparsity Pattern: Reverse Mode: h
  4.4.7.2.8.d.d: Atomic Forward Hessian Sparsity Patterns: Implementation.h
  4.4.7.2.5.h: Atomic Reverse Mode: G, H
handler 8.1.2.i: CppAD Assertions During Execution: Error Handler
        8.1.1: Replacing The CppAD Error Handler: Example and Test
        8.1.e: Replacing the CppAD Error Handler: handler
        8.1: Replacing the CppAD Error Handler
        8.d.a: Some General Purpose Utilities: Miscellaneous.Error Handler
handler: 8.1.1: Replacing The CppAD Error Handler: Example and Test
harmonic 7.2.8: Multi-Threading Harmonic Summation Example / Test
         7.2.i: Run Multi-Threading Examples and Speed Tests: harmonic
has 11.2.9: Evaluate a Function That Has a Sparse Hessian
    11.2.8: Evaluate a Function That Has a Sparse Jacobian
hash 4.7.8: Base Type Requirements for Hash Coding Values
hash_code 4.7.9.3.r: Enable use of AD<Base> where Base is Adolc's adouble Type: hash_code
          4.7.9.1.u: Example AD<Base> Where Base Constructor Allocates Memory: hash_code
hasnan 8.11.e: Obtain Nan or Determine if a Value is Nan: hasnan
head 12.8.5.i: Routines That Track Use of New and Delete: head newptr
here 6.b: CppAD API Preprocessor Symbols: Documented Here
hes 12.10.2.j: Jacobian and Hessian of Optimal Values: hes
    5.6.4.h: Sparse Hessian: hes
    5.2.2.g: Hessian: Easy Driver: hes
hes2jac 11.1.f.e: Running the Speed Test Program: Global Options.hes2jac
hessian 12.10.2: Jacobian and Hessian of Optimal Values
        12.10.1: Computing Jacobian and Hessian of Bender's Reduced Objective
        11.7.6: Sacado Speed: Sparse Hessian
        11.6.6: Fadbad Speed: Sparse Hessian
        11.5.6: CppAD Speed: Sparse Hessian
        11.4.6: Adolc Speed: Sparse Hessian
        11.3.6: Double Speed: Sparse Hessian
        11.2.9.k.b: Evaluate a Function That Has a Sparse Hessian: fp.Hessian
        11.2.9: Evaluate a Function That Has a Sparse Hessian
        11.1.6.h: Speed Testing Sparse Hessian: hessian
        11.1.6: Speed Testing Sparse Hessian
        5.8.10.o: abs_normal: Minimize a Linear Abs-normal Approximation: hessian
        5.6.5.2: Sparse Hessian Using Subgraphs and Jacobian: Example and Test
        5.6.4.2: Computing Sparse Hessian for a Subset of Variables
        5.6.4.p: Sparse Hessian: Subset Hessian
        5.6.4: Sparse Hessian
        5.6.3.o: Computing Sparse Hessians: Subset Hessian
        5.5.8.1: Forward Mode Hessian Sparsity: Example and Test
        5.5.8: Hessian Sparsity Pattern: Forward Mode
        5.5.7.1: Forward Mode Hessian Sparsity: Example and Test
        5.5.7.k: Forward Mode Hessian Sparsity Patterns: Sparsity for Entire Hessian
        5.5.7: Forward Mode Hessian Sparsity Patterns
        5.5.6.1: Reverse Mode Hessian Sparsity: Example and Test
        5.5.6: Hessian Sparsity Pattern: Reverse Mode
        5.5.5.1: Reverse Mode Hessian Sparsity: Example and Test
        5.5.5.l: Reverse Mode Hessian Sparsity Patterns: Sparsity for Entire Hessian
        5.5.5: Reverse Mode Hessian Sparsity Patterns
        5.4.2.2: Hessian Times Direction: Example and Test
        5.4.2.i: Second Order Reverse Mode: Hessian Times Direction
        5.2.2.2: Hessian of Lagrangian and ADFun Default Constructor: Example and Test
        5.2.2.i: Hessian: Easy Driver: Hessian Uses Forward
        4.4.7.2.9.1: Atomic Reverse Hessian Sparsity: Example and Test
        4.4.7.2.8.1: Atomic Forward Hessian Sparsity: Example and Test
        4.4.7.2.9: Atomic Reverse Hessian Sparsity Patterns
        4.4.7.2.8: Atomic Forward Hessian Sparsity Patterns
        4.4.7.2.e.d: User Defined Atomic AD Functions: Examples.Hessian Sparsity Patterns
        2.2.2.4: ColPack: Sparse Hessian Example and Test
        2.2.2.3: ColPack: Sparse Hessian Example and Test
hessian: 5.6.4.3: Subset of a Sparse Hessian: Example and Test
         5.6.4.1: Sparse Hessian: Example and Test
         5.6.3.1: Computing Sparse Hessian: Example and Test
         5.2.2.1: Hessian: Example and Test
         5.2.2: Hessian: Easy Driver
hessians 5.6.3: Computing Sparse Hessians
hold 8.23.9: Control When Thread Alloc Retains Memory For Future Use
hold_memory 7.d: Using CppAD in a Multi-Threading Environment: hold_memory
hyperbolic 12.3.2.8: Tangent and Hyperbolic Tangent Reverse Mode Theory
           12.3.2.7: Inverse Cosine and Hyperbolic Cosine Reverse Mode Theory
           12.3.2.6: Inverse Sine and Hyperbolic Sine Reverse Mode Theory
           12.3.2.5: Inverse Tangent and Hyperbolic Tangent Reverse Mode Theory
           12.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
           12.3.1.8: Tangent and Hyperbolic Tangent Forward Taylor Polynomial Theory
           12.3.1.7: Inverse Cosine and Hyperbolic Cosine Forward Mode Theory
           12.3.1.6: Inverse Sine and Hyperbolic Sine Forward Mode Theory
           12.3.1.5: Inverse Tangent and Hyperbolic Tangent Forward Mode Theory
           12.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
           4.4.2.17: The Inverse Hyperbolic Tangent Function: atanh
           4.4.2.16: The Inverse Hyperbolic Sine Function: asinh
           4.4.2.15: The Inverse Hyperbolic Cosine Function: acosh
           4.4.2.13: The Hyperbolic Tangent Function: tanh
           4.4.2.10: The Hyperbolic Sine Function: sinh
           4.4.2.5: The Hyperbolic Cosine Function: cosh
I
Integer 4.3.2.1: Convert From AD to Integer: Example and Test
i(k,0) 12.8.10.2.3.d.a: ODE Fitting Using Fast Representation: Trapezoidal Approximation.Range Indices I(k,0)
       12.8.10.2.3.c.a: ODE Fitting Using Fast Representation: Initial Condition.Range Indices I(k,0)
       12.8.10.2.3.b.a: ODE Fitting Using Fast Representation: Objective Function.Range Indices I(k,0)
i/o 4.3: Conversion and I/O of AD Objects
id 12.8.11.g: User Defined Atomic AD Functions: id
identical 4.7.9.6.f: Enable use of AD<Base> where Base is std::complex<double>: Identical
          4.7.9.5.d: Enable use of AD<Base> where Base is double: Identical
          4.7.9.4.d: Enable use of AD<Base> where Base is float: Identical
          4.7.9.3.g: Enable use of AD<Base> where Base is Adolc's adouble Type: Identical
          4.7.9.1.j: Example AD<Base> Where Base Constructor Allocates Memory: Identical
          4.7.3.b.c: Base Type Requirements for Identically Equal Comparisons: Identical.Identical Functions
          4.7.3.b: Base Type Requirements for Identically Equal Comparisons: Identical
identically 4.7.3: Base Type Requirements for Identically Equal Comparisons
             4.5.5: Check if Two Values are Identically Equal
identicalpar 4.7.3.b.a: Base Type Requirements for Identically Equal Comparisons: Identical.IdenticalPar
identity 12.3.3: An Important Reverse Mode Identity
if 12.8.6.11: Check If A Memory Allocation is Efficient for Another Use
   8.11: Obtain Nan or Determine if a Value is Nan
   8.2: Determine if Two Values Are Nearly Equal
    4.5.5: Check if Two Values are Identically Equal
if_false 4.4.4.h: AD Conditional Expressions: if_false
if_true 4.4.4.g: AD Conditional Expressions: if_true
implementation 12.8.11.d: User Defined Atomic AD Functions: Partial Implementation
               11.7.5.b: Sacado Speed: Second Derivative of a Polynomial: Implementation
               11.7.4.b: Sacado Speed: Gradient of Ode Solution: Implementation
               11.7.3.b: Sacado Speed: Matrix Multiplication: Implementation
               11.7.2.b: Sacado Speed: Gradient of Determinant Using Lu Factorization: Implementation
               11.7.1.b: Sacado Speed: Gradient of Determinant by Minor Expansion: Implementation
               11.6.5.b: Fadbad Speed: Second Derivative of a Polynomial: Implementation
               11.6.4.b: Fadbad Speed: Ode: Implementation
               11.6.3.b: Fadbad Speed: Matrix Multiplication: Implementation
               11.6.2.b: Fadbad Speed: Gradient of Determinant Using Lu Factorization: Implementation
               11.6.1.b: Fadbad Speed: Gradient of Determinant by Minor Expansion: Implementation
               11.5.7.b: CppAD Speed: Sparse Jacobian: Implementation
               11.5.6.b: CppAD Speed: Sparse Hessian: Implementation
               11.5.5.b: CppAD Speed: Second Derivative of a Polynomial: Implementation
               11.5.4.b: CppAD Speed: Gradient of Ode Solution: Implementation
               11.5.3.b: CppAD Speed, Matrix Multiplication: Implementation
               11.5.2.b: CppAD Speed: Gradient of Determinant Using Lu Factorization: Implementation
               11.5.1.b: CppAD Speed: Gradient of Determinant by Minor Expansion: Implementation
               11.4.7.b: adolc Speed: Sparse Jacobian: Implementation
               11.4.6.b: Adolc Speed: Sparse Hessian: Implementation
               11.4.5.b: Adolc Speed: Second Derivative of a Polynomial: Implementation
               11.4.4.b: Adolc Speed: Ode: Implementation
               11.4.3.b: Adolc Speed: Matrix Multiplication: Implementation
               11.4.2.b: Adolc Speed: Gradient of Determinant Using Lu Factorization: Implementation
               11.4.1.b: Adolc Speed: Gradient of Determinant by Minor Expansion: Implementation
               11.3.7.b: Double Speed: Sparse Jacobian: Implementation
               11.3.6.b: Double Speed: Sparse Hessian: Implementation
               11.3.5.b: Double Speed: Evaluate a Polynomial: Implementation
               11.3.4.b: Double Speed: Ode Solution: Implementation
               11.3.3.b: CppAD Speed: Matrix Multiplication (Double Version): Implementation
               11.3.2.b: Double Speed: Determinant Using Lu Factorization: Implementation
               11.3.1.b: Double Speed: Determinant by Minor Expansion: Implementation
               7.2.11.3: Pthread Implementation of a Team of AD Threads
               7.2.11.2: Boost Thread Implementation of a Team of AD Threads
               7.2.11.1: OpenMP Implementation of a Team of AD Threads
               7.2.11.k: Specifications for A Team of AD Threads: Speed Test of Implementation
               7.2.11.j: Specifications for A Team of AD Threads: Example Implementation
               7.2.8.5: Multi-Threaded Implementation of Summation of 1/i
               4.4.7.2.9.d: Atomic Reverse Hessian Sparsity Patterns: Implementation
               4.4.7.2.8.d: Atomic Forward Hessian Sparsity Patterns: Implementation
               4.4.7.2.7.d: Atomic Reverse Jacobian Sparsity Patterns: Implementation
               4.4.7.2.6.d: Atomic Forward Jacobian Sparsity Patterns: Implementation
               4.4.7.2.5.c: Atomic Reverse Mode: Implementation
               4.4.7.2.4.c: Atomic Forward Mode: Implementation
               4.4.7.2.1.b.c: Atomic Function Constructor: atomic_user.Implementation
               3.2.1: exp_eps: Implementation
               3.2.i: An Epsilon Accurate Exponential Approximation: Implementation
               3.1.1: exp_2: Implementation
               3.1.i: Second Order Exponential Approximation: Implementation
implementations 7.2.k: Run Multi-Threading Examples and Speed Tests: Team Implementations
implicit 10.6: Suppress Suspect Implicit Conversion Warnings
         4.1.c.a: AD Constructors: x.implicit
         2.2: Using CMake to Configure CppAD
important 12.3.3: An Important Reverse Mode Identity
in_parallel 12.8.6.2: Is The Current Execution in OpenMP Parallel Mode
            8.23.2.e: Setup thread_alloc For Use in Multi-Threading Environment: in_parallel
inactive 12.4.k.b: Glossary: Tape.Inactive
inc 8.4.h: Run One Speed Test and Print Results: inc
include 12.11.b: CppAD Addons: Include Files
        12.10.3.c: LU Factorization of A Square Matrix and Stability Calculation: Include
        12.8.11.5.b: Old Matrix Multiply as a User Atomic Operation: Example and Test: Include File
        12.8.6.c: A Quick OpenMP Memory Allocator Used by CppAD: Include
        12.8.5.d: Routines That Track Use of New and Delete: Include
        12.8.1: Deprecated Include Files
        10.2.4.d: Enable Use of Eigen Linear Algebra Package with CppAD: Include Files
        9.c: Use Ipopt to Solve a Nonlinear Programming Problem: Include File
        8.23.c: A Fast Multi-Threading Memory Allocator: Include
        8.22.c: The CppAD::vector Template Class: Include
        8.21.c: An Error Controller for Gear's Ode Solvers: Include
        8.20.c: An Arbitrary Order Gear Method: Include
        8.19.c: An Error Controller for ODE Solvers: Include
        8.18.c: A 3rd and 4th Order Rosenbrock ODE Solver: Include
        8.17.d: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: Include
        8.16.c: Multi-dimensional Romberg Integration: Include
        8.15.c: One Dimensional Romberg Integration: Include
        8.14.3.c: Invert an LU Factored Equation: Include
        8.14.2.c: LU Factorization of A Square Matrix: Include
        8.14.1.c: Compute Determinant and Solve Linear Equations: Include
        8.13.c: Evaluate a Polynomial or its Derivative: Include
        8.12.d: The Integer Power Function: Include
        8.11.c: Obtain Nan or Determine if a Value is Nan: Include
        8.10.e: Check Simple Vector Concept: Include
        8.8.c: Check NumericType Class Concept: Include
        8.5.d: Determine Amount of Time to Execute a Test: Include
        8.4.d: Run One Speed Test and Print Results: Include
        8.3.d: Run One Speed Test and Return Results: Include
        8.2.i: Determine if Two Values Are Nearly Equal: Include Files
        4.7.9.6.b: Enable use of AD<Base> where Base is std::complex<double>: Include Order
        4.7.9.3.c: Enable use of AD<Base> where Base is Adolc's adouble Type: Include Files
        4.7.9.1.b: Example AD<Base> Where Base Constructor Allocates Memory: Include File
        4.7.e: AD<Base> Requirements for a CppAD Base Type: Include Order
        3.2.d: An Epsilon Accurate Exponential Approximation: include
        3.1.d: Second Order Exponential Approximation: include
        2.2: Using CMake to Configure CppAD
        d: cppad-20171217: A Package for Differentiation of C++ Algorithms: Include File
including 2.2.6: Including the Sacado Speed Tests
          2.2.5: Including the cppad_ipopt Library and Tests
          2.2.4: Including the FADBAD Speed Tests
          2.2.3: Including the Eigen Examples and Tests
          2.2.2: Including the ColPack Sparsity Calculations
          2.2.1: Including the ADOL-C Examples and Tests
inclusion 11.2.10.c: Simulate a [0,1] Uniform Random Variate: Inclusion
          11.2.9.c: Evaluate a Function That Has a Sparse Hessian: Inclusion
          11.2.8.c: Evaluate a Function That Has a Sparse Jacobian: Inclusion
          11.2.7.c: Evaluate a Function Defined in Terms of an ODE: Inclusion
          11.2.6.c: Sum Elements of a Matrix Times Itself: Inclusion
          11.2.5.c: Check Gradient of Determinant of 3 by 3 matrix: Inclusion
          11.2.4.c: Check Determinant of 3 by 3 matrix: Inclusion
          11.2.3.b: Determinant Using Expansion by Minors: Inclusion
          11.2.2.b: Determinant of a Minor: Inclusion
          11.2.1.b: Determinant Using Expansion by Lu Factorization: Inclusion
ind 8.24.c: Returns Indices that Sort a Vector: ind
independent 12.8.4.e: OpenMP Parallel Setup: Independent
            12.4.k.c: Glossary: Tape.Independent Variable
            12.4.g.d: Glossary: Operation.Independent
            12.1.f: Frequently Asked Questions and Answers: Independent Variables
            12.1.a: Frequently Asked Questions and Answers: Assignment and Independent
            5.1.1.1: Independent and ADFun Constructor: Example and Test
            5.1.1: Declare Independent Variables and Start Recording
index 12.8.10.f.a: Nonlinear Programming Using the CppAD Interface to Ipopt: fg(x).Index Vector
      12.4.j.a: Glossary: Sparsity Pattern.Row and Column Index Vectors
      8.27: Row and Column Index Sparsity Patterns
      8.24.1: Index Sort: Example and Test
      5.10.f.c: Check an ADFun Object For Nan Results: Error Message.index
      4.6.1: AD Vectors that Record Index Operations: Example and Test
      4.6: AD Vectors that Record Index Operations
      4.4.7.2.19.1.g: Matrix Multiply as an Atomic Operation: Result Element Index
      4.4.7.2.19.1.f: Matrix Multiply as an Atomic Operation: Right Operand Element Index
      4.4.7.2.19.1.e: Matrix Multiply as an Atomic Operation: Left Operand Element Index
      4.4.5.1: Taping Array Index Operation: Example and Test
      3.2.7.j: exp_eps: Second Order Reverse Sweep: Index 2: f_1
      3.2.7.i: exp_eps: Second Order Reverse Sweep: Index 3: f_2
      3.2.7.h: exp_eps: Second Order Reverse Sweep: Index 4: f_3
      3.2.7.g: exp_eps: Second Order Reverse Sweep: Index 5: f_4
      3.2.7.f: exp_eps: Second Order Reverse Sweep: Index 6: f_5
      3.2.7.e: exp_eps: Second Order Reverse Sweep: Index 7: f_6
      3.2.6.d.a: exp_eps: Second Order Forward Mode: Operation Sequence.Index
      3.2.5.j: exp_eps: First Order Reverse Sweep: Index 2: f_1
      3.2.5.i: exp_eps: First Order Reverse Sweep: Index 3: f_2
      3.2.5.h: exp_eps: First Order Reverse Sweep: Index 4: f_3
      3.2.5.g: exp_eps: First Order Reverse Sweep: Index 5: f_4
      3.2.5.f: exp_eps: First Order Reverse Sweep: Index 6: f_5
      3.2.5.e: exp_eps: First Order Reverse Sweep: Index 7: f_6
      3.2.4.c.a: exp_eps: First Order Forward Sweep: Operation Sequence.Index
      3.2.3.b.c: exp_eps: Operation Sequence and Zero Order Forward Sweep: Operation Sequence.Index
      3.1.7.g: exp_2: Second Order Reverse Mode: Index 2: f_1
      3.1.7.f: exp_2: Second Order Reverse Mode: Index 3: f_2
      3.1.7.e: exp_2: Second Order Reverse Mode: Index 4: f_3
      3.1.7.d: exp_2: Second Order Reverse Mode: Index 5: f_4
      3.1.6.d.a: exp_2: Second Order Forward Mode: Operation Sequence.Index
      3.1.5.g: exp_2: First Order Reverse Mode: Index 2: f_1
      3.1.5.f: exp_2: First Order Reverse Mode: Index 3: f_2
      3.1.5.e: exp_2: First Order Reverse Mode: Index 4: f_3
      3.1.5.d: exp_2: First Order Reverse Mode: Index 5: f_4
      3.1.4.d.a: exp_2: First Order Forward Mode: Operation Sequence.Index
      3.1.3.c.a: exp_2: Operation Sequence and Zero Order Forward Mode: Operation Sequence.Index
index_sort 8.24.1: Index Sort: Example and Test
           8.24: Returns Indices that Sort a Vector
indexing 12.8.11.5.1.e: Define Matrix Multiply as a User Atomic Operation: Matrix Indexing
         4.6.i: AD Vectors that Record Index Operations: AD Indexing
         4.6.h: AD Vectors that Record Index Operations: size_t Indexing
indices 12.8.10.2.3.d.b: ODE Fitting Using Fast Representation: Trapezoidal Approximation.Domain Indices J(k,0)
        12.8.10.2.3.d.a: ODE Fitting Using Fast Representation: Trapezoidal Approximation.Range Indices I(k,0)
        12.8.10.2.3.c.b: ODE Fitting Using Fast Representation: Initial Condition.Domain Indices J(k,0)
        12.8.10.2.3.c.a: ODE Fitting Using Fast Representation: Initial Condition.Range Indices I(k,0)
        12.8.10.2.3.b.b: ODE Fitting Using Fast Representation: Objective Function.Domain Indices J(k,0)
        12.8.10.2.3.b.a: ODE Fitting Using Fast Representation: Objective Function.Range Indices I(k,0)
        8.24: Returns Indices that Sort a Vector
        8.d.d: Some General Purpose Utilities: Miscellaneous.Sorting Indices
indices: 5.5.10: Preferred Sparsity Patterns: Row and Column Indices: Example and Test
info 8.1.d: Replacing the CppAD Error Handler: info
information 12.8.11.5.1.d: Define Matrix Multiply as a User Atomic Operation: Extra Call Information
            7.2.9.2: Multi-Threaded User Atomic Common Information
initial 12.8.10.2.3.c: ODE Fitting Using Fast Representation: Initial Condition
        12.8.10.2.2.d: ODE Fitting Using Simple Representation: Initial Condition Constraint
initialization 7.f: Using CppAD in a Multi-Threading Environment: Initialization
initialize 8.23.2: Setup thread_alloc For Use in Multi-Threading Environment
injection 12.8.10.f.c: Nonlinear Programming Using the CppAD Interface to Ipopt: fg(x).Injection
inline 4.7.8.f: Base Type Requirements for Hash Coding Values: inline
inner 10.2.10.c.c: Using Multiple Levels of AD: Procedure.Inner Function
input 8.24.c.a: Returns Indices that Sort a Vector: ind.Input
      4.3.4.1: AD Input Operator: Example and Test
      4.3.4: AD Input Stream Operator
inside 12.8.11.3: Using AD to Compute Atomic Function Derivatives
       12.8.11.2: Using AD to Compute Atomic Function Derivatives
install 12.8.13.v: Autotools Unix Test and Installation: make install
        2.2.6.1: Download and Install Sacado in Build Directory
        2.2.5.1: Download and Install Ipopt in Build Directory
        2.2.4.1: Download and Install Fadbad in Build Directory
        2.2.3.1: Download and Install Eigen in Build Directory
        2.2.2.5: Download and Install ColPack in Build Directory
        2.2.1.1: Download and Install Adolc in Build Directory
        2.2: Using CMake to Configure CppAD
        2.1.j: Download The CppAD Source Code: Install Instructions
        2: CppAD Download, Test, and Install Instructions
     fadbad 2.2.4.1: Download and Install Fadbad in Build Directory
     ipopt 2.2.5.1: Download and Install Ipopt in Build Directory
     sacado 2.2.6.1: Download and Install Sacado in Build Directory
installation 12.8.13: Autotools Unix Test and Installation
             2.a.d: CppAD Download, Test, and Install Instructions: Instructions.Step 4: Installation
instructions 2.1.j: Download The CppAD Source Code: Install Instructions
             2.a: CppAD Download, Test, and Install Instructions: Instructions
             2: CppAD Download, Test, and Install Instructions
int 8.12.1: The Pow Integer Exponent: Example and Test
    8.7: Definition of a Numeric Type
integer 9.f.e: Use Ipopt to Solve a Nonlinear Programming Problem: options.Integer
        8.25.e.a: Convert Certain Types to a String: s.Integer
        8.25.d.a: Convert Certain Types to a String: value.Integer
        8.12.1: The Pow Integer Exponent: Example and Test
        8.12: The Integer Power Function
        8.7.c: Definition of a Numeric Type: Constructor From Integer
        4.7.9.6.h: Enable use of AD<Base> where Base is std::complex<double>: Integer
        4.7.9.5.e: Enable use of AD<Base> where Base is double: Integer
        4.7.9.4.e: Enable use of AD<Base> where Base is float: Integer
        4.7.9.3.h: Enable use of AD<Base> where Base is Adolc's adouble Type: Integer
        4.7.9.1.l: Example AD<Base> Where Base Constructor Allocates Memory: Integer
        4.7.h: AD<Base> Requirements for a CppAD Base Type: Integer
        4.3.2: Convert From AD to Integer
integer: 4.3.2.1: Convert From AD to Integer: Example and Test
integrate 8.16: Multi-dimensional Romberg Integration
          8.15: One Dimensional Romberg Integration
integration 8.16: Multi-dimensional Romberg Integration
            8.15: One Dimensional Romberg Integration
integration: 8.16.1: Multi-dimensional Romberg Integration: Example and Test
             8.15.1: One Dimensional Romberg Integration: Example and Test
interface 12.8.10: Nonlinear Programming Using the CppAD Interface to Ipopt
          12.6.m: The CppAD Wish List: Iterator Interface
          10.2.7: Interfacing to C: Example and Test
          10.2.1: Creating Your Own Interface to an ADFun Object
interfacing 10.2.7: Interfacing to C: Example and Test
interior 5.8.8: Solve a Quadratic Program Using Interior Point Method
internal 2.2: Using CMake to Configure CppAD
internal_bool 5.5.7.i: Forward Mode Hessian Sparsity Patterns: internal_bool
              5.5.5.j: Reverse Mode Hessian Sparsity Patterns: internal_bool
              5.5.3.i: Reverse Mode Jacobian Sparsity Patterns: internal_bool
              5.5.1.i: Forward Mode Jacobian Sparsity Patterns: internal_bool
interpolate 4.4.5.3: Interpolation With Retaping: Example and Test
            4.4.5.2: Interpolation Without Retaping: Example and Test
interpolation 4.4.5.3: Interpolation With Retaping: Example and Test
              4.4.5.2: Interpolation Without Retaping: Example and Test
interpreter 10.2.15: Example Differentiating a Stack Machine Interpreter
introduction 12.8.9.c: Choosing The Vector Testing Template Class: Introduction
             12.7.15.a: Changes and Additions to CppAD During 2003: Introduction
             12.7.14.a: Changes and Additions to CppAD During 2004: Introduction
             12.7.12.a: Changes and Additions to CppAD During 2006: Introduction
             12.7.11.a: Changes and Additions to CppAD During 2007: Introduction
             12.7.10.a: Changes and Additions to CppAD During 2008: Introduction
             12.7.9.a: Changes and Additions to CppAD During 2009: Introduction
             12.7.8.a: Changes and Additions to CppAD During 2010: Introduction
             12.7.7.a: Changes and Additions to CppAD During 2011: Introduction
             12.7.6.a: CppAD Changes and Additions During 2012: Introduction
             12.7.5.a: CppAD Changes and Additions During 2013: Introduction
             12.7.4.a: CppAD Changes and Additions During 2014: Introduction
             12.7.3.a: CppAD Changes and Additions During 2015: Introduction
             12.7.2.a: Changes and Additions to CppAD During 2016: Introduction
             12.7.a: Changes and Additions to CppAD: Introduction
             10.a: Examples: Introduction
             3.3: Correctness Tests For Exponential Approximation in Introduction
             3: An Introduction by Example to Algorithmic Differentiation
             b: cppad-20171217: A Package for Differentiation of C++ Algorithms: Introduction
inuse 12.8.7.g: Memory Leak Detection: inuse
      12.8.6.7: Amount of Memory a Thread is Currently Using
      8.23.10: Amount of Memory a Thread is Currently Using
invalid 4.7.9.6.l: Enable use of AD<Base> where Base is std::complex<double>: Invalid Unary Math
inverse 12.8.10.2.1.1: ODE Inverse Problem Definitions: Source Code
        12.8.10.2.1.d: An ODE Inverse Problem Example: Inverse Problem
        12.8.10.2.1: An ODE Inverse Problem Example
        12.8.10.2: Example Simultaneous Solution of Forward and Inverse Problem
        12.3.2.7: Inverse Cosine and Hyperbolic Cosine Reverse Mode Theory
        12.3.2.6: Inverse Sine and Hyperbolic Sine Reverse Mode Theory
        12.3.2.5: Inverse Tangent and Hyperbolic Tangent Reverse Mode Theory
        12.3.1.7: Inverse Cosine and Hyperbolic Cosine Forward Mode Theory
        12.3.1.6: Inverse Sine and Hyperbolic Sine Forward Mode Theory
        12.3.1.5: Inverse Tangent and Hyperbolic Tangent Forward Mode Theory
        12.1.g: Frequently Asked Questions and Answers: Matrix Inverse
        9.3.d: ODE Inverse Problem Definitions: Source Code: Inverse Problem
        9.3: ODE Inverse Problem Definitions: Source Code
        4.4.3.1: AD Two Argument Inverse Tangent Function
        4.4.2.17: The Inverse Hyperbolic Tangent Function: atanh
        4.4.2.16: The Inverse Hyperbolic Sine Function: asinh
        4.4.2.15: The Inverse Hyperbolic Cosine Function: acosh
        4.4.2.3: Inverse Tangent Function: atan
        4.4.2.2: Inverse Sine Function: asin
        4.4.2.1: Inverse Cosine Function: acos
inverse: 4.4.7.2.17: Atomic Eigen Matrix Inverse: Example and Test
inversion 4.4.7.2.17.1: Atomic Eigen Matrix Inversion Class
invert 8.14.3: Invert an LU Factored Equation
       8.14.1.d: Compute Determinant and Solve Linear Equations: Factor and Invert
ip 12.10.3.f: LU Factorization of A Square Matrix and Stability Calculation: ip
   8.14.3.e: Invert an LU Factored Equation: ip
   8.14.2.f: LU Factorization of A Square Matrix: ip
ipopt 12.8.10.2.4: Driver for Running the Ipopt ODE Example
      12.8.10.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
      12.8.10: Nonlinear Programming Using the CppAD Interface to Ipopt
      9.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
      9: Use Ipopt to Solve a Nonlinear Programming Problem
      2.2.5.1: Download and Install Ipopt in Build Directory
      2.2.5: Including the cppad_ipopt Library and Tests
     download and install 2.2.5.1: Download and Install Ipopt in Build Directory
ipopt: 12.8.10.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
       9.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
ipopt_cppad_nlp 12.7.10: Changes and Additions to CppAD During 2008
ipopt_dir 12.8.13.r: Autotools Unix Test and Installation: ipopt_dir
ipopt_library_paths 12.8.10.e: Nonlinear Programming Using the CppAD Interface to Ipopt: ipopt_library_paths
ipopt_prefix 2.2.5.b: Including the cppad_ipopt Library and Tests: ipopt_prefix
ipopt_solve 9.2: Nonlinear Programming Retaping: Example and Test
is 12.8.6.11: Check If A Memory Allocation is Efficient for Another Use
   12.8.6.7: Amount of Memory a Thread is Currently Using
   12.8.6.2: Is The Current Execution in OpenMP Parallel Mode
   8.23.10: Amount of Memory a Thread is Currently Using
   8.23.4: Is The Current Execution in Parallel Mode
   8.11: Obtain Nan or Determine if a Value is Nan
   4.7.9.6: Enable use of AD<Base> where Base is std::complex<double>
   4.7.9.5: Enable use of AD<Base> where Base is double
   4.7.9.4: Enable use of AD<Base> where Base is float
   4.7.9.3: Enable use of AD<Base> where Base is Adolc's adouble Type
   4.5.4: Is an AD Object a Parameter or Variable
   4.3.4.c: AD Input Stream Operator: is
isnan 8.11.d: Obtain Nan or Determine if a Value is Nan: isnan
      4.7.9.6.j: Enable use of AD<Base> where Base is std::complex<double>: isnan
      4.7.5.g: Base Type Requirements for Standard Math Functions: isnan
iteration 12.8.10.2.1.f.a: An ODE Inverse Problem Example: Black Box Method.Two levels of Iteration
          5.8.10.t.c: abs_normal: Minimize a Linear Abs-normal Approximation: Method.Iteration
          5.8.6.s.c: abs_normal: Minimize a Linear Abs-normal Approximation: Method.Iteration
iterator 12.6.m: The CppAD Wish List: Iterator Interface
its 8.23.13: Deallocate An Array and Call Destructor for its Elements
    8.23.12: Allocate An Array and Call Default Constructor for its Elements
    8.13: Evaluate a Polynomial or its Derivative
    4.3.1.1: Convert From AD to its Base Type: Example and Test
    4.3.1: Convert From an AD Type to its Base Type
itself 11.2.6: Sum Elements of a Matrix Times Itself
J
Jacobian 5.2.1.1: Jacobian: Example and Test
         5.2.1: Jacobian: Driver Routine
j 5.2.6.f: Reverse Mode Second Partial Derivative Driver: j
  5.2.5.e: Forward Mode Second Partial Derivative Driver: j
  5.2.3.e: First Order Partial Derivative: Driver Routine: j
j(k,0) 12.8.10.2.3.d.b: ODE Fitting Using Fast Representation: Trapezoidal Approximation.Domain Indices J(k,0)
       12.8.10.2.3.c.b: ODE Fitting Using Fast Representation: Initial Condition.Domain Indices J(k,0)
       12.8.10.2.3.b.b: ODE Fitting Using Fast Representation: Objective Function.Domain Indices J(k,0)
jac 12.10.2.i: Jacobian and Hessian of Optimal Values: jac
    5.6.2.g: Sparse Jacobian: jac
    5.2.1.e: Jacobian: Driver Routine: jac
jacobian 12.10.2: Jacobian and Hessian of Optimal Values
         12.10.1: Computing Jacobian and Hessian of Bender's Reduced Objective
         11.5.7: CppAD Speed: Sparse Jacobian
         11.4.7: adolc Speed: Sparse Jacobian
         11.3.7: Double Speed: Sparse Jacobian
         11.2.8.l.b: Evaluate a Function That Has a Sparse Jacobian: fp.Jacobian
         11.2.8: Evaluate a Function That Has a Sparse Jacobian
         11.1.7.i: Speed Testing Sparse Jacobian: jacobian
         11.1.7: Speed Testing Sparse Jacobian
         11.1.4.i: Speed Testing the Jacobian of Ode Solution: jacobian
         11.1.4: Speed Testing the Jacobian of Ode Solution
         10.2.10.2: Computing a Jacobian With Constants that Change
         5.6.5.1: Computing Sparse Jacobian Using Reverse Mode: Example and Test
         5.6.2: Sparse Jacobian
         5.6.1.2: Computing Sparse Jacobian Using Reverse Mode: Example and Test
         5.6.1.1: Computing Sparse Jacobian Using Forward Mode: Example and Test
         5.5.4.1: Reverse Mode Jacobian Sparsity: Example and Test
         5.5.4: Jacobian Sparsity Pattern: Reverse Mode
         5.5.3.1: Reverse Mode Jacobian Sparsity: Example and Test
         5.5.3.k: Reverse Mode Jacobian Sparsity Patterns: Sparsity for Entire Jacobian
         5.5.3: Reverse Mode Jacobian Sparsity Patterns
         5.5.2.1: Forward Mode Jacobian Sparsity: Example and Test
         5.5.2: Jacobian Sparsity Pattern: Forward Mode
         5.5.1.1: Forward Mode Jacobian Sparsity: Example and Test
         5.5.1.k: Forward Mode Jacobian Sparsity Patterns: Sparsity for Entire Jacobian
         5.5.1: Forward Mode Jacobian Sparsity Patterns
         4.4.7.2.7.1: Atomic Reverse Jacobian Sparsity: Example and Test
         4.4.7.2.6.1: Atomic Forward Jacobian Sparsity: Example and Test
         4.4.7.2.7: Atomic Reverse Jacobian Sparsity Patterns
         4.4.7.2.6: Atomic Forward Jacobian Sparsity Patterns
         2.2.2.2: ColPack: Sparse Jacobian Example and Test
         2.2.2.1: ColPack: Sparse Jacobian Example and Test
jacobian: 5.6.5.2: Sparse Hessian Using Subgraphs and Jacobian: Example and Test
          5.6.2.1: Sparse Jacobian: Example and Test
          5.2.1.1: Jacobian: Example and Test
          5.2.1: Jacobian: Driver Routine
jacobians 5.6.5: Compute Sparse Jacobians Using Subgraphs
          5.6.1: Computing Sparse Jacobians
jp 12.10.3.g: LU Factorization of A Square Matrix and Stability Calculation: jp
   8.14.3.f: Invert an LU Factored Equation: jp
   8.14.2.g: LU Factorization of A Square Matrix: jp
K
Kutta 8.17: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
k 12.8.11.h: User Defined Atomic AD Functions: k
  8.28.k.a: Sparse Matrix Row, Column, Value Representation: set.k
  8.27.j.a: Row and Column Index Sparsity Patterns: set.k
  8.13.d: Evaluate a Polynomial or its Derivative: k
  5.2.5.f: Forward Mode Second Partial Derivative Driver: k
  4.4.7.2.18.1.f.b: AD Theory for Cholesky Factorization: Reverse Mode.Case k > 0
  4.4.7.2.18.1.f.a: AD Theory for Cholesky Factorization: Reverse Mode.Case k = 0
keys 8.24.b: Returns Indices that Sort a Vector: keys
kkt 5.8.9.s: abs_normal: Solve a Quadratic Program With Box Constraints: KKT Conditions
    5.8.8.s: Solve a Quadratic Program Using Interior Point Method: KKT Conditions
known 8.1.2.e: CppAD Assertions During Execution: Known
      8.1.f: Replacing the CppAD Error Handler: known
L
Lu 8.14.1: Compute Determinant and Solve Linear Equations
LuFactor 8.14.2.1: LuFactor: Example and Test
         8.14.2: LU Factorization of A Square Matrix
LuInvert 8.14.3.1: LuInvert: Example and Test
         8.14.3: Invert an LU Factored Equation
LuRatio 12.10.3.1: LuRatio: Example and Test
        12.10.3: LU Factorization of A Square Matrix and Stability Calculation
LuSolve 8.14.1: Compute Determinant and Solve Linear Equations
LuVecAD 10.3.3: Lu Factor and Solve with Recorded Pivoting
l 12.10.3.h.c: LU Factorization of A Square Matrix and Stability Calculation: LU.L
  8.14.3.g.a: Invert an LU Factored Equation: LU.L
  8.14.2.h.c: LU Factorization of A Square Matrix: LU.L
  5.2.2.e: Hessian: Easy Driver: l
l.f. 12.5.e: Bibliography: Shampine, L.F.
lagrangian 5.2.2.2: Hessian of Lagrangian and ADFun Default Constructor: Example and Test
lambda 12.8.10.t.f: Nonlinear Programming Using the CppAD Interface to Ipopt: solution.lambda
       9.m.f: Use Ipopt to Solve a Nonlinear Programming Problem: solution.lambda
language 12.5.b: Bibliography: The C++ Programming Language
languages 10.2.2: Example and Test Linking CppAD to Languages Other than C++
large 4.4.7.2.15.k.h: Tan and Tanh as User Atomic Operations: Example and Test: Use Atomic Function.Large x Values
last 8.4.g: Run One Speed Test and Print Results: last
leak 12.8.7: Memory Leak Detection
least 12.8.6.4: Get At Least A Specified Amount of Memory
      8.23.6: Get At Least A Specified Amount of Memory
left 8.26.d: Union of Standard Sets: left
     4.4.7.2.19.1.e: Matrix Multiply as an Atomic Operation: Left Operand Element Index
     4.4.4.e: AD Conditional Expressions: left
lemma 4.4.7.2.18.1.e: AD Theory for Cholesky Factorization: Lemma 2
      4.4.7.2.18.1.d: AD Theory for Cholesky Factorization: Lemma 1
leqzero 8.14.1.o: Compute Determinant and Solve Linear Equations: LeqZero
level 10.2.10.2: Computing a Jacobian With Constants that Change
      10.2.10.1: Multiple Level of AD: Example and Test
      5.8.11.g: Non-Smooth Optimization Using Abs-normal Quadratic Approximations: level
      5.8.10.h: abs_normal: Minimize a Linear Abs-normal Approximation: level
      5.8.9.g: abs_normal: Solve a Quadratic Program With Box Constraints: level
      5.8.8.g: Solve a Quadratic Program Using Interior Point Method: level
      5.8.7.h: Non-Smooth Optimization Using Abs-normal Linear Approximations: level
      5.8.6.h: abs_normal: Minimize a Linear Abs-normal Approximation: level
      5.8.5.f: abs_normal: Solve a Linear Program With Box Constraints: level
      5.8.4.f: abs_normal: Solve a Linear Program Using Simplex Method: level
      4.4.7.1.c.e: Checkpointing Functions: Purpose.Multiple Level AD
      2.3.d: Checking the CppAD Examples and Tests: First Level
levels 12.8.10.2.1.f.a: An ODE Inverse Problem Example: Black Box Method.Two levels of Iteration
       10.2.10: Using Multiple Levels of AD
       4.7.9.3.1: Using Adolc with Multiple Levels of Taping: Example and Test
lib 2.2: Using CMake to Configure CppAD
library 12.11.c: CppAD Addons: Library Files
        11.2.c: Speed Testing Utilities: Library Routines
        2.2.5: Including the cppad_ipopt Library and Tests
license 12.12: Your License for the CppAD Software
        2.1.e: Download The CppAD Source Code: License
limitations 7.2.9.1.e: Defines a User Atomic Operation that Computes Square Root: Limitations
limits 4.7.6: Base Type Requirements for Numeric Limits
       4.4.6: Numeric Limits For an AD and Base Types
limits: 4.4.6.1: Numeric Limits: Example and Test
line 12.8.5.f: Routines That Track Use of New and Delete: line
     8.1.g: Replacing the CppAD Error Handler: line
linear 12.10.3: LU Factorization of A Square Matrix and Stability Calculation
       10.3.3: Lu Factor and Solve with Recorded Pivoting
       10.2.4: Enable Use of Eigen Linear Algebra Package with CppAD
       8.14.3: Invert an LU Factored Equation
       8.14.2: LU Factorization of A Square Matrix
       8.14.1: Compute Determinant and Solve Linear Equations
       8.14: Compute Determinants and Solve Equations by LU Factorization
       5.8.10: abs_normal: Minimize a Linear Abs-normal Approximation
       5.8.7: Non-Smooth Optimization Using Abs-normal Linear Approximations
       5.8.6: abs_normal: Minimize a Linear Abs-normal Approximation
       5.8.5: abs_normal: Solve a Linear Program With Box Constraints
       5.8.4: abs_normal: Solve a Linear Program Using Simplex Method
link 11.1.j: Running the Speed Test Program: Link Functions
     10.2.2: Example and Test Linking CppAD to Languages Other than C++
link_det_lu 11.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization
            11.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization
            11.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization
            11.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization
            11.3.2: Double Speed: Determinant Using Lu Factorization
            11.1.1: Speed Testing Gradient of Determinant Using Lu Factorization
link_det_minor 11.7.1: Sacado Speed: Gradient of Determinant by Minor Expansion
               11.6.1: Fadbad Speed: Gradient of Determinant by Minor Expansion
               11.5.1: CppAD Speed: Gradient of Determinant by Minor Expansion
               11.4.1: Adolc Speed: Gradient of Determinant by Minor Expansion
               11.3.1: Double Speed: Determinant by Minor Expansion
               11.1.2: Speed Testing Gradient of Determinant by Minor Expansion
link_mat_mul 11.7.3: Sacado Speed: Matrix Multiplication
             11.6.3: Fadbad Speed: Matrix Multiplication
             11.5.3: CppAD Speed, Matrix Multiplication
             11.4.3: Adolc Speed: Matrix Multiplication
             11.3.3: CppAD Speed: Matrix Multiplication (Double Version)
             11.1.3: Speed Testing Derivative of Matrix Multiply
link_ode 11.7.4: Sacado Speed: Gradient of Ode Solution
         11.6.4: Fadbad Speed: Ode
         11.5.4: CppAD Speed: Gradient of Ode Solution
         11.4.4: Adolc Speed: Ode
         11.3.4: Double Speed: Ode Solution
         11.1.4: Speed Testing the Jacobian of Ode Solution
link_poly 11.7.5: Sacado Speed: Second Derivative of a Polynomial
          11.6.5: Fadbad Speed: Second Derivative of a Polynomial
          11.5.5: CppAD Speed: Second Derivative of a Polynomial
          11.4.5: Adolc Speed: Second Derivative of a Polynomial
          11.3.5: Double Speed: Evaluate a Polynomial
          11.1.5: Speed Testing Second Derivative of a Polynomial
link_sparse_hessian 11.5.6: CppAD Speed: Sparse Hessian
                    11.4.6: Adolc Speed: Sparse Hessian
                    11.3.6: Double Speed: Sparse Hessian
                    11.1.6: Speed Testing Sparse Hessian
link_sparse_jacobian 11.5.7: CppAD Speed: Sparse Jacobian
                     11.4.7: adolc Speed: Sparse Jacobian
                     11.3.7: Double Speed: Sparse Jacobian
                     11.1.7: Speed Testing Sparse Jacobian
linking 11.1.8.d: Microsoft Version of Elapsed Number of Seconds: Linking
        10.2.2: Example and Test Linking CppAD to Languages Other than C++
linux 12.8.13.n.a: Autotools Unix Test and Installation: adolc_dir.Linux
list 12.8.10.v: Nonlinear Programming Using the CppAD Interface to Ipopt: Wish List
     12.6: The CppAD Wish List
     10.4: List All (Except Deprecated) CppAD Examples
literature 5.8.1.g: Create An Abs-normal Representation of a Function: Correspondence to Literature
log 4.4.2.7.1: The AD log Function: Example and Test
    4.4.2.7: The Exponential Function: log
log10 4.4.2.8.1: The AD log10 Function: Example and Test
      4.4.2.8: The Base 10 Logarithm Function: log10
log1p 12.3.2.2: Logarithm Function Reverse Mode Theory
      12.3.1.2: Logarithm Function Forward Mode Theory
      4.7.9.3.l: Enable use of AD<Base> where Base is Adolc's adouble Type: erf, asinh, acosh, atanh, expm1, log1p
      4.7.9.1.p: Example AD<Base> Where Base Constructor Allocates Memory: erf, asinh, acosh, atanh, expm1, log1p
      4.7.5.d: Base Type Requirements for Standard Math Functions: erf, asinh, acosh, atanh, expm1, log1p
      4.4.2.20.1: The AD log1p Function: Example and Test
      4.4.2.20: The Logarithm of One Plus Argument: log1p
logarithm 12.3.2.2: Logarithm Function Reverse Mode Theory
          12.3.1.2: Logarithm Function Forward Mode Theory
          4.4.2.20: The Logarithm of One Plus Argument: log1p
          4.4.2.8: The Base 10 Logarithm Function: log10
logdet 10.3.3.i: Lu Factor and Solve with Recorded Pivoting: logdet
       8.14.1.l: Compute Determinant and Solve Linear Equations: logdet
low 7.2.10.3.c: Do One Thread's Work for Multi-Threaded Newton Method: low
lower 5.3.5.j: Multiple Directions Forward Mode: Non-Zero Lower Orders
      4.4.7.2.18.1.b.c: AD Theory for Cholesky Factorization: Notation.Lower Triangular Part
lp_box 5.8.5.2: lp_box Source Code
lp_box: 5.8.5.1: abs_normal lp_box: Example and Test
lu 12.10.3.h: LU Factorization of A Square Matrix and Stability Calculation: LU
   12.10.3: LU Factorization of A Square Matrix and Stability Calculation
   11.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization
   11.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization
   11.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization
   11.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization
   11.3.2: Double Speed: Determinant Using Lu Factorization
   11.2.1.1: Determinant Using Lu Factorization: Example and Test
   11.2.1: Determinant Using Expansion by Lu Factorization
   11.1.1: Speed Testing Gradient of Determinant Using Lu Factorization
   10.3.3.1: Lu Factor and Solve With Recorded Pivoting: Example and Test
   10.3.3: Lu Factor and Solve with Recorded Pivoting
   10.2.9: Gradient of Determinant Using Lu Factorization: Example and Test
   10.2.6: Gradient of Determinant Using LU Factorization: Example and Test
   8.14.3.g: Invert an LU Factored Equation: LU
   8.14.3: Invert an LU Factored Equation
   8.14.2.h: LU Factorization of A Square Matrix: LU
   8.14.2: LU Factorization of A Square Matrix
   8.14: Compute Determinants and Solve Equations by LU Factorization
lufactor 8.14.2.2: Source: LuFactor
lufactor: 8.14.2.1: LuFactor: Example and Test
luinvert 8.14.3.2: Source: LuInvert
luinvert: 8.14.3.1: LuInvert: Example and Test
luratio: 12.10.3.1: LuRatio: Example and Test
lusolve 12.1.g.a: Frequently Asked Questions and Answers: Matrix Inverse.LuSolve
        8.14.1.2: Source: LuSolve
        8.14.1.1: LuSolve With Complex Arguments: Example and Test
M
m 12.9.2.d: Compute Determinant using Expansion by Minors: m
  12.9.1.e: Determinant of a Minor: m
  12.8.11.j: User Defined Atomic AD Functions: m
  12.8.10.m: Nonlinear Programming Using the CppAD Interface to Ipopt: m
  11.4.8.c: Adolc Test Utility: Allocate and Free Memory For a Matrix: m
  11.2.8.g: Evaluate a Function That Has a Sparse Jacobian: m
  11.2.2.f: Determinant of a Minor: m
  11.1.7.e: Speed Testing Sparse Jacobian: m
  10.3.3.e: Lu Factor and Solve with Recorded Pivoting: m
  8.21.g: An Error Controller for Gear's Ode Solvers: M
  8.20.e: An Arbitrary Order Gear Method: m
  8.18.f: A 3rd and 4th Order Rosenbrock ODE Solver: M
  8.17.g: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: M
  8.16.d: Multi-dimensional Romberg Integration: m
  8.14.1.h: Compute Determinant and Solve Linear Equations: m
  5.8.11.h.b: Non-Smooth Optimization Using Abs-normal Quadratic Approximations: f.m
  5.8.10.j: abs_normal: Minimize a Linear Abs-normal Approximation: m
  5.8.7.g.b: Non-Smooth Optimization Using Abs-normal Linear Approximations: f.m
  5.8.6.j: abs_normal: Minimize a Linear Abs-normal Approximation: m
  5.8.3.h: abs_normal: Evaluate First Order Approximation: m
  5.8.1.b.b: Create An Abs-normal Representation of a Function: f.m
  5.3.5.d.b: Multiple Directions Forward Mode: Notation.m
  5.3.4.c.b: Multiple Order Forward Mode: Notation.m
machine 12.8.8: Machine Epsilon For AD Types
        10.2.15: Example Differentiating a Stack Machine Interpreter
macro 12.8.5.n.a: Routines That Track Use of New and Delete: TrackCount.Macro
      12.8.5.m.a: Routines That Track Use of New and Delete: TrackExtend.Macro
      12.8.5.l.a: Routines That Track Use of New and Delete: TrackDelVec.Macro
      12.8.5.k.a: Routines That Track Use of New and Delete: TrackNewVec.Macro
      8.1.2: CppAD Assertions During Execution
      4.7.9.1.e: Example AD<Base> Where Base Constructor Allocates Memory: Boolean Operator Macro
      4.7.9.1.d: Example AD<Base> Where Base Constructor Allocates Memory: Binary Operator Macro
      4.7.9.1.c: Example AD<Base> Where Base Constructor Allocates Memory: Compound Assignment Macro
macros 8.11.c.a: Obtain Nan or Determine if a Value is Nan: Include.Macros
main 12.9.8: Main Program For Comparing C and C++ Speed
     11.2.a: Speed Testing Utilities: Speed Main Program
     3.3: Correctness Tests For Exponential Approximation in Introduction
make 12.8.13.v: Autotools Unix Test and Installation: make install
     12.8.13.e: Autotools Unix Test and Installation: make
     2.3.c: Checking the CppAD Examples and Tests: Subsets of make check
     2.2.c: Using CMake to Configure CppAD: make check
makefile 2.2: Using CMake to Configure CppAD
management 10.2.13.g: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: Memory Management
           4.7.9.3.1.b: Using Adolc with Multiple Levels of Taping: Example and Test: Memory Management
mat 11.4.8.e: Adolc Test Utility: Allocate and Free Memory For a Matrix: mat
    5.8.2.g: abs_normal: Print a Vector or Matrix: mat
mat_mul 12.8.11.5.1.j: Define Matrix Multiply as a User Atomic Operation: Declare mat_mul Function
mat_sum_sq 11.2.6.2: Source: mat_sum_sq
           11.2.6.1: Sum of the Elements of the Square of a Matrix: Example and Test
           11.2.6: Sum Elements of a Matrix Times Itself
math 12.8.12.c.d: zdouble: An AD Base Type With Absolute Zero: Syntax.Standard Math
     12.3.2.c: The Theory of Reverse Mode: Standard Math Functions
     12.3.1.c: The Theory of Forward Mode: Standard Math Functions
     4.7.9.6.l: Enable use of AD<Base> where Base is std::complex<double>: Invalid Unary Math
     4.7.9.6.k: Enable use of AD<Base> where Base is std::complex<double>: Valid Unary Math
     4.7.9.5.h: Enable use of AD<Base> where Base is double: Unary Standard Math
     4.7.9.4.h: Enable use of AD<Base> where Base is float: Unary Standard Math
     4.7.9.3.k: Enable use of AD<Base> where Base is Adolc's adouble Type: Unary Standard Math
     4.7.9.1.o: Example AD<Base> Where Base Constructor Allocates Memory: Unary Standard Math
     4.7.5.b: Base Type Requirements for Standard Math Functions: Unary Standard Math
     4.7.5: Base Type Requirements for Standard Math Functions
     4.4.3: The Binary Math Functions
     4.4.2: The Unary Standard Math Functions
mathematical 3.2.7.b: exp_eps: Second Order Reverse Sweep: Mathematical Form
             3.2.6.c: exp_eps: Second Order Forward Mode: Mathematical Form
             3.2.5.b: exp_eps: First Order Reverse Sweep: Mathematical Form
             3.2.4.b: exp_eps: First Order Forward Sweep: Mathematical Form
             3.2.3.a: exp_eps: Operation Sequence and Zero Order Forward Sweep: Mathematical Form
             3.2.c: An Epsilon Accurate Exponential Approximation: Mathematical Function
             3.1.7.b: exp_2: Second Order Reverse Mode: Mathematical Form
             3.1.6.c: exp_2: Second Order Forward Mode: Mathematical Form
             3.1.5.b: exp_2: First Order Reverse Mode: Mathematical Form
             3.1.4.c: exp_2: First Order Forward Mode: Mathematical Form
             3.1.3.a: exp_2: Operation Sequence and Zero Order Forward Mode: Mathematical Form
             3.1.c: Second Order Exponential Approximation: Mathematical Form
matrices 8.d.g: Some General Purpose Utilities: Miscellaneous.Sparse Matrices
         4.4.7.2.17.1.c.b: Atomic Eigen Matrix Inversion Class: Theory.Product of Three Matrices
         4.4.7.2.16.1.d.b: Atomic Eigen Matrix Multiply Class: Theory.Product of Two Matrices
matrix 12.10.3.d: LU Factorization of A Square Matrix and Stability Calculation: Matrix Storage
       12.10.3: LU Factorization of A Square Matrix and Stability Calculation
       12.8.11.5.1.f: Define Matrix Multiply as a User Atomic Operation: One Matrix Multiply
       12.8.11.5.1.e: Define Matrix Multiply as a User Atomic Operation: Matrix Indexing
       12.8.11.5.1: Define Matrix Multiply as a User Atomic Operation
       12.8.11.5: Old Matrix Multiply as a User Atomic Operation: Example and Test
       12.8.11.t.d: User Defined Atomic AD Functions: Example.Matrix Multiplication
       12.1.g: Frequently Asked Questions and Answers: Matrix Inverse
       11.7.3: Sacado Speed: Matrix Multiplication
       11.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization
       11.6.3: Fadbad Speed: Matrix Multiplication
       11.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization
       11.5.3: CppAD Speed, Matrix Multiplication
       11.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization
       11.4.8: Adolc Test Utility: Allocate and Free Memory For a Matrix
       11.4.3: Adolc Speed: Matrix Multiplication
       11.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization
       11.3.3: CppAD Speed: Matrix Multiplication (Double Version)
       11.3.2: Double Speed: Determinant Using Lu Factorization
       11.2.6: Sum Elements of a Matrix Times Itself
       11.2.5: Check Gradient of Determinant of 3 by 3 matrix
       11.2.4: Check Determinant of 3 by 3 matrix
       11.2.2: Determinant of a Minor
       11.1.3: Speed Testing Derivative of Matrix Multiply
       11.1.2.g: Speed Testing Gradient of Determinant by Minor Expansion: matrix
       11.1.1.g: Speed Testing Gradient of Determinant Using Lu Factorization: matrix
       10.3.3.f: Lu Factor and Solve with Recorded Pivoting: Matrix
       8.28.f: Sparse Matrix Row, Column, Value Representation: matrix
       8.28: Sparse Matrix Row, Column, Value Representation
       8.14.3.d: Invert an LU Factored Equation: Matrix Storage
       8.14.2.d: LU Factorization of A Square Matrix: Matrix Storage
       8.14.2: LU Factorization of A Square Matrix
       8.14.1.e: Compute Determinant and Solve Linear Equations: Matrix Storage
       8.14: Compute Determinants and Solve Equations by LU Factorization
       5.8.2: abs_normal: Print a Vector or Matrix
       4.4.7.2.19.1.i: Matrix Multiply as an Atomic Operation: Reverse Matrix Multiply
       4.4.7.2.19.1.h: Matrix Multiply as an Atomic Operation: Forward Matrix Multiply
       4.4.7.2.19.1.b: Matrix Multiply as an Atomic Operation: Matrix Dimensions
       4.4.7.2.19.1: Matrix Multiply as an Atomic Operation
       4.4.7.2.19: User Atomic Matrix Multiply: Example and Test
       4.4.7.2.17.1.b: Atomic Eigen Matrix Inversion Class: Matrix Dimensions
       4.4.7.2.17.1: Atomic Eigen Matrix Inversion Class
       4.4.7.2.17: Atomic Eigen Matrix Inverse: Example and Test
       4.4.7.2.16.1.c: Atomic Eigen Matrix Multiply Class: Matrix Dimensions
       4.4.7.2.16.1: Atomic Eigen Matrix Multiply Class
       4.4.7.2.16: Atomic Eigen Matrix Multiply: Example and Test
matrix: 11.2.6.1: Sum of the Elements of the Square of a Matrix: Example and Test
matrix_out 5.6.5.m: Compute Sparse Jacobians Using Subgraphs: matrix_out
max 4.4.6.g: Numeric Limits For an AD and Base Types: max
max_itr 7.2.10.5.l: A Multi-Threaded Newton's Method: max_itr
        7.2.10.2.h: Set Up Multi-Threaded Newton Method: max_itr
max_num_threads 12.8.13.j: Autotools Unix Test and Installation: max_num_threads
                12.8.6.12: Set Maximum Number of Threads for omp_alloc Allocator
max_threads 7.2.j.b: Run Multi-Threading Examples and Speed Tests: multi_newton.max_threads
            7.2.i.b: Run Multi-Threading Examples and Speed Tests: harmonic.max_threads
maxabs 8.21.q: An Error Controller for Gear's Ode Solvers: maxabs
       8.19.2: OdeErrControl: Example and Test Using Maxabs Argument
       8.19.p: An Error Controller for ODE Solvers: maxabs
maximum 12.8.6.12: Set Maximum Number of Threads for omp_alloc Allocator
        12.8.6.1: Set and Get Maximum Number of Threads for omp_alloc Allocator
        7: Using CppAD in a Multi-Threading Environment
        2.2: Using CMake to Configure CppAD
maxitr 5.8.11.l: Non-Smooth Optimization Using Abs-normal Quadratic Approximations: maxitr
       5.8.10.r: abs_normal: Minimize a Linear Abs-normal Approximation: maxitr
       5.8.9.o: abs_normal: Solve a Quadratic Program With Box Constraints: maxitr
       5.8.8.m: Solve a Quadratic Program Using Interior Point Method: maxitr
       5.8.7.l: Non-Smooth Optimization Using Abs-normal Linear Approximations: maxitr
       5.8.6.q: abs_normal: Minimize a Linear Abs-normal Approximation: maxitr
       5.8.5.k: abs_normal: Solve a Linear Program With Box Constraints: maxitr
       5.8.4.j: abs_normal: Solve a Linear Program Using Simplex Method: maxitr
measurement 12.8.10.2.1.c.c: An ODE Inverse Problem Example: Measurements.Simulated Measurement Values
            9.3.c.c: ODE Inverse Problem Definitions: Source Code: Measurements.Simulated Measurement Values
measurements 12.8.10.2.1.c: An ODE Inverse Problem Example: Measurements
             9.3.c: ODE Inverse Problem Definitions: Source Code: Measurements
mega_sum 7.2.8.6.h: Timing Test of Multi-Threaded Summation of 1/i: mega_sum
         7.2.i.c: Run Multi-Threading Examples and Speed Tests: harmonic.mega_sum
member 12.8.2: ADFun Object Deprecated Member Functions
       4.7.1: Required Base Class Member Functions
memory 12.8.11.b.c: User Defined Atomic AD Functions: Syntax Function.Free Static Memory
       12.8.7: Memory Leak Detection
       12.8.6.13: OpenMP Memory Allocator: Example and Test
       12.8.6.11: Check If A Memory Allocation is Efficient for Another Use
       12.8.6.10: Return A Raw Array to The Available Memory for a Thread
       12.8.6.9: Allocate Memory and Create A Raw Array
       12.8.6.8: Amount of Memory Available for Quick Use by a Thread
       12.8.6.7: Amount of Memory a Thread is Currently Using
       12.8.6.6: Free Memory Currently Available for Quick Use by a Thread
       12.8.6.5: Return Memory to omp_alloc
       12.8.6.4: Get At Least A Specified Amount of Memory
       12.8.6: A Quick OpenMP Memory Allocator Used by CppAD
       12.8.5: Routines That Track Use of New and Delete
       12.8.2.e: ADFun Object Deprecated Member Functions: Memory
       12.6.g.c: The CppAD Wish List: Optimization.Memory
       12.1.k: Frequently Asked Questions and Answers: Tape Storage: Disk or Memory
       12.1.j.c: Frequently Asked Questions and Answers: Speed.Memory Allocation
       11.4.8: Adolc Test Utility: Allocate and Free Memory For a Matrix
       11.1.f.b: Running the Speed Test Program: Global Options.memory
       10.2.13.g: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: Memory Management
       8.23.14: Free All Memory That Was Allocated for Use by thread_alloc
       8.23.11: Amount of Memory Available for Quick Use by a Thread
       8.23.10: Amount of Memory a Thread is Currently Using
       8.23.9: Control When Thread Alloc Retains Memory For Future Use
       8.23.8.b.a: Free Memory Currently Available for Quick Use by a Thread: Purpose.Extra Memory
       8.23.8: Free Memory Currently Available for Quick Use by a Thread
       8.23.7: Return Memory to thread_alloc
       8.23.6: Get At Least A Specified Amount of Memory
       8.23.1: Fast Multi-Threading Memory Allocator: Example and Test
       8.23: A Fast Multi-Threading Memory Allocator
       8.22.n: The CppAD::vector Template Class: Memory and Parallel Mode
       8.22.m.a: The CppAD::vector Template Class: vectorBool.Memory
       8.d.c: Some General Purpose Utilities: Miscellaneous.Multi-Threading Memory Allocation
       5.7: Optimize an ADFun Object Tape
       5.3.8.1: Controlling Taylor Coefficient Memory Allocation: Example and Test
       5.3.8.d.b: Controlling Taylor Coefficients Memory Allocation: c.Freeing Memory
       5.3.8.d.a: Controlling Taylor Coefficients Memory Allocation: c.Pre-Allocating Memory
       5.3.8: Controlling Taylor Coefficients Memory Allocation
       4.7.9.3.1.b: Using Adolc with Multiple Levels of Taping: Example and Test: Memory Management
       4.7.9.1: Example AD<Base> Where Base Constructor Allocates Memory
       4.6.k: AD Vectors that Record Index Operations: Speed and Memory
       4.4.7.1.c.a: Checkpointing Functions: Purpose.Reduce Memory
       2.2: Using CMake to Configure CppAD
memory_leak 12.8.7: Memory Leak Detection
memory_ok 8.6.g: Object that Runs a Group of Tests: memory_ok
message 12.8.7.j: Memory Leak Detection: Error Message
        5.10.f: Check an ADFun Object For Nan Results: Error Message
method 12.8.10.2.1.g: An ODE Inverse Problem Example: Simultaneous Method
       12.8.10.2.1.f: An ODE Inverse Problem Example: Black Box Method
       11.1.7.b: Speed Testing Sparse Jacobian: Method
       11.1.6.b: Speed Testing Sparse Hessian: Method
       11.1.5.c: Speed Testing Second Derivative of a Polynomial: Method
       11.1.4.c: Speed Testing the Jacobian of Ode Solution: Method
       11.1.2.c: Speed Testing Gradient of Determinant by Minor Expansion: Method
       11.1.1.c: Speed Testing Gradient of Determinant Using Lu Factorization: Method
       10.2.13.e: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: Taylor's Method Using AD
       10.2.12.e: Taylor's Ode Solver: A Multi-Level AD Example and Test: Taylor's Method Using AD
       9.3.f: ODE Inverse Problem Definitions: Source Code: Solution Method
       8.20.o: An Arbitrary Order Gear Method: Gear's Method
       8.20: An Arbitrary Order Gear Method
       8.19.f: An Error Controller for ODE Solvers: Method
       7.2.10.6: Timing Test of Multi-Threaded Newton Method
       7.2.10.5.d: A Multi-Threaded Newton's Method: Method
       7.2.10.5: A Multi-Threaded Newton's Method
       7.2.10.4: Take Down Multi-threaded Newton Method
       7.2.10.3: Do One Thread's Work for Multi-Threaded Newton Method
       7.2.10.2: Set Up Multi-Threaded Newton Method
        7.2.10.1: Common Variables Used by Multi-Threaded Newton Method
       7.2.10: Multi-Threaded Newton Method Example / Test
       5.8.10.t: abs_normal: Minimize a Linear Abs-normal Approximation: Method
       5.8.8: Solve a Quadratic Program Using Interior Point Method
       5.8.6.s: abs_normal: Minimize a Linear Abs-normal Approximation: Method
       5.8.4: abs_normal: Solve a Linear Program Using Simplex Method
       5.6.5.c: Compute Sparse Jacobians Using Subgraphs: Method
       5.5.11.c: Subgraph Dependency Sparsity Patterns: Method
       4.4.7.1.d: Checkpointing Functions: Method
       4.4.2.8.c: The Base 10 Logarithm Function: log10: Method
microsoft 11.1.8: Microsoft Version of Elapsed Number of Seconds
          8.5.1.d: Returns Elapsed Number of Seconds: Microsoft Systems
min 4.4.6.f: Numeric Limits For an AD and Base Types: min
min_bytes 12.8.6.4.d: Get At Least A Specified Amount of Memory: min_bytes
          8.23.6.c: Get At Least A Specified Amount of Memory: min_bytes
min_nso_linear 5.8.7.2: min_nso_linear Source Code
min_nso_linear: 5.8.7.1: abs_normal min_nso_linear: Example and Test
min_nso_quad 5.8.11.2: min_nso_quad Source Code
min_nso_quad: 5.8.11.1: abs_normal min_nso_quad: Example and Test
minimize 5.8.10: abs_normal: Minimize a Linear Abs-normal Approximation
         5.8.6: abs_normal: Minimize a Linear Abs-normal Approximation
minor 12.9.1: Determinant of a Minor
      11.7.1: Sacado Speed: Gradient of Determinant by Minor Expansion
      11.6.1: Fadbad Speed: Gradient of Determinant by Minor Expansion
      11.5.1: CppAD Speed: Gradient of Determinant by Minor Expansion
      11.4.1: Adolc Speed: Gradient of Determinant by Minor Expansion
      11.3.1: Double Speed: Determinant by Minor Expansion
      11.2.2: Determinant of a Minor
      11.1.2: Speed Testing Gradient of Determinant by Minor Expansion
minor: 11.2.2.1: Determinant of a Minor: Example and Test
minors 12.9.2: Compute Determinant using Expansion by Minors
       11.2.3.1: Determinant Using Expansion by Minors: Example and Test
       11.2.3: Determinant Using Expansion by Minors
       11.2.1.1: Determinant Using Lu Factorization: Example and Test
       10.2.8: Gradient of Determinant Using Expansion by Minors: Example and Test
       10.2.5: Gradient of Determinant Using Expansion by Minors: Example and Test
minors: 11.2.3.1: Determinant Using Expansion by Minors: Example and Test
        10.2.8: Gradient of Determinant Using Expansion by Minors: Example and Test
        10.2.5: Gradient of Determinant Using Expansion by Minors: Example and Test
minus 4.4.2.19: The Exponential Function Minus One: expm1
      4.4.1.4: AD Compound Assignment Operators
      4.4.1.3.2: AD Binary Subtraction: Example and Test
      4.4.1.3: AD Binary Arithmetic Operators
      4.4.1.2.1: AD Unary Minus Operator: Example and Test
      4.4.1.2: AD Unary Minus Operator
miscellaneous 8.d: Some General Purpose Utilities: Miscellaneous
mode 12.8.11.m.c: User Defined Atomic AD Functions: afun.Parallel Mode
     12.8.6.2: Is The Current Execution in OpenMP Parallel Mode
     12.8.3: Comparison Changes During Zero Order Forward Mode
     12.6.l: The CppAD Wish List: Forward Mode Recomputation
     12.3.3: An Important Reverse Mode Identity
     12.3.2.9: Error Function Reverse Mode Theory
     12.3.2.8: Tangent and Hyperbolic Tangent Reverse Mode Theory
     12.3.2.7: Inverse Cosine and Hyperbolic Cosine Reverse Mode Theory
     12.3.2.6: Inverse Sine and Hyperbolic Sine Reverse Mode Theory
     12.3.2.5: Inverse Tangent and Hyperbolic Tangent Reverse Mode Theory
     12.3.2.3: Square Root Function Reverse Mode Theory
     12.3.2.2: Logarithm Function Reverse Mode Theory
     12.3.2.1: Exponential Function Reverse Mode Theory
     12.3.2: The Theory of Reverse Mode
     12.3.1.7: Inverse Cosine and Hyperbolic Cosine Forward Mode Theory
     12.3.1.6: Inverse Sine and Hyperbolic Sine Forward Mode Theory
     12.3.1.5: Inverse Tangent and Hyperbolic Tangent Forward Mode Theory
     12.3.1.3: Square Root Function Forward Mode Theory
     12.3.1.2: Logarithm Function Forward Mode Theory
     12.3.1.1: Exponential Function Forward Mode Theory
     12.3.1: The Theory of Forward Mode
     10.2.14.d: Taylor's Ode Solver: An Example and Test: Forward Mode
     8.23.4: Is The Current Execution in Parallel Mode
     8.22.n: The CppAD::vector Template Class: Memory and Parallel Mode
     8.18.m: A 3rd and 4th Order Rosenbrock ODE Solver: Parallel Mode
     8.17.n: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: Parallel Mode
     8.10.f: Check Simple Vector Concept: Parallel Mode
     8.8.d: Check NumericType Class Concept: Parallel Mode
     8.1.b.a: Replacing the CppAD Error Handler: Constructor.Parallel Mode
     7.1: Enable AD Calculations During Parallel Mode
     7: Using CppAD in a Multi-Threading Environment
     5.5.8.1: Forward Mode Hessian Sparsity: Example and Test
     5.5.8: Hessian Sparsity Pattern: Forward Mode
     5.5.7.1: Forward Mode Hessian Sparsity: Example and Test
     5.5.7: Forward Mode Hessian Sparsity Patterns
     5.5.6.1: Reverse Mode Hessian Sparsity: Example and Test
     5.5.6: Hessian Sparsity Pattern: Reverse Mode
     5.5.5.1: Reverse Mode Hessian Sparsity: Example and Test
     5.5.5: Reverse Mode Hessian Sparsity Patterns
     5.5.4.1: Reverse Mode Jacobian Sparsity: Example and Test
     5.5.4: Jacobian Sparsity Pattern: Reverse Mode
     5.5.3.1: Reverse Mode Jacobian Sparsity: Example and Test
     5.5.3: Reverse Mode Jacobian Sparsity Patterns
     5.5.2.1: Forward Mode Jacobian Sparsity: Example and Test
     5.5.2: Jacobian Sparsity Pattern: Forward Mode
     5.5.1.1: Forward Mode Jacobian Sparsity: Example and Test
     5.5.1: Forward Mode Jacobian Sparsity Patterns
     5.4.4.1: Computing Reverse Mode on Subgraphs: Example and Test
     5.4.4: Reverse Mode Using Subgraphs
     5.4.3.2: Reverse Mode General Case (Checkpointing): Example and Test
     5.4.3: Any Order Reverse Mode
     5.4.2: Second Order Reverse Mode
     5.4.1: First Order Reverse Mode
     5.3.5.c: Multiple Directions Forward Mode: Reverse Mode
     5.3.5: Multiple Directions Forward Mode
     5.3.4: Multiple Order Forward Mode
     5.2.6: Reverse Mode Second Partial Derivative Driver
     5.2.5: Forward Mode Second Partial Derivative Driver
     5.1.3.i: Stop Recording and Store Operation Sequence: Parallel Mode
     5.1.2.j: Construct an ADFun Object and Stop Recording: Parallel Mode
     5.1.1.h: Declare Independent Variables and Start Recording: Parallel Mode
     5.4: Reverse Mode
     5.3: Forward Mode
     4.4.7.2.18.1.f: AD Theory for Cholesky Factorization: Reverse Mode
     4.4.7.2.18.1.c: AD Theory for Cholesky Factorization: Forward Mode
     4.4.7.2.5: Atomic Reverse Mode
     4.4.7.2.4: Atomic Forward Mode
     4.4.5.l: Discrete AD Functions: Parallel Mode
     4.3.6.1: Printing During Forward Mode: Example and Test
     4.3.6: Printing AD Values During Forward Mode
     3.2.6: exp_eps: Second Order Forward Mode
     3.2.5: exp_eps: First Order Reverse Sweep
     3.1.7.1: exp_2: Verify Second Order Reverse Sweep
     3.1.5.1: exp_2: Verify First Order Reverse Sweep
     3.1.7: exp_2: Second Order Reverse Mode
     3.1.6: exp_2: Second Order Forward Mode
     3.1.5: exp_2: First Order Reverse Mode
     3.1.4: exp_2: First Order Forward Mode
     3.1.3: exp_2: Operation Sequence and Zero Order Forward Mode
     3.b.c: An Introduction by Example to Algorithmic Differentiation: Preface.Reverse Mode
     3.b.b: An Introduction by Example to Algorithmic Differentiation: Preface.Forward Mode
mode: 12.1.h: Frequently Asked Questions and Answers: Mode: Forward or Reverse
      5.6.5.1: Computing Sparse Jacobian Using Reverse Mode: Example and Test
      5.6.1.2: Computing Sparse Jacobian Using Reverse Mode: Example and Test
      5.6.1.1: Computing Sparse Jacobian Using Forward Mode: Example and Test
       5.4.3.1: Third Order Reverse Mode: Example and Test
       5.4.2.1: Second Order Reverse Mode: Example and Test
      5.4.1.1: First Order Reverse Mode: Example and Test
      5.3.5.1: Forward Mode: Example and Test of Multiple Directions
      5.3.4.2: Forward Mode: Example and Test of Multiple Orders
      5.3.4.1: Forward Mode: Example and Test
      5.3.3: Second Order Forward Mode: Derivative Values
      5.3.2: First Order Forward Mode: Derivative Values
      5.3.1: Zero Order Forward Mode: Function Values
      4.3.6.2: Print During Zero Order Forward Mode: Example and Test
      4.3.6.1: Printing During Forward Mode: Example and Test
monthly 2.1.h: Download The CppAD Source Code: Monthly Versions
more 4.7.3.a.b: Base Type Requirements for Identically Equal Comparisons: EqualOpSeq.More Complicated Cases
motivation 12.8.12.d: zdouble: An AD Base Type With Absolute Zero: Motivation
           10.2.10.b: Using Multiple Levels of AD: Motivation
           8.5.c: Determine Amount of Time to Execute a Test: Motivation
           8.4.c: Run One Speed Test and Print Results: Motivation
           8.3.c: Run One Speed Test and Return Results: Motivation
           4.5.5.c: Check if Two Value are Identically Equal: Motivation
move 8.22.e.c: The CppAD::vector Template Class: Assignment.Move Semantics
ms 12.8.9.d: Choosing The Vector Testing Template Class: MS Windows
msg 8.1.2.h: CppAD Assertions During Execution: Msg
    8.1.j: Replacing the CppAD Error Handler: msg
mul_level
     checkpoint 4.4.7.1.4: Checkpointing an Extended ODE Solver: Example and Test
multi 12.8.6.13: OpenMP Memory Allocator: Example and Test
      12.8.5: Routines That Track Use of New and Delete
      8.16: Multi-dimensional Romberg Integration
multi-dimensional 8.16: Multi-dimensional Romberg Integration
multi-level 10.2.13: Taylor's Ode Solver: A Multi-Level Adolc Example and Test
            10.2.12: Taylor's Ode Solver: A Multi-Level AD Example and Test
multi-thread 8.23.1: Fast Multi-Threading Memory Allocator: Example and Test
multi-threaded 7.2.10.6: Timing Test of Multi-Threaded Newton Method
               7.2.10.5: A Multi-Threaded Newton's Method
               7.2.10.4: Take Down Multi-threaded Newton Method
               7.2.10.3: Do One Thread's Work for Multi-Threaded Newton Method
               7.2.10.2: Set Up Multi-Threaded Newton Method
                7.2.10.1: Common Variables Used by Multi-Threaded Newton Method
               7.2.10: Multi-Threaded Newton Method Example / Test
               7.2.9.7: Timing Test for Multi-Threaded User Atomic Calculation
               7.2.9.6: Run Multi-Threaded User Atomic Calculation
               7.2.9.5: Multi-Threaded User Atomic Take Down
               7.2.9.4: Multi-Threaded User Atomic Worker
               7.2.9.3: Multi-Threaded User Atomic Set Up
               7.2.9.2: Multi-Threaded User Atomic Common Information
               7.2.8.6: Timing Test of Multi-Threaded Summation of 1/i
               7.2.8.5: Multi-Threaded Implementation of Summation of 1/i
multi-threading 12.8.6: A Quick OpenMP Memory Allocator Used by CppAD
                12.8.5.o: Routines That Track Use of New and Delete: Multi-Threading
                12.6.a: The CppAD Wish List: Multi-Threading
                8.23.2: Setup thread_alloc For Use in Multi-Threading Environment
                8.23.1: Fast Multi-Threading Memory Allocator: Example and Test
                8.23: A Fast Multi-Threading Memory Allocator
                8.d.c: Some General Purpose Utilities: Miscellaneous.Multi-Threading Memory Allocation
                7.2.9: Multi-Threading User Atomic Example / Test
                7.2.8.4: Take Down Multi-threading Sum of 1/i
                7.2.8.2: Set Up Multi-threading Sum of 1/i
                7.2.8.1: Common Variables Used by Multi-threading Sum of 1/i
                7.2.8: Multi-Threading Harmonic Summation Example / Test
                7.2: Run Multi-Threading Examples and Speed Tests
                7: Using CppAD in a Multi-Threading Environment
multi_newton 7.2.j: Run Multi-Threading Examples and Speed Tests: multi_newton
multiple 11.1.3: Speed Testing Derivative of Matrix Multiply
         10.2.10.2: Computing a Jacobian With Constants that Change
         10.2.10.1: Multiple Level of AD: Example and Test
         10.2.10: Using Multiple Levels of AD
         5.3.5.1: Forward Mode: Example and Test of Multiple Directions
         5.3.5: Multiple Directions Forward Mode
         5.3.4.2: Forward Mode: Example and Test of Multiple Orders
         5.3.4.k.b: Multiple Order Forward Mode: yq.Multiple Orders
         5.3.4.g.b: Multiple Order Forward Mode: xq.Multiple Orders
         5.3.4: Multiple Order Forward Mode
         5.4.a: Reverse Mode: Multiple Directions
         4.7.9.3.1: Using Adolc with Multiple Levels of Taping: Example and Test
         4.4.7.1.c.e: Checkpointing Functions: Purpose.Multiple Level AD
         4.4.1.4: AD Compound Assignment Operators
multiple-levels 4.4.7.1.2: Atomic Operations and Multiple-Levels of AD: Example and Test
multiplication 12.8.11.t.d: User Defined Atomic AD Functions: Example.Matrix Multiplication
               12.3.2.b.c: The Theory of Reverse Mode: Binary Operators.Multiplication
               12.3.1.b.c: The Theory of Forward Mode: Binary Operators.Multiplication
               11.7.3: Sacado Speed: Matrix Multiplication
               11.6.3: Fadbad Speed: Matrix Multiplication
               11.5.3: CppAD Speed, Matrix Multiplication
               11.4.3: Adolc Speed: Matrix Multiplication
               11.3.3: CppAD Speed: Matrix Multiplication (Double Version)
               4.4.3.3: Absolute Zero Multiplication
               4.4.1.4.j.c: AD Compound Assignment Operators: Derivative.Multiplication
               4.4.1.3.j.c: AD Binary Arithmetic Operators: Derivative.Multiplication
multiplication: 4.4.3.3.1: AD Absolute Zero Multiplication: Example and Test
                4.4.1.4.3: AD Compound Assignment Multiplication: Example and Test
                4.4.1.3.3: AD Binary Multiplication: Example and Test
multiply 12.8.11.5.1.f: Define Matrix Multiply as a User Atomic Operation: One Matrix Multiply
         12.8.11.5.1: Define Matrix Multiply as a User Atomic Operation
         12.8.11.5: Old Matrix Multiply as a User Atomic Operation: Example and Test
         11.7.3: Sacado Speed: Matrix Multiplication
         11.6.3: Fadbad Speed: Matrix Multiplication
         11.5.3: CppAD Speed, Matrix Multiplication
         11.4.3: Adolc Speed: Matrix Multiplication
         11.3.3: CppAD Speed: Matrix Multiplication (Double Version)
         11.2.6: Sum Elements of a Matrix Times Itself
         11.1.3: Speed Testing Derivative of Matrix Multiply
         4.4.7.2.19.1.i: Matrix Multiply as an Atomic Operation: Reverse Matrix Multiply
         4.4.7.2.19.1.h: Matrix Multiply as an Atomic Operation: Forward Matrix Multiply
         4.4.7.2.19.1: Matrix Multiply as an Atomic Operation
         4.4.7.2.16.1: Atomic Eigen Matrix Multiply Class
         4.4.1.4.3: AD Compound Assignment Multiplication: Example and Test
         4.4.1.4: AD Compound Assignment Operators
         4.4.1.3.3: AD Binary Multiplication: Example and Test
         4.4.1.3: AD Binary Arithmetic Operators
multiply: 4.4.7.2.19: User Atomic Matrix Multiply: Example and Test
          4.4.7.2.16: Atomic Eigen Matrix Multiply: Example and Test
N
NDEBUG 12.8.5: Routines That Track Use of New and Delete
       8.9: Definition of a Simple Vector
       5.7: Optimize an ADFun Object Tape
NearEqual 8.2: Determine if Two Values Are Nearly Equal
          4.5.2: Compare AD and Base Objects for Nearly Equal
NearEqualExt 4.5.2.1: Compare AD with Base Objects: Example and Test
NULL 6: CppAD API Preprocessor Symbols
NumericType 8.8.1: The CheckNumericType Function: Example and Test
            8.7.1: The NumericType: Example and Test
n_sweep 11.1.7.j: Speed Testing Sparse Jacobian: n_sweep
        11.1.6.i: Speed Testing Sparse Hessian: n_sweep
        11.1.i.a: Running the Speed Test Program: Speed Results.n_sweep
        5.6.4.j: Sparse Hessian: n_sweep
        5.6.3.l: Computing Sparse Hessians: n_sweep
        5.6.2.i: Sparse Jacobian: n_sweep
        5.6.1.n: Computing Sparse Jacobians: n_sweep
name 12.11.a: CppAD Addons: Name
     12.8.b: CppAD Deprecated API Features: Name Changes
     8.6.f: Object that Runs a Group of Tests: name
     8.4.e.c: Run One Speed Test and Print Results: Test.name
     8.2.1.a: NearEqual Function: Example and Test: File Name
     5.8.2.d: abs_normal: Print a Vector or Matrix: name
     4.4.7.2.1.c.c: Atomic Function Constructor: atomic_base.name
     4.4.7.1.h: Checkpointing Functions: name
     4.4.5.d: Discrete AD Functions: name
namespace 12.11.e: CppAD Addons: Namespace
          12.8.10.d: Nonlinear Programming Using the CppAD Interface to Ipopt: cppad_ipopt namespace
          12.1.i: Frequently Asked Questions and Answers: Namespace
          10.2.4.f: Enable Use of Eigen Linear Algebra Package with CppAD: CppAD Namespace
          f: cppad-20171217: A Package for Differentiation of C++ Algorithms: Namespace
nan 12.8.12.c.e: zdouble: An AD Base Type With Absolute Zero: Syntax.Nan
    8.19.1.a: OdeErrControl: Example and Test: Nan
    8.19.f.b: An Error Controller for ODE Solvers: Method.Nan
    8.18.e.f: A 3rd and 4th Order Rosenbrock ODE Solver: Fun.Nan
    8.11: Obtain Nan or Determine if a Value is Nan
    8.11: Obtain Nan or Determine if a Value is Nan
    5.10: Check an ADFun Object For Nan Results
    5.7.h.b: Optimize an ADFun Object Tape: Atomic Functions.nan
nan(zero) 8.11.f: Obtain Nan or Determine if a Value is Nan: nan(zero)
nan: 8.11.1: nan: Example and Test
     5.10.1: ADFun Checking For Nan: Example and Test
nc 8.28.i: Sparse Matrix Row, Column, Value Representation: nc
   8.27.g: Row and Column Index Sparsity Patterns: nc
   5.8.2.f: abs_normal: Print a Vector or Matrix: nc
ncopy 12.8.5.j: Routines That Track Use of New and Delete: ncopy
ndebug 12.8.6.11.h: Check If A Memory Allocation is Efficient for Another Use: NDEBUG
       12.8.6.5.f: Return Memory to omp_alloc: NDEBUG
       12.1.j.a: Frequently Asked Questions and Answers: Speed.NDEBUG
       8.23.7.e: Return Memory to thread_alloc: NDEBUG
       8.1.2.c: CppAD Assertions During Execution: NDEBUG
near 8.2: Determine if Two Values Are Nearly Equal
nearequal 8.2.1: NearEqual Function: Example and Test
nearly 8.2: Determine if Two Values Are Nearly Equal
       4.5.2: Compare AD and Base Objects for Nearly Equal
nested 5.7.6: Example Optimization and Nested Conditional Expressions
new 12.8.5.1: Tracking Use of New and Delete: Example and Test
    12.8.5: Routines That Track Use of New and Delete
    12.6.b.b: The CppAD Wish List: Atomic.New API
newlen 12.8.5.h: Routines That Track Use of New and Delete: newlen
newptr 12.8.5.i: Routines That Track Use of New and Delete: newptr
newton 7.2.10.6: Timing Test of Multi-Threaded Newton Method
       7.2.10.4: Take Down Multi-threaded Newton Method
       7.2.10.3: Do One Thread's Work for Multi-Threaded Newton Method
       7.2.10.2: Set Up Multi-Threaded Newton Method
       7.2.10.1: Common Variables Used by Multi-Threaded Newton Method
       7.2.10: Multi-Threaded Newton Method Example / Test
       5.8.8.t: Solve a Quadratic Program Using Interior Point Method: Newton Step
newton's 7.2.10.5: A Multi-Threaded Newton's Method
nnz 8.28.j: Sparse Matrix Row, Column, Value Representation: nnz
    8.27.h: Row and Column Index Sparsity Patterns: nnz
no_compare_op 5.7.d.b: Optimize an ADFun Object Tape: options.no_compare_op
no_conditional_skip 5.7.d.a: Optimize an ADFun Object Tape: options.no_conditional_skip
no_print_for_op 5.7.d.c: Optimize an ADFun Object Tape: options.no_print_for_op
non-smooth 5.8.11: Non-Smooth Optimization Using Abs-normal Quadratic Approximations
           5.8.7: Non-Smooth Optimization Using Abs-normal Linear Approximations
           5.8: Abs-normal Representation of Non-Smooth Functions
non-zero 5.3.5.j: Multiple Directions Forward Mode: Non-Zero Lower Orders
nonlinear 12.8.10.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
          12.8.10: Nonlinear Programming Using the CppAD Interface to Ipopt
          9.2: Nonlinear Programming Retaping: Example and Test
          9.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
          9: Use Ipopt to Solve a Nonlinear Programming Problem
norm 4.4.7.2.12: Atomic Euclidean Norm Squared: Example and Test
not 4.7.9: Example AD Base Types That are not AD<OtherBase>
    4.7.4.c: Base Type Requirements for Ordered Comparisons: Not Ordered
    4.7.2.c.b: Base Type Requirements for Conditional Expressions: CondExpTemplate.Not Ordered
notation 12.8.10.2.1.a: An ODE Inverse Problem Example: Notation
         12.3.3.a: An Important Reverse Mode Identity: Notation
         12.3.2.9.a: Error Function Reverse Mode Theory: Notation
         12.3.2.8.a: Tangent and Hyperbolic Tangent Reverse Mode Theory: Notation
         12.3.2.a: The Theory of Reverse Mode: Taylor Notation
         12.3.1.a: The Theory of Forward Mode: Taylor Notation
         8.21.d: An Error Controller for Gear's Ode Solvers: Notation
         8.19.d: An Error Controller for ODE Solvers: Notation
         5.5.11.b: Subgraph Dependency Sparsity Patterns: Notation
         5.4.4.c: Reverse Mode Using Subgraphs: Notation
         5.4.3.c: Any Order Reverse Mode: Notation
         5.3.5.d: Multiple Directions Forward Mode: Notation
         5.3.4.c: Multiple Order Forward Mode: Notation
         4.7.1.a: Required Base Class Member Functions: Notation
         4.4.7.2.18.1.b: AD Theory for Cholesky Factorization: Notation
nr 8.28.h: Sparse Matrix Row, Column, Value Representation: nr
   8.27.f: Row and Column Index Sparsity Patterns: nr
   5.8.2.e: abs_normal: Print a Vector or Matrix: nr
nstep 8.21.r: An Error Controller for Gear's Ode Solvers: nstep
      8.19.q: An Error Controller for ODE Solvers: nstep
num_bytes 12.8.6.11.e: Check If A Memory Allocation is Efficient for Another Use: num_bytes
          12.8.6.8.e: Amount of Memory Available for Quick Use by a Thread: num_bytes
          12.8.6.7.e: Amount of Memory a Thread is Currently Using: num_bytes
          8.23.11.d: Amount of Memory Available for Quick Use by a Thread: num_bytes
          8.23.10.d: Amount of Memory a Thread is Currently Using: num_bytes
num_itr 7.2.9.1.c.a: Defines a User Atomic Operation that Computes Square Root: au.num_itr
num_solve 7.2.9.7.f: Timing Test for Multi-Threaded User Atomic Calculation: num_solve
num_sub 7.2.10.6.i: Timing Test of Multi-Threaded Newton Method: num_sub
        7.2.10.5.h: A Multi-Threaded Newton's Method: num_sub
        7.2.10.2.d: Set Up Multi-Threaded Newton Method: num_sub
        7.2.j.d: Run Multi-Threading Examples and Speed Tests: multi_newton.num_sub
num_sum 7.2.10.6.j: Timing Test of Multi-Threaded Newton Method: num_sum
        7.2.8.5.f: Multi-Threaded Implementation of Summation of 1/i: num_sum
        7.2.8.2.d: Set Up Multi-threading Sum of 1/i: num_sum
        7.2.j.e: Run Multi-Threading Examples and Speed Tests: multi_newton.num_sum
num_threads 8.23.2.d: Setup thread_alloc For Use in Multi-Threading Environment: num_threads
            7.2.10.6.g: Timing Test of Multi-Threaded Newton Method: num_threads
            7.2.10.5.m: A Multi-Threaded Newton's Method: num_threads
            7.2.10.2.i: Set Up Multi-Threaded Newton Method: num_threads
            7.2.9.7.e: Timing Test for Multi-Threaded User Atomic Calculation: num_threads
            7.2.8.6.g: Timing Test of Multi-Threaded Summation of 1/i: num_threads
num_zero 7.2.10.6.h: Timing Test of Multi-Threaded Newton Method: num_zero
         7.2.j.c: Run Multi-Threading Examples and Speed Tests: multi_newton.num_zero
number 12.9.6: Returns Elapsed Number of Seconds
       12.9.5: Repeat det_by_minor Routine A Specified Number of Times
       12.8.6.12.d: Set Maximum Number of Threads for omp_alloc Allocator: number
       12.8.6.12: Set Maximum Number of Threads for omp_alloc Allocator
       12.8.6.3: Get the Current OpenMP Thread Number
       12.8.6.1.d: Set and Get Maximum Number of Threads for omp_alloc Allocator: number
       12.8.6.1: Set and Get Maximum Number of Threads for omp_alloc Allocator
       12.8.4.d: OpenMP Parallel Setup: number
       11.1.8: Microsoft Version of Elapsed Number of Seconds
       8.23.5: Get the Current Thread Number
       8.23.3.c: Get Number of Threads: number
       8.23.3: Get Number of Threads
       8.5.1: Returns Elapsed Number of Seconds
       7: Using CppAD in a Multi-Threading Environment
       5.3.9.1: Number of Variables That Can be Skipped: Example and Test
       5.3.9: Number of Variables that Can be Skipped
       5.3.7.e: Comparison Changes Between Taping and Zero Order Forward: number
       5.3.6: Number Taylor Coefficient Orders Currently Stored
       2.2: Using CMake to Configure CppAD
number_skip 5.3.9.1: Number of Variables That Can be Skipped: Example and Test
            5.3.9: Number of Variables that Can be Skipped
numbervector 12.8.10.i: Nonlinear Programming Using the CppAD Interface to Ipopt: NumberVector
numeric 9.f.d: Use Ipopt to Solve a Nonlinear Programming Problem: options.Numeric
        8.8: Check NumericType Class Concept
        8.7: Definition of a Numeric Type
        4.7.6: Base Type Requirements for Numeric Limits
        4.7.f: AD<Base> Requirements for a CppAD Base Type: Numeric Type
        4.4.6.1: Numeric Limits: Example and Test
        4.4.6: Numeric Limits For an AD and Base Types
numeric_limits 4.7.9.6.n: Enable use of AD<Base> where Base is std::complex<double>: numeric_limits
               4.7.9.5.k: Enable use of AD<Base> where Base is double: numeric_limits
               4.7.9.4.k: Enable use of AD<Base> where Base is float: numeric_limits
               4.7.9.3.p: Enable use of AD<Base> where Base is Adolc's adouble Type: numeric_limits
               4.7.9.1.s: Example AD<Base> Where Base Constructor Allocates Memory: numeric_limits
numerical 12.10: Some Numerical AD Utilities
          12.5.d: Bibliography: Numerical Recipes
          8.c: Some General Purpose Utilities: General Numerical Routines
numerictype 8.8: Check NumericType Class Concept
numerictype: 8.7.1: The NumericType: Example and Test
numtraits 10.2.4.e: Enable Use of Eigen Linear Algebra Package with CppAD: Eigen NumTraits
O
Ode 8.20: An Arbitrary Order Gear Method
OdeErrControl 8.19.2: OdeErrControl: Example and Test Using Maxabs Argument
              8.19: An Error Controller for ODE Solvers
OdeGear 8.20.1: OdeGear: Example and Test
        8.20: An Arbitrary Order Gear Method
OdeGearControl 8.21.1: OdeGearControl: Example and Test
               8.21: An Error Controller for Gear's Ode Solvers
OpenMP 7.2.3: A Simple Parallel Pthread Example and Test
obj_value 12.8.10.t.g: Nonlinear Programming Using the CppAD Interface to Ipopt: solution.obj_value
          9.m.g: Use Ipopt to Solve a Nonlinear Programming Problem: solution.obj_value
object 12.8.2: ADFun Object Deprecated Member Functions
       10.2.1: Creating Your Own Interface to an ADFun Object
       8.6: Object that Runs a Group of Tests
       5.10: Check an ADFun Object For Nan Results
       5.7: Optimize an ADFun Object Tape
       5.1.2: Construct an ADFun Object and Stop Recording
       5.1: Create an ADFun Object (Record an Operation Sequence)
       4.5.4: Is an AD Object a Parameter or Variable
       4.2.1: AD Assignment: Example and Test
       4.1.1: AD Constructors: Example and Test
objective 12.10.1: Computing Jacobian and Hessian of Bender's Reduced Objective
          12.8.10.2.3.b: ODE Fitting Using Fast Representation: Objective Function
          12.8.10.2.2.c: ODE Fitting Using Simple Representation: Objective Function
objects 5: ADFun Objects
        4.5.2: Compare AD and Base Objects for Nearly Equal
        4.3: Conversion and I/O of AD Objects
        4: AD Objects
objects: 4.5.2.1: Compare AD with Base Objects: Example and Test
obtain 8.11: Obtain Nan or Determine if a Value is Nan
       4.3.7: Convert an AD Variable to a Parameter
ode 12.8.10.2.4: Driver for Running the Ipopt ODE Example
    12.8.10.2.3.1: ODE Fitting Using Fast Representation
    12.8.10.2.2.1: ODE Fitting Using Simple Representation
    12.8.10.2.1.1: ODE Inverse Problem Definitions: Source Code
    12.8.10.2.3: ODE Fitting Using Fast Representation
    12.8.10.2.2: ODE Fitting Using Simple Representation
    12.8.10.2.1: An ODE Inverse Problem Example
    11.7.4: Sacado Speed: Gradient of Ode Solution
    11.6.4: Fadbad Speed: Ode
    11.5.4: CppAD Speed: Gradient of Ode Solution
    11.4.4: Adolc Speed: Ode
    11.3.4: Double Speed: Ode Solution
    11.2.7: Evaluate a Function Defined in Terms of an ODE
    11.1.4: Speed Testing the Jacobian of Ode Solution
    10.2.14.c: Taylor's Ode Solver: An Example and Test: ODE Solution
    10.2.14.b: Taylor's Ode Solver: An Example and Test: ODE
    10.2.14: Taylor's Ode Solver: An Example and Test
    10.2.13.d: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: Derivative of ODE Solution
    10.2.13.c: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: ODE Solution
    10.2.13.b: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: ODE
    10.2.13: Taylor's Ode Solver: A Multi-Level Adolc Example and Test
    10.2.12.d: Taylor's Ode Solver: A Multi-Level AD Example and Test: Derivative of ODE Solution
    10.2.12.c: Taylor's Ode Solver: A Multi-Level AD Example and Test: ODE Solution
    10.2.12.b: Taylor's Ode Solver: A Multi-Level AD Example and Test: ODE
    10.2.12: Taylor's Ode Solver: A Multi-Level AD Example and Test
    10.2.11: A Stiff Ode: Example and Test
    9.3: ODE Inverse Problem Definitions: Source Code
    8.21: An Error Controller for Gear's Ode Solvers
    8.19: An Error Controller for ODE Solvers
    8.18: A 3rd and 4th Order Rosenbrock ODE Solver
    8.17: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
    4.4.7.1.4.e: Checkpointing an Extended ODE Solver: Example and Test: ODE
    4.4.7.1.4.d: Checkpointing an Extended ODE Solver: Example and Test: ODE Solver
    4.4.7.1.4: Checkpointing an Extended ODE Solver: Example and Test
    4.4.7.1.3.e: Checkpointing an ODE Solver: Example and Test: ODE
    4.4.7.1.3.d: Checkpointing an ODE Solver: Example and Test: ODE Solver
    4.4.7.1.3: Checkpointing an ODE Solver: Example and Test
ode: 10.2.11: A Stiff Ode: Example and Test
ode_evaluate 11.2.7.2: Source: ode_evaluate
             11.2.7.1: ode_evaluate: Example and Test
             11.2.7: Evaluate a Function Defined in Terms of an ODE
ode_evaluate: 11.2.7.1: ode_evaluate: Example and Test
ode_inverse 9.n.c: Use Ipopt to Solve a Nonlinear Programming Problem: Example.ode_inverse
odeerrcontrol: 8.19.2: OdeErrControl: Example and Test Using Maxabs Argument
               8.19.1: OdeErrControl: Example and Test
odegear: 8.20.1: OdeGear: Example and Test
odegearcontrol: 8.21.1: OdeGearControl: Example and Test
ok 12.8.11.f: User Defined Atomic AD Functions: ok
   11.2.5.g: Check Gradient of Determinant of 3 by 3 matrix: ok
   11.2.4.g: Check Determinant of 3 by 3 matrix: ok
   8.23.14.c: Free All Memory That Was Allocated for Use by thread_alloc: ok
   8.6.h: Object that Runs a Group of Tests: ok
   7.2.11.h: Specifications for A Team of AD Threads: ok
   7.2.10.6.d: Timing Test of Multi-Threaded Newton Method: ok
   7.2.10.5.e: A Multi-Threaded Newton's Method: ok
   7.2.9.7.g: Timing Test for Multi-Threaded User Atomic Calculation: ok
   7.2.9.6.e: Run Multi-Threaded User Atomic Calculation: ok
   7.2.9.5.e: Multi-Threaded User Atomic Take Down: ok
   7.2.9.3.e: Multi-Threaded User Atomic Set Up: ok
   7.2.8.6.d: Timing Test of Multi-Threaded Summation of 1/i: ok
   7.2.8.5.d: Multi-Threaded Implementation of Summation of 1/i: ok
   5.9.i: Check an ADFun Sequence of Operations: ok
   5.8.9.r: abs_normal: Solve a Quadratic Program With Box Constraints: ok
   5.8.8.r: Solve a Quadratic Program Using Interior Point Method: ok
   5.8.5.m: abs_normal: Solve a Linear Program With Box Constraints: ok
   5.8.4.l: abs_normal: Solve a Linear Program Using Simplex Method: ok
   4.4.7.2.7.e: Atomic Reverse Jacobian Sparsity Patterns: ok
   4.4.7.2.6.e: Atomic Forward Jacobian Sparsity Patterns: ok
   4.4.7.2.5.j: Atomic Reverse Mode: ok
   4.4.7.2.4.j: Atomic Forward Mode: ok
old 12.8.11.5: Old Matrix Multiply as a User Atomic Operation: Example and Test
    12.8.11.4: Old Tan and Tanh as User Atomic Operations: Example and Test
    12.8.11.1: Old Atomic Operation Reciprocal: Example and Test
    5.6.b: Calculating Sparse Derivatives: Old Sparsity Patterns
    5.5.b: Calculating Sparsity Patterns: Old Sparsity Patterns
old_atomic 12.8.11.5.1: Define Matrix Multiply as a User Atomic Operation
           12.8.11: User Defined Atomic AD Functions
old_mat_mul 12.8.11.5.1: Define Matrix Multiply as a User Atomic Operation
oldptr 12.8.5.g: Routines That Track Use of New and Delete: oldptr
omp_alloc 12.8.6.12: Set Maximum Number of Threads for omp_alloc Allocator
          12.8.6.5: Return Memory to omp_alloc
          12.8.6.1: Set and Get Maximum Number of Threads for omp_alloc Allocator
omp_max_thread 12.8.4: OpenMP Parallel Setup
on 5.4.4.1: Computing Reverse Mode on Subgraphs: Example and Test
one 12.8.11.5.1.g: Define Matrix Multiply as a User Atomic Operation: Reverse Partials One Order
    12.8.11.5.1.f: Define Matrix Multiply as a User Atomic Operation: One Matrix Multiply
    8.16.1: One Dimensional Romberg Integration: Example and Test
    8.15.1: One Dimensional Romberg Integration: Example and Test
    8.15: One Dimensional Romberg Integration
    8.4: Run One Speed Test and Print Results
    8.3: Run One Speed Test and Return Results
    7.2.10.3: Do One Thread's Work for Multi-Threaded Newton Method
    7.2.8.3: Do One Thread's Work for Sum of 1/i
    5.3.4.k.a: Multiple Order Forward Mode: yq.One Order
    5.3.4.g.a: Multiple Order Forward Mode: xq.One Order
    5.3.4.e: Multiple Order Forward Mode: One Order
    5.3.2: First Order Forward Mode: Derivative Values
    4.4.2.20: The Logarithm of One Plus Argument: log1p
one: 4.4.2.19: The Exponential Function Minus One: expm1
onetape 11.1.f.a: Running the Speed Test Program: Global Options.onetape
op 4.5.1.c: AD Binary Comparison Operators: Op
   4.4.7.2.18.2.c.c: Atomic Eigen Cholesky Factorization Class: Public.op
   4.4.7.2.17.1.e.c: Atomic Eigen Matrix Inversion Class: Public.op
   4.4.7.2.16.1.f.c: Atomic Eigen Matrix Multiply Class: Public.op
   4.4.1.4.c: AD Compound Assignment Operators: Op
   4.4.1.3.c: AD Binary Arithmetic Operators: Op
op_index 5.3.7.f: Comparison Changes Between Taping and Zero Order Forward: op_index
openmp 12.8.6.13: OpenMP Memory Allocator: Example and Test
       12.8.6.3: Get the Current OpenMP Thread Number
       12.8.6.2: Is The Current Execution in OpenMP Parallel Mode
       12.8.6: A Quick OpenMP Memory Allocator Used by CppAD
       12.8.4: OpenMP Parallel Setup
       8.23.1: Fast Multi-Threading Memory Allocator: Example and Test
       7.2.11.1: OpenMP Implementation of a Team of AD Threads
       7.2.4: A Simple OpenMP AD: Example and Test
       7.2.1: A Simple OpenMP Example and Test
openmp/run.sh 12.7.7.al.a: Changes and Additions to CppAD During 2011: 07-11.openmp/run.sh
openmp_flags 12.8.13.l: Autotools Unix Test and Installation: openmp_flags
operand 4.4.7.2.19.1.f: Matrix Multiply as an Atomic Operation: Right Operand Element Index
        4.4.7.2.19.1.e: Matrix Multiply as an Atomic Operation: Left Operand Element Index
operation 12.8.11.5.1: Define Matrix Multiply as a User Atomic Operation
          12.8.11.1: Old Atomic Operation Reciprocal: Example and Test
          12.8.11: User Defined Atomic AD Functions
          12.6.n: The CppAD Wish List: Operation Sequence
          12.4.g: Glossary: Operation
          12.1.g.b: Frequently Asked Questions and Answers: Matrix Inverse.Atomic Operation
          11.2.7.d.a: Evaluate a Function Defined in Terms of an ODE: Float.Operation Sequence
          8.17.c: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: Operation Sequence
          8.13.j: Evaluate a Polynomial or its Derivative: Operation Sequence
          8.12.i: The Integer Power Function: Operation Sequence
          7.2.9.1: Defines a User Atomic Operation that Computes Square Root
          5.1.4: Abort Recording of an Operation Sequence
          5.1.3: Stop Recording and Store Operation Sequence
          5.1: Create an ADFun Object (Record an Operation Sequence)
          4.5.5: Check if Two Values are Identically Equal
          4.5.4.e: Is an AD Object a Parameter or Variable: Operation Sequence
          4.5.3.l: AD Boolean Functions: Operation Sequence
          4.5.2.i: Compare AD and Base Objects for Nearly Equal: Operation Sequence
          4.5.1.g: AD Binary Comparison Operators: Operation Sequence
          4.4.7.2.19.1: Matrix Multiply as an Atomic Operation
          4.4.7.2.15: Tan and Tanh as User Atomic Operations: Example and Test
          4.4.7: Atomic AD Functions
          4.4.5.1: Taping Array Index Operation: Example and Test
          4.4.5.j: Discrete AD Functions: Operation Sequence
          4.4.4.l: AD Conditional Expressions: Operation Sequence
          4.4.3.2.g: The AD Power Function: Operation Sequence
          4.4.3.1.f: AD Two Argument Inverse Tangent Function: Operation Sequence
          4.4.1.4.h: AD Compound Assignment Operators: Operation Sequence
          4.4.1.3.h: AD Binary Arithmetic Operators: Operation Sequence
          4.4.1.2.f: AD Unary Minus Operator: Operation Sequence
          4.4.1.1.e: AD Unary Plus Operator: Operation Sequence
          4.3.5.g: AD Output Stream Operator: Operation Sequence
          4.3.4.f: AD Output Stream Operator: Operation Sequence
          4.3.2.e: Convert From AD to Integer: Operation Sequence
          4.3.1.f: Convert From an AD Type to its Base Type: Operation Sequence
          3.2.6.d.c: exp_eps: Second Order Forward Mode: Operation Sequence.Operation
          3.2.6.d: exp_eps: Second Order Forward Mode: Operation Sequence
          3.2.4.c.b: exp_eps: First Order Forward Sweep: Operation Sequence.Operation
          3.2.4.c: exp_eps: First Order Forward Sweep: Operation Sequence
          3.2.3.b.e: exp_eps: Operation Sequence and Zero Order Forward Sweep: Operation Sequence.Operation
          3.2.3.b: exp_eps: Operation Sequence and Zero Order Forward Sweep: Operation Sequence
          3.2.3: exp_eps: Operation Sequence and Zero Order Forward Sweep
          3.1.6.d.c: exp_2: Second Order Forward Mode: Operation Sequence.Operation
          3.1.6.d: exp_2: Second Order Forward Mode: Operation Sequence
          3.1.4.d.b: exp_2: First Order Forward Mode: Operation Sequence.Operation
          3.1.4.d: exp_2: First Order Forward Mode: Operation Sequence
          3.1.3.c.c: exp_2: Operation Sequence and Zero Order Forward Mode: Operation Sequence.Operation
          3.1.3.c: exp_2: Operation Sequence and Zero Order Forward Mode: Operation Sequence
          3.1.3: exp_2: Operation Sequence and Zero Order Forward Mode
          3.b.d: An Introduction by Example to Algorithmic Differentiation: Preface.Operation Count
operation: 12.8.11.5: Old Matrix Multiply as a User Atomic Operation: Example and Test
           4.4.7.2.13: Reciprocal as an Atomic Operation: Example and Test
           4.4.5.1: Taping Array Index Operation: Example and Test
operations 12.6.b.d: The CppAD Wish List: Atomic.Element-wise Operations
           8.13.h.a: Evaluate a Polynomial or its Derivative: Type.Operations
           5.9: Check an ADFun Sequence of Operations
           5.7.7: Example Optimization and Cumulative Sum Operations
           5.7: Optimize an ADFun Object Tape
           4.6: AD Vectors that Record Index Operations
           4.5: Bool Valued Operations and Functions with AD Arguments
           4.4.7.1.2: Atomic Operations and Multiple-Levels of AD: Example and Test
           4.4: AD Valued Operations and Functions
operations: 12.8.11.4: Old Tan and Tanh as User Atomic Operations: Example and Test
            4.6.1: AD Vectors that Record Index Operations: Example and Test
            4.4.7.2.15: Tan and Tanh as User Atomic Operations: Example and Test
            4.4.7.2.11: Getting Started with Atomic Operations: Example and Test
operator 11: Speed Test an Operator Overloading AD Package
         5.1.2.k.c: Construct an ADFun Object and Stop Recording: Example.Assignment Operator
         5.1.2.i: Construct an ADFun Object and Stop Recording: Assignment Operator
         4.7.9.1.k: Example AD<Base> Where Base Constructor Allocates Memory: Output Operator
         4.7.9.1.e: Example AD<Base> Where Base Constructor Allocates Memory: Boolean Operator Macro
         4.7.9.1.d: Example AD<Base> Where Base Constructor Allocates Memory: Binary Operator Macro
         4.7.g: AD<Base> Requirements for a CppAD Base Type: Output Operator
         4.4.1.2: AD Unary Minus Operator
         4.4.1.1: AD Unary Plus Operator
         4.3.5: AD Output Stream Operator
         4.3.4: AD Output Stream Operator
         4.2: AD Assignment Operator
operator: 4.4.1.2.1: AD Unary Minus Operator: Example and Test
          4.4.1.1.1: AD Unary Plus Operator: Example and Test
          4.3.5.1: AD Output Operator: Example and Test
          4.3.4.1: AD Output Operator: Example and Test
operators 12.8.12.c.c: zdouble: An AD Base Type With Absolute Zero: Syntax.Arithmetic Operators
          12.8.12.c.b: zdouble: An AD Base Type With Absolute Zero: Syntax.Comparison Operators
          12.6.g.b: The CppAD Wish List: Optimization.Special Operators
          12.3.2.b: The Theory of Reverse Mode: Binary Operators
          12.3.1.b: The Theory of Forward Mode: Binary Operators
          8.7.f: Definition of a Numeric Type: Operators
          5.7.4: Example Optimization and Print Forward Operators
          5.7.3: Example Optimization and Comparison Operators
          4.7.1.h: Required Base Class Member Functions: Bool Operators
          4.7.1.g: Required Base Class Member Functions: Binary Operators
          4.7.1.f: Required Base Class Member Functions: Assignment Operators
          4.7.1.e: Required Base Class Member Functions: Unary Operators
          4.5.1: AD Binary Comparison Operators
          4.4.1.4: AD Compound Assignment Operators
          4.4.1.3: AD Binary Arithmetic Operators
          4.4.1: AD Arithmetic Operators and Compound Assignments
operators: 4.5.1.1: AD Binary Comparison Operators: Example and Test
opt_val_hes 12.10.2.1: opt_val_hes: Example and Test
            12.10.2: Jacobian and Hessian of Optimal Values
opt_val_hes: 12.10.2.1: opt_val_hes: Example and Test
optimal 12.10.2: Jacobian and Hessian of Optimal Values
optimization 12.6.g: The CppAD Wish List: Optimization
             8.18.e.h: A 3rd and 4th Order Rosenbrock ODE Solver: Fun.Optimization
             5.8.11: Non-Smooth Optimization Using Abs-normal Quadratic Approximations
             5.8.7: Non-Smooth Optimization Using Abs-normal Linear Approximations
             5.7.7: Example Optimization and Cumulative Sum Operations
             5.7.6: Example Optimization and Nested Conditional Expressions
             5.7.5: Example Optimization and Conditional Expressions
             5.7.4: Example Optimization and Print Forward Operators
             5.7.3: Example Optimization and Comparison Operators
             5.7.2: Example Optimization and Reverse Activity Analysis
             5.7.1: Example Optimization and Forward Activity Analysis
             5.7.i: Optimize an ADFun Object Tape: Checking Optimization
optimize 12.1.j.b: Frequently Asked Questions and Answers: Speed.Optimize
         11.1.f.c: Running the Speed Test Program: Global Options.optimize
         5.7: Optimize an ADFun Object Tape
         5.3.9.1: Number of Variables That Can be Skipped: Example and Test
         4.4.7.1.l: Checkpointing Functions: optimize
         4.4.4.j: AD Conditional Expressions: Optimize
option 4.4.7.2.19.c.e: User Atomic Matrix Multiply: Example and Test: Use Atomic Function.option
       4.4.7.1.n: Checkpointing Functions: option
options 11.1.g: Running the Speed Test Program: Sparsity Options
        11.1.f: Running the Speed Test Program: Global Options
        9.f: Use Ipopt to Solve a Nonlinear Programming Problem: options
        5.7.d: Optimize an ADFun Object Tape: options
        4.4.7.2.2: Set Atomic Function Options
order 12.8.11.5.1.g: Define Matrix Multiply as a User Atomic Operation: Reverse Partials One Order
      12.8.3: Comparison Changes During Zero Order Forward Mode
      12.8.2.d: ADFun Object Deprecated Member Functions: Order
      12.3.2.9.c: Error Function Reverse Mode Theory: Order Zero Z(t)
      12.3.2.8.d: Tangent and Hyperbolic Tangent Reverse Mode Theory: Order Zero Z(t)
      8.20: An Arbitrary Order Gear Method
      8.19.f.c: An Error Controller for ODE Solvers: Method.order
      8.18: A 3rd and 4th Order Rosenbrock ODE Solver
      8.17: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
      5.8.3: abs_normal: Evaluate First Order Approximation
      5.4.3.1: Third Order Reverse Mode: Example and Test
      5.4.3.i: Any Order Reverse Mode: Second Order
      5.4.3.h: Any Order Reverse Mode: First Order
      5.4.3: Any Order Reverse Mode
      5.4.2.1: Second Order Reverse Mode: Example and Test
      5.4.2.g.b: Second Order Reverse Mode: dw.Second Order Partials
      5.4.2.g.a: Second Order Reverse Mode: dw.First Order Partials
      5.4.2: Second Order Reverse Mode
      5.4.1.1: First Order Reverse Mode: Example and Test
      5.4.1: First Order Reverse Mode
      5.3.7: Comparison Changes Between Taping and Zero Order Forward
      5.3.5.1: Forward Mode: Example and Test of Multiple Directions
      5.3.5.i: Multiple Directions Forward Mode: Zero Order
      5.3.4.o: Multiple Order Forward Mode: Second Order
      5.3.4.n: Multiple Order Forward Mode: First Order
      5.3.4.m: Multiple Order Forward Mode: Zero Order
      5.3.4.k.a: Multiple Order Forward Mode: yq.One Order
      5.3.4.g.a: Multiple Order Forward Mode: xq.One Order
      5.3.4.e: Multiple Order Forward Mode: One Order
      5.3.4: Multiple Order Forward Mode
      5.3.3: Second Order Forward Mode: Derivative Values
      5.3.2: First Order Forward Mode: Derivative Values
      5.3.1: Zero Order Forward Mode: Function Values
      5.2.6: Reverse Mode Second Partial Derivative Driver
      5.2.5.1: Subset of Second Order Partials: Example and Test
      5.2.5: Forward Mode Second Partial Derivative Driver
      5.2.4.1: First Order Derivative Driver: Example and Test
      5.2.4: First Order Derivative: Driver Routine
      5.2.3.1: First Order Partial Driver: Example and Test
      5.2.3: First Order Partial Derivative: Driver Routine
      5.2: First and Second Order Derivatives: Easy Drivers
      4.7.9.6.b: Enable use of AD<Base> where Base is std::complex<double>: Include Order
      4.7.e: AD<Base> Requirements for a CppAD Base Type: Include Order
      4.3.6.2: Print During Zero Order Forward Mode: Example and Test
      3.2.7.1: exp_eps: Verify Second Order Reverse Sweep
      3.2.6.1: exp_eps: Verify Second Order Forward Sweep
      3.2.5.1: exp_eps: Verify First Order Reverse Sweep
      3.2.4.1: exp_eps: Verify First Order Forward Sweep
      3.2.3.1: exp_eps: Verify Zero Order Forward Sweep
      3.2.7: exp_eps: Second Order Reverse Sweep
      3.2.6.a: exp_eps: Second Order Forward Mode: Second Order Expansion
      3.2.6: exp_eps: Second Order Forward Mode
      3.2.5: exp_eps: First Order Reverse Sweep
      3.2.4.c.e: exp_eps: First Order Forward Sweep: Operation Sequence.First Order
      3.2.4.c.c: exp_eps: First Order Forward Sweep: Operation Sequence.Zero Order
      3.2.4.a: exp_eps: First Order Forward Sweep: First Order Expansion
      3.2.4: exp_eps: First Order Forward Sweep
      3.2.3.b.f: exp_eps: Operation Sequence and Zero Order Forward Sweep: Operation Sequence.Zero Order
      3.2.3: exp_eps: Operation Sequence and Zero Order Forward Sweep
      3.1.7.1: exp_2: Verify Second Order Reverse Sweep
      3.1.6.1: exp_2: Verify Second Order Forward Sweep
      3.1.5.1: exp_2: Verify First Order Reverse Sweep
      3.1.4.1: exp_2: Verify First Order Forward Sweep
      3.1.3.1: exp_2: Verify Zero Order Forward Sweep
      3.1.7: exp_2: Second Order Reverse Mode
      3.1.6.a: exp_2: Second Order Forward Mode: Second Order Expansion
      3.1.6: exp_2: Second Order Forward Mode
      3.1.5: exp_2: First Order Reverse Mode
      3.1.4.d.e: exp_2: First Order Forward Mode: Operation Sequence.First Order
      3.1.4.d.c: exp_2: First Order Forward Mode: Operation Sequence.Zero Order
      3.1.4.a: exp_2: First Order Forward Mode: First Order Expansion
      3.1.4: exp_2: First Order Forward Mode
      3.1.3.c.d: exp_2: Operation Sequence and Zero Order Forward Mode: Operation Sequence.Zero Order
      3.1.3.b: exp_2: Operation Sequence and Zero Order Forward Mode: Zero Order Expansion
      3.1.3: exp_2: Operation Sequence and Zero Order Forward Mode
      3.1: Second Order Exponential Approximation
ordered 4.7.9.6.g: Enable use of AD<Base> where Base is std::complex<double>: Ordered
        4.7.9.5.g: Enable use of AD<Base> where Base is double: Ordered
        4.7.9.4.g: Enable use of AD<Base> where Base is float: Ordered
        4.7.9.3.j: Enable use of AD<Base> where Base is Adolc's adouble Type: Ordered
        4.7.9.1.n: Example AD<Base> Where Base Constructor Allocates Memory: Ordered
        4.7.4.c: Base Type Requirements for Ordered Comparisons: Not Ordered
        4.7.4.b: Base Type Requirements for Ordered Comparisons: Ordered Type
        4.7.4: Base Type Requirements for Ordered Comparisons
        4.7.2.c.b: Base Type Requirements for Conditional Expressions: CondExpTemplate.Not Ordered
        4.7.2.c.a: Base Type Requirements for Conditional Expressions: CondExpTemplate.Ordered Type
orders 12.3.2.9.b: Error Function Reverse Mode Theory: Positive Orders Z(t)
       12.3.2.8.c: Tangent and Hyperbolic Tangent Reverse Mode Theory: Positive Orders Z(t)
       5.3.6: Number Taylor Coefficient Orders Currently Stored
       5.3.5.1: Forward Mode: Example and Test of Multiple Directions
       5.3.5.j: Multiple Directions Forward Mode: Non-Zero Lower Orders
       5.3.4.2: Forward Mode: Example and Test of Multiple Orders
       5.3.4.k.b: Multiple Order Forward Mode: yq.Multiple Orders
       5.3.4.g.b: Multiple Order Forward Mode: xq.Multiple Orders
original 5.3.8.e: Controlling Taylor Coefficients Memory Allocation: Original State
os 4.3.5.d: AD Output Stream Operator: os
other 10.2.2: Example and Test Linking CppAD to Languages Other than C++
out 4.4.5.2: Interpolation Without Retaping: Example and Test
outer 10.2.10.c.f: Using Multiple Levels of AD: Procedure.Derivatives of Outer Function
      10.2.10.c.e: Using Multiple Levels of AD: Procedure.Outer Function
outline 3.c: An Introduction by Example to Algorithmic Differentiation: Outline
output 10.1.h: Getting Started Using CppAD to Compute Derivatives: Output
       8.22.m.d: The CppAD::vector Template Class: vectorBool.Output
       8.22.i: The CppAD::vector Template Class: Output
       8.4.1.c: Example Use of SpeedTest: Output
       4.7.9.1.k: Example AD<Base> Where Base Constructor Allocates Memory: Output Operator
       4.7.g: AD<Base> Requirements for a CppAD Base Type: Output Operator
       4.3.6.1.c: Printing During Forward Mode: Example and Test: Output
       4.3.6.h: Printing AD Values During Forward Mode: Redirecting Output
       4.3.6: Printing AD Values During Forward Mode
       4.3.5.1: AD Output Operator: Example and Test
       4.3.4.1: AD Output Operator: Example and Test
       4.3.5: AD Output Stream Operator
       4.3.4: AD Output Stream Operator
overloading 11: Speed Test an Operator Overloading AD Package
own 10.2.1: Creating Your Own Interface to an ADFun Object
P
Parameter 5.1.5.1: ADFun Sequence Properties: Example and Test
Poly 8.13: Evaluate a Polynomial or its Derivative
p 12.10.3.h.b: LU Factorization of A Square Matrix and Stability Calculation: LU.P
  11.2.9.j: Evaluate a Function That Has a Sparse Hessian: p
  11.2.8.k: Evaluate a Function That Has a Sparse Jacobian: p
  11.2.7.f.b: Evaluate a Function Defined in Terms of an ODE: p.p == 1
  11.2.7.f.a: Evaluate a Function Defined in Terms of an ODE: p.p == 0
  11.2.7.f: Evaluate a Function Defined in Terms of an ODE: p
  8.16.j: Multi-dimensional Romberg Integration: p
  8.15.i: One DimensionalRomberg Integration: p
  8.14.3.g.c: Invert an LU Factored Equation: LU.P
  8.14.2.h.b: LU Factorization of A Square Matrix: LU.P
  8.13.g: Evaluate a Polynomial or its Derivative: p
  5.6.4.i.c: Sparse Hessian: work.p
  5.6.4.f: Sparse Hessian: p
  5.6.2.h.b: Sparse Jacobian: work.p
  5.6.2.e: Sparse Jacobian: p
  4.4.7.2.4.d: Atomic Forward Mode: p
pack_sparsity_enum 4.4.7.2.2.b.a: Set Atomic Function Options: atomic_sparsity.pack_sparsity_enum
package 11.1.c.a: Running the Speed Test Program: package.AD Package
        11.1.c: Running the Speed Test Program: package
        11: Speed Test an Operator Overloading AD Package
        10.2.4: Enable Use of Eigen Linear Algebra Package with CppAD
        2.2: Using CMake to Configure CppAD
        : cppad-20171217: A Package for Differentiation of C++ Algorithms
package_prefix 2.2.l: Using CMake to Configure CppAD: package_prefix
parallel 12.8.11.m.c: User Defined Atomic AD Functions: afun.Parallel Mode
         12.8.6.2: Is The Current Execution in OpenMP Parallel Mode
         12.8.4: OpenMP Parallel Setup
         8.23.4: Is The Current Execution in Parallel Mode
         8.23.2: Setup thread_alloc For Use in Multi-Threading Environment
         8.22.n: The CppAD::vector Template Class: Memory and Parallel Mode
         8.18.m: A 3rd and 4th Order Rosenbrock ODE Solver: Parallel Mode
         8.17.n: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: Parallel Mode
         8.10.f: Check Simple Vector Concept: Parallel Mode
         8.8.d: Check NumericType Class Concept: Parallel Mode
         8.1.b.a: Replacing the CppAD Error Handler: Constructor.Parallel Mode
         7.2.3: A Simple Parallel Pthread Example and Test
         7.1: Enable AD Calculations During Parallel Mode
         7.h: Using CppAD in a Multi-Threading Environment: Parallel Prohibited
         7.e: Using CppAD in a Multi-Threading Environment: Parallel AD
         5.1.3.i: Stop Recording and Store Operation Sequence: Parallel Mode
         5.1.2.j: Construct an ADFun Object and Stop Recording: Parallel Mode
         5.1.1.h: Declare Independent Variables and Start Recording: Parallel Mode
         4.4.5.l: Discrete AD Functions: Parallel Mode
parallel_setup 7.c: Using CppAD in a Multi-Threading Environment: parallel_setup
parameter 12.8.10.2.1.c.b: An ODE Inverse Problem Example: Measurements.Simulation Parameter Values
          12.4.h: Glossary: Parameter
          9.3.c.b: ODE Inverse Problem Definitions: Source Code: Measurements.Simulation Parameter Values
          5.1.5.f: ADFun Sequence Properties: Parameter
          4.5.4.1: AD Parameter and Variable Functions: Example and Test
          4.5.4: Is an AD Object a Parameter or Variable
          4.4.7.2.9.1.j: Atomic Reverse Hessian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
          4.4.7.2.8.1.j: Atomic Forward Hessian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
          4.4.7.2.7.1.h: Atomic Reverse Jacobian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
          4.4.7.2.6.1.h: Atomic Forward Jacobian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
          4.3.7: Convert an AD Variable to a Parameter
          3.2.3.b.b: exp_eps: Operation Sequence and Zero Order Forward Sweep: Operation Sequence.Parameter
parameter: 4.3.7.1: Convert an AD Variable to a Parameter: Example and Test
part 4.4.7.2.18.1.b.c: AD Theory for Cholesky Factorization: Notation.Lower Triangular Part
partial 12.8.11.d: User Defined Atomic AD Functions: Partial Implementation
        5.2.6: Reverse Mode Second Partial Derivative Driver
        5.2.5.1: Subset of Second Order Partials: Example and Test
        5.2.5: Forward Mode Second Partial Derivative Driver
        5.2.3.1: First Order Partial Driver: Example and Test
        5.2.3: First Order Partial Derivative: Driver Routine
partials 12.8.11.5.1.g: Define Matrix Multiply as a User Atomic Operation: Reverse Partials One Order
         5.4.2.g.b: Second Order Reverse Mode: dw.Second Order Partials
         5.4.2.g.a: Second Order Reverse Mode: dw.First Order Partials
         5.2.6.1: Second Partials Reverse Driver: Example and Test
partials: 5.2.5.1: Subset of Second Order Partials: Example and Test
pattern 12.4.j: Glossary: Sparsity Pattern
        8.28.e: Sparse Matrix Row, Column, Value Representation: pattern
        8.27.d: Row and Column Index Sparsity Patterns: pattern
        5.6.3.i: Computing Sparse Hessians: pattern
        5.6.1.k: Computing Sparse Jacobians: pattern
        5.5.9.b: Computing Dependency: Example and Test: Dependency Pattern
        5.5.6.k: Hessian Sparsity Pattern: Reverse Mode: Entire Sparsity Pattern
        5.5.4.k: Jacobian Sparsity Pattern: Reverse Mode: Entire Sparsity Pattern
        5.5.2.k: Jacobian Sparsity Pattern: Forward Mode: Entire Sparsity Pattern
pattern: 5.5.8: Hessian Sparsity Pattern: Forward Mode
         5.5.6: Hessian Sparsity Pattern: Reverse Mode
         5.5.4: Jacobian Sparsity Pattern: Reverse Mode
         5.5.2: Jacobian Sparsity Pattern: Forward Mode
pattern_in 5.5.3.f: Reverse Mode Jacobian Sparsity Patterns: pattern_in
           5.5.1.f: Forward Mode Jacobian Sparsity Patterns: pattern_in
pattern_out 5.5.11.k: Subgraph Dependency Sparsity Patterns: pattern_out
            5.5.7.j: Forward Mode Hessian Sparsity Patterns: pattern_out
            5.5.5.k: Reverse Mode Hessian Sparsity Patterns: pattern_out
            5.5.3.j: Reverse Mode Jacobian Sparsity Patterns: pattern_out
            5.5.1.j: Forward Mode Jacobian Sparsity Patterns: pattern_out
patterns 8.27: Row and Column Index Sparsity Patterns
         5.5.11: Subgraph Dependency Sparsity Patterns
         5.5.7: Forward Mode Hessian Sparsity Patterns
         5.5.6.2: Sparsity Patterns For a Subset of Variables: Example and Test
         5.5.5: Reverse Mode Hessian Sparsity Patterns
         5.5.3: Reverse Mode Jacobian Sparsity Patterns
         5.5.1: Forward Mode Jacobian Sparsity Patterns
         5.1.2.i.b: Construct an ADFun Object and Stop Recording: Assignment Operator.Sparsity Patterns
         5.6.b: Calculating Sparse Derivatives: Old Sparsity Patterns
         5.6.a: Calculating Sparse Derivatives: Preferred Sparsity Patterns
         5.5.b: Calculating Sparsity Patterns: Old Sparsity Patterns
         5.5.a: Calculating Sparsity Patterns: Preferred Sparsity Patterns
         5.5: Calculating Sparsity Patterns
         4.4.7.2.9: Atomic Reverse Hessian Sparsity Patterns
         4.4.7.2.8: Atomic Forward Hessian Sparsity Patterns
         4.4.7.2.7: Atomic Reverse Jacobian Sparsity Patterns
         4.4.7.2.6: Atomic Forward Jacobian Sparsity Patterns
         4.4.7.2.e.d: User Defined Atomic AD Functions: Examples.Hessian Sparsity Patterns
patterns: 5.5.11.1: Subgraph Dependency Sparsity Patterns: Example and Test
          5.5.10: Preferred Sparsity Patterns: Row and Column Indices: Example and Test
          4.4.7.2.14: Atomic Sparsity with Set Patterns: Example and Test
pc 2.4: CppAD pkg-config Files
pivot 10.3.3.1: Lu Factor and Solve With Recorded Pivoting: Example and Test
pivoting 10.3.3: Lu Factor and Solve with Recorded Pivoting
pivoting: 10.3.3.1: Lu Factor and Solve With Recorded Pivoting: Example and Test
pkg-config 2.4: CppAD pkg-config Files
planes 5.8.10.t.b: abs_normal: Minimize a Linear Abs-normal Approximation: Method.Cutting Planes
       5.8.6.s.b: abs_normal: Minimize a Linear Abs-normal Approximation: Method.Cutting Planes
plus 4.4.2.20: The Logarithm of One Plus Argument: log1p
     4.4.1.4.4: AD Compound Assignment Division: Example and Test
     4.4.1.4.3: AD Compound Assignment Multiplication: Example and Test
     4.4.1.4.2: AD Compound Assignment Subtraction: Example and Test
     4.4.1.4.1: AD Compound Assignment Addition: Example and Test
     4.4.1.4: AD Compound Assignment Operators
     4.4.1.3.1: AD Binary Addition: Example and Test
     4.4.1.3: AD Binary Arithmetic Operators
     4.4.1.1.1: AD Unary Plus Operator: Example and Test
     4.4.1.1: AD Unary Plus Operator
point 5.8.8: Solve a Quadratic Program Using Interior Point Method
      4.7.7: Extending to_string To Another Floating Point Type
pointer 6: CppAD API Preprocessor Symbols
poly 10.1.e: Getting Started Using CppAD to Compute Derivatives: Poly
     8.13.2: Source: Poly
     4.7.9.6.1.a: Complex Polynomial: Example and Test: Poly
polynomial 12.3.1.9: Error Function Forward Taylor Polynomial Theory
           12.3.1.8: Tangent and Hyperbolic Tangent Forward Taylor Polynomial Theory
           11.7.5: Sacado Speed: Second Derivative of a Polynomial
           11.6.5: Fadbad Speed: Second Derivative of a Polynomial
           11.5.5: CppAD Speed: Second Derivative of a Polynomial
           11.4.5: Adolc Speed: Second Derivative of a Polynomial
           11.3.5: Double Speed: Evaluate a Polynomial
           11.1.5: Speed Testing Second Derivative of a Polynomial
           8.13.1: Polynomial Evaluation: Example and Test
           8.13: Evaluate a Polynomial or its Derivative
           4.7.9.6.1: Complex Polynomial: Example and Test
polynomial: 4.7.9.6.1: Complex Polynomial: Example and Test
pos 4.3.6.d: Printing AD Values During Forward Mode: pos
positive 12.3.2.9.b: Error Function Reverse Mode Theory: Positive Orders Z(t)
         12.3.2.8.c: Tangent and Hyperbolic Tangent Reverse Mode Theory: Positive Orders Z(t)
possible 4.4.2.c: The Unary Standard Math Functions: Possible Types
postfix 2.2: Using CMake to Configure CppAD
postfix_dir 12.8.13.m: Autotools Unix Test and Installation: postfix_dir
pow 8.12.1: The Pow Integer Exponent: Example and Test
    8.12: The Integer Power Function
    4.7.9.6.m: Enable use of AD<Base> where Base is std::complex<double>: pow
    4.7.9.5.j: Enable use of AD<Base> where Base is double: pow
    4.7.9.4.j: Enable use of AD<Base> where Base is float: pow
    4.7.9.3.o: Enable use of AD<Base> where Base is Adolc's adouble Type: pow
    4.7.9.1.r: Example AD<Base> Where Base Constructor Allocates Memory: pow
    4.7.5.f: Base Type Requirements for Standard Math Functions: pow
    4.4.3.2: The AD Power Function
power 8.12: The Integer Power Function
      4.4.3.2.1: The AD Power Function: Example and Test
      4.4.3.2: The AD Power Function
pre-allocating 5.3.8.d.a: Controlling Taylor Coefficients Memory Allocation: c.Pre-Allocating Memory
preface 3.b: An Introduction by Example to Algorithmic Differentiation: Preface
preferred 5.5.10: Preferred Sparsity Patterns: Row and Column Indices: Example and Test
          5.6.a: Calculating Sparse Derivatives: Preferred Sparsity Patterns
          5.5.a: Calculating Sparsity Patterns: Preferred Sparsity Patterns
prefix 2.2.6.1.e: Download and Install Sacado in Build Directory: Prefix Directory
       2.2.6: Including the Sacado Speed Tests
       2.2.5.1.e: Download and Install Ipopt in Build Directory: Prefix Directory
       2.2.5: Including the cppad_ipopt Library and Tests
       2.2.4.1.e: Download and Install Fadbad in Build Directory: Prefix Directory
       2.2.4: Including the FADBAD Speed Tests
       2.2.3.1.e: Download and Install Eigen in Build Directory: Prefix Directory
       2.2.3: Including the Eigen Examples and Tests
       2.2.2.5.e: Download and Install ColPack in Build Directory: Prefix Directory
       2.2.2: Including the ColPack Sparsity Calculations
       2.2.1.1.f: Download and Install Adolc in Build Directory: Prefix Directory
       2.2.1: Including the ADOL-C Examples and Tests
       2.2: Using CMake to Configure CppAD
prefix_dir 12.8.13.g: Autotools Unix Test and Installation: prefix_dir
preprocessor 12.11.d: CppAD Addons: Preprocessor Symbols
             12.1.i.a: Frequently Asked Questions and Answers: Namespace.Test Vector Preprocessor Symbol
             6: CppAD API Preprocessor Symbols
             e: cppad-20171217: A Package for Differentiation of C++ Algorithms: Preprocessor Symbols
previous 12.7.c: Changes and Additions to CppAD: Previous Years
previously 12.8.5.n.b: Routines That Track Use of New and Delete: TrackCount.Previously Deprecated
           12.8.5.m.b: Routines That Track Use of New and Delete: TrackExtend.Previously Deprecated
           12.8.5.l.b: Routines That Track Use of New and Delete: TrackDelVec.Previously Deprecated
           12.8.5.k.b: Routines That Track Use of New and Delete: TrackNewVec.Previously Deprecated
print 8.4: Run One Speed Test and Print Results
      5.8.2: abs_normal: Print a Vector or Matrix
      5.7.4: Example Optimization and Print Forward Operators
      4.3.6.2: Print During Zero Order Forward Mode: Example and Test
      4.3.6.1: Printing During Forward Mode: Example and Test
      4.3.6: Printing AD Values During Forward Mode
printing 4.3.6.1: Printing During Forward Mode: Example and Test
         4.3.6: Printing AD Values During Forward Mode
private 4.4.7.2.18.2.d: Atomic Eigen Cholesky Factorization Class: Private
        4.4.7.2.17.1.f: Atomic Eigen Matrix Inversion Class: Private
        4.4.7.2.16.1.g: Atomic Eigen Matrix Multiply Class: Private
problem 12.10.1.c: Computing Jacobian and Hessian of Bender's Reduced Objective: Problem
        12.8.10.2.1.1: ODE Inverse Problem Definitions: Source Code
        12.8.10.2.1.d: An ODE Inverse Problem Example: Inverse Problem
        12.8.10.2.1.b: An ODE Inverse Problem Example: Forward Problem
        12.8.10.2.1: An ODE Inverse Problem Example
        12.8.10.2: Example Simultaneous Solution of Forward and Inverse Problem
        9.3.d: ODE Inverse Problem Definitions: Source Code: Inverse Problem
        9.3.b: ODE Inverse Problem Definitions: Source Code: Forward Problem
        9.3: ODE Inverse Problem Definitions: Source Code
        9: Use Ipopt to Solve a Nonlinear Programming Problem
        5.8.9.1.a: abs_normal qp_box: Example and Test: Problem
        5.8.9.e: abs_normal: Solve a Quadratic Program With Box Constraints: Problem
        5.8.8.1.a: abs_normal qp_interior: Example and Test: Problem
        5.8.8.e: Solve a Quadratic Program Using Interior Point Method: Problem
        5.8.5.1.a: abs_normal lp_box: Example and Test: Problem
        5.8.5.d: abs_normal: Solve a Linear Program With Box Constraints: Problem
        5.8.4.1.a: abs_normal simplex_method: Example and Test: Problem
        5.8.4.d: abs_normal: Solve a Linear Program Using Simplex Method: Problem
        4.4.7.1.4.c: Checkpointing an Extended ODE Solver: Example and Test: Problem
        4.4.7.1.3.c: Checkpointing an ODE Solver: Example and Test: Problem
procedure 10.2.10.c: Using Multiple Levels of AD: Procedure
processing 5.4.3.2.c: Reverse Mode General Case (Checkpointing): Example and Test: Processing Steps
product 4.4.7.2.17.1.c.b: Atomic Eigen Matrix Inversion Class: Theory.Product of Three Matrices
        4.4.7.2.16.1.d.b: Atomic Eigen Matrix Multiply Class: Theory.Product of Two Matrices
profile 11.1.c.c: Running the Speed Test Program: package.profile
        2.2: Using CMake to Configure CppAD
profiling 12.8.13.f: Autotools Unix Test and Installation: Profiling CppAD
program 12.9.8: Main Program For Comparing C and C++ Speed
        11.2.a: Speed Testing Utilities: Speed Main Program
        11.1: Running the Speed Test Program
        10.3.2: Run the Speed Examples
        10.1.g: Getting Started Using CppAD to Compute Derivatives: Program
        8.4.1.b: Example Use of SpeedTest: Program
        8.4.1.a: Example Use of SpeedTest: Running This Program
        7.2.d: Run Multi-Threading Examples and Speed Tests: program
        5.8.9: abs_normal: Solve a Quadratic Program With Box Constraints
        5.8.8: Solve a Quadratic Program Using Interior Point Method
        5.8.5: abs_normal: Solve a Linear Program With Box Constraints
        5.8.4: abs_normal: Solve a Linear Program Using Simplex Method
        2.2.a: Using CMake to Configure CppAD: The CMake Program
programming 12.8.10.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
            12.8.10: Nonlinear Programming Using the CppAD Interface to Ipopt
            12.5.b: Bibliography: The C++ Programming Language
            9.2: Nonlinear Programming Retaping: Example and Test
            9.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
            9: Use Ipopt to Solve a Nonlinear Programming Problem
prohibited 7.h: Using CppAD in a Multi-Threading Environment: Parallel Prohibited
projection 12.8.10.f.b: Nonlinear Programming Using the CppAD Interface to Ipopt: fg(x).Projection
proof 12.3.3.d: An Important Reverse Mode Identity: Proof
      4.4.7.2.18.1.d.a: AD Theory for Cholesky Factorization: Lemma 1.Proof
properties 5.1.5: ADFun Sequence Properties
properties: 5.1.5.1: ADFun Sequence Properties: Example and Test
prototype 11.1.7.a: Speed Testing Sparse Jacobian: Prototype
          11.1.6.a: Speed Testing Sparse Hessian: Prototype
          11.1.5.a: Speed Testing Second Derivative of a Polynomial: Prototype
          11.1.4.a: Speed Testing the Jacobian of Ode Solution: Prototype
          11.1.3.a: Speed Testing Derivative of Matrix Multiply: Prototype
          11.1.2.a: Speed Testing Gradient of Determinant by Minor Expansion: Prototype
          11.1.1.a: Speed Testing Gradient of Determinant Using Lu Factorization: Prototype
          5.8.11.b: Non-Smooth Optimization Using Abs-normal Quadratic Approximations: Prototype
          5.8.10.b: abs_normal: Minimize a Linear Abs-normal Approximation: Prototype
          5.8.9.b: abs_normal: Solve a Quadratic Program With Box Constraints: Prototype
          5.8.8.b: Solve a Quadratic Program Using Interior Point Method: Prototype
          5.8.7.b: Non-Smooth Optimization Using Abs-normal Linear Approximations: Prototype
          5.8.6.b: abs_normal: Minimize a Linear Abs-normal Approximation: Prototype
          5.8.5.b: abs_normal: Solve a Linear Program With Box Constraints: Prototype
          5.8.4.b: abs_normal: Solve a Linear Program Using Simplex Method: Prototype
          5.8.3.b: abs_normal: Evaluate First Order Approximation: Prototype
          5.8.2.b: abs_normal: Print a Vector or Matrix: Prototype
prototypes 4.7.3.b.b: Base Type Requirements for Identically Equal Comparisons: Identical.Prototypes
pthread 8.23.1: Fast Multi-Threading Memory Allocator: Example and Test
        7.2.11.3: Pthread Implementation of a Team of AD Threads
        7.2.6: A Simple pthread AD: Example and Test
        7.2.3: A Simple Parallel Pthread Example and Test
pthread_exit 7.2.11.3: Pthread Implementation of a Team of AD Threads
public 4.4.7.2.18.2.c: Atomic Eigen Cholesky Factorization Class: Public
       4.4.7.2.17.1.e: Atomic Eigen Matrix Inversion Class: Public
       4.4.7.2.16.1.f: Atomic Eigen Matrix Multiply Class: Public
purpose 12.10.3.i.a: LU Factorization of A Square Matrix and Stability Calculation: ratio.Purpose
        12.10.2.d: Jacobian and Hessian of Optimal Values: Purpose
        12.10.1.d: Computing Jacobian and Hessian of Bender's Reduced Objective: Purpose
        12.9.7.b: Determine Amount of Time to Execute det_by_minor: Purpose
        12.9.6.b: Returns Elapsed Number of Seconds: Purpose
        12.9.3.b: Simulate a [0,1] Uniform Random Variate: Purpose
        12.9.2.b: Compute Determinant using Expansion by Minors: Purpose
        12.9.1.b: Determinant of a Minor: Purpose
        12.9.b: Compare Speed of C and C++: Purpose
        12.8.11.3.b: Using AD to Compute Atomic Function Derivatives: Purpose
        12.8.11.2.b: Using AD to Compute Atomic Function Derivatives: Purpose
        12.8.11.c: User Defined Atomic AD Functions: Purpose
        12.8.10.2.3.a: ODE Fitting Using Fast Representation: Purpose
        12.8.10.2.2.a: ODE Fitting Using Simple Representation: Purpose
        12.8.10.1.a: Nonlinear Programming Using CppAD and Ipopt: Example and Test: Purpose
        12.8.10.c: Nonlinear Programming Using the CppAD Interface to Ipopt: Purpose
        12.8.8.c: Machine Epsilon For AD Types: Purpose
        12.8.7.c: Memory Leak Detection: Purpose
        12.8.6.12.c: Set Maximum Number of Threads for omp_alloc Allocator: Purpose
        12.8.6.11.c: Check If A Memory Allocation is Efficient for Another Use: Purpose
        12.8.6.10.c: Return A Raw Array to The Available Memory for a Thread: Purpose
        12.8.6.9.c: Allocate Memory and Create A Raw Array: Purpose
        12.8.6.8.c: Amount of Memory Available for Quick Use by a Thread: Purpose
        12.8.6.7.c: Amount of Memory a Thread is Currently Using: Purpose
        12.8.6.6.c: Free Memory Currently Available for Quick Use by a Thread: Purpose
        12.8.6.5.c: Return Memory to omp_alloc: Purpose
        12.8.6.4.c: Get At Least A Specified Amount of Memory: Purpose
        12.8.6.3.c: Get the Current OpenMP Thread Number: Purpose
        12.8.6.2.c: Is The Current Execution in OpenMP Parallel Mode: Purpose
        12.8.6.1.c: Set and Get Maximum Number of Threads for omp_alloc Allocator: Purpose
        12.8.6.b: A Quick OpenMP Memory Allocator Used by CppAD: Purpose
        12.8.5.c: Routines That Track Use of New and Delete: Purpose
        12.8.4.c: OpenMP Parallel Setup: Purpose
        12.8.3.c: Comparison Changes During Zero Order Forward Mode: Purpose
        12.8.2.b: ADFun Object Deprecated Member Functions: Purpose
        11.7.a: Speed Test Derivatives Using Sacado: Purpose
        11.6.a: Speed Test Derivatives Using Fadbad: Purpose
        11.5.a: Speed Test Derivatives Using CppAD: Purpose
        11.4.8.b: Adolc Test Utility: Allocate and Free Memory For a Matrix: Purpose
        11.4.a: Speed Test of Derivatives Using Adolc: Purpose
        11.3.a: Speed Test of Functions in Double: Purpose
        11.2.10.b: Simulate a [0,1] Uniform Random Variate: Purpose
        11.2.9.b: Evaluate a Function That Has a Sparse Hessian: Purpose
        11.2.8.b: Evaluate a Function That Has a Sparse Jacobian: Purpose
        11.2.7.b: Evaluate a Function Defined in Terms of an ODE: Purpose
        11.2.6.b: Sum Elements of a Matrix Times Itself: Purpose
        11.2.5.b: Check Gradient of Determinant of 3 by 3 Matrix: Purpose
        11.2.4.b: Check Determinant of 3 by 3 Matrix: Purpose
        11.2.2.c: Determinant of a Minor: Purpose
        11.1.8.b: Microsoft Version of Elapsed Number of Seconds: Purpose
        11.1.5.b: Speed Testing Second Derivative of a Polynomial: Purpose
        11.1.4.b: Speed Testing the Jacobian of Ode Solution: Purpose
        11.1.3.b: Speed Testing Derivative of Matrix Multiply: Purpose
        11.1.2.b: Speed Testing Gradient of Determinant by Minor Expansion: Purpose
        11.1.1.b: Speed Testing Gradient of Determinant Using Lu Factorization: Purpose
        11.1.b: Running the Speed Test Program: Purpose
        11.a: Speed Test an Operator Overloading AD Package: Purpose
        10.6.b: Suppress Suspect Implicit Conversion Warnings: Purpose
        10.5.b: Using The CppAD Test Vector Template Class: Purpose
        10.3.3.b: Lu Factor and Solve with Recorded Pivoting: Purpose
        10.2.14.a: Taylor's Ode Solver: An Example and Test: Purpose
        10.2.13.a: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: Purpose
        10.2.12.a: Taylor's Ode Solver: A Multi-Level AD Example and Test: Purpose
        10.2.10.2.a: Computing a Jacobian With Constants that Change: Purpose
        10.2.10.1.a: Multiple Level of AD: Example and Test: Purpose
        10.2.4.b: Enable Use of Eigen Linear Algebra Package with CppAD: Purpose
        10.2.3.a: Differentiate Conjugate Gradient Algorithm: Example and Test: Purpose
        10.1.a: Getting Started Using CppAD to Compute Derivatives: Purpose
        9.3.a: ODE Inverse Problem Definitions: Source Code: Purpose
        9.2.a: Nonlinear Programming Retaping: Example and Test: Purpose
        9.1.a: Nonlinear Programming Using CppAD and Ipopt: Example and Test: Purpose
        9.b: Use Ipopt to Solve a Nonlinear Programming Problem: Purpose
        8.26.b: Union of Standard Sets: Purpose
        8.25.c: Convert Certain Types to a String: Purpose
        8.23.14.b: Free All Memory That Was Allocated for Use by thread_alloc: Purpose
        8.23.13.b: Deallocate An Array and Call Destructor for its Elements: Purpose
        8.23.12.b: Allocate An Array and Call Default Constructor for its Elements: Purpose
        8.23.11.b: Amount of Memory Available for Quick Use by a Thread: Purpose
        8.23.10.b: Amount of Memory a Thread is Currently Using: Purpose
        8.23.9.b: Control When Thread Alloc Retains Memory For Future Use: Purpose
        8.23.8.b: Free Memory Currently Available for Quick Use by a Thread: Purpose
        8.23.7.b: Return Memory to thread_alloc: Purpose
        8.23.6.b: Get At Least A Specified Amount of Memory: Purpose
        8.23.5.b: Get the Current Thread Number: Purpose
        8.23.4.b: Is The Current Execution in Parallel Mode: Purpose
        8.23.3.b: Get Number of Threads: Purpose
        8.23.2.b: Setup thread_alloc For Use in Multi-Threading Environment: Purpose
        8.23.b: A Fast Multi-Threading Memory Allocator: Purpose
        8.21.b: An Error Controller for Gear's Ode Solvers: Purpose
        8.20.b: An Arbitrary Order Gear Method: Purpose
        8.17.b: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: Purpose
        8.12.c: The Integer Power Function: Purpose
        8.11.b: Obtain Nan or Determine if a Value is Nan: Purpose
        8.10.b: Check Simple Vector Concept: Purpose
        8.8.b: Check NumericType Class Concept: Purpose
        8.6.b: Object that Runs a Group of Tests: Purpose
        8.5.1.b: Returns Elapsed Number of Seconds: Purpose
        8.5.b: Determine Amount of Time to Execute a Test: Purpose
        8.4.b: Run One Speed Test and Print Results: Purpose
        8.3.b: Run One Speed Test and Return Results: Purpose
        8.2.b: Determine if Two Values Are Nearly Equal: Purpose
        8.1.2.b: CppAD Assertions During Execution: Purpose
        8: Some General Purpose Utilities
        7.2.11.b: Specifications for A Team of AD Threads: Purpose
        7.2.10.6.b: Timing Test of Multi-Threaded Newton Method: Purpose
        7.2.10.5.b: A Multi-Threaded Newton's Method: Purpose
        7.2.10.4.b: Take Down Multi-threaded Newton Method: Purpose
        7.2.10.3.b: Do One Thread's Work for Multi-Threaded Newton Method: Purpose
        7.2.10.2.b: Set Up Multi-Threaded Newton Method: Purpose
        7.2.10.1.a: Common Variables Used by Multi-Threaded Newton Method: Purpose
        7.2.9.5.b: Multi-Threaded User Atomic Take Down: Purpose
        7.2.9.4.a: Multi-Threaded User Atomic Worker: Purpose
        7.2.9.3.b: Multi-Threaded User Atomic Set Up: Purpose
        7.2.9.2.a: Multi-Threaded User Atomic Common Information: Purpose
        7.2.9.1.b: Defines a User Atomic Operation that Computes Square Root: Purpose
        7.2.8.6.b: Timing Test of Multi-Threaded Summation of 1/i: Purpose
        7.2.8.5.b: Multi-Threaded Implementation of Summation of 1/i: Purpose
        7.2.8.4.b: Take Down Multi-threading Sum of 1/i: Purpose
        7.2.8.3.b: Do One Thread's Work for Sum of 1/i: Purpose
        7.2.8.2.b: Set Up Multi-threading Sum of 1/i: Purpose
        7.2.8.1.a: Common Variables Used by Multi-threading Sum of 1/i: Purpose
        7.2.7.a: Using a Team of AD Threads: Example and Test: Purpose
        7.2.6.a: A Simple pthread AD: Example and Test: Purpose
        7.2.5.a: A Simple Boost Threading AD: Example and Test: Purpose
        7.2.4.a: A Simple OpenMP AD: Example and Test: Purpose
        7.2.3.a: A Simple Parallel Pthread Example and Test: Purpose
        7.2.2.a: A Simple Boost Thread Example and Test: Purpose
        7.2.1.a: A Simple OpenMP Example and Test: Purpose
        7.2.a: Run Multi-Threading Examples and Speed Tests: Purpose
        7.1.b: Enable AD Calculations During Parallel Mode: Purpose
        7.a: Using CppAD in a Multi-Threading Environment: Purpose
        6.a: CppAD API Preprocessor Symbols: Purpose
        5.9.b: Check an ADFun Sequence of Operations: Purpose
        5.8.11.1.a: abs_normal min_nso_quad: Example and Test: Purpose
        5.8.11.d: Non-Smooth Optimization Using Abs-normal Quadratic Approximations: Purpose
        5.8.10.1.a: abs_min_quad: Example and Test: Purpose
        5.8.10.d: abs_normal: Minimize a Quadratic Abs-normal Approximation: Purpose
        5.8.9.d: abs_normal: Solve a Quadratic Program With Box Constraints: Purpose
        5.8.8.d: Solve a Quadratic Program Using Interior Point Method: Purpose
        5.8.7.1.a: abs_normal min_nso_linear: Example and Test: Purpose
        5.8.7.d: Non-Smooth Optimization Using Abs-normal Linear Approximations: Purpose
        5.8.6.1.a: abs_min_linear: Example and Test: Purpose
        5.8.6.d: abs_normal: Minimize a Linear Abs-normal Approximation: Purpose
        5.8.3.1.a: abs_eval: Example and Test: Purpose
        5.8.3.d: abs_normal: Evaluate First Order Approximation: Purpose
        5.8.2.c: abs_normal: Print a Vector or Matrix: Purpose
        5.8.1.1.a: abs_normal Getting Started: Example and Test: Purpose
        5.7.b: Optimize an ADFun Object Tape: Purpose
        5.6.5.b: Compute Sparse Jacobians Using Subgraphs: Purpose
        5.6.4.3.a: Subset of a Sparse Hessian: Example and Test: Purpose
        5.6.4.2.a: Computing Sparse Hessian for a Subset of Variables: Purpose
        5.6.4.f.a: Sparse Hessian: p.Purpose
        5.6.4.b: Sparse Hessian: Purpose
        5.6.3.b: Computing Sparse Hessians: Purpose
        5.6.2.b: Sparse Jacobian: Purpose
        5.6.1.b: Computing Sparse Jacobians: Purpose
        5.5.10.a: Preferred Sparsity Patterns: Row and Column Indices: Example and Test: Purpose
        5.5.8.b: Hessian Sparsity Pattern: Forward Mode: Purpose
        5.5.7.b: Forward Mode Hessian Sparsity Patterns: Purpose
        5.5.6.b: Hessian Sparsity Pattern: Reverse Mode: Purpose
        5.5.5.b: Reverse Mode Hessian Sparsity Patterns: Purpose
        5.5.4.b: Jacobian Sparsity Pattern: Reverse Mode: Purpose
        5.5.3.b: Reverse Mode Jacobian Sparsity Patterns: Purpose
        5.5.2.b: Jacobian Sparsity Pattern: Forward Mode: Purpose
        5.5.1.b: Forward Mode Jacobian Sparsity Patterns: Purpose
        5.4.4.b: Reverse Mode Using Subgraphs: Purpose
        5.4.3.2.b: Reverse Mode General Case (Checkpointing): Example and Test: Purpose
        5.4.3.b: Any Order Reverse Mode: Purpose
        5.4.2.b: Second Order Reverse Mode: Purpose
        5.4.1.b: First Order Reverse Mode: Purpose
        5.3.9.b: Number of Variables that Can be Skipped: Purpose
        5.3.8.b: Controlling Taylor Coefficients Memory Allocation: Purpose
        5.3.7.f.a: Comparison Changes Between Taping and Zero Order Forward: op_index.Purpose
        5.3.7.b: Comparison Changes Between Taping and Zero Order Forward: Purpose
        5.3.6.b: Number Taylor Coefficient Orders Currently Stored: Purpose
        5.3.5.b: Multiple Directions Forward Mode: Purpose
        5.3.4.b: Multiple Order Forward Mode: Purpose
        5.3.3.b: Second Order Forward Mode: Derivative Values: Purpose
        5.3.2.b: First Order Forward Mode: Derivative Values: Purpose
        5.3.1.b: Zero Order Forward Mode: Function Values: Purpose
        5.2.6.b: Reverse Mode Second Partial Derivative Driver: Purpose
        5.2.5.b: Forward Mode Second Partial Derivative Driver: Purpose
        5.2.4.b: First Order Derivative: Driver Routine: Purpose
        5.2.3.b: First Order Partial Derivative: Driver Routine: Purpose
        5.2.2.b: Hessian: Easy Driver: Purpose
        5.2.1.b: Jacobian: Driver Routine: Purpose
        5.1.5.b: ADFun Sequence Properties: Purpose
        5.1.4.b: Abort Recording of an Operation Sequence: Purpose
        5.1.3.b: Stop Recording and Store Operation Sequence: Purpose
        5.1.2.b: Construct an ADFun Object and Stop Recording: Purpose
        5.1.1.b: Declare Independent Variables and Start Recording: Purpose
        5.a: ADFun Objects: Purpose
        4.7.9.3.1.a: Using Adolc with Multiple Levels of Taping: Example and Test: Purpose
        4.7.9.2.a: Using a User Defined AD Base Type: Example and Test: Purpose
        4.7.9.1.a: Example AD<Base> Where Base Constructor Allocates Memory: Purpose
        4.7.8.b: Base Type Requirements for Hash Coding Values: Purpose
        4.7.5.a: Base Type Requirements for Standard Math Functions: Purpose
        4.7.4.a: Base Type Requirements for Ordered Comparisons: Purpose
        4.7.2.a: Base Type Requirements for Conditional Expressions: Purpose
        4.7.b: AD<Base> Requirements for a CppAD Base Type: Purpose
        4.6.b: AD Vectors that Record Index Operations: Purpose
        4.5.5.b: Check if Two Values are Identically Equal: Purpose
        4.5.4.b: Is an AD Object a Parameter or Variable: Purpose
        4.5.3.b: AD Boolean Functions: Purpose
        4.5.2.b: Compare AD and Base Objects for Nearly Equal: Purpose
        4.5.1.b: AD Binary Comparison Operators: Purpose
        4.4.7.2.18.2.a: Atomic Eigen Cholesky Factorization Class: Purpose
        4.4.7.2.17.1.a: Atomic Eigen Matrix Inversion Class: Purpose
        4.4.7.2.16.1.b: Atomic Eigen Matrix Multiply Class: Purpose
        4.4.7.2.11.a: Getting Started with Atomic Operations: Example and Test: Purpose
        4.4.7.2.9.1.a: Atomic Reverse Hessian Sparsity: Example and Test: Purpose
        4.4.7.2.8.1.a: Atomic Forward Hessian Sparsity: Example and Test: Purpose
        4.4.7.2.7.1.a: Atomic Reverse Jacobian Sparsity: Example and Test: Purpose
        4.4.7.2.6.1.a: Atomic Forward Jacobian Sparsity: Example and Test: Purpose
        4.4.7.2.5.1.a: Atomic Reverse: Example and Test: Purpose
        4.4.7.2.4.1.a: Atomic Forward: Example and Test: Purpose
        4.4.7.2.10.b: Free Static Variables: Purpose
        4.4.7.2.9.c: Atomic Reverse Hessian Sparsity Patterns: Purpose
        4.4.7.2.8.c: Atomic Forward Hessian Sparsity Patterns: Purpose
        4.4.7.2.7.c: Atomic Reverse Jacobian Sparsity Patterns: Purpose
        4.4.7.2.6.c: Atomic Forward Jacobian Sparsity Patterns: Purpose
        4.4.7.2.5.b: Atomic Reverse Mode: Purpose
        4.4.7.2.4.b: Atomic Forward Mode: Purpose
        4.4.7.2.3.b: Using AD Version of Atomic Function: Purpose
        4.4.7.2.b: User Defined Atomic AD Functions: Purpose
        4.4.7.1.3.b: Checkpointing an ODE Solver: Example and Test: Purpose
        4.4.7.1.1.a: Simple Checkpointing: Example and Test: Purpose
        4.4.7.1.c: Checkpointing Functions: Purpose
        4.4.5.b: Discrete AD Functions: Purpose
        4.4.4.b: AD Conditional Expressions: Purpose
        4.4.3.3.b: Absolute Zero Multiplication: Purpose
        4.4.3.2.c: The AD Power Function: Purpose
        4.4.3.1.b: AD Two Argument Inverse Tangent Function: Purpose
        4.4.2.b: The Unary Standard Math Functions: Purpose
        4.4.1.4.b: AD Compound Assignment Operators: Purpose
        4.4.1.3.b: AD Binary Arithmetic Operators: Purpose
        4.4.1.2.b: AD Unary Minus Operator: Purpose
        4.4.1.1.b: AD Unary Plus Operator: Purpose
        4.3.7.c: Convert an AD Variable to a Parameter: Purpose
        4.3.6.b: Printing AD Values During Forward Mode: Purpose
        4.3.5.b: AD Output Stream Operator: Purpose
        4.3.4.b: AD Input Stream Operator: Purpose
        4.3.2.b: Convert From AD to Integer: Purpose
        4.3.1.c: Convert From an AD Type to its Base Type: Purpose
        4.2.b: AD Assignment Operator: Purpose
        4.1.b: AD Constructors: Purpose
        4.a: AD Objects: Purpose
        3.2.8.a: exp_eps: CppAD Forward and Reverse Sweeps: Purpose
        3.2.7.a: exp_eps: Second Order Reverse Sweep: Purpose
        3.2.6.b: exp_eps: Second Order Forward Mode: Purpose
        3.2.5.a: exp_eps: First Order Reverse Sweep: Purpose
        3.2.b: An Epsilon Accurate Exponential Approximation: Purpose
        3.1.8.a: exp_2: CppAD Forward and Reverse Sweeps: Purpose
        3.1.7.a: exp_2: Second Order Reverse Mode: Purpose
        3.1.6.b: exp_2: Second Order Forward Mode: Purpose
        3.1.5.a: exp_2: First Order Reverse Mode: Purpose
        3.1.4.b: exp_2: First Order Forward Mode: Purpose
        3.1.b: Second Order Exponential Approximation: Purpose
        3.a: An Introduction by Example to Algorithmic Differentiation: Purpose
        2.4.a: CppAD pkg-config Files: Purpose
        2.3.a: Checking the CppAD Examples and Tests: Purpose
        2.2.7.a: Choosing the CppAD Test Vector Template Class: Purpose
        2.2.6.1.b: Download and Install Sacado in Build Directory: Purpose
        2.2.6.a: Including the Sacado Speed Tests: Purpose
        2.2.5.1.b: Download and Install Ipopt in Build Directory: Purpose
        2.2.5.a: Including the cppad_ipopt Library and Tests: Purpose
        2.2.4.1.b: Download and Install Fadbad in Build Directory: Purpose
        2.2.4.a: Including the FADBAD Speed Tests: Purpose
        2.2.3.1.b: Download and Install Eigen in Build Directory: Purpose
        2.2.3.a: Including the Eigen Examples and Tests: Purpose
        2.2.2.5.b: Download and Install ColPack in Build Directory: Purpose
        2.2.2.a: Including the ColPack Sparsity Calculations: Purpose
        2.2.1.1.b: Download and Install Adolc in Build Directory: Purpose
        2.2.1.a: Including the ADOL-C Examples and Tests: Purpose
        2.1.a: Download The CppAD Source Code: Purpose
push 8.22: The CppAD::vector Template Class
push_back 8.22.g: The CppAD::vector Template Class: push_back
push_vector 8.22.h: The CppAD::vector Template Class: push_vector
px 12.8.11.o.c: User Defined Atomic AD Functions: reverse.px
   4.4.7.2.5.i.a: Atomic Reverse Mode: py.px
py 12.8.11.o.b: User Defined Atomic AD Functions: reverse.py
   4.4.7.2.5.i: Atomic Reverse Mode: py
Q
12.8.11.r.b: User Defined Atomic AD Functions: rev_hes_sparse.q
  12.8.11.q.b: User Defined Atomic AD Functions: rev_jac_sparse.q
  12.8.11.p.b: User Defined Atomic AD Functions: for_jac_sparse.q
  5.5.6.e: Hessian Sparsity Pattern: Reverse Mode: q
  5.5.4.e: Jacobian Sparsity Pattern: Reverse Mode: q
  5.5.2.e: Jacobian Sparsity Pattern: Forward Mode: q
  5.4.4.h: Reverse Mode Using Subgraphs: q
  5.4.3.e: Any Order Reverse Mode: q
  5.3.5.f: Multiple Directions Forward Mode: q
  5.3.4.f: Multiple Order Forward Mode: q
  4.4.7.2.9.d.d: Atomic Reverse Hessian Sparsity Patterns: Implementation.q
  4.4.7.2.7.d.a: Atomic Reverse Jacobian Sparsity Patterns: Implementation.q
  4.4.7.2.6.d.a: Atomic Forward Jacobian Sparsity Patterns: Implementation.q
  4.4.7.2.5.d: Atomic Reverse Mode: q
  4.4.7.2.4.e: Atomic Forward Mode: q
qp_box 5.8.9.2: qp_box Source Code
qp_box: 5.8.9.1: abs_normal qp_box: Example and Test
qp_interior 5.8.8.2: qp_interior Source Code
qp_interior: 5.8.8.1: abs_normal qp_interior: Example and Test
quadratic 5.8.11: Non-Smooth Optimization Using Abs-normal Quadratic Approximations
          5.8.9: abs_normal: Solve a Quadratic Program With Box Constraints
          5.8.8: Solve a Quadratic Program Using Interior Point Method
questions 12.1: Frequently Asked Questions and Answers
quick 12.8.6.8: Amount of Memory Available for Quick Use by a Thread
      12.8.6.6: Free Memory Currently Available for Quick Use by a Thread
      12.8.6: A Quick OpenMP Memory Allocator Used by CppAD
      8.23.11: Amount of Memory Available for Quick Use by a Thread
      8.23.8: Free Memory Currently Available for Quick Use by a Thread
quiet_nan 4.4.6.h: Numeric Limits For an AD and Base Types: quiet_NaN
quotient 4.4.1.3.4: AD Binary Division: Example and Test
R
Range 5.1.5.1: ADFun Sequence Properties: Example and Test
RevSparseHes 5.5.6.1: Reverse Mode Hessian Sparsity: Example and Test
RevSparseJac 5.5.4.1: Reverse Mode Jacobian Sparsity: Example and Test
             5.5.4: Jacobian Sparsity Pattern: Reverse Mode
Romberg 8.15: One Dimensional Romberg Integration
Rosen34 8.18.1: Rosen34: Example and Test
        8.18: A 3rd and 4th Order Rosenbrock ODE Solver
Runge 8.17: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
Runge45 8.17.2: Runge45: Example and Test
        8.17.1: Runge45: Example and Test
        8.17: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
random 12.9.3: Simulate a [0,1] Uniform Random Variate
       11.2.10: Simulate a [0,1] Uniform Random Variate
range 12.8.10.2.3.d.a: ODE Fitting Using Fast Representation: Trapezoidal Approximation.Range Indices I(k,0)
      12.8.10.2.3.c.a: ODE Fitting Using Fast Representation: Initial Condition.Range Indices I(k,0)
      12.8.10.2.3.b.a: ODE Fitting Using Fast Representation: Objective Function.Range Indices I(k,0)
      5.1.5.e: ADFun Sequence Properties: Range
      4.4.7.2.e.c: User Defined Atomic AD Functions: Examples.Vector Range
rate 8.4.i: Run One Speed Test and Print Results: rate
rate_vec 8.3.i: Run One Speed Test and Return Results: rate_vec
ratio 12.10.3.i: LU Factorization of A Square Matrix and Stability Calculation: ratio
raw 12.8.6.10: Return A Raw Array to The Available Memory for a Thread
    12.8.6.9: Allocate Memory and Create A Raw Array
re-tape 5.3.7.1: CompareChange and Re-Tape: Example and Test
re-tape: 5.9.1: ADFun Check and Re-Tape: Example and Test
         5.3.7.1: CompareChange and Re-Tape: Example and Test
real 4.3.2.d.a: Convert From AD to Integer: x.Real Types
realistic 10.2: General Examples
recipes 12.5.d: Bibliography: Numerical Recipes
reciprocal 4.4.7.2.13: Reciprocal as an Atomic Operation: Example and Test
reciprocal: 12.8.11.1: Old Atomic Operation Reciprocal: Example and Test
recomputation 12.6.l: The CppAD Wish List: Forward Mode Recomputation
record 10.3.3.1: Lu Factor and Solve With Recorded Pivoting: Example and Test
       4.6.1: AD Vectors that Record Index Operations: Example and Test
       4.6: AD Vectors that Record Index Operations
       4.3.1.1: Convert From AD to its Base Type: Example and Test
recorded 10.3.3.1: Lu Factor and Solve With Recorded Pivoting: Example and Test
         10.3.3: Lu Factor and Solve with Recorded Pivoting
recording 10.2.10.c.b: Using Multiple Levels of AD: Procedure.Start AD< AD<double> > Recording
          5.1.4.1: Abort Current Recording: Example and Test
          5.1.4: Abort Recording of an Operation Sequence
          5.1.3: Stop Recording and Store Operation Sequence
          5.1.2: Construct an ADFun Object and Stop Recording
          5.1.1.d: Declare Independent Variables and Start Recording: Stop Recording
          5.1.1.c: Declare Independent Variables and Start Recording: Start Recording
          5.1.1: Declare Independent Variables and Start Recording
          4.4.7.2.19.c.b: User Atomic Matrix Multiply: Example and Test: Use Atomic Function.Recording
          4.4.7.2.15.k.b: Tan and Tanh as User Atomic Operations: Example and Test: Use Atomic Function.Recording
          4.4.7.2.14.k.b: Atomic Sparsity with Set Patterns: Example and Test: Test Atomic Function.Recording
          4.4.7.2.13.k.b: Reciprocal as an Atomic Operation: Example and Test: Use Atomic Function.Recording
          4.4.7.2.12.k.b: Atomic Euclidean Norm Squared: Example and Test: Use Atomic Function.Recording
          4.4.7.2.11.f.b: Getting Started with Atomic Operations: Example and Test: Use Atomic Function.Recording
          4.4.7.1.c.b: Checkpointing Functions: Purpose.Faster Recording
recording: 5.1.4.1: Abort Current Recording: Example and Test
recursion 12.3.1.9.b: Error Function Forward Taylor Polynomial Theory: Taylor Coefficients Recursion
          12.3.1.8.b: Tangent and Hyperbolic Tangent Forward Taylor Polynomial Theory: Taylor Coefficients Recursion
          12.3.1.7.b: Inverse Cosine and Hyperbolic Cosine Forward Mode Theory: Taylor Coefficients Recursion
          12.3.1.6.b: Inverse Sine and Hyperbolic Sine Forward Mode Theory: Taylor Coefficients Recursion
          12.3.1.5.b: Inverse Tangent and Hyperbolic Tangent Forward Mode Theory: Taylor Coefficients Recursion
          12.3.1.2.b: Logarithm Function Forward Mode Theory: Taylor Coefficients Recursion
          12.3.1.1.b: Exponential Function Forward Mode Theory: Taylor Coefficients Recursion
          12.3.1.c.c: The Theory of Forward Mode: Standard Math Functions.Cases that Apply Recursion Above
          12.3.1.c.b: The Theory of Forward Mode: Standard Math Functions.Taylor Coefficients Recursion Formula
redirecting 4.3.6.h: Printing AD Values During Forward Mode: Redirecting Output
reduce 4.4.7.1.c.a: Checkpointing Functions: Purpose.Reduce Memory
reduced 12.10.1: Computing Jacobian and Hessian of Bender's Reduced Objective
reduction 5.8.8.t.a: Solve a Quadratic Program Using Interior Point Method: Newton Step.Elementary Row Reduction
reference 12.10.2.c: Jacobian and Hessian of Optimal Values: Reference
          8.22.e.b: The CppAD::vector Template Class: Assignment.Return Reference
          5.8.a: Abs-normal Representation of Non-Smooth Functions: Reference
          4.6: AD Vectors that Record Index Operations
          4.4.7.2.18.1.a: AD Theory for Cholesky Factorization: Reference
          3.d: An Introduction by Example to Algorithmic Differentiation: Reference
rel 4.4.4.c: AD Conditional Expressions: Rel
relative 8.2: Determine if Two Values Are Nearly Equal
release 2.2.m.b: Using CMake to Configure CppAD: cppad_cxx_flags.debug and release
        2.1.d: Download The CppAD Source Code: Release
removed 12.8.6.12.a: Set Maximum Number of Threads for omp_alloc Allocator: Removed
        12.8.6.11.a: Check If A Memory Allocation is Efficient for Another Use: Removed
repeat 12.9.5.b: Repeat det_by_minor Routine A Specified Number of Times: repeat
       12.9.5: Repeat det_by_minor Routine A Specified Number of Times
       11.1.7.d: Speed Testing Sparse Jacobian: repeat
       11.1.6.d: Speed Testing Sparse Hessian: repeat
       11.1.5.f: Speed Testing Second Derivative of a Polynomial: repeat
       11.1.4.g: Speed Testing the Jacobian of Ode Solution: repeat
       11.1.3.e: Speed Testing Derivative of Matrix Multiply: repeat
       11.1.2.f: Speed Testing Gradient of Determinant by Minor Expansion: repeat
       11.1.1.f: Speed Testing Gradient of Determinant Using Lu Factorization: repeat
       8.5.e.b: Determine Amount of Time to Execute a Test: test.repeat
       8.4.e.b: Run One Speed Test and Print Results: Test.repeat
       8.3.f.b: Run One Speed Test and Return Results: test.repeat
repeating 4.4.7.1.c.c: Checkpointing Functions: Purpose.Repeating Forward
replace 8.1: Replacing the CppAD Error Handler
replacing 8.1.1: Replacing The CppAD Error Handler: Example and Test
          8.1: Replacing the CppAD Error Handler
representation 12.8.10.2.3.1: ODE Fitting Using Fast Representation
               12.8.10.2.2.1: ODE Fitting Using Simple Representation
               12.8.10.2.3: ODE Fitting Using Fast Representation
               12.8.10.2.2: ODE Fitting Using Simple Representation
               12.8.10.g: Nonlinear Programming Using the CppAD Interface to Ipopt: Simple Representation
               12.8.10.f.d: Nonlinear Programming Using the CppAD Interface to Ipopt: fg(x).Representation
               12.4.i: Glossary: Row-major Representation
               8.28: Sparse Matrix Row, Column, Value Representation
               5.8.1: Create An Abs-normal Representation of a Function
               5.8: Abs-normal Representation of Non-Smooth Functions
representations 12.8.10.3: Speed Test for Both Simple and Fast Representations
                12.8.10.2.5: Correctness Check for Both Simple and Fast Representations
require 4.7.3: Base Type Requirements for Identically Equal Comparisons
        4.7.2: Base Type Requirements for Conditional Expressions
        4: AD Objects
required 4.7.1: Required Base Class Member Functions
requirement 12.8.10.1.b: Nonlinear Programming Using CppAD and Ipopt: Example and Test: Configuration Requirement
            10.2.13.h: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: Configuration Requirement
            9.1.b: Nonlinear Programming Using CppAD and Ipopt: Example and Test: Configuration Requirement
            4.7.9.3.1.c: Using Adolc with Multiple Levels of Taping: Example and Test: Configuration Requirement
            4.7.7.a: Extending to_string To Another Floating Point Type: Base Requirement
requirements 12.8.12.e: zdouble: An AD Base Type With Absolute Zero: Base Type Requirements
             12.6.j: The CppAD Wish List: Base Requirements
             8.9.a: Definition of a Simple Vector: Template Class Requirements
             8.7.a: Definition of a Numeric Type: Type Requirements
             4.7.8: Base Type Requirements for Hash Coding Values
             4.7.6: Base Type Requirements for Numeric Limits
             4.7.5: Base Type Requirements for Standard Math Functions
             4.7.4: Base Type Requirements for Ordered Comparisons
             4.7.3: Base Type Requirements for Identically Equal Comparisons
             4.7.2: Base Type Requirements for Conditional Expressions
             4.7: AD<Base> Requirements for a CppAD Base Type
             4.b: AD Objects: Base Type Requirements
             2.2.1.1.c: Download and Install Adolc in Build Directory: Requirements
resize 8.27.i: Row and Column Index Sparsity Patterns: resize
       8.22.j: The CppAD::vector Template Class: resize
       8.9.i: Definition of a Simple Vector: Resize
restriction 12.8.11.s.a: User Defined Atomic AD Functions: clear.Restriction
            12.8.4.f: OpenMP Parallel Setup: Restriction
            8.1.2.d: CppAD Assertions During Execution: Restriction
            7.1.f: Enable AD Calculations During Parallel Mode: Restriction
            4.4.7.2.10.d: Free Static Variables: Restriction
            4.4.7.1.q.a: Checkpointing Functions: clear.Restriction
            4.4.7.1.c.d: Checkpointing Functions: Purpose.Restriction
            4.3.1.g: Convert From an AD Type to its Base Type: Restriction
restrictions 12.8.6.12.e: Set Maximum Number of Threads for omp_alloc Allocator: Restrictions
             12.8.6.1.g: Set and Get Maximum Number of Threads for omp_alloc Allocator: Restrictions
             8.23.14.d: Free All Memory That Was Allocated for Use by thread_alloc: Restrictions
             8.23.2.g: Setup thread_alloc For Use in Multi-Threading Environment: Restrictions
             8.10.d: Check Simple Vector Concept: Restrictions
             7.2.11.c: Specifications for A Team of AD Threads: Restrictions
             5.6.4.l.a: Sparse Hessian: VectorSet.Restrictions
             5.6.2.k.a: Sparse Jacobian: VectorSet.Restrictions
             5.3.4.g.c: Multiple Order Forward Mode: xq.Restrictions
             4.4.7.2.1.c.a: Atomic Function Constructor: atomic_base.Restrictions
result 10.3.3.h: Lu Factor and Solve with Recorded Pivoting: Result
       8.26.f: Union of Standard Sets: result
       4.4.7.2.19.1.g: Matrix Multiply as an Atomic Operation: Result Element Index
       4.4.7.2.14.k.g: Atomic Sparsity with Set Patterns: Example and Test: Test Atomic Function.Test Result
       4.4.4.i: AD Conditional Expressions: result
       4.4.1.4.g: AD Compound Assignment Operators: Result
       4.3.5.f: AD Output Stream Operator: Result
       4.3.4.e: AD Input Stream Operator: Result
results 11.1.i: Running the Speed Test Program: Speed Results
        11.1.h: Running the Speed Test Program: Correctness Results
        8.4: Run One Speed Test and Print Results
        8.3: Run One Speed Test and Return Results
        5.10: Check an ADFun Object For Nan Results
retains 8.23.9: Control When Thread Alloc Retains Memory For Future Use
retape 12.6.h.a: The CppAD Wish List: checkpoint.Retape
       9.2: Nonlinear Programming Retaping: Example and Test
       9.n.b: Use Ipopt to Solve a Nonlinear Programming Problem: Example.retape
       9.f.a: Use Ipopt to Solve a Nonlinear Programming Problem: options.Retape
       4.4.5.3: Interpolation With Retaping: Example and Test
       4.4.5.2: Interpolation Without Retaping: Example and Test
retaping: 9.2: Nonlinear Programming Retaping: Example and Test
          4.4.5.3: Interpolation With Retaping: Example and Test
          4.4.5.2: Interpolation Without Retaping: Example and Test
return 12.8.6.10: Return A Raw Array to The Available Memory for a Thread
       12.8.6.5: Return Memory to omp_alloc
       11.1.5.d: Speed Testing Second Derivative of a Polynomial: Return Value
       11.1.4.e: Speed Testing the Jacobian of Ode Solution: Return Value
       11.1.3.c: Speed Testing Derivative of Matrix Multiply: Return Value
       11.1.2.d: Speed Testing Gradient of Determinant by Minor Expansion: Return Value
       11.1.1.d: Speed Testing Gradient of Determinant Using Lu Factorization: Return Value
       8.24.c.b: Returns Indices that Sort a Vector: ind.Return
       8.23.7: Return Memory to thread_alloc
       8.22.e.b: The CppAD::vector Template Class: Assignment.Return Reference
       8.3: Run One Speed Test and Return Results
       3.2.6.e: exp_eps: Second Order Forward Mode: Return Value
       3.2.4.d: exp_eps: First Order Forward Sweep: Return Value
       3.2.3.c: exp_eps: Operation Sequence and Zero Order Forward Sweep: Return Value
       3.1.6.e: exp_2: Second Order Forward Mode: Return Value
       3.1.4.e: exp_2: First Order Forward Mode: Return Value
       3.1.3.d: exp_2: Operation Sequence and Zero Order Forward Mode: Return Value
return_memory 12.8.6.5: Return Memory to omp_alloc
              8.23.7: Return Memory to thread_alloc
returns 12.9.6: Returns Elapsed Number of Seconds
        8.24: Returns Indices that Sort a Vector
        8.5.1: Returns Elapsed Number of Seconds
reuse 2.2.6.1.f: Download and Install Sacado in Build Directory: Reuse
      2.2.5.1.f: Download and Install Ipopt in Build Directory: Reuse
      2.2.3.1.f: Download and Install Eigen in Build Directory: Reuse
      2.2.2.5.f: Download and Install ColPack in Build Directory: Reuse
      2.2.1.1.g: Download and Install Adolc in Build Directory: Reuse
rev_hes_sparse 12.8.11.r: User Defined Atomic AD Functions: rev_hes_sparse
rev_jac_sparse 12.8.11.q: User Defined Atomic AD Functions: rev_jac_sparse
rev_sparse_hes 4.4.7.2.19.1.n: Matrix Multiply as an Atomic Operation: rev_sparse_hes
               4.4.7.2.19.c.h: User Atomic Matrix Multiply: Example and Test: Use Atomic Function.rev_sparse_hes
               4.4.7.2.16.1.g.g: Atomic Eigen Matrix Multiply Class: Private.rev_sparse_hes
               4.4.7.2.15.k.g: Tan and Tanh as User Atomic Operations: Example and Test: Use Atomic Function.rev_sparse_hes
               4.4.7.2.15.i: Tan and Tanh as User Atomic Operations: Example and Test: rev_sparse_hes
               4.4.7.2.14.k.f: Atomic Sparsity with Set Patterns: Example and Test: Test Atomic Function.rev_sparse_hes
               4.4.7.2.14.i: Atomic Sparsity with Set Patterns: Example and Test: rev_sparse_hes
               4.4.7.2.13.k.g: Reciprocal as an Atomic Operation: Example and Test: Use Atomic Function.rev_sparse_hes
               4.4.7.2.13.i: Reciprocal as an Atomic Operation: Example and Test: rev_sparse_hes
               4.4.7.2.12.k.g: Atomic Euclidean Norm Squared: Example and Test: Use Atomic Function.rev_sparse_hes
               4.4.7.2.12.i: Atomic Euclidean Norm Squared: Example and Test: rev_sparse_hes
               4.4.7.2.9.1.h: Atomic Reverse Hessian Sparsity: Example and Test: rev_sparse_hes
rev_sparse_jac 5.7.h.a: Optimize an ADFun Object Tape: Atomic Functions.rev_sparse_jac
               4.4.7.2.19.1.m: Matrix Multiply as an Atomic Operation: rev_sparse_jac
               4.4.7.2.19.c.g: User Atomic Matrix Multiply: Example and Test: Use Atomic Function.rev_sparse_jac
               4.4.7.2.16.1.g.e: Atomic Eigen Matrix Multiply Class: Private.rev_sparse_jac
               4.4.7.2.15.k.f: Tan and Tanh as User Atomic Operations: Example and Test: Use Atomic Function.rev_sparse_jac
               4.4.7.2.15.h: Tan and Tanh as User Atomic Operations: Example and Test: rev_sparse_jac
               4.4.7.2.14.k.d: Atomic Sparsity with Set Patterns: Example and Test: Test Atomic Function.rev_sparse_jac
               4.4.7.2.14.g: Atomic Sparsity with Set Patterns: Example and Test: rev_sparse_jac
               4.4.7.2.13.k.f: Reciprocal as an Atomic Operation: Example and Test: Use Atomic Function.rev_sparse_jac
               4.4.7.2.13.h: Reciprocal as an Atomic Operation: Example and Test: rev_sparse_jac
               4.4.7.2.12.k.f: Atomic Euclidean Norm Squared: Example and Test: Use Atomic Function.rev_sparse_jac
               4.4.7.2.12.h: Atomic Euclidean Norm Squared: Example and Test: rev_sparse_jac
               4.4.7.2.9.1.g: Atomic Reverse Hessian Sparsity: Example and Test: rev_sparse_jac
               4.4.7.2.8.1.g: Atomic Forward Hessian Sparsity: Example and Test: rev_sparse_jac
               4.4.7.2.7.1.f: Atomic Reverse Jacobian Sparsity: Example and Test: rev_sparse_jac
reverse 12.8.11.5.1.g: Define Matrix Multiply as a User Atomic Operation: Reverse Partials One Order
        12.8.11.o: User Defined Atomic AD Functions: reverse
        12.8.11.l.b: User Defined Atomic AD Functions: ty.reverse
        12.3.3.b: An Important Reverse Mode Identity: Reverse Sweep
        12.3.3: An Important Reverse Mode Identity
        12.3.2.9: Error Function Reverse Mode Theory
        12.3.2.8: Tangent and Hyperbolic Tangent Reverse Mode Theory
        12.3.2.7: Inverse Cosine and Hyperbolic Cosine Reverse Mode Theory
        12.3.2.6: Inverse Sine and Hyperbolic Sine Reverse Mode Theory
        12.3.2.5: Inverse Tangent and Hyperbolic Tangent Reverse Mode Theory
        12.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
        12.3.2.3: Square Root Function Reverse Mode Theory
        12.3.2.2: Logarithm Function Reverse Mode Theory
        12.3.2.1: Exponential Function Reverse Mode Theory
        12.3.2: The Theory of Reverse Mode
        12.1.h: Frequently Asked Questions and Answers: Mode: Forward or Reverse
        5.7.2: Example Optimization and Reverse Activity Analysis
        5.6.5.1: Computing Sparse Jacobian Using Reverse Mode: Example and Test
        5.6.1.2: Computing Sparse Jacobian Using Reverse Mode: Example and Test
        5.5.6.1: Reverse Mode Hessian Sparsity: Example and Test
        5.5.6: Hessian Sparsity Pattern: Reverse Mode
        5.5.5.1: Reverse Mode Hessian Sparsity: Example and Test
        5.5.5: Reverse Mode Hessian Sparsity Patterns
        5.5.4.1: Reverse Mode Jacobian Sparsity: Example and Test
        5.5.4: Jacobian Sparsity Pattern: Reverse Mode
        5.5.3.1: Reverse Mode Jacobian Sparsity: Example and Test
        5.5.3: Reverse Mode Jacobian Sparsity Patterns
        5.4.4.1: Computing Reverse Mode on Subgraphs: Example and Test
        5.4.4: Reverse Mode Using Subgraphs
        5.4.3.2: Reverse Mode General Case (Checkpointing): Example and Test
        5.4.3.1: Third Order Reverse Mode: Example and Test
        5.4.3: Any Order Reverse Mode
        5.4.2.1: Second Order Reverse Mode: Example and Test
        5.4.2: Second Order Reverse Mode
        5.4.1.1: First Order Reverse Mode: Example and Test
        5.4.1: First Order Reverse Mode
        5.3.5.c: Multiple Directions Forward Mode: Reverse Mode
        5.2.6.1: Second Partials Reverse Driver: Example and Test
        5.2.6: Reverse Mode Second Partial Derivative Driver
        5.2.1.g: Jacobian: Driver Routine: Forward or Reverse
        5.4: Reverse Mode
        4.4.7.2.19.1.k: Matrix Multiply as an Atomic Operation: reverse
        4.4.7.2.19.1.i: Matrix Multiply as an Atomic Operation: Reverse Matrix Multiply
        4.4.7.2.19.c.d: User Atomic Matrix Multiply: Example and Test: Use Atomic Function.reverse
        4.4.7.2.18.2.d.c: Atomic Eigen Cholesky Factorization Class: Private.reverse
        4.4.7.2.18.1.f: AD Theory for Cholesky Factorization: Reverse Mode
        4.4.7.2.17.1.f.c: Atomic Eigen Matrix Inversion Class: Private.reverse
        4.4.7.2.17.1.c.c: Atomic Eigen Matrix Inversion Class: Theory.Reverse
        4.4.7.2.16.1.g.c: Atomic Eigen Matrix Multiply Class: Private.reverse
        4.4.7.2.16.1.d.c: Atomic Eigen Matrix Multiply Class: Theory.Reverse
        4.4.7.2.15.k.d: Tan and Tanh as User Atomic Operations: Example and Test: Use Atomic Function.reverse
        4.4.7.2.15.f: Tan and Tanh as User Atomic Operations: Example and Test: reverse
        4.4.7.2.13.k.d: Reciprocal as an Atomic Operation: Example and Test: Use Atomic Function.reverse
        4.4.7.2.13.f: Reciprocal as an Atomic Operation: Example and Test: reverse
        4.4.7.2.12.k.d: Atomic Euclidean Norm Squared: Example and Test: Use Atomic Function.reverse
        4.4.7.2.12.f: Atomic Euclidean Norm Squared: Example and Test: reverse
        4.4.7.2.9.1: Atomic Reverse Hessian Sparsity: Example and Test
        4.4.7.2.7.1: Atomic Reverse Jacobian Sparsity: Example and Test
        4.4.7.2.5.1.f: Atomic Reverse: Example and Test: reverse
        4.4.7.2.9: Atomic Reverse Hessian Sparsity Patterns
        4.4.7.2.7: Atomic Reverse Jacobian Sparsity Patterns
        4.4.7.2.5: Atomic Reverse Mode
        3.2.8: exp_eps: CppAD Forward and Reverse Sweeps
        3.2.7.1: exp_eps: Verify Second Order Reverse Sweep
        3.2.5.1: exp_eps: Verify First Order Reverse Sweep
        3.2.7: exp_eps: Second Order Reverse Sweep
        3.2.5: exp_eps: First Order Reverse Sweep
        3.1.8: exp_2: CppAD Forward and Reverse Sweeps
        3.1.7.1: exp_2: Verify Second Order Reverse Sweep
        3.1.5.1: exp_2: Verify First Order Reverse Sweep
        3.1.7: exp_2: Second Order Reverse Mode
        3.1.5: exp_2: First Order Reverse Mode
        3.b.c: An Introduction by Example to Algorithmic Differentiation: Preface.Reverse Mode
reverse: 4.4.7.2.5.1: Atomic Reverse: Example and Test
revone 5.2.4.h: First Order Derivative: Driver Routine: RevOne Uses Forward
revsparsehes 5.5.6.2.c: Sparsity Patterns For a Subset of Variables: Example and Test: RevSparseHes
revsparsity 11.1.g.b: Running the Speed Test Program: Sparsity Options.revsparsity
revtwo 5.2.6.j: Reverse Mode Second Partial Derivative Driver: RevTwo Uses Forward
rhs 10.3.3.g: Lu Factor and Solve with Recorded Pivoting: Rhs
right 8.26.e: Union of Standard Sets: right
      4.4.7.2.19.1.f: Matrix Multiply as an Atomic Operation: Right Operand Element Index
      4.4.4.f: AD Conditional Expressions: right
romberg 8.16.1: Multi-dimensional Romberg Integration: Example and Test
        8.16: Multi-dimensional Romberg Integration
        8.15.1: One Dimensional Romberg Integration: Example and Test
root 12.3.2.3: Square Root Function Reverse Mode Theory
     12.3.1.3: Square Root Function Forward Mode Theory
     7.2.9.1: Defines a User Atomic Operation that Computes Square Root
     4.4.2.11: The Square Root Function: sqrt
rosen34: 8.18.1: Rosen34: Example and Test
rosenbrock 8.18: A 3rd and 4th Order Rosenbrock ODE Solver
routine 12.9.5: Repeat det_by_minor Routine A Specified Number of Times
        12.9.4: Correctness Test of det_by_minor Routine
        5.2.4: First Order Derivative: Driver Routine
        5.2.3: First Order Partial Derivative: Driver Routine
        5.2.1: Jacobian: Driver Routine
routines 12.8.11.b.b: User Defined Atomic AD Functions: Syntax Function.Callback Routines
         12.8.5: Routines That Track Use of New and Delete
         11.2.c: Speed Testing Utilities: Library Routines
         11.2.b: Speed Testing Utilities: Speed Utility Routines
         10.3: Utility Routines used by CppAD Examples
         8.c: Some General Purpose Utilities: General Numerical Routines
row 12.4.j.a: Glossary: Sparsity Pattern.Row and Column Index Vectors
    11.2.9.h: Evaluate a Function That Has a Sparse Hessian: row
    11.2.8.i: Evaluate a Function That Has a Sparse Jacobian: row
    11.1.7.f: Speed Testing Sparse Jacobian: row
    11.1.6.f: Speed Testing Sparse Hessian: row
    8.28.l: Sparse Matrix Row, Column, Value Representation: row
    8.28: Sparse Matrix Row, Column, Value Representation
    8.27.k: Row and Column Index Sparsity Patterns: row
    8.27: Row and Column Index Sparsity Patterns
    5.8.8.t.a: Solve a Quadratic Program Using Interior Point Method: Newton Step.Elementary Row Reduction
    5.6.4.g: Sparse Hessian: row, col
    5.6.2.f: Sparse Jacobian: row, col
    5.5.10: Preferred Sparsity Patterns: Row and Column Indices: Example and Test
row-major 12.4.i: Glossary: Row-major Representation
row_major 8.28.o: Sparse Matrix Row, Column, Value Representation: row_major
          8.27.m: Row and Column Index Sparsity Patterns: row_major
rt 4.4.7.2.7.d.b: Atomic Reverse Jacobian Sparsity Patterns: Implementation.rt
run 10.3.2: Run the Speed Examples
    8.4: Run One Speed Test and Print Results
    8.3: Run One Speed Test and Return Results
    7.2.9.6: Run Multi-Threaded User Atomic Calculation
    7.2: Run Multi-Threading Examples and Speed Tests
    3.3: Correctness Tests For Exponential Approximation in Introduction
runge-kutta 8.17: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
runge45: 8.17.2: Runge45: Example and Test
         8.17.1: Runge45: Example and Test
running 12.8.10.2.4: Driver for Running the Ipopt ODE Example
        11.7.c: Speed Test Derivatives Using Sacado: Running Tests
        11.6.c: Speed Test Derivatives Using Fadbad: Running Tests
        11.5.b: Speed Test Derivatives Using CppAD: Running Tests
        11.4.c: Speed Test of Derivatives Using Adolc: Running Tests
        11.3.b: Speed Test of Functions in Double: Running Tests
        11.1: Running the Speed Test Program
        10.3.2.a: Run the Speed Examples: Running Tests
        10.3.1.a: CppAD Examples and Tests: Running Tests
        10.1.i: Getting Started Using CppAD to Compute Derivatives: Running
        10.c: Examples: Running Examples
        8.4.1.a: Example Use of SpeedTest: Running This Program
        7.2.e: Run Multi-Threading Examples and Speed Tests: Running Tests
        4.3.6.1.a: Printing During Forward Mode: Example and Test: Running
        3.3.a: Correctness Tests For Exponential Approximation in Introduction: Running Tests
runs 8.6: Object that Runs a Group of Tests
S
SimpleVector 8.10.1: The CheckSimpleVector Function: Example and Test
SpeedTest 8.4: Run One Speed Test and Print Results
sacado 11.7.7: Sacado Speed: Sparse Jacobian
       11.7.6: Sacado Speed: Sparse Hessian
       11.7.5: Sacado Speed: Second Derivative of a Polynomial
       11.7.4: Sacado Speed: Gradient of Ode Solution
       11.7.3: Sacado Speed: Matrix Multiplication
       11.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization
       11.7.1: Sacado Speed: Gradient of Determinant by Minor Expansion
       11.7: Speed Test Derivatives Using Sacado
       2.2.6.1: Download and Install Sacado in Build Directory
       2.2.6: Including the Sacado Speed Tests
     download and install 2.2.6.1: Download and Install Sacado in Build Directory
sacado_dir 12.8.13.s: Autotools Unix Test and Installation: sacado_dir
sacado_prefix 11.7.b: Speed Test Derivatives Using Sacado: sacado_prefix
              2.2.6.b: Including the Sacado Speed Tests: sacado_prefix
same 7.g: Using CppAD in a Multi-Threading Environment: Same Thread
scalar 11.2.3.d: Determinant Using Expansion by Minors: Scalar
       11.2.2.k: Determinant of a Minor: Scalar
       11.2.1.d: Determinant Using Expansion by Lu Factorization: Scalar
       8.21.t: An Error Controller for Gear's Ode Solvers: Scalar
       8.20.j: An Arbitrary Order Gear Method: Scalar
       8.19.s: An Error Controller for ODE Solvers: Scalar
       8.18.k: A 3rd and 4th Order Rosenbrock ODE Solver: Scalar
       8.17.l: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: Scalar
       8.11.g: Obtain Nan or Determine if a Value is Nan: Scalar
       4.4.7.2.e.b: User Defined Atomic AD Functions: Examples.Scalar Function
scur 8.19.l: An Error Controller for ODE Solvers: scur
second 11.7.5: Sacado Speed: Second Derivative of a Polynomial
       11.6.5: Fadbad Speed: Second Derivative of a Polynomial
       11.5.5: CppAD Speed: Second Derivative of a Polynomial
       11.4.5: Adolc Speed: Second Derivative of a Polynomial
       11.1.5: Speed Testing Second Derivative of a Polynomial
       10.2.10.c.d: Using Multiple Levels of AD: Procedure.Second Start AD< AD<double> >
       5.4.3.i: Any Order Reverse Mode: Second Order
       5.4.2.1: Second Order Reverse Mode: Example and Test
       5.4.2.g.b: Second Order Reverse Mode: dw.Second Order Partials
       5.4.2: Second Order Reverse Mode
       5.3.4.o: Multiple Order Forward Mode: Second Order
       5.3.3: Second Order Forward Mode: Derivative Values
       5.2.6.1: Second Partials Reverse Driver: Example and Test
       5.2.6: Reverse Mode Second Partial Derivative Driver
       5.2.5.1: Subset of Second Order Partials: Example and Test
       5.2.5: Forward Mode Second Partial Derivative Driver
       5.2.2: Hessian: Easy Driver
       5.2: First and Second Order Derivatives: Easy Drivers
       3.2.7.1: exp_eps: Verify Second Order Reverse Sweep
       3.2.6.1: exp_eps: Verify Second Order Forward Sweep
       3.2.7: exp_eps: Second Order Reverse Sweep
       3.2.6.d.f: exp_eps: Second Order Forward Mode: Operation Sequence.Second
       3.2.6.a: exp_eps: Second Order Forward Mode: Second Order Expansion
       3.2.6: exp_eps: Second Order Forward Mode
       3.1.7.1: exp_2: Verify Second Order Reverse Sweep
       3.1.6.1: exp_2: Verify Second Order Forward Sweep
       3.1.7: exp_2: Second Order Reverse Mode
       3.1.6.d.f: exp_2: Second Order Forward Mode: Operation Sequence.Second
       3.1.6.a: exp_2: Second Order Forward Mode: Second Order Expansion
       3.1.6: exp_2: Second Order Forward Mode
       3.1: Second Order Exponential Approximation
seconds 12.9.6: Returns Elapsed Number of Seconds
        11.1.8: Microsoft Version of Elapsed Number of Seconds
        8.5.1.1: Elapsed Seconds: Example and Test
        8.5.1: Returns Elapsed Number of Seconds
seconds: 8.5.1.1: Elapsed Seconds: Example and Test
see 12.10.2.b: Jacobian and Hessian of Optimal Values: See Also
    12.10.1.b: Computing Jacobian and Hessian of Bender's Reduced Objective: See Also
    8.25.b: Convert Certain Types to a String: See Also
    8.12.b: The Integer Power Function: See Also
    5.7.6.a: Example Optimization and Nested Conditional Expressions: See Also
    5.7.5.a: Example Optimization and Conditional Expressions: See Also
    5.7.3.a: Example Optimization and Comparison Operators: See Also
    5.6.4.3.b: Subset of a Sparse Hessian: Example and Test: See Also
    5.6.4.2.b: Computing Sparse Hessian for a Subset of Variables: See Also
    5.5.6.2.a: Sparsity Patterns For a Subset of Variables: Example and Test: See Also
    5.4.3.2.a: Reverse Mode General Case (Checkpointing): Example and Test: See Also
    5.3.9.a.a: Number of Variables that Can be Skipped: Syntax.See Also
    5.3.8.a.a: Controlling Taylor Coefficients Memory Allocation: Syntax.See Also
    5.3.6.a.a: Number Taylor Coefficient Orders Currently Stored: Syntax.See Also
    5.1.5.a.a: ADFun Sequence Properties: Syntax.See Also
    4.4.7.2.19.1.a: Matrix Multiply as an Atomic Operation: See Also
    4.4.7.2.19.a: User Atomic Matrix Multiply: Example and Test: See Also
    4.4.7.2.16.1.a: Atomic Eigen Matrix Multiply Class: See Also
    4.4.7.1.4.a: Checkpointing an Extended ODE Solver: Example and Test: See Also
    4.4.7.1.3.a: Checkpointing an ODE Solver: Example and Test: See Also
    4.4.7.1.b: Checkpointing Functions: See Also
    4.4.5.3.a: Interpolation With Retaping: Example and Test: See Also
    4.4.5.2.a: Interpolation Without Retaping: Example and Test: See Also
    4.4.4.1.a: Conditional Expressions: Example and Test: See Also
    4.4.3.2.b: The AD Power Function: See Also
    4.3.7.b: Convert an AD Variable to a Parameter: See Also
    4.3.3.b: Convert An AD or Base Type to String: See Also
    4.3.1.b: Convert From an AD Type to its Base Type: See Also
seed 12.9.3.c: Simulate a [0,1] Uniform Random Variate: seed
     11.2.10.d: Simulate a [0,1] Uniform Random Variate: seed
     11.1.e: Running the Speed Test Program: seed
select_domain 5.6.5.k: Compute Sparse Jacobians Using Subgraphs: select_domain
              5.5.11.h: Subgraph Dependency Sparsity Patterns: select_domain
              5.5.7.g: Forward Mode Hessian Sparsity Patterns: select_domain
              5.4.4.g: Reverse Mode Using Subgraphs: select_domain
select_range 5.6.5.l: Compute Sparse Jacobians Using Subgraphs: select_range
             5.5.11.i: Subgraph Dependency Sparsity Patterns: select_range
             5.5.7.h: Forward Mode Hessian Sparsity Patterns: select_range
             5.5.5.h: Reverse Mode Hessian Sparsity Patterns: select_range
semantics 8.22.e.c: The CppAD::vector Template Class: Assignment.Move Semantics
sequence 12.6.n: The CppAD Wish List: Operation Sequence
         12.4.g.b: Glossary: Operation.Sequence
         11.2.7.d.a: Evaluate a Function Defined in Terms of an ODE: Float.Operation Sequence
         8.17.c: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: Operation Sequence
         8.13.j: Evaluate a Polynomial or its Derivative: Operation Sequence
         8.12.i: The Integer Power Function: Operation Sequence
         5.9: Check an ADFun Sequence of Operations
         5.7: Optimize an ADFun Object Tape
         5.1.5.1: ADFun Sequence Properties: Example and Test
         5.1.5: ADFun Sequence Properties
         5.1.4: Abort Recording of an Operation Sequence
         5.1.3: Stop Recording and Store Operation Sequence
         5.1.2.k.a: Construct an ADFun Object and Stop Recording: Example.Sequence Constructor
         5.1.2.g: Construct an ADFun Object and Stop Recording: Sequence Constructor
          4.5.5: Check if Two Values are Identically Equal
         4.5.4.e: Is an AD Object a Parameter or Variable: Operation Sequence
         4.5.3.l: AD Boolean Functions: Operation Sequence
         4.5.2.i: Compare AD and Base Objects for Nearly Equal: Operation Sequence
         4.5.1.g: AD Binary Comparison Operators: Operation Sequence
         4.4.5.j: Discrete AD Functions: Operation Sequence
         4.4.4.l: AD Conditional Expressions: Operation Sequence
         4.4.3.2.g: The AD Power Function: Operation Sequence
         4.4.3.1.f: AD Two Argument Inverse Tangent Function: Operation Sequence
         4.4.1.4.h: AD Compound Assignment Operators: Operation Sequence
         4.4.1.3.h: AD Binary Arithmetic Operators: Operation Sequence
         4.4.1.2.f: AD Unary Minus Operator: Operation Sequence
         4.4.1.1.e: AD Unary Plus Operator: Operation Sequence
         4.3.5.g: AD Output Stream Operator: Operation Sequence
          4.3.4.f: AD Input Stream Operator: Operation Sequence
         4.3.2.e: Convert From AD to Integer: Operation Sequence
         4.3.1.f: Convert From an AD Type to its Base Type: Operation Sequence
         3.2.6.d: exp_eps: Second Order Forward Mode: Operation Sequence
         3.2.4.c: exp_eps: First Order Forward Sweep: Operation Sequence
         3.2.3.b: exp_eps: Operation Sequence and Zero Order Forward Sweep: Operation Sequence
         3.2.3: exp_eps: Operation Sequence and Zero Order Forward Sweep
         3.1.6.d: exp_2: Second Order Forward Mode: Operation Sequence
         3.1.4.d: exp_2: First Order Forward Mode: Operation Sequence
         3.1.3.c: exp_2: Operation Sequence and Zero Order Forward Mode: Operation Sequence
         3.1.3: exp_2: Operation Sequence and Zero Order Forward Mode
sequence) 5.1: Create an ADFun Object (Record an Operation Sequence)
sequential 8.23.4: Is The Current Execution in Parallel Mode
set 12.8.11.5.1.h: Define Matrix Multiply as a User Atomic Operation: Set Union
    12.8.6.12: Set Maximum Number of Threads for omp_alloc Allocator
    12.8.6.1: Set and Get Maximum Number of Threads for omp_alloc Allocator
    8.28.k: Sparse Matrix Row, Column, Value Representation: set
    8.27.j: Row and Column Index Sparsity Patterns: set
    8.26.1: Set Union: Example and Test
    7.2.10.2: Set Up Multi-Threaded Newton Method
    7.2.9.3: Multi-Threaded User Atomic Set Up
    7.2.8.2: Set Up Multi-threading Sum of 1/i
    4.4.7.2.14: Atomic Sparsity with Set Patterns: Example and Test
    4.4.7.2.2: Set Atomic Function Options
set_max_num_threads 12.8.6.1.e: Set and Get Maximum Number of Threads for omp_alloc Allocator: set_max_num_threads
set_sparsity_enum 4.4.7.2.14.b: Atomic Sparsity with Set Patterns: Example and Test: set_sparsity_enum
                  4.4.7.2.13: Reciprocal as an Atomic Operation: Example and Test
                  4.4.7.2.2.b.c: Set Atomic Function Options: atomic_sparsity.set_sparsity_enum
set_union 8.d.f: Some General Purpose Utilities: Miscellaneous.set_union
sets 12.4.j.c: Glossary: Sparsity Pattern.Vector of Sets
     8.26: Union of Standard Sets
     2.2: Using CMake to Configure CppAD
setup 12.8.4: OpenMP Parallel Setup
      8.23.2: Setup thread_alloc For Use in Multi-Threading Environment
shampine 12.5.e: Bibliography: Shampine, L.F.
sigma 5.8.10.t.a: abs_normal: Minimize a Quadratic Abs-normal Approximation: Method.sigma
      5.8.6.s.a: abs_normal: Minimize a Linear Abs-normal Approximation: Method.sigma
sign 12.10.3.e: LU Factorization of A Square Matrix and Stability Calculation: sign
     8.14.2.e: LU Factorization of A Square Matrix: sign
     4.7.9.5.i: Enable use of AD<Base> where Base is double: sign
     4.7.9.4.i: Enable use of AD<Base> where Base is float: sign
     4.7.9.3.m: Enable use of AD<Base> where Base is Adolc's adouble Type: sign
     4.7.9.1.q: Example AD<Base> Where Base Constructor Allocates Memory: sign
     4.7.5.e: Base Type Requirements for Standard Math Functions: sign
     4.4.2.21.1: Sign Function: Example and Test
     4.4.2.21: The Sign: sign
sign: 4.4.2.21: The Sign: sign
signdet 12.10.2.k: Jacobian and Hessian of Optimal Values: signdet
        8.14.1.f: Compute Determinant and Solve Linear Equations: signdet
simple 12.8.11.2.c: Using AD to Compute Atomic Function Derivatives: Simple Case
       12.8.11.t.a: User Defined Atomic AD Functions: Example.Simple
       12.8.10.3: Speed Test for Both Simple and Fast Representations
       12.8.10.2.5: Correctness Check for Both Simple and Fast Representations
       12.8.10.2.2.1: ODE Fitting Using Simple Representation
       12.8.10.2.2: ODE Fitting Using Simple Representation
       12.8.10.g: Nonlinear Programming Using the CppAD Interface to Ipopt: Simple Representation
       10.1: Getting Started Using CppAD to Compute Derivatives
       8.10: Check Simple Vector Concept
       8.9.1: Simple Vector Template Class: Example and Test
       8.9: Definition of a Simple Vector
       7.2.6: A Simple pthread AD: Example and Test
       7.2.5: A Simple Boost Threading AD: Example and Test
       7.2.4: A Simple OpenMP AD: Example and Test
       7.2.3: A Simple Parallel Pthread Example and Test
       7.2.2: A Simple Boost Thread Example and Test
       7.2.1: A Simple OpenMP Example and Test
       4.7.3.a.a: Base Type Requirements for Identically Equal Comparisons: EqualOpSeq.The Simple Case
       4.4.7.1.1: Simple Checkpointing: Example and Test
simple_ad 7.2.g: Run Multi-Threading Examples and Speed Tests: simple_ad
simplex 5.8.4: abs_normal: Solve a Linear Program Using Simplex Method
simplex_method 5.8.4.2: simplex_method Source Code
simplex_method: 5.8.4.1: abs_normal simplex_method: Example and Test
simulate 12.9.3: Simulate a [0,1] Uniform Random Variate
         11.2.10: Simulate a [0,1] Uniform Random Variate
simulated 12.8.10.2.1.c.c: An ODE Inverse Problem Example: Measurements.Simulated Measurement Values
          9.3.c.c: ODE Inverse Problem Definitions: Source Code: Measurements.Simulated Measurement Values
simulation 12.8.10.2.1.c.b: An ODE Inverse Problem Example: Measurements.Simulation Parameter Values
           12.8.10.2.1.c.a: An ODE Inverse Problem Example: Measurements.Simulation Analytic Solution
           9.3.c.b: ODE Inverse Problem Definitions: Source Code: Measurements.Simulation Parameter Values
           9.3.c.a: ODE Inverse Problem Definitions: Source Code: Measurements.Simulation Analytic Solution
simultaneous 12.8.10.2.1.g: An ODE Inverse Problem Example: Simultaneous Method
             12.8.10.2: Example Simultaneous Solution of Forward and Inverse Problem
sin 12.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
    12.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
    4.4.2.9.1: The AD sin Function: Example and Test
    4.4.2.9: The Sine Function: sin
sine 12.3.2.6: Inverse Sine and Hyperbolic Sine Reverse Mode Theory
     12.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
     12.3.1.6: Inverse Sine and Hyperbolic Sine Forward Mode Theory
     12.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
     4.4.2.16: The Inverse Hyperbolic Sine Function: asinh
     4.4.2.10: The Hyperbolic Sine Function: sinh
     4.4.2.9: The Sine Function: sin
     4.4.2.2: Inverse Sine Function: asin
     4.4.2.1: Inverse Cosine Function: acos
sinh 12.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
     12.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
     4.4.2.10.1: The AD sinh Function: Example and Test
     4.4.2.10: The Hyperbolic Sine Function: sinh
sini 8.21.m: An Error Controller for Gear's Ode Solvers: sini
size 12.9.5.c: Repeat det_by_minor Routine A Specified Number of Times: size
     12.8.2.f: ADFun Object Deprecated Member Functions: Size
     11.1.7.c: Speed Testing Sparse Jacobian: size
     11.1.6.c: Speed Testing Sparse Hessian: size
     11.1.5.e: Speed Testing Second Derivative of a Polynomial: size
     11.1.4.f: Speed Testing the Jacobian of Ode Solution: size
     11.1.2.e: Speed Testing Gradient of Determinant by Minor Expansion: size
     11.1.1.e: Speed Testing Gradient of Determinant Using Lu Factorization: size
     8.22.e.a: The CppAD::vector Template Class: Assignment.Check Size
     8.9.h: Definition of a Simple Vector: Size
     8.5.e.a: Determine Amount of Time to Execute a Test: test.size
     8.4.e.a: Run One Speed Test and Print Results: Test.size
     8.3.f.a: Run One Speed Test and Return Results: test.size
     4.6.g: AD Vectors that Record Index Operations: size
size_forward_bool 5.5.2.c.a: Jacobian Sparsity Pattern: Forward Mode: f.size_forward_bool
                  5.5.1.e.a: Forward Mode Jacobian Sparsity Patterns: f.size_forward_bool
size_forward_set 5.5.2.c.b: Jacobian Sparsity Pattern: Forward Mode: f.size_forward_set
                 5.5.1.e.b: Forward Mode Jacobian Sparsity Patterns: f.size_forward_set
size_min 12.8.6.9.e: Allocate Memory and Create A Raw Array: size_min
         8.23.12.d: Allocate An Array and Call Default Constructor for its Elements: size_min
size_op 5.1.5.i: ADFun Sequence Properties: size_op
size_op_arg 5.1.5.1: ADFun Sequence Properties: Example and Test
            5.1.5.j: ADFun Sequence Properties: size_op_arg
size_op_seq 5.1.5.m: ADFun Sequence Properties: size_op_seq
size_out 12.8.6.9.f: Allocate Memory and Create A Raw Array: size_out
         8.23.12.e: Allocate An Array and Call Default Constructor for its Elements: size_out
size_par 5.1.5.1: ADFun Sequence Properties: Example and Test
         5.1.5.h: ADFun Sequence Properties: size_par
size_t 4.6.h: AD Vectors that Record Index Operations: size_t Indexing
size_taylor 12.8.2.i: ADFun Object Deprecated Member Functions: size_taylor
size_text 5.1.5.k: ADFun Sequence Properties: size_text
size_VecAD 5.1.5.1: ADFun Sequence Properties: Example and Test
size_var 5.1.5.1: ADFun Sequence Properties: Example and Test
         5.1.5.g: ADFun Sequence Properties: size_var
         4.4.7.1.m: Checkpointing Functions: size_var
size_vec 8.3.g: Run One Speed Test and Return Results: size_vec
size_vecad 5.1.5.l: ADFun Sequence Properties: size_VecAD
sizevector 12.10.3.j: LU Factorization of A Square Matrix and Stability Calculation: SizeVector
           12.8.10.h: Nonlinear Programming Using the CppAD Interface to Ipopt: SizeVector
           8.28.b: Sparse Matrix Row, Column, Value Representation: SizeVector
           8.27.b: Row and Column Index Sparsity Patterns: SizeVector
           8.14.2.i: LU Factorization of A Square Matrix: SizeVector
           5.8.11.f: Non-Smooth Optimization Using Abs-normal Quadratic Approximations: SizeVector
           5.8.10.f: abs_normal: Minimize a Quadratic Abs-normal Approximation: SizeVector
           5.8.7.f: Non-Smooth Optimization Using Abs-normal Linear Approximations: SizeVector
           5.8.6.f: abs_normal: Minimize a Linear Abs-normal Approximation: SizeVector
           5.6.5.e: Compute Sparse Jacobians Using Subgraphs: SizeVector
           5.6.3.c: Computing Sparse Hessians: SizeVector
           5.6.1.c: Computing Sparse Jacobians: SizeVector
           5.5.11.f: Subgraph Dependency Sparsity Patterns: SizeVector
           5.5.7.e: Forward Mode Hessian Sparsity Patterns: SizeVector
           5.5.5.e: Reverse Mode Hessian Sparsity Patterns: SizeVector
           5.5.3.d: Reverse Mode Jacobian Sparsity Patterns: SizeVector
           5.5.1.d: Forward Mode Jacobian Sparsity Patterns: SizeVector
           5.4.4.f: Reverse Mode Using Subgraphs: SizeVector
sizing 8.9.d: Definition of a Simple Vector: Sizing Constructor
skipped 5.3.9: Number of Variables that Can be Skipped
skipped: 5.3.9.1: Number of Variables That Can be Skipped: Example and Test
smax 8.21.l: An Error Controller for Gear's Ode Solvers: smax
     8.19.k: An Error Controller for ODE Solvers: smax
smin 8.21.k: An Error Controller for Gear's Ode Solvers: smin
     8.19.j: An Error Controller for ODE Solvers: smin
software 12.12: Your License for the CppAD Software
         12.6.o: The CppAD Wish List: Software Guidelines
solution 12.8.10.2.1.c.a: An ODE Inverse Problem Example: Measurements.Simulation Analytic Solution
         12.8.10.2: Example Simultaneous Solution of Forward and Inverse Problem
         12.8.10.t: Nonlinear Programming Using the CppAD Interface to Ipopt: solution
         11.7.4: Sacado Speed: Gradient of Ode Solution
         11.5.4: CppAD Speed: Gradient of Ode Solution
         11.3.4: Double Speed: Ode Solution
         11.1.4: Speed Testing the Jacobian of Ode Solution
         10.2.14.c: Taylor's Ode Solver: An Example and Test: ODE Solution
         10.2.13.d: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: Derivative of ODE Solution
         10.2.13.c: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: ODE Solution
         10.2.12.d: Taylor's Ode Solver: A Multi-Level AD Example and Test: Derivative of ODE Solution
         10.2.12.c: Taylor's Ode Solver: A Multi-Level AD Example and Test: ODE Solution
         9.3.f: ODE Inverse Problem Definitions: Source Code: Solution Method
         9.3.c.a: ODE Inverse Problem Definitions: Source Code: Measurements.Simulation Analytic Solution
         9.m: Use Ipopt to Solve a Nonlinear Programming Problem: solution
         5.8.8.u: Solve a Quadratic Program Using Interior Point Method: Solution
         4.4.7.1.4.f: Checkpointing an Extended ODE Solver: Example and Test: Solution
         4.4.7.1.3.f: Checkpointing an ODE Solver: Example and Test: Solution
solve 12.10.3: LU Factorization of A Square Matrix and Stability Calculation
      10.3.3.1: Lu Factor and Solve With Recorded Pivoting: Example and Test
      10.3.3: Lu Factor and Solve with Recorded Pivoting
      9: Use Ipopt to Solve a Nonlinear Programming Problem
      8.18: A 3rd and 4th Order Rosenbrock ODE Solver
      8.17: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
      8.14.2: LU Factorization of A Square Matrix
      8.14.1: Compute Determinant and Solve Linear Equations
      8.14: Compute Determinants and Solve Equations by LU Factorization
      5.8.9: abs_normal: Solve a Quadratic Program With Box Constraints
      5.8.8: Solve a Quadratic Program Using Interior Point Method
      5.8.5: abs_normal: Solve a Linear Program With Box Constraints
      5.8.4: abs_normal: Solve a Linear Program Using Simplex Method
solver 8.18: A 3rd and 4th Order Rosenbrock ODE Solver
       8.17: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
       4.4.7.1.4.d: Checkpointing an Extended ODE Solver: Example and Test: ODE Solver
       4.4.7.1.3.d: Checkpointing an ODE Solver: Example and Test: ODE Solver
solver: 10.2.14: Taylor's Ode Solver: An Example and Test
        10.2.13: Taylor's Ode Solver: A Multi-Level Adolc Example and Test
        10.2.12: Taylor's Ode Solver: A Multi-Level AD Example and Test
        4.4.7.1.4: Checkpointing an Extended ODE Solver: Example and Test
        4.4.7.1.3: Checkpointing an ODE Solver: Example and Test
solvers 8.21: An Error Controller for Gear's Ode Solvers
        8.19: An Error Controller for ODE Solvers
some 12.10: Some Numerical AD Utilities
     8: Some General Purpose Utilities
sort 8.24: Returns Indices that Sort a Vector
sort: 8.24.1: Index Sort: Example and Test
sorting 8.d.d: Some General Purpose Utilities: Miscellaneous.Sorting Indices
source 12.9.8.a: Main Program For Comparing C and C++ Speed: Source Code
       12.9.7.e: Determine Amount of Time to Execute det_by_minor: Source Code
       12.9.6.d: Returns Elapsed Number of Seconds: Source Code
       12.9.5.d: Repeat det_by_minor Routine A Specified Number of Times: Source Code
       12.9.4.c: Correctness Test of det_by_minor Routine: Source Code
       12.9.3.f: Simulate a [0,1] Uniform Random Variate: Source Code
       12.9.2.e: Compute Determinant using Expansion by Minors: Source Code
       12.9.1.j: Determinant of a Minor: Source Code
       12.8.11.5.1.c: Define Matrix Multiply as a User Atomic Operation: Begin Source
       12.8.10.2.3.1: ODE Fitting Using Fast Representation
       12.8.10.2.2.1: ODE Fitting Using Simple Representation
       12.8.10.2.1.1: ODE Inverse Problem Definitions: Source Code
       12.8.10.2.3.e: ODE Fitting Using Fast Representation: Source
       12.8.10.2.2.f: ODE Fitting Using Simple Representation: Source
       12.8.10.2.1.h: An ODE Inverse Problem Example: Source
       11.2.10.1: Source: uniform_01
       11.2.10.h: Simulate a [0,1] Uniform Random Variate: Source Code
       11.2.9.2: Source: sparse_hes_fun
       11.2.9.m: Evaluate a Function That Has a Sparse Hessian: Source Code
       11.2.8.2: Source: sparse_jac_fun
       11.2.8.n: Evaluate a Function That Has a Sparse Jacobian: Source Code
       11.2.7.2: Source: ode_evaluate
       11.2.7.i: Evaluate a Function Defined in Terms of an ODE: Source Code
       11.2.6.2: Source: mat_sum_sq
       11.2.6.j: Sum Elements of a Matrix Times Itself: Source Code
       11.2.5.1: Source: det_grad_33
       11.2.5.h: Check Gradient of Determinant of 3 by 3 matrix: Source Code
       11.2.4.1: Source: det_33
       11.2.4.h: Check Determinant of 3 by 3 matrix: Source Code
       11.2.3.2: Source: det_by_minor
       11.2.3.i: Determinant Using Expansion by Minors: Source Code
       11.2.2.2: Source: det_of_minor
       11.2.2.m: Determinant of a Minor: Source Code
       11.2.1.2: Source: det_by_lu
       11.2.1.i: Determinant Using Expansion by Lu Factorization: Source Code
       11.2.d: Speed Testing Utilities: Source Code
       10.2.13.i: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: Source
       10.2.12.f: Taylor's Ode Solver: A Multi-Level AD Example and Test: Source
       10.2.10.1.b: Multiple Level of AD: Example and Test: Source
       10.2.4.1: Source Code for eigen_plugin.hpp
       9.3.g: ODE Inverse Problem Definitions: Source Code: Source
       9.3: ODE Inverse Problem Definitions: Source Code
       8.21.x: An Error Controller for Gear's Ode Solvers: Source Code
       8.20.m: An Arbitrary Order Gear Method: Source Code
       8.19.w: An Error Controller for ODE Solvers: Source Code
       8.18.o: A 3rd and 4th Order Rosenbrock ODE Solver: Source Code
       8.17.p: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: Source Code
       8.16.o: Multi-dimensional Romberg Integration: Source Code
       8.15.m: One Dimensional Romberg Integration: Source Code
       8.14.3.2: Source: LuInvert
       8.14.3.j: Invert an LU Factored Equation: Source
       8.14.2.2: Source: LuFactor
       8.14.2.n: LU Factorization of A Square Matrix: Source
       8.14.1.2: Source: LuSolve
       8.14.1.r: Compute Determinant and Solve Linear Equations: Source
       8.13.2: Source: Poly
       8.13.l: Evaluate a Polynomial or its Derivative: Source
       7.2.11.l: Specifications for A Team of AD Threads: Source
       7.2.10.6.l: Timing Test of Multi-Threaded Newton Method: Source
       7.2.10.5.n: A Multi-Threaded Newton's Method: Source
       7.2.10.4.e: Take Down Multi-threaded Newton Method: Source
       7.2.10.3.f: Do One Thread's Work for Multi-Threaded Newton Method: Source
       7.2.10.2.j: Set Up Multi-Threaded Newton Method: Source
       7.2.10.1.b: Common Variables Used by Multi-Threaded Newton Method: Source
       7.2.10.a: Multi-Threaded Newton Method Example / Test: Source File
       7.2.9.6.f: Run Multi-Threaded User Atomic Calculation: Source
       7.2.9.5.f: Multi-Threaded User Atomic Take Down: Source
       7.2.9.4.b: Multi-Threaded User Atomic Worker: Source
       7.2.9.3.f: Multi-Threaded User Atomic Set Up: Source
       7.2.9.2.b: Multi-Threaded User Atomic Common Information: Source
       7.2.9.1.f: Defines a User Atomic Operation that Computes Square Root: Source
       7.2.9.a: Multi-Threading User Atomic Example / Test: Source File
       7.2.8.6.i: Timing Test of Multi-Threaded Summation of 1/i: Source
       7.2.8.5.g: Multi-Threaded Implementation of Summation of 1/i: Source
       7.2.8.4.e: Take Down Multi-threading Sum of 1/i: Source
       7.2.8.3.f: Do One Thread's Work for Sum of 1/i: Source
       7.2.8.2.e: Set Up Multi-threading Sum of 1/i: Source
       7.2.8.1.b: Common Variables Used by Multi-threading Sum of 1/i: Source
       7.2.8.a: Multi-Threading Harmonic Summation Example / Test: Source File
       7.2.7.c: Using a Team of AD Threads: Example and Test: Source Code
       7.2.6.b: A Simple pthread AD: Example and Test: Source Code
       7.2.5.b: A Simple Boost Threading AD: Example and Test: Source Code
       7.2.4.b: A Simple OpenMP AD: Example and Test: Source Code
       7.2.3.b: A Simple Parallel Pthread Example and Test: Source Code
       7.2.2.b: A Simple Boost Thread Example and Test: Source Code
       7.2.1.b: A Simple OpenMP Example and Test: Source Code
       7.2.l: Run Multi-Threading Examples and Speed Tests: Source
       5.8.11.2: min_nso_quad Source Code
       5.8.11.1.c: abs_normal min_nso_quad: Example and Test: Source
       5.8.11.c: Non-Smooth Optimization Using Abs-normal Quadratic Approximations: Source
       5.8.10.2: abs_min_quad Source Code
       5.8.10.1.b: abs_min_quad: Example and Test: Source
       5.8.10.c: abs_normal: Minimize a Quadratic Abs-normal Approximation: Source
       5.8.9.2: qp_box Source Code
       5.8.9.1.b: abs_normal qp_box: Example and Test: Source
       5.8.9.c: abs_normal: Solve a Quadratic Program With Box Constraints: Source
       5.8.8.2: qp_interior Source Code
       5.8.8.1.b: abs_normal qp_interior: Example and Test: Source
       5.8.8.c: Solve a Quadratic Program Using Interior Point Method: Source
       5.8.7.2: min_nso_linear Source Code
       5.8.7.1.c: abs_normal min_nso_linear: Example and Test: Source
       5.8.7.c: Non-Smooth Optimization Using Abs-normal Linear Approximations: Source
       5.8.6.2: abs_min_linear Source Code
       5.8.6.1.b: abs_min_linear: Example and Test: Source
       5.8.6.c: abs_normal: Minimize a Linear Abs-normal Approximation: Source
       5.8.5.2: lp_box Source Code
       5.8.5.1.b: abs_normal lp_box: Example and Test: Source
       5.8.5.c: abs_normal: Solve a Linear Program With Box Constraints: Source
       5.8.4.2: simplex_method Source Code
       5.8.4.1.b: abs_normal simplex_method: Example and Test: Source
       5.8.4.c: abs_normal: Solve a Linear Program Using Simplex Method: Source
       5.8.3.2: abs_eval Source Code
       5.8.3.1.b: abs_eval: Example and Test: Source
       5.8.3.c: abs_normal: Evaluate First Order Approximation: Source
       5.8.1.1.b: abs_normal Getting Started: Example and Test: Source
       4.7.9.3.1.d: Using Adolc with Multiple Levels of Taping: Example and Test: Source
       4.3.6.1.b: Printing During Forward Mode: Example and Test: Source Code
       3.3.b: Correctness Tests For Exponential Approximation in Introduction: Source
       2.1.g: Download The CppAD Source Code: Source Code Control
       2.1: Download The CppAD Source Code
source: 11.2.10.1: Source: uniform_01
        11.2.9.2: Source: sparse_hes_fun
        11.2.8.2: Source: sparse_jac_fun
        11.2.7.2: Source: ode_evaluate
        11.2.6.2: Source: mat_sum_sq
        11.2.5.1: Source: det_grad_33
        11.2.4.1: Source: det_33
        11.2.3.2: Source: det_by_minor
        11.2.2.2: Source: det_of_minor
        11.2.1.2: Source: det_by_lu
        8.14.3.2: Source: LuInvert
        8.14.2.2: Source: LuFactor
        8.14.1.2: Source: LuSolve
        8.13.2: Source: Poly
sout 5.8.8.q: Solve a Quadratic Program Using Interior Point Method: sout
sparse 11.7.6: Sacado Speed: Sparse Hessian
       11.6.6: Fadbad Speed: Sparse Hessian
       11.5.7: CppAD Speed: Sparse Jacobian
       11.5.6: CppAD Speed: Sparse Hessian
       11.4.7: adolc Speed: Sparse Jacobian
       11.4.6: Adolc Speed: Sparse Hessian
       11.3.7: Double Speed: Sparse Jacobian
       11.3.6: Double Speed: Sparse Hessian
       11.2.9: Evaluate a Function That Has a Sparse Hessian
       11.2.8: Evaluate a Function That Has a Sparse Jacobian
       11.1.7: Speed Testing Sparse Jacobian
       11.1.6: Speed Testing Sparse Hessian
       9.f.b: Use Ipopt to Solve a Nonlinear Programming Problem: options.Sparse
       8.28: Sparse Matrix Row, Column, Value Representation
       8.d.g: Some General Purpose Utilities: Miscellaneous.Sparse Matrices
       5.6.5.2: Sparse Hessian Using Subgraphs and Jacobian: Example and Test
       5.6.5.1: Computing Sparse Jacobian Using Reverse Mode: Example and Test
       5.6.5: Compute Sparse Jacobians Using Subgraphs
       5.6.4.3: Subset of a Sparse Hessian: Example and Test
       5.6.4.2: Computing Sparse Hessian for a Subset of Variables
       5.6.4.1: Sparse Hessian: Example and Test
       5.6.4: Sparse Hessian
       5.6.3.1: Computing Sparse Hessian: Example and Test
       5.6.3: Computing Sparse Hessians
       5.6.2.1: Sparse Jacobian: Example and Test
       5.6.2: Sparse Jacobian
       5.6.1.2: Computing Sparse Jacobian Using Reverse Mode: Example and Test
       5.6.1.1: Computing Sparse Jacobian Using Forward Mode: Example and Test
       5.6.1: Computing Sparse Jacobians
       5.5.4: Jacobian Sparsity Pattern: Reverse Mode
       5.6: Calculating Sparse Derivatives
       2.2.2.4: ColPack: Sparse Hessian Example and Test
       2.2.2.3: ColPack: Sparse Hessian Example and Test
       2.2.2.2: ColPack: Sparse Jacobian Example and Test
       2.2.2.1: ColPack: Sparse Jacobian Example and Test
sparse_hes_fun 11.2.9.2: Source: sparse_hes_fun
               11.2.9.1: sparse_hes_fun: Example and Test
               11.2.9: Evaluate a Function That Has a Sparse Hessian
sparse_hes_fun: 11.2.9.1: sparse_hes_fun: Example and Test
sparse_jac_for 5.6.1.e: Computing Sparse Jacobians: sparse_jac_for
sparse_jac_fun 11.2.8.2: Source: sparse_jac_fun
               11.2.8.1: sparse_jac_fun: Example and Test
               11.2.8: Evaluate a Function That Has a Sparse Jacobian
sparse_jac_fun: 11.2.8.1: sparse_jac_fun: Example and Test
sparse_jac_rev 5.6.1.f: Computing Sparse Jacobians: sparse_jac_rev
sparse_jacobian 11.7.7: sacado Speed: sparse_jacobian
                11.6.7: fadbad Speed: sparse_jacobian
sparse_rc: 8.27.1: sparse_rc: Example and Test
sparse_rcv: 8.28.1: sparse_rcv: Example and Test
sparsity 12.6.b.c: The CppAD Wish List: Atomic.Sparsity
         12.4.j: Glossary: Sparsity Pattern
         11.1.g: Running the Speed Test Program: Sparsity Options
         8.27: Row and Column Index Sparsity Patterns
         5.5.11.1: Subgraph Dependency Sparsity Patterns: Example and Test
         5.5.11: Subgraph Dependency Sparsity Patterns
         5.5.10: Preferred Sparsity Patterns: Row and Column Indices: Example and Test
         5.5.8.1: Forward Mode Hessian Sparsity: Example and Test
         5.5.8: Hessian Sparsity Pattern: Forward Mode
         5.5.7.k: Forward Mode Hessian Sparsity Patterns: Sparsity for Entire Hessian
         5.5.7: Forward Mode Hessian Sparsity Patterns
         5.5.6.2: Sparsity Patterns For a Subset of Variables: Example and Test
         5.5.6.1: Reverse Mode Hessian Sparsity: Example and Test
         5.5.6.k: Hessian Sparsity Pattern: Reverse Mode: Entire Sparsity Pattern
         5.5.6: Hessian Sparsity Pattern: Reverse Mode
         5.5.5.l: Reverse Mode Hessian Sparsity Patterns: Sparsity for Entire Hessian
         5.5.5: Reverse Mode Hessian Sparsity Patterns
         5.5.4.1: Reverse Mode Jacobian Sparsity: Example and Test
         5.5.4.k: Jacobian Sparsity Pattern: Reverse Mode: Entire Sparsity Pattern
         5.5.4: Jacobian Sparsity Pattern: Reverse Mode
         5.5.3.k: Reverse Mode Jacobian Sparsity Patterns: Sparsity for Entire Jacobian
         5.5.3: Reverse Mode Jacobian Sparsity Patterns
         5.5.2.1: Forward Mode Jacobian Sparsity: Example and Test
         5.5.2.k: Jacobian Sparsity Pattern: Forward Mode: Entire Sparsity Pattern
         5.5.2: Jacobian Sparsity Pattern: Forward Mode
         5.5.1.k: Forward Mode Jacobian Sparsity Patterns: Sparsity for Entire Jacobian
         5.5.1: Forward Mode Jacobian Sparsity Patterns
         5.1.2.i.b: Construct an ADFun Object and Stop Recording: Assignment Operator.Sparsity Patterns
         5.6.b: Calculating Sparse Derivatives: Old Sparsity Patterns
         5.6.a: Calculating Sparse Derivatives: Preferred Sparsity Patterns
         5.5.b: Calculating Sparsity Patterns: Old Sparsity Patterns
         5.5.a: Calculating Sparsity Patterns: Preferred Sparsity Patterns
         5.5: Calculating Sparsity Patterns
         4.4.7.2.15.b: Tan and Tanh as User Atomic Operations: Example and Test: sparsity
         4.4.7.2.14: Atomic Sparsity with Set Patterns: Example and Test
         4.4.7.2.13.b: Reciprocal as an Atomic Operation: Example and Test: sparsity
         4.4.7.2.12.b: Atomic Euclidean Norm Squared: Example and Test: sparsity
         4.4.7.2.9: Atomic Reverse Hessian Sparsity Patterns
         4.4.7.2.8: Atomic Forward Hessian Sparsity Patterns
         4.4.7.2.7: Atomic Reverse Jacobian Sparsity Patterns
         4.4.7.2.6: Atomic Forward Jacobian Sparsity Patterns
         4.4.7.2.1.c.d: Atomic Function Constructor: atomic_base.sparsity
         4.4.7.2.e.d: User Defined Atomic AD Functions: Examples.Hessian Sparsity Patterns
         4.4.7.1.k: Checkpointing Functions: sparsity
         2.2.2: Including the ColPack Sparsity Calculations
         2.2: Using CMake to Configure CppAD
sparsity: 5.5.8.1: Forward Mode Hessian Sparsity: Example and Test
          5.5.7.1: Forward Mode Hessian Sparsity: Example and Test
          5.5.6.1: Reverse Mode Hessian Sparsity: Example and Test
          5.5.5.1: Reverse Mode Hessian Sparsity: Example and Test
          5.5.4.1: Reverse Mode Jacobian Sparsity: Example and Test
          5.5.3.1: Reverse Mode Jacobian Sparsity: Example and Test
          5.5.2.1: Forward Mode Jacobian Sparsity: Example and Test
          5.5.1.1: Forward Mode Jacobian Sparsity: Example and Test
          4.4.7.2.9.1: Atomic Reverse Hessian Sparsity: Example and Test
          4.4.7.2.8.1: Atomic Forward Hessian Sparsity: Example and Test
          4.4.7.2.7.1: Atomic Reverse Jacobian Sparsity: Example and Test
          4.4.7.2.6.1: Atomic Forward Jacobian Sparsity: Example and Test
special 12.6.g.b: The CppAD Wish List: Optimization.Special Operators
        12.3.1.c.d: The Theory of Forward Mode: Standard Math Functions.Special Cases
        5.3.3.j: Second Order Forward Mode: Derivative Values: Special Case
        5.3.2.h: First Order Forward Mode: Derivative Values: Special Case
        5.3.1.i: Zero Order Forward Mode: Function Values: Special Case
specifications 11.7.5.a: Sacado Speed: Second Derivative of a Polynomial: Specifications
               11.7.4.a: Sacado Speed: Gradient of Ode Solution: Specifications
               11.7.3.a: Sacado Speed: Matrix Multiplication: Specifications
               11.7.2.a: Sacado Speed: Gradient of Determinant Using Lu Factorization: Specifications
               11.7.1.a: Sacado Speed: Gradient of Determinant by Minor Expansion: Specifications
               11.6.5.a: Fadbad Speed: Second Derivative of a Polynomial: Specifications
               11.6.4.a: Fadbad Speed: Ode: Specifications
               11.6.3.a: Fadbad Speed: Matrix Multiplication: Specifications
               11.6.2.a: Fadbad Speed: Gradient of Determinant Using Lu Factorization: Specifications
               11.6.1.a: Fadbad Speed: Gradient of Determinant by Minor Expansion: Specifications
               11.5.7.a: CppAD Speed: Sparse Jacobian: Specifications
               11.5.6.a: CppAD Speed: Sparse Hessian: Specifications
               11.5.5.a: CppAD Speed: Second Derivative of a Polynomial: Specifications
               11.5.4.a: CppAD Speed: Gradient of Ode Solution: Specifications
               11.5.3.a: CppAD Speed: Matrix Multiplication: Specifications
               11.5.2.a: CppAD Speed: Gradient of Determinant Using Lu Factorization: Specifications
               11.5.1.a: CppAD Speed: Gradient of Determinant by Minor Expansion: Specifications
               11.4.7.a: adolc Speed: Sparse Jacobian: Specifications
               11.4.6.a: Adolc Speed: Sparse Hessian: Specifications
               11.4.5.a: Adolc Speed: Second Derivative of a Polynomial: Specifications
               11.4.4.a: Adolc Speed: Ode: Specifications
               11.4.3.a: Adolc Speed: Matrix Multiplication: Specifications
               11.4.2.a: Adolc Speed: Gradient of Determinant Using Lu Factorization: Specifications
               11.4.1.a: Adolc Speed: Gradient of Determinant by Minor Expansion: Specifications
               11.3.7.a: Double Speed: Sparse Jacobian: Specifications
               11.3.6.a: Double Speed: Sparse Hessian: Specifications
               11.3.5.a: Double Speed: Evaluate a Polynomial: Specifications
               11.3.4.a: Double Speed: Ode Solution: Specifications
               11.3.3.a: CppAD Speed: Matrix Multiplication (Double Version): Specifications
               11.3.2.a: Double Speed: Determinant Using Lu Factorization: Specifications
               11.3.1.a: Double Speed: Determinant by Minor Expansion: Specifications
               7.2.11: Specifications for A Team of AD Threads
specified 12.9.5: Repeat det_by_minor Routine A Specified Number of Times
          12.8.6.4: Get At Least A Specified Amount of Memory
          8.23.6: Get At Least A Specified Amount of Memory
          8.9.b: Definition of a Simple Vector: Elements of Specified Type
speed 12.9.8: Main Program For Comparing C and C++ Speed
      12.9: Compare Speed of C and C++
      12.8.10.3: Speed Test for Both Simple and Fast Representations
      12.8.6.4.g: Get At Least A Specified Amount of Memory: Allocation Speed
      12.6.i: The CppAD Wish List: Compilation Speed
      12.1.j: Frequently Asked Questions and Answers: Speed
      11.7.5: Sacado Speed: Second Derivative of a Polynomial
      11.7.4: Sacado Speed: Gradient of Ode Solution
      11.7.3: Sacado Speed: Matrix Multiplication
      11.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization
      11.7.1: Sacado Speed: Gradient of Determinant by Minor Expansion
      11.7: Speed Test Derivatives Using Sacado
      11.6.5: Fadbad Speed: Second Derivative of a Polynomial
      11.6.4: Fadbad Speed: Ode
      11.6.3: Fadbad Speed: Matrix Multiplication
      11.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization
      11.6.1: Fadbad Speed: Gradient of Determinant by Minor Expansion
      11.6: Speed Test Derivatives Using Fadbad
      11.5.7: CppAD Speed: Sparse Jacobian
      11.5.6: CppAD Speed: Sparse Hessian
      11.5.5: CppAD Speed: Second Derivative of a Polynomial
      11.5.4: CppAD Speed: Gradient of Ode Solution
      11.5.3: CppAD Speed: Matrix Multiplication
      11.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization
      11.5.1: CppAD Speed: Gradient of Determinant by Minor Expansion
      11.5: Speed Test Derivatives Using CppAD
      11.4.7: adolc Speed: Sparse Jacobian
      11.4.6: Adolc Speed: Sparse Hessian
      11.4.5: Adolc Speed: Second Derivative of a Polynomial
      11.4.4: Adolc Speed: Ode
      11.4.3: Adolc Speed: Matrix Multiplication
      11.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization
      11.4.1: Adolc Speed: Gradient of Determinant by Minor Expansion
      11.4: Speed Test of Derivatives Using Adolc
      11.3.7: Double Speed: Sparse Jacobian
      11.3.6: Double Speed: Sparse Hessian
      11.3.5: Double Speed: Evaluate a Polynomial
      11.3.4: Double Speed: Ode Solution
      11.3.3: CppAD Speed: Matrix Multiplication (Double Version)
      11.3.2: Double Speed: Determinant Using Lu Factorization
      11.3.1: Double Speed: Determinant by Minor Expansion
      11.3: Speed Test of Functions in Double
      11.2.6: Sum Elements of a Matrix Times Itself
      11.2.b: Speed Testing Utilities: Speed Utility Routines
      11.2.a: Speed Testing Utilities: Speed Main Program
      11.2: Speed Testing Utilities
      11.1.7: Speed Testing Sparse Jacobian
      11.1.6: Speed Testing Sparse Hessian
      11.1.5: Speed Testing Second Derivative of a Polynomial
      11.1.4: Speed Testing the Jacobian of Ode Solution
      11.1.3: Speed Testing Derivative of Matrix Multiply
      11.1.2: Speed Testing Gradient of Determinant by Minor Expansion
      11.1.1: Speed Testing Gradient of Determinant Using Lu Factorization
      11.1.i: Running the Speed Test Program: Speed Results
      11.1.d.b: Running the Speed Test Program: test.speed
      11.1: Running the Speed Test Program
      11: Speed Test an Operator Overloading AD Package
      10.3.2: Run the Speed Examples
      8.23.6.f: Get At Least A Specified Amount of Memory: Allocation Speed
      8.23.2.c: Setup thread_alloc For Use in Multi-Threading Environment: Speed
      8.5: Determine Amount of Time to Execute a Test
      8.4.1: Example Use of SpeedTest
      8.4: Run One Speed Test and Print Results
      8.3: Run One Speed Test and Return Results
      7.2.11.k: Specifications for A Team of AD Threads: Speed Test of Implementation
      7.2: Run Multi-Threading Examples and Speed Tests
      5.7.g: Optimize an ADFun Object Tape: Speed Testing
      5.7: Optimize an ADFun Object Tape
      5.3.7.d.a: Comparison Changes Between Taping and Zero Order Forward: count.Speed
      4.6.k: AD Vectors that Record Index Operations: Speed and Memory
      2.2.6.c: Including the Sacado Speed Tests: Speed Tests
      2.2.6: Including the Sacado Speed Tests
      2.2.4.c: Including the FADBAD Speed Tests: Speed Tests
      2.2.4: Including the FADBAD Speed Tests
      2.2.1.d: Including the ADOL-C Examples and Tests: Speed Tests
     compare C and C++ 12.9: Compare Speed of C and C++
speed: 11.7.7: sacado Speed: sparse_jacobian
       11.7.6: Sacado Speed: Sparse Hessian
       11.7.5: Sacado Speed: Second Derivative of a Polynomial
       11.7.4: Sacado Speed: Gradient of Ode Solution
       11.7.3: Sacado Speed: Matrix Multiplication
       11.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization
       11.7.1: Sacado Speed: Gradient of Determinant by Minor Expansion
       11.6.7: fadbad Speed: sparse_jacobian
       11.6.6: Fadbad Speed: Sparse Hessian
       11.6.5: Fadbad Speed: Second Derivative of a Polynomial
       11.6.4: Fadbad Speed: Ode
       11.6.3: Fadbad Speed: Matrix Multiplication
       11.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization
       11.6.1: Fadbad Speed: Gradient of Determinant by Minor Expansion
       11.5.7: CppAD Speed: Sparse Jacobian
       11.5.6: CppAD Speed: Sparse Hessian
       11.5.5: CppAD Speed: Second Derivative of a Polynomial
       11.5.4: CppAD Speed: Gradient of Ode Solution
       11.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization
       11.5.1: CppAD Speed: Gradient of Determinant by Minor Expansion
       11.4.7: adolc Speed: Sparse Jacobian
       11.4.6: Adolc Speed: Sparse Hessian
       11.4.5: Adolc Speed: Second Derivative of a Polynomial
       11.4.4: Adolc Speed: Ode
       11.4.3: Adolc Speed: Matrix Multiplication
       11.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization
       11.4.1: Adolc Speed: Gradient of Determinant by Minor Expansion
       11.3.7: Double Speed: Sparse Jacobian
       11.3.6: Double Speed: Sparse Hessian
       11.3.5: Double Speed: Evaluate a Polynomial
       11.3.4: Double Speed: Ode Solution
       11.3.3: CppAD Speed: Matrix Multiplication (Double Version)
       11.3.2: Double Speed: Determinant Using Lu Factorization
       11.3.1: Double Speed: Determinant by Minor Expansion
speed_test 8.3.1: speed_test: Example and Test
           8.3: Run One Speed Test and Return Results
speed_test: 8.3.1: speed_test: Example and Test
speedtest 8.4.1: Example Use of SpeedTest
sqrt 12.3.2.3: Square Root Function Reverse Mode Theory
     12.3.1.3: Square Root Function Forward Mode Theory
     4.4.2.11.1: The AD sqrt Function: Example and Test
     4.4.2.11: The Square Root Function: sqrt
square 12.10.3: LU Factorization of A Square Matrix and Stability Calculation
       12.3.2.3: Square Root Function Reverse Mode Theory
       12.3.1.3: Square Root Function Forward Mode Theory
       11.2.6.1: Sum of the Elements of the Square of a Matrix: Example and Test
       8.14.2: LU Factorization of A Square Matrix
       7.2.9.1: Defines a User Atomic Operation that Computes Square Root
       4.4.2.11: The Square Root Function: sqrt
square_root 7.2.9.6.d: Run Multi-Threaded User Atomic Calculation: square_root
            7.2.9.5.d: Multi-Threaded User Atomic Take Down: square_root
squared: 4.4.7.2.12: Atomic Euclidean Norm Squared: Example and Test
st 4.4.7.2.7.d.c: Atomic Reverse Jacobian Sparsity Patterns: Implementation.st
stability 12.10.3: LU Factorization of A Square Matrix and Stability Calculation
stack 10.2.15: Example Differentiating a Stack Machine Interpreter
standard 12.8.12.c.d: zdouble: An AD Base Type With Absolute Zero: Syntax.Standard Math
         12.3.2.c: The Theory of Reverse Mode: Standard Math Functions
         12.3.1.c: The Theory of Forward Mode: Standard Math Functions
         8.26: Union of Standard Sets
         4.7.9.5.h: Enable use of AD<Base> where Base is double: Unary Standard Math
         4.7.9.4.h: Enable use of AD<Base> where Base is float: Unary Standard Math
         4.7.9.3.k: Enable use of AD<Base> where Base is Adolc's adouble Type: Unary Standard Math
         4.7.9.1.o: Example AD<Base> Where Base Constructor Allocates Memory: Unary Standard Math
         4.7.5.b: Base Type Requirements for Standard Math Functions: Unary Standard Math
         4.7.5: Base Type Requirements for Standard Math Functions
         4.7.d: AD<Base> Requirements for a CppAD Base Type: Standard Base Types
         4.4.2: The Unary Standard Math Functions
start 10.2.10.c.d: Using Multiple Levels of AD: Procedure.Second Start AD< AD<double> >
      10.2.10.c.b: Using Multiple Levels of AD: Procedure.Start AD< AD<double> > Recording
      10.2.10.c.a: Using Multiple Levels of AD: Procedure.First Start AD<double>
      10.1: Getting Started Using CppAD to Compute Derivatives
      7.2.8.3.c: Do One Thread's Work for Sum of 1/i: start
      5.1.1.c: Declare Independent Variables and Start Recording: Start Recording
      5.1.1: Declare Independent Variables and Start Recording
      4.4.7.2.19.1.c: Matrix Multiply as an Atomic Operation: Start Class Definition
      4.4.7.2.18.2.b: Atomic Eigen Cholesky Factorization Class: Start Class Definition
      4.4.7.2.17.1.d: Atomic Eigen Matrix Inversion Class: Start Class Definition
      4.4.7.2.16.1.e: Atomic Eigen Matrix Multiply Class: Start Class Definition
      4.4.7.2.15.c: Tan and Tanh as User Atomic Operations: Example and Test: Start Class Definition
      4.4.7.2.14.c: Atomic Sparsity with Set Patterns: Example and Test: Start Class Definition
      4.4.7.2.13.c: Reciprocal as an Atomic Operation: Example and Test: Start Class Definition
      4.4.7.2.12.c: Atomic Euclidean Norm Squared: Example and Test: Start Class Definition
      4.4.7.2.11.b: Getting Started with Atomic Operations: Example and Test: Start Class Definition
      4.4.7.2.9.1.c: Atomic Reverse Hessian Sparsity: Example and Test: Start Class Definition
      4.4.7.2.8.1.c: Atomic Forward Hessian Sparsity: Example and Test: Start Class Definition
      4.4.7.2.7.1.c: Atomic Reverse Jacobian Sparsity: Example and Test: Start Class Definition
      4.4.7.2.6.1.c: Atomic Forward Jacobian Sparsity: Example and Test: Start Class Definition
      4.4.7.2.5.1.c: Atomic Reverse: Example and Test: Start Class Definition
      4.4.7.2.4.1.c: Atomic Forward: Example and Test: Start Class Definition
started 10.1: Getting Started Using CppAD to Compute Derivatives
        4.4.7.2.11: Getting Started with Atomic Operations: Example and Test
        4.4.7.2.e.a: User Defined Atomic AD Functions: Examples.Getting Started
started: 5.8.1.1: abs_normal Getting Started: Example and Test
state 5.3.8.e: Controlling Taylor Coefficients Memory Allocation: Original State
static 12.8.11.b.c: User Defined Atomic AD Functions: Syntax Function.Free Static Memory
       12.8.7: Memory Leak Detection
       4.4.7.2.10: Free Static Variables
status 12.8.10.t.a: Nonlinear Programming Using the CppAD Interface to Ipopt: solution.status
       9.m.a: Use Ipopt to Solve a Nonlinear Programming Problem: solution.status
std 2.2.7.b: Choosing the CppAD Test Vector Template Class: std
std::complex<double> 4.7.9.6: Enable use of AD<Base> where Base is std::complex<double>
std::numeric_limits 4.4.6.c: Numeric Limits For an AD and Base Types: std::numeric_limits
std::vector 12.8.9.f: Choosing The Vector Testing Template Class: std::vector
            10.5.e: Using The CppAD Test Vector Template Class: std::vector
stegun 12.5.a: Bibliography: Abramowitz and Stegun
step 8.19.f.a: An Error Controller for ODE Solvers: Method.step
     5.8.8.t: Solve a Quadratic Program Using Interior Point Method: Newton Step
     2.a.d: CppAD Download, Test, and Install Instructions: Instructions.Step 4: Installation
     2.a.c: CppAD Download, Test, and Install Instructions: Instructions.Step 3: Check
     2.a.b: CppAD Download, Test, and Install Instructions: Instructions.Step 2: Cmake
     2.a.a: CppAD Download, Test, and Install Instructions: Instructions.Step 1: Download
steps 5.4.3.2.c: Reverse Mode General Case (Checkpointing): Example and Test: Processing Steps
stiff 10.2.11: A Stiff Ode: Example and Test
      8.20: An Arbitrary Order Gear Method
      8.18: A 3rd and 4th Order Rosenbrock ODE Solver
stop 5.1.3: Stop Recording and Store Operation Sequence
     5.1.2: Construct an ADFun Object and Stop Recording
     5.1.1.d: Declare Independent Variables and Start Recording: Stop Recording
storage 12.10.3.d: LU Factorization of A Square Matrix and Stability Calculation: Matrix Storage
        10.3.3.c: Lu Factor and Solve with Recorded Pivoting: Storage Convention
        8.14.3.d: Invert an LU Factored Equation: Matrix Storage
        8.14.2.d: LU Factorization of A Square Matrix: Matrix Storage
        8.14.1.e: Compute Determinant and Solve Linear Equations: Matrix Storage
storage: 12.1.k: Frequently Asked Questions and Answers: Tape Storage: Disk or Memory
store 5.1.3: Stop Recording and Store Operation Sequence
stored 5.3.6: Number Taylor Coefficient Orders Currently Stored
stream 4.3.5: AD Output Stream Operator
       4.3.4: AD Output Stream Operator
string 9.f.c: Use Ipopt to Solve a Nonlinear Programming Problem: options.String
       8.25: Convert Certain Types to a String
       4.3.3: Convert An AD or Base Type to String
structure 12.2: Directory Structure
          2.2: Using CMake to Configure CppAD
subgraph 12.6.b.a: The CppAD Wish List: Atomic.Subgraph
         11.1.f.f: Running the Speed Test Program: Global Options.subgraph
         5.5.11.1: Subgraph Dependency Sparsity Patterns: Example and Test
         5.5.11: Subgraph Dependency Sparsity Patterns
subgraphs 5.6.5.2: Sparse Hessian Using Subgraphs and Jacobian: Example and Test
          5.6.5: Compute Sparse Jacobians Using Subgraphs
          5.4.4: Reverse Mode Using Subgraphs
subgraphs: 5.4.4.1: Computing Reverse Mode on Subgraphs: Example and Test
subset 5.6.5.j: Compute Sparse Jacobians Using Subgraphs: subset
       5.6.4.3: Subset of a Sparse Hessian: Example and Test
       5.6.4.2.d: Computing Sparse Hessian for a Subset of Variables: Subset
       5.6.4.2: Computing Sparse Hessian for a Subset of Variables
       5.6.4.p: Sparse Hessian: Subset Hessian
       5.6.4.f.c: Sparse Hessian: p.Column Subset
       5.6.3.o: Computing Sparse Hessians: Subset Hessian
       5.6.3.i.a: Computing Sparse Hessians: pattern.subset
       5.6.3.h: Computing Sparse Hessians: subset
       5.6.1.j: Computing Sparse Jacobians: subset
       5.5.6.2: Sparsity Patterns For a Subset of Variables: Example and Test
       5.2.5.1: Subset of Second Order Partials: Example and Test
subsets 2.3.c: Checking the CppAD Examples and Tests: Subsets of make check
subsparsity 11.1.g.c: Running the Speed Test Program: Sparsity Options.subsparsity
subtract 4.4.1.4.2: AD Compound Assignment Subtraction: Example and Test
         4.4.1.4: AD Compound Assignment Operators
         4.4.1.3.2: AD Binary Subtraction: Example and Test
         4.4.1.3: AD Binary Arithmetic Operators
subtraction 12.3.2.b.b: The Theory of Reverse Mode: Binary Operators.Subtraction
            12.3.1.b.b: The Theory of Forward Mode: Binary Operators.Subtraction
            4.4.1.4.j.b: AD Compound Assignment Operators: Derivative.Subtraction
            4.4.1.3.j.b: AD Binary Arithmetic Operators: Derivative.Subtraction
subtraction: 4.4.1.4.2: AD Compound Assignment Subtraction: Example and Test
             4.4.1.3.2: AD Binary Subtraction: Example and Test
subversion 2.1.g.b: Download The CppAD Source Code: Source Code Control.Subversion
suggestion 4.7.h.a: AD<Base> Requirements for a CppAD Base Type: Integer.Suggestion
sum 11.2.6.1: Sum of the Elements of the Square of a Matrix: Example and Test
    11.2.6: Sum Elements of a Matrix Times Itself
    7.2.8.5.e: Multi-Threaded Implementation of Summation of 1/i: sum
    7.2.8.4.d: Take Down Multi-threading Sum of 1/i: sum
    7.2.8.4: Take Down Multi-threading Sum of 1/i
    7.2.8.3: Do One Thread's Work for Sum of 1/i
    7.2.8.2: Set Up Multi-threading Sum of 1/i
    7.2.8.1: Common Variables Used by Multi-threading Sum of 1/i
    5.7.7: Example Optimization and Cumulative Sum Operations
summation 7.2.8.6: Timing Test of Multi-Threaded Summation of 1/i
          7.2.8.5: Multi-Threaded Implementation of Summation of 1/i
          7.2.8: Multi-Threading Harmonic Summation Example / Test
suppress 10.6: Suppress Suspect Implicit Conversion Warnings
suspect 10.6: Suppress Suspect Implicit Conversion Warnings
sweep 12.3.3.b: An Important Reverse Mode Identity: Reverse Sweep
      3.2.7.1: exp_eps: Verify Second Order Reverse Sweep
      3.2.6.1: exp_eps: Verify Second Order Forward Sweep
      3.2.5.1: exp_eps: Verify First Order Reverse Sweep
      3.2.4.1: exp_eps: Verify First Order Forward Sweep
      3.2.3.1: exp_eps: Verify Zero Order Forward Sweep
      3.2.7: exp_eps: Second Order Reverse Sweep
      3.2.6.d.g: exp_eps: Second Order Forward Mode: Operation Sequence.Sweep
      3.2.5: exp_eps: First Order Reverse Sweep
      3.2.4.c.f: exp_eps: First Order Forward Sweep: Operation Sequence.Sweep
      3.2.4: exp_eps: First Order Forward Sweep
      3.2.3.b.g: exp_eps: Operation Sequence and Zero Order Forward Sweep: Operation Sequence.Sweep
      3.2.3: exp_eps: Operation Sequence and Zero Order Forward Sweep
      3.1.7.1: exp_2: Verify Second Order Reverse Sweep
      3.1.6.1: exp_2: Verify Second Order Forward Sweep
      3.1.5.1: exp_2: Verify First Order Reverse Sweep
      3.1.4.1: exp_2: Verify First Order Forward Sweep
      3.1.3.1: exp_2: Verify Zero Order Forward Sweep
      3.1.6.d.g: exp_2: Second Order Forward Mode: Operation Sequence.Sweep
      3.1.4.d.f: exp_2: First Order Forward Mode: Operation Sequence.Sweep
      3.1.3.c.e: exp_2: Operation Sequence and Zero Order Forward Mode: Operation Sequence.Sweep
sweeps 3.2.8: exp_eps: CppAD Forward and Reverse Sweeps
       3.1.8: exp_2: CppAD Forward and Reverse Sweeps
symbol 12.1.i.a: Frequently Asked Questions and Answers: Namespace.Test Vector Preprocessor Symbol
symbols 12.11.d: CppAD Addons: Preprocessor Symbols
        6: CppAD API Preprocessor Symbols
        e: cppad-20171217: A Package for Differentiation of C++ Algorithms: Preprocessor Symbols
syntax 12.10.3.a: LU Factorization of A Square Matrix and Stability Calculation: Syntax
       12.10.2.a: Jacobian and Hessian of Optimal Values: Syntax
       12.10.1.a: Computing Jacobian and Hessian of Bender's Reduced Objective: Syntax
       12.9.7.a: Determine Amount of Time to Execute det_by_minor: Syntax
       12.9.6.a: Returns Elapsed Number of Seconds: Syntax
       12.9.5.a: Repeat det_by_minor Routine A Specified Number of Times: Syntax
       12.9.4.a: Correctness Test of det_by_minor Routine: Syntax
       12.9.3.a: Simulate a [0,1] Uniform Random Variate: Syntax
       12.9.2.a: Compute Determinant using Expansion by Minors: Syntax
       12.9.1.a: Determinant of a Minor: Syntax
       12.9.a: Compare Speed of C and C++: Syntax
       12.8.12.c: zdouble: An AD Base Type With Absolute Zero: Syntax
       12.8.11.5.1.a: Define Matrix Multiply as a User Atomic Operation: Syntax
       12.8.11.b: User Defined Atomic AD Functions: Syntax Function
       12.8.10.b: Nonlinear Programming Using the CppAD Interface to Ipopt: Syntax
       12.8.9.b: Choosing The Vector Testing Template Class: Syntax
       12.8.8.b: Machine Epsilon For AD Types: Syntax
       12.8.7.b: Memory Leak Detection: Syntax
       12.8.6.12.b: Set Maximum Number of Threads for omp_alloc Allocator: Syntax
       12.8.6.11.b: Check If A Memory Allocation is Efficient for Another Use: Syntax
       12.8.6.10.b: Return A Raw Array to The Available Memory for a Thread: Syntax
       12.8.6.9.b: Allocate Memory and Create A Raw Array: Syntax
       12.8.6.8.b: Amount of Memory Available for Quick Use by a Thread: Syntax
       12.8.6.7.b: Amount of Memory a Thread is Currently Using: Syntax
       12.8.6.6.b: Free Memory Currently Available for Quick Use by a Thread: Syntax
       12.8.6.5.b: Return Memory to omp_alloc: Syntax
       12.8.6.4.b: Get At Least A Specified Amount of Memory: Syntax
       12.8.6.3.b: Get the Current OpenMP Thread Number: Syntax
       12.8.6.2.b: Is The Current Execution in OpenMP Parallel Mode: Syntax
       12.8.6.1.b: Set and Get Maximum Number of Threads for omp_alloc Allocator: Syntax
       12.8.6.a: A Quick OpenMP Memory Allocator Used by CppAD: Syntax
       12.8.5.b: Routines That Track Use of New and Delete: Syntax
       12.8.4.b: OpenMP Parallel Setup: Syntax
       12.8.3.a: Comparison Changes During Zero Order Forward Mode: Syntax
       12.8.2.a: ADFun Object Deprecated Member Functions: Syntax
       11.4.8.a: Adolc Test Utility: Allocate and Free Memory For a Matrix: Syntax
       11.2.10.a: Simulate a [0,1] Uniform Random Variate: Syntax
       11.2.9.a: Evaluate a Function That Has a Sparse Hessian: Syntax
       11.2.8.a: Evaluate a Function That Has a Sparse Jacobian: Syntax
       11.2.7.a: Evaluate a Function Defined in Terms of an ODE: Syntax
       11.2.6.a: Sum Elements of a Matrix Times Itself: Syntax
       11.2.5.a: Check Gradient of Determinant of 3 by 3 matrix: Syntax
       11.2.4.a: Check Determinant of 3 by 3 matrix: Syntax
       11.2.3.a: Determinant Using Expansion by Minors: Syntax
       11.2.2.a: Determinant of a Minor: Syntax
       11.2.1.a: Determinant Using Expansion by Lu Factorization: Syntax
       11.1.8.a: Microsoft Version of Elapsed Number of Seconds: Syntax
       11.1.a: Running the Speed Test Program: Syntax
       10.6.a: Suppress Suspect Implicit Conversion Warnings: Syntax
       10.5.a: Using The CppAD Test Vector Template Class: Syntax
       10.3.3.a: Lu Factor and Solve with Recorded Pivoting: Syntax
       10.2.4.a: Enable Use of Eigen Linear Algebra Package with CppAD: Syntax
       9.a: Use Ipopt to Solve a Nonlinear Programming Problem: Syntax
       8.28.a: Sparse Matrix Row, Column, Value Representation: Syntax
       8.27.a: Row and Column Index Sparsity Patterns: Syntax
       8.26.a: Union of Standard Sets: Syntax
       8.25.a: Convert Certain Types to a String: Syntax
       8.24.a: Returns Indices that Sort a Vector: Syntax
       8.23.14.a: Free All Memory That Was Allocated for Use by thread_alloc: Syntax
       8.23.13.a: Deallocate An Array and Call Destructor for its Elements: Syntax
       8.23.12.a: Allocate An Array and Call Default Constructor for its Elements: Syntax
       8.23.11.a: Amount of Memory Available for Quick Use by a Thread: Syntax
       8.23.10.a: Amount of Memory a Thread is Currently Using: Syntax
       8.23.9.a: Control When Thread Alloc Retains Memory For Future Use: Syntax
       8.23.8.a: Free Memory Currently Available for Quick Use by a Thread: Syntax
       8.23.7.a: Return Memory to thread_alloc: Syntax
       8.23.6.a: Get At Least A Specified Amount of Memory: Syntax
       8.23.5.a: Get the Current Thread Number: Syntax
       8.23.4.a: Is The Current Execution in Parallel Mode: Syntax
       8.23.3.a: Get Number of Threads: Syntax
       8.23.2.a: Setup thread_alloc For Use in Multi-Threading Environment: Syntax
       8.23.a: A Fast Multi-Threading Memory Allocator: Syntax
       8.22.a: The CppAD::vector Template Class: Syntax
       8.21.a: An Error Controller for Gear's Ode Solvers: Syntax
       8.20.a: An Arbitrary Order Gear Method: Syntax
       8.19.a: An Error Controller for ODE Solvers: Syntax
       8.18.a: A 3rd and 4th Order Rosenbrock ODE Solver: Syntax
       8.17.a: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: Syntax
       8.16.a: Multi-dimensional Romberg Integration: Syntax
       8.15.a: One Dimensional Romberg Integration: Syntax
       8.14.3.a: Invert an LU Factored Equation: Syntax
       8.14.2.a: LU Factorization of A Square Matrix: Syntax
       8.14.1.a: Compute Determinant and Solve Linear Equations: Syntax
       8.13.a: Evaluate a Polynomial or its Derivative: Syntax
       8.12.a: The Integer Power Function: Syntax
       8.11.f.b: Obtain Nan or Determine if a Value is Nan: nan(zero).Syntax
       8.11.a: Obtain Nan or Determine if a Value is Nan: Syntax
       8.10.a: Check Simple Vector Concept: Syntax
       8.8.a: Check NumericType Class Concept: Syntax
       8.6.a: Object that Runs a Group of Tests: Syntax
       8.5.1.a: Returns Elapsed Number of Seconds: Syntax
       8.5.a: Determine Amount of Time to Execute a Test: Syntax
       8.4.a: Run One Speed Test and Print Results: Syntax
       8.3.a: Run One Speed Test and Return Results: Syntax
       8.2.a: Determine if Two Values Are Nearly Equal: Syntax
       8.1.2.a: CppAD Assertions During Execution: Syntax
       8.1.a: Replacing the CppAD Error Handler: Syntax
       7.2.11.a: Specifications for A Team of AD Threads: Syntax
       7.2.10.6.a: Timing Test of Multi-Threaded Newton Method: Syntax
       7.2.10.5.a: A Multi-Threaded Newton's Method: Syntax
       7.2.10.4.a: Take Down Multi-threaded Newton Method: Syntax
       7.2.10.3.a: Do One Thread's Work for Multi-Threaded Newton Method: Syntax
       7.2.10.2.a: Set Up Multi-Threaded Newton Method: Syntax
       7.2.9.7.a: Timing Test for Multi-Threaded User Atomic Calculation: Syntax
       7.2.9.6.a: Run Multi-Threaded User Atomic Calculation: Syntax
       7.2.9.5.a: Multi-Threaded User Atomic Take Down: Syntax
       7.2.9.3.a: Multi-Threaded User Atomic Set Up: Syntax
       7.2.9.1.a: Defines a User Atomic Operation that Computes Square Root: Syntax
       7.2.8.6.a: Timing Test of Multi-Threaded Summation of 1/i: Syntax
       7.2.8.5.a: Multi-Threaded Implementation of Summation of 1/i: Syntax
       7.2.8.4.a: Take Down Multi-threading Sum of 1/i: Syntax
       7.2.8.3.a: Do One Thread's Work for Sum of 1/i: Syntax
       7.2.8.2.a: Set Up Multi-threading Sum of 1/i: Syntax
       7.1.a: Enable AD Calculations During Parallel Mode: Syntax
       5.10.a: Check an ADFun Object For Nan Results: Syntax
       5.9.a: Check an ADFun Sequence of Operations: Syntax
       5.8.11.a: Non-Smooth Optimization Using Abs-normal Quadratic Approximations: Syntax
       5.8.10.a: abs_normal: Minimize a Quadratic Abs-normal Approximation: Syntax
       5.8.9.a: abs_normal: Solve a Quadratic Program With Box Constraints: Syntax
       5.8.8.a: Solve a Quadratic Program Using Interior Point Method: Syntax
       5.8.7.a: Non-Smooth Optimization Using Abs-normal Linear Approximations: Syntax
       5.8.6.a: abs_normal: Minimize a Linear Abs-normal Approximation: Syntax
       5.8.5.a: abs_normal: Solve a Linear Program With Box Constraints: Syntax
       5.8.4.a: abs_normal: Solve a Linear Program Using Simplex Method: Syntax
       5.8.3.a: abs_normal: Evaluate First Order Approximation: Syntax
       5.8.2.a: abs_normal: Print a Vector or Matrix: Syntax
       5.8.1.a: Create An Abs-normal Representation of a Function: Syntax
       5.7.a: Optimize an ADFun Object Tape: Syntax
       5.6.5.a: Compute Sparse Jacobians Using Subgraphs: Syntax
       5.6.4.a: Sparse Hessian: Syntax
       5.6.3.a: Computing Sparse Hessians: Syntax
       5.6.2.a: Sparse Jacobian: Syntax
       5.6.1.a: Computing Sparse Jacobians: Syntax
       5.5.11.a: Subgraph Dependency Sparsity Patterns: Syntax
       5.5.8.a: Hessian Sparsity Pattern: Forward Mode: Syntax
       5.5.7.a: Forward Mode Hessian Sparsity Patterns: Syntax
       5.5.6.a: Hessian Sparsity Pattern: Reverse Mode: Syntax
       5.5.5.a: Reverse Mode Hessian Sparsity Patterns: Syntax
       5.5.4.a: Jacobian Sparsity Pattern: Reverse Mode: Syntax
       5.5.3.a: Reverse Mode Jacobian Sparsity Patterns: Syntax
       5.5.2.a: Jacobian Sparsity Pattern: Forward Mode: Syntax
       5.5.1.a: Forward Mode Jacobian Sparsity Patterns: Syntax
       5.4.4.a: Reverse Mode Using Subgraphs: Syntax
       5.4.3.a: Any Order Reverse Mode: Syntax
       5.4.2.a: Second Order Reverse Mode: Syntax
       5.4.1.a: First Order Reverse Mode: Syntax
       5.3.9.a: Number of Variables that Can be Skipped: Syntax
       5.3.8.a: Controlling Taylor Coefficients Memory Allocation: Syntax
       5.3.7.a: Comparison Changes Between Taping and Zero Order Forward: Syntax
       5.3.6.a: Number Taylor Coefficient Orders Currently Stored: Syntax
       5.3.5.a: Multiple Directions Forward Mode: Syntax
       5.3.4.a: Multiple Order Forward Mode: Syntax
       5.3.3.a: Second Order Forward Mode: Derivative Values: Syntax
       5.3.2.a: First Order Forward Mode: Derivative Values: Syntax
       5.3.1.a: Zero Order Forward Mode: Function Values: Syntax
       5.2.6.a: Reverse Mode Second Partial Derivative Driver: Syntax
       5.2.5.a: Forward Mode Second Partial Derivative Driver: Syntax
       5.2.4.a: First Order Derivative: Driver Routine: Syntax
       5.2.3.a: First Order Partial Derivative: Driver Routine: Syntax
       5.2.2.a: Hessian: Easy Driver: Syntax
       5.2.1.a: Jacobian: Driver Routine: Syntax
       5.1.5.a: ADFun Sequence Properties: Syntax
       5.1.4.a: Abort Recording of an Operation Sequence: Syntax
       5.1.3.a: Stop Recording and Store Operation Sequence: Syntax
       5.1.2.a: Construct an ADFun Object and Stop Recording: Syntax
       5.1.1.a: Declare Independent Variables and Start Recording: Syntax
       4.7.9.3.a: Enable use of AD<Base> where Base is Adolc's adouble Type: Syntax
       4.7.8.a: Base Type Requirements for Hash Coding Values: Syntax
       4.7.a: AD<Base> Requirements for a CppAD Base Type: Syntax
       4.6.a: AD Vectors that Record Index Operations: Syntax
       4.5.5.a: Check if Two Values are Identically Equal: Syntax
       4.5.4.a: Is an AD Object a Parameter or Variable: Syntax
       4.5.3.a: AD Boolean Functions: Syntax
       4.5.2.a: Compare AD and Base Objects for Nearly Equal: Syntax
       4.5.1.a: AD Binary Comparison Operators: Syntax
       4.4.7.2.10.a: Free Static Variables: Syntax
       4.4.7.2.9.a: Atomic Reverse Hessian Sparsity Patterns: Syntax
       4.4.7.2.8.a: Atomic Forward Hessian Sparsity Patterns: Syntax
       4.4.7.2.7.a: Atomic Reverse Jacobian Sparsity Patterns: Syntax
       4.4.7.2.6.a: Atomic Forward Jacobian Sparsity Patterns: Syntax
       4.4.7.2.5.a: Atomic Reverse Mode: Syntax
       4.4.7.2.4.a: Atomic Forward Mode: Syntax
       4.4.7.2.3.a: Using AD Version of Atomic Function: Syntax
       4.4.7.2.2.a: Set Atomic Function Options: Syntax
       4.4.7.2.1.a: Atomic Function Constructor: Syntax
       4.4.7.2.a: User Defined Atomic AD Functions: Syntax
       4.4.7.1.a: Checkpointing Functions: Syntax
       4.4.6.a: Numeric Limits For an AD and Base Types: Syntax
       4.4.5.a: Discrete AD Functions: Syntax
       4.4.4.a: AD Conditional Expressions: Syntax
       4.4.3.3.a: Absolute Zero Multiplication: Syntax
       4.4.3.2.a: The AD Power Function: Syntax
       4.4.3.1.a: AD Two Argument Inverse Tangent Function: Syntax
       4.4.2.21.a: The Sign: sign: Syntax
       4.4.2.20.a: The Logarithm of One Plus Argument: log1p: Syntax
       4.4.2.19.a: The Exponential Function Minus One: expm1: Syntax
       4.4.2.18.a: The Error Function: Syntax
       4.4.2.17.a: The Inverse Hyperbolic Tangent Function: atanh: Syntax
       4.4.2.16.a: The Inverse Hyperbolic Sine Function: asinh: Syntax
       4.4.2.15.a: The Inverse Hyperbolic Cosine Function: acosh: Syntax
       4.4.2.14.a: AD Absolute Value Functions: abs, fabs: Syntax
       4.4.2.13.a: The Hyperbolic Tangent Function: tanh: Syntax
       4.4.2.12.a: The Tangent Function: tan: Syntax
       4.4.2.11.a: The Square Root Function: sqrt: Syntax
       4.4.2.10.a: The Hyperbolic Sine Function: sinh: Syntax
       4.4.2.9.a: The Sine Function: sin: Syntax
       4.4.2.8.a: The Base 10 Logarithm Function: log10: Syntax
       4.4.2.7.a: The Logarithm Function: log: Syntax
       4.4.2.6.a: The Exponential Function: exp: Syntax
       4.4.2.5.a: The Hyperbolic Cosine Function: cosh: Syntax
       4.4.2.4.a: The Cosine Function: cos: Syntax
       4.4.2.3.a: Inverse Tangent Function: atan: Syntax
       4.4.2.2.a: Inverse Sine Function: asin: Syntax
       4.4.2.1.a: Inverse Cosine Function: acos: Syntax
       4.4.2.a: The Unary Standard Math Functions: Syntax
       4.4.1.4.a: AD Compound Assignment Operators: Syntax
       4.4.1.3.a: AD Binary Arithmetic Operators: Syntax
       4.4.1.2.a: AD Unary Minus Operator: Syntax
       4.4.1.1.a: AD Unary Plus Operator: Syntax
       4.3.7.a: Convert an AD Variable to a Parameter: Syntax
       4.3.6.a: Printing AD Values During Forward Mode: Syntax
       4.3.5.a: AD Output Stream Operator: Syntax
       4.3.4.a: AD Input Stream Operator: Syntax
       4.3.3.a: Convert An AD or Base Type to String: Syntax
       4.3.2.a: Convert From AD to Integer: Syntax
       4.3.1.a: Convert From an AD Type to its Base Type: Syntax
       4.2.a: AD Assignment Operator: Syntax
       4.1.a: AD Constructors: Syntax
       3.2.a: An Epsilon Accurate Exponential Approximation: Syntax
       3.1.a: Second Order Exponential Approximation: Syntax
       2.2.6.1.a: Download and Install Sacado in Build Directory: Syntax
       2.2.5.1.a: Download and Install Ipopt in Build Directory: Syntax
       2.2.4.1.a: Download and Install Fadbad in Build Directory: Syntax
       2.2.3.1.a: Download and Install Eigen in Build Directory: Syntax
       2.2.2.5.a: Download and Install ColPack in Build Directory: Syntax
       2.2.1.1.a: Download and Install Adolc in Build Directory: Syntax
       a: cppad-20171217: A Package for Differentiation of C++ Algorithms: Syntax
systems 8.5.1.d: Returns Elapsed Number of Seconds: Microsoft Systems
T
Taylor 10.2.14: Taylor's Ode Solver: An Example and Test
take 7.2.10.4: Take Down Multi-threaded Newton Method
     7.2.9.5: Multi-Threaded User Atomic Take Down
     7.2.8.4: Take Down Multi-threading Sum of 1/i
tan 12.8.11.4: Old Tan and Tanh as User Atomic Operations: Example and Test
    12.3.2.8: Tangent and Hyperbolic Tangent Reverse Mode Theory
    12.3.1.8: Tangent and Hyperbolic Tangent Forward Taylor Polynomial Theory
    4.4.7.2.15: Tan and Tanh as User Atomic Operations: Example and Test
    4.4.3.1: AD Two Argument Inverse Tangent Function
    4.4.2.12.1: The AD tan Function: Example and Test
    4.4.2.12: The Tangent Function: tan
tangent 12.8.11.t.c: User Defined Atomic AD Functions: Example.Tangent Function
        12.3.2.8: Tangent and Hyperbolic Tangent Reverse Mode Theory
        12.3.2.5: Inverse Tangent and Hyperbolic Tangent Reverse Mode Theory
        12.3.1.8: Tangent and Hyperbolic Tangent Forward Taylor Polynomial Theory
        12.3.1.5: Inverse Tangent and Hyperbolic Tangent Forward Mode Theory
        4.4.7.2.15: Tan and Tanh as User Atomic Operations: Example and Test
        4.4.3.1: AD Two Argument Inverse Tangent Function
        4.4.2.17: The Inverse Hyperbolic Tangent Function: atanh
        4.4.2.13: The Hyperbolic Tangent Function: tanh
        4.4.2.12: The Tangent Function: tan
        4.4.2.3: Inverse Tangent Function: atan
tanh 12.8.11.4: Old Tan and Tanh as User Atomic Operations: Example and Test
     4.4.7.2.15: Tan and Tanh as User Atomic Operations: Example and Test
     4.4.2.13.1: The AD tanh Function: Example and Test
     4.4.2.13: The Hyperbolic Tangent Function: tanh
tape 12.4.k: Glossary: Tape
     12.1.k: Frequently Asked Questions and Answers: Tape Storage: Disk or Memory
     5.7: Optimize an ADFun Object Tape
     5.1.4: Abort Recording of an Operation Sequence
     5.1.3: Stop Recording and Store Operation Sequence
     5.1.2: Construct an ADFun Object and Stop Recording
     4.6: AD Vectors that Record Index Operations
     4.4.5.3: Interpolation With Retaping: Example and Test
     4.4.5.2: Interpolation Without Retaping: Example and Test
     4.4.5.1: Taping Array Index Operation: Example and Test
     2.2: Using CMake to Configure CppAD
tape_addr_type 12.8.13.t: Autotools Unix Test and Installation: tape_addr_type
tape_id_type 12.8.13.u: Autotools Unix Test and Installation: tape_id_type
taping 12.6.g.a: The CppAD Wish List: Optimization.Taping
       5.3.7: Comparison Changes Between Taping and Zero Order Forward
       5.1.3.g: Stop Recording and Store Operation Sequence: Taping
       4.4.5.1: Taping Array Index Operation: Example and Test
       4.3.7.1: Convert an AD Variable to a Parameter: Example and Test
       4.3.7: Convert an AD Variable to a Parameter
taping: 4.7.9.3.1: Using Adolc with Multiple Levels of Taping: Example and Test
target 8.28.g: Sparse Matrix Row, Column, Value Representation: target
       8.27.e: Row and Column Index Sparsity Patterns: target
taylor 12.4.l: Glossary: Taylor Coefficient
       12.3.2.a: The Theory of Reverse Mode: Taylor Notation
       12.3.1.9.b: Error Function Forward Taylor Polynomial Theory: Taylor Coefficients Recursion
       12.3.1.9: Error Function Forward Taylor Polynomial Theory
       12.3.1.8.b: Tangent and Hyperbolic Tangent Forward Taylor Polynomial Theory: Taylor Coefficients Recursion
       12.3.1.8: Tangent and Hyperbolic Tangent Forward Taylor Polynomial Theory
       12.3.1.7.b: Inverse Cosine and Hyperbolic Cosine Forward Mode Theory: Taylor Coefficients Recursion
       12.3.1.6.b: Inverse Sine and Hyperbolic Sine Forward Mode Theory: Taylor Coefficients Recursion
       12.3.1.5.b: Inverse Tangent and Hyperbolic Tangent Forward Mode Theory: Taylor Coefficients Recursion
       12.3.1.2.b: Logarithm Function Forward Mode Theory: Taylor Coefficients Recursion
       12.3.1.1.b: Exponential Function Forward Mode Theory: Taylor Coefficients Recursion
       12.3.1.c.b: The Theory of Forward Mode: Standard Math Functions.Taylor Coefficients Recursion Formula
       12.3.1.a: The Theory of Forward Mode: Taylor Notation
       5.4.3.1.a: Third Order Reverse Mode: Example and Test: Taylor Coefficients
       5.3.8.1: Controlling Taylor Coefficient Memory Allocation: Example and Test
       5.3.8: Controlling Taylor Coefficients Memory Allocation
       5.3.6: Number Taylor Coefficient Orders Currently Stored
       5.1.2.i.a: Construct an ADFun Object and Stop Recording: Assignment Operator.Taylor Coefficients
       4.4.7.2.18.1.b.b: AD Theory for Cholesky Factorization: Notation.Taylor Coefficient
taylor's 10.2.14: Taylor's Ode Solver: An Example and Test
         10.2.13.e: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: Taylor's Method Using AD
         10.2.13: Taylor's Ode Solver: A Multi-Level Adolc Example and Test
         10.2.12.e: Taylor's Ode Solver: A Multi-Level AD Example and Test: Taylor's Method Using AD
         10.2.12: Taylor's Ode Solver: A Multi-Level AD Example and Test
taylor_size 12.8.2.g: ADFun Object Deprecated Member Functions: taylor_size
team 7.2.11.3: Pthread Implementation of a Team of AD Threads
     7.2.11.2: Boost Thread Implementation of a Team of AD Threads
     7.2.11.1: OpenMP Implementation of a Team of AD Threads
     7.2.11: Specifications for A Team of AD Threads
     7.2.7: Using a Team of AD Threads: Example and Test
     7.2.k: Run Multi-Threading Examples and Speed Tests: Team Implementations
team_create 7.2.11.d: Specifications for A Team of AD Threads: team_create
team_destroy 7.2.11.f: Specifications for A Team of AD Threads: team_destroy
team_example 7.2.h: Run Multi-Threading Examples and Speed Tests: team_example
team_name 7.2.11.g: Specifications for A Team of AD Threads: team_name
team_work 7.2.11.e: Specifications for A Team of AD Threads: team_work
template 12.8.9: Choosing The Vector Testing Template Class
         10.5: Using The CppAD Test Vector Template Class
         10.d: Examples: The CppAD Test Vector Template Class
         8.22.1: CppAD::vector Template Class: Example and Test
         8.22: The CppAD::vector Template Class
         8.13: Evaluate a Polynomial or its Derivative
         8.9.1: Simple Vector Template Class: Example and Test
         8.9.a: Definition of a Simple Vector: Template Class Requirements
         8.d.b: Some General Purpose Utilities: Miscellaneous.The CppAD Vector Template Class
         2.2.7: Choosing the CppAD Test Vector Template Class
terms 11.2.7: Evaluate a Function Defined in Terms of an ODE
test 12.10.3.1: LuRatio: Example and Test
     12.10.2.1: opt_val_hes: Example and Test
     12.10.1.1: BenderQuad: Example and Test
     12.9.4: Correctness Test of det_by_minor Routine
     12.8.13: Autotools Unix Test and Installation
     12.8.12.1: zdouble: Example and Test
     12.8.11.5.1: Define Matrix Multiply as a User Atomic Operation
     12.8.11.5: Old Matrix Multiply as a User Atomic Operation: Example and Test
     12.8.11.4: Old Tan and Tanh as User Atomic Operations: Example and Test
     12.8.11.1: Old Atomic Operation Reciprocal: Example and Test
     12.8.10.3: Speed Test for Both Simple and Fast Representations
     12.8.10.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
     12.8.9: Choosing The Vector Testing Template Class
     12.8.6.13: OpenMP Memory Allocator: Example and Test
     12.8.5.1: Tracking Use of New and Delete: Example and Test
     12.1.i.a: Frequently Asked Questions and Answers: Namespace.Test Vector Preprocessor Symbol
     11.7: Speed Test Derivatives Using Sacado
     11.6: Speed Test Derivatives Using Fadbad
     11.5: Speed Test Derivatives Using CppAD
     11.4.8: Adolc Test Utility: Allocate and Free Memory For a Matrix
     11.4: Speed Test of Derivatives Using Adolc
     11.3: Speed Test of Functions in Double
     11.2.9.1: sparse_hes_fun: Example and test
     11.2.8.1: sparse_jac_fun: Example and test
     11.2.7.1: ode_evaluate: Example and test
     11.2.6.1: Sum of the Elements of the Square of a Matrix: Example and Test
     11.2.6: Sum Elements of a Matrix Times Itself
     11.2.3.1: Determinant Using Expansion by Minors: Example and Test
     11.2.2.1: Determinant of a Minor: Example and Test
     11.2.1.1: Determinant Using Lu Factorization: Example and Test
     11.1.7: Speed Testing Sparse Jacobian
     11.1.6: Speed Testing Sparse Hessian
     11.1.5: Speed Testing Second Derivative of a Polynomial
     11.1.4: Speed Testing the Jacobian of Ode Solution
     11.1.3: Speed Testing Derivative of Matrix Multiply
     11.1.2: Speed Testing Gradient of Determinant by Minor Expansion
     11.1.1: Speed Testing Gradient of Determinant Using Lu Factorization
     11.1.d: Running the Speed Test Program: test
     11.1: Running the Speed Test Program
     11: Speed Test an Operator Overloading AD Package
     10.5: Using The CppAD Test Vector Template Class
     10.3.3.1: Lu Factor and Solve With Recorded Pivoting: Example and Test
     10.2.15: Example Differentiating a Stack Machine Interpreter
     10.2.14: Taylor's Ode Solver: An Example and Test
     10.2.13: Taylor's Ode Solver: A Multi-Level Adolc Example and Test
     10.2.12: Taylor's Ode Solver: A Multi-Level AD Example and Test
     10.2.11: A Stiff Ode: Example and Test
     10.2.10.1: Multiple Level of AD: Example and Test
     10.2.9: Gradient of Determinant Using Lu Factorization: Example and Test
     10.2.8: Gradient of Determinant Using Expansion by Minors: Example and Test
     10.2.7: Interfacing to C: Example and Test
     10.2.6: Gradient of Determinant Using LU Factorization: Example and Test
     10.2.5: Gradient of Determinant Using Expansion by Minors: Example and Test
     10.2.4.3: Using Eigen To Compute Determinant: Example and Test
     10.2.4.2: Using Eigen Arrays: Example and Test
     10.2.3: Differentiate Conjugate Gradient Algorithm: Example and Test
     10.2.2: Example and Test Linking CppAD to Languages Other than C++
     10.2.1: Creating Your Own Interface to an ADFun Object
     10.d: Examples: The CppAD Test Vector Template Class
     9.2: Nonlinear Programming Retaping: Example and Test
     9.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
     8.28.1: sparse_rcv: Example and Test
     8.27.1: sparse_rc: Example and Test
     8.26.1: Set Union: Example and Test
     8.25.1: to_string: Example and Test
     8.24.1: Index Sort: Example and Test
     8.23.1: Fast Multi-Threading Memory Allocator: Example and Test
     8.22.2: CppAD::vectorBool Class: Example and Test
     8.22.1: CppAD::vector Template Class: Example and Test
     8.21.1: OdeGearControl: Example and Test
     8.20.1: OdeGear: Example and Test
     8.19.2: OdeErrControl: Example and Test Using Maxabs Argument
     8.19.1: OdeErrControl: Example and Test
     8.18.1: Rosen34: Example and Test
     8.17.2: Runge45: Example and Test
     8.17.1: Runge45: Example and Test
     8.16.1: Multi-dimensional Romberg Integration: Example and Test
     8.15.1: One Dimensional Romberg Integration: Example and Test
     8.14.3.1: LuInvert: Example and Test
     8.14.2.1: LuFactor: Example and Test
     8.14.1.1: LuSolve With Complex Arguments: Example and Test
     8.13.1: Polynomial Evaluation: Example and Test
     8.12.1: The Pow Integer Exponent: Example and Test
     8.11.1: nan: Example and Test
     8.10.1: The CheckSimpleVector Function: Example and Test
     8.9.1: Simple Vector Template Class: Example and Test
     8.8.1: The CheckNumericType Function: Example and Test
     8.7.1: The NumericType: Example and Test
     8.6.e: Object that Runs a Group of Tests: test
     8.5.2: time_test: Example and test
     8.5.1.1: Elapsed Seconds: Example and Test
     8.5.e: Determine Amount of Time to Execute a Test: test
     8.5: Determine Amount of Time to Execute a Test
     8.4.1: Example Use of SpeedTest
     8.3.1: speed_test: Example and test
     8.4.e: Run One Speed Test and Print Results: Test
     8.4: Run One Speed Test and Print Results
     8.3.f: Run One Speed Test and Return Results: test
     8.3: Run One Speed Test and Return Results
     8.2.1: NearEqual Function: Example and Test
     8.1.1: Replacing The CppAD Error Handler: Example and Test
     7.2.11.k: Specifications for A Team of AD Threads: Speed Test of Implementation
     7.2.10.6: Timing Test of Multi-Threaded Newton Method
     7.2.10: Multi-Threaded Newton Method Example / Test
     7.2.9.7: Timing Test for Multi-Threaded User Atomic Calculation
     7.2.9: Multi-Threading User Atomic Example / Test
     7.2.8.6: Timing Test of Multi-Threaded Summation of 1/i
     7.2.8: Multi-Threading Harmonic Summation Example / Test
     7.2.7: Using a Team of AD Threads: Example and Test
     7.2.6: A Simple pthread AD: Example and Test
     7.2.5: A Simple Boost Threading AD: Example and Test
     7.2.4: A Simple OpenMP AD: Example and Test
     7.2.3: A Simple Parallel Pthread Example and Test
     7.2.2: A Simple Boost Thread Example and Test
     7.2.1: A Simple OpenMP Example and Test
     5.10.1: ADFun Checking For Nan: Example and Test
     5.9.1: ADFun Check and Re-Tape: Example and Test
     5.8.11.1: abs_normal min_nso_quad: Example and Test
     5.8.10.1: abs_min_quad: Example and Test
     5.8.9.1: abs_normal qp_box: Example and Test
     5.8.8.1: abs_normal qp_interior: Example and Test
     5.8.7.1: abs_normal min_nso_linear: Example and Test
     5.8.6.1: abs_min_linear: Example and Test
     5.8.5.1: abs_normal lp_box: Example and Test
     5.8.4.1: abs_normal simplex_method: Example and Test
     5.8.3.1: abs_eval: Example and Test
     5.8.1.1: abs_normal Getting Started: Example and Test
     5.6.5.2: Sparse Hessian Using Subgraphs and Jacobian: Example and Test
     5.6.5.1: Computing Sparse Jacobian Using Reverse Mode: Example and Test
     5.6.4.3: Subset of a Sparse Hessian: Example and Test
     5.6.4.1: Sparse Hessian: Example and Test
     5.6.3.1: Computing Sparse Hessian: Example and Test
     5.6.2.1: Sparse Jacobian: Example and Test
     5.6.1.2: Computing Sparse Jacobian Using Reverse Mode: Example and Test
     5.6.1.1: Computing Sparse Jacobian Using Forward Mode: Example and Test
     5.5.11.1: Subgraph Dependency Sparsity Patterns: Example and Test
     5.5.10: Preferred Sparsity Patterns: Row and Column Indices: Example and Test
     5.5.9: Computing Dependency: Example and Test
     5.5.8.1: Forward Mode Hessian Sparsity: Example and Test
     5.5.7.1: Forward Mode Hessian Sparsity: Example and Test
     5.5.6.2: Sparsity Patterns For a Subset of Variables: Example and Test
     5.5.6.1: Reverse Mode Hessian Sparsity: Example and Test
     5.5.5.1: Reverse Mode Hessian Sparsity: Example and Test
     5.5.4.1: Reverse Mode Jacobian Sparsity: Example and Test
     5.5.3.1: Reverse Mode Jacobian Sparsity: Example and Test
     5.5.2.1: Forward Mode Jacobian Sparsity: Example and Test
     5.5.1.1: Forward Mode Jacobian Sparsity: Example and Test
     5.4.4.1: Computing Reverse Mode on Subgraphs: Example and Test
     5.4.3.2: Reverse Mode General Case (Checkpointing): Example and Test
     5.4.3.1: Third Order Reverse Mode: Example and Test
     5.4.2.2: Hessian Times Direction: Example and Test
     5.4.2.1: Second Order Reverse Mode: Example and Test
     5.4.1.1: First Order Reverse Mode: Example and Test
     5.3.9.1: Number of Variables That Can be Skipped: Example and Test
     5.3.8.1: Controlling Taylor Coefficient Memory Allocation: Example and Test
     5.3.7.1: CompareChange and Re-Tape: Example and Test
     5.3.5.1: Forward Mode: Example and Test of Multiple Directions
     5.3.4.2: Forward Mode: Example and Test of Multiple Orders
     5.3.4.1: Forward Mode: Example and Test
     5.2.6.1: Second Partials Reverse Driver: Example and Test
     5.2.5.1: Subset of Second Order Partials: Example and Test
     5.2.4.1: First Order Derivative Driver: Example and Test
     5.2.3.1: First Order Partial Driver: Example and Test
     5.2.2.2: Hessian of Lagrangian and ADFun Default Constructor: Example and Test
     5.2.2.1: Hessian: Example and Test
     5.2.1.1: Jacobian: Example and Test
     5.1.5.1: ADFun Sequence Properties: Example and Test
     5.1.4.1: Abort Current Recording: Example and Test
     5.1.2.1: ADFun Assignment: Example and Test
     5.1.1.1: Independent and ADFun Constructor: Example and Test
     4.7.9.6.1: Complex Polynomial: Example and Test
     4.7.9.3.1: Using Adolc with Multiple Levels of Taping: Example and Test
     4.7.9.2: Using a User Defined AD Base Type: Example and Test
     4.6.1: AD Vectors that Record Index Operations: Example and Test
     4.5.5.1: EqualOpSeq: Example and Test
     4.5.4.1: AD Parameter and Variable Functions: Example and Test
     4.5.3.1: AD Boolean Functions: Example and Test
     4.5.2.1: Compare AD with Base Objects: Example and Test
     4.5.1.1: AD Binary Comparison Operators: Example and Test
     4.4.7.2.19: User Atomic Matrix Multiply: Example and Test
     4.4.7.2.18: Atomic Eigen Cholesky Factorization: Example and Test
     4.4.7.2.17: Atomic Eigen Matrix Inverse: Example and Test
     4.4.7.2.16: Atomic Eigen Matrix Multiply: Example and Test
     4.4.7.2.15: Tan and Tanh as User Atomic Operations: Example and Test
     4.4.7.2.14.k.g: Atomic Sparsity with Set Patterns: Example and Test: Test Atomic Function.Test Result
     4.4.7.2.14.k: Atomic Sparsity with Set Patterns: Example and Test: Test Atomic Function
     4.4.7.2.14: Atomic Sparsity with Set Patterns: Example and Test
     4.4.7.2.13: Reciprocal as an Atomic Operation: Example and Test
     4.4.7.2.12: Atomic Euclidean Norm Squared: Example and Test
     4.4.7.2.11: Getting Started with Atomic Operations: Example and Test
     4.4.7.2.9.1.j: Atomic Reverse Hessian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
     4.4.7.2.9.1: Atomic Reverse Hessian Sparsity: Example and Test
     4.4.7.2.8.1.j: Atomic Forward Hessian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
     4.4.7.2.8.1: Atomic Forward Hessian Sparsity: Example and Test
     4.4.7.2.7.1.h: Atomic Reverse Jacobian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
     4.4.7.2.7.1: Atomic Reverse Jacobian Sparsity: Example and Test
     4.4.7.2.6.1.h: Atomic Forward Jacobian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
     4.4.7.2.6.1: Atomic Forward Jacobian Sparsity: Example and Test
     4.4.7.2.5.1: Atomic Reverse: Example and Test
     4.4.7.2.4.1: Atomic Forward: Example and Test
     4.4.7.1.4: Checkpointing an Extended ODE Solver: Example and Test
     4.4.7.1.3: Checkpointing an ODE Solver: Example and Test
     4.4.7.1.2: Atomic Operations and Multiple-Levels of AD: Example and Test
     4.4.7.1.1: Simple Checkpointing: Example and Test
     4.4.6.1: Numeric Limits: Example and Test
     4.4.5.3: Interpolation With Retaping: Example and Test
     4.4.5.2: Interpolation Without Retaping: Example and Test
     4.4.5.1: Taping Array Index Operation: Example and Test
     4.4.4.1: Conditional Expressions: Example and Test
     4.4.4.n: AD Conditional Expressions: Test
     4.4.3.3.1: AD Absolute Zero Multiplication: Example and Test
     4.4.3.2.1: The AD Power Function: Example and Test
     4.4.3.1.1: The AD atan2 Function: Example and Test
     4.4.2.21.1: Sign Function: Example and Test
     4.4.2.20.1: The AD log1p Function: Example and Test
     4.4.2.19.1: The AD expm1 Function: Example and Test
     4.4.2.18.1: The AD erf Function: Example and Test
     4.4.2.17.1: The AD atanh Function: Example and Test
     4.4.2.16.1: The AD asinh Function: Example and Test
     4.4.2.15.1: The AD acosh Function: Example and Test
     4.4.2.14.1: AD Absolute Value Function: Example and Test
     4.4.2.13.1: The AD tanh Function: Example and Test
     4.4.2.12.1: The AD tan Function: Example and Test
     4.4.2.11.1: The AD sqrt Function: Example and Test
     4.4.2.10.1: The AD sinh Function: Example and Test
     4.4.2.9.1: The AD sin Function: Example and Test
     4.4.2.8.1: The AD log10 Function: Example and Test
     4.4.2.7.1: The AD log Function: Example and Test
     4.4.2.6.1: The AD exp Function: Example and Test
     4.4.2.5.1: The AD cosh Function: Example and Test
     4.4.2.4.1: The AD cos Function: Example and Test
     4.4.2.3.1: The AD atan Function: Example and Test
     4.4.2.2.1: The AD asin Function: Example and Test
     4.4.2.1.1: The AD acos Function: Example and Test
     4.4.1.4.4: AD Compound Assignment Division: Example and Test
     4.4.1.4.3: AD Compound Assignment Multiplication: Example and Test
     4.4.1.4.2: AD Compound Assignment Subtraction: Example and Test
     4.4.1.4.1: AD Compound Assignment Addition: Example and Test
     4.4.1.3.4: AD Binary Division: Example and Test
     4.4.1.3.3: AD Binary Multiplication: Example and Test
     4.4.1.3.2: AD Binary Subtraction: Example and Test
     4.4.1.3.1: AD Binary Addition: Example and Test
     4.4.1.2.1: AD Unary Minus Operator: Example and Test
     4.4.1.1.1: AD Unary Plus Operator: Example and Test
     4.3.7.1: Convert an AD Variable to a Parameter: Example and Test
     4.3.6.2: Print During Zero Order Forward Mode: Example and Test
     4.3.6.1: Printing During Forward Mode: Example and Test
     4.3.5.1: AD Output Operator: Example and Test
     4.3.4.1: AD Input Operator: Example and Test
     4.3.2.1: Convert From AD to Integer: Example and Test
     4.3.1.1: Convert From AD to its Base Type: Example and Test
     4.2.1: AD Assignment: Example and Test
     4.1.1: AD Constructors: Example and Test
     3.2.2: exp_eps: Test of exp_eps
     3.2.j: An Epsilon Accurate Exponential Approximation: Test
     3.1.2: exp_2: Test
     3.1.j: Second Order Exponential Approximation: Test
     2.2.7: Choosing the CppAD Test Vector Template Class
     2.2.3.d: Including the Eigen Examples and Tests: Test Vector
     2.2.2.4: ColPack: Sparse Hessian Example and Test
     2.2.2.3: ColPack: Sparse Hessian Example and Test
     2.2.2.2: ColPack: Sparse Jacobian Example and Test
     2.2.2.1: ColPack: Sparse Jacobian Example and Test
     2: CppAD Download, Test, and Install Instructions
test_boolofvoid 12.6.d: The CppAD Wish List: test_boolofvoid
test_size 8.5.g: Determine Amount of Time to Execute a Test: test_size
test_time 7.2.10.6.f: Timing Test of Multi-Threaded Newton Method: test_time
          7.2.9.7.d: Timing Test for Multi-Threaded User Atomic Calculation: test_time
          7.2.8.6.f: Timing Test of Multi-Threaded Summation of 1/i: test_time
          7.2.j.a: Run Multi-Threading Examples and Speed Tests: multi_newton.test_time
          7.2.i.a: Run Multi-Threading Examples and Speed Tests: harmonic.test_time
testing 12.8.9: Choosing The Vector Testing Template Class
        12.6.h.b: The CppAD Wish List: checkpoint.Testing
        11.2: Speed Testing Utilities
        11.1.7: Speed Testing Sparse Jacobian
        11.1.6: Speed Testing Sparse Hessian
        11.1.5: Speed Testing Second Derivative of a Polynomial
        11.1.4: Speed Testing the Jacobian of Ode Solution
        11.1.3: Speed Testing Derivative of Matrix Multiply
        11.1.2: Speed Testing Gradient of Determinant by Minor Expansion
        11.1.1: Speed Testing Gradient of Determinant Using Lu Factorization
        8.a: Some General Purpose Utilities: Testing
        5.7.g: Optimize an ADFun Object Tape: Speed Testing
        2.1.i: Download The CppAD Source Code: Windows File Extraction and Testing
tests 12.8.13.e.a: Autotools Unix Test and Installation: make.Examples and Tests
      11.7.c: Speed Test Derivatives Using Sacado: Running Tests
      11.6.c: Speed Test Derivatives Using Fadbad: Running Tests
      11.5.b: Speed Test Derivatives Using CppAD: Running Tests
      11.4.c: Speed Test of Derivatives Using Adolc: Running Tests
      11.3.b: Speed Test of Functions in Double: Running Tests
      10.3.2.a: Run the Speed Examples: Running Tests
      10.3.1.a: CppAD Examples and Tests: Running Tests
      10.3.1: CppAD Examples and Tests
      8.6: Object that Runs a Group of Tests
      7.2.e: Run Multi-Threading Examples and Speed Tests: Running Tests
      7.2: Run Multi-Threading Examples and Speed Tests
      3.3.a: Correctness Tests For Exponential Approximation in Introduction: Running Tests
      3.3: Correctness Tests For Exponential Approximation in Introduction
      2.3: Checking the CppAD Examples and Tests
      2.2.6.c: Including the Sacado Speed Tests: Speed Tests
      2.2.6: Including the Sacado Speed Tests
      2.2.5.c: Including the cppad_ipopt Library and Tests: Examples and Tests
      2.2.5: Including the cppad_ipopt Library and Tests
      2.2.4.c: Including the FADBAD Speed Tests: Speed Tests
      2.2.4: Including the FADBAD Speed Tests
      2.2.3: Including the Eigen Examples and Tests
      2.2.1.d: Including the ADOL-C Examples and Tests: Speed Tests
      2.2.1: Including the ADOL-C Examples and Tests
text 4.3.6: Printing AD Values During Forward Mode
tf 8.21.i: An Error Controller for Gear's Ode Solvers: tf
   8.19.h: An Error Controller for ODE Solvers: tf
   8.18.h: A 3rd and 4th Order Rosenbrock ODE Solver: tf
   8.17.i: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: tf
than 10.2.2: Example and Test Linking CppAD to Languages Other than C++
that 12.8.5: Routines That Track Use of New and Delete
     12.3.1.c.c: The Theory of Forward Mode: Standard Math Functions.Cases that Apply Recursion Above
     11.2.9: Evaluate a Function That Has a Sparse Hessian
     11.2.8: Evaluate a Function That Has a Sparse Jacobian
     10.2.10.2: Computing a Jacobian With Constants that Change
     8.24: Returns Indices that Sort a Vector
     8.23.14: Free All Memory That Was Allocated for Use by thread_alloc
     8.6: Object that Runs a Group of Tests
     7.2.9.1: Defines a User Atomic Operation that Computes Square Root
     5.3.9.1: Number of Variables That Can be Skipped: Example and Test
     5.3.9: Number of Variables that Can be Skipped
     4.7.9: Example AD Base Types That are not AD<OtherBase>
     4.6.1: AD Vectors that Record Index Operations: Example and Test
     4.6: AD Vectors that Record Index Operations
the 12.12: Your License for the CppAD Software
    12.8.10.2.4: Driver for Running the Ipopt ODE Example
    12.8.10: Nonlinear Programming Using the CppAD Interface to Ipopt
    12.8.9: Choosing The Vector Testing Template Class
    12.8.6.10: Return A Raw Array to The Available Memory for a Thread
    12.8.6.3: Get the Current OpenMP Thread Number
    12.8.6.2: Is The Current Execution in OpenMP Parallel Mode
    12.6: The CppAD Wish List
    12.5.b: Bibliography: The C++ Programming Language
    12.3.2: The Theory of Reverse Mode
    12.3.1: The Theory of Forward Mode
    12.3: The Theory of Derivative Calculations
    11.2.6.1: Sum of the Elements of the Square of a Matrix: Example and Test
    11.1.4: Speed Testing the Jacobian of Ode Solution
    11.1: Running the Speed Test Program
    10.5: Using The CppAD Test Vector Template Class
    10.3.2: Run the Speed Examples
    10.d: Examples: The CppAD Test Vector Template Class
    8.23.5: Get the Current Thread Number
    8.23.4: Is The Current Execution in Parallel Mode
    8.22: The CppAD::vector Template Class
    8.12.1: The Pow Integer Exponent: Example and Test
    8.12: The Integer Power Function
    8.10.1: The CheckSimpleVector Function: Example and Test
    8.8.1: The CheckNumericType Function: Example and Test
    8.7.1: The NumericType: Example and Test
    8.1.1: Replacing The CppAD Error Handler: Example and Test
    8.1: Replacing the CppAD Error Handler
    8.d.b: Some General Purpose Utilities: Miscellaneous.The CppAD Vector Template Class
    4.7.3.a.a: Base Type Requirements for Identically Equal Comparisons: EqualOpSeq.The Simple Case
    4.4.3.2.1: The AD Power Function: Example and Test
    4.4.3.2: The AD Power Function
    4.4.3.1.1: The AD atan2 Function: Example and Test
    4.4.2.21: The Sign: sign
    4.4.2.20.1: The AD log1p Function: Example and Test
    4.4.2.20: The Logarithm of One Plus Argument: log1p
    4.4.2.19.1: The AD expm1 Function: Example and Test
    4.4.2.19: The Exponential Function Minus One: expm1
    4.4.2.18.1: The AD erf Function: Example and Test
    4.4.2.18: The Error Function
    4.4.2.17.1: The AD atanh Function: Example and Test
    4.4.2.17: The Inverse Hyperbolic Tangent Function: atanh
    4.4.2.16.1: The AD asinh Function: Example and Test
    4.4.2.16: The Inverse Hyperbolic Sine Function: asinh
    4.4.2.15.1: The AD acosh Function: Example and Test
    4.4.2.15: The Inverse Hyperbolic Cosine Function: acosh
    4.4.2.13.1: The AD tanh Function: Example and Test
    4.4.2.12.1: The AD tan Function: Example and Test
    4.4.2.11.1: The AD sqrt Function: Example and Test
    4.4.2.10.1: The AD sinh Function: Example and Test
    4.4.2.9.1: The AD sin Function: Example and Test
    4.4.2.8.1: The AD log10 Function: Example and Test
    4.4.2.7.1: The AD log Function: Example and Test
    4.4.2.6.1: The AD exp Function: Example and Test
    4.4.2.5.1: The AD cosh Function: Example and Test
    4.4.2.4.1: The AD cos Function: Example and Test
    4.4.2.3.1: The AD atan Function: Example and Test
    4.4.2.2.1: The AD asin Function: Example and Test
    4.4.2.1.1: The AD acos Function: Example and Test
    4.4.2.13: The Hyperbolic Tangent Function: tanh
    4.4.2.12: The Tangent Function: tan
    4.4.2.11: The Square Root Function: sqrt
    4.4.2.10: The Hyperbolic Sine Function: sinh
    4.4.2.9: The Sine Function: sin
    4.4.2.8: The Base 10 Logarithm Function: log10
    4.4.2.7: The Logarithm Function: log
    4.4.2.6: The Exponential Function: exp
    4.4.2.5: The Hyperbolic Cosine Function: cosh
    4.4.2.4: The Cosine Function: cos
    4.4.3: The Binary Math Functions
    4.4.2: The Unary Standard Math Functions
    2.3: Checking the CppAD Examples and Tests
    2.2.7: Choosing the CppAD Test Vector Template Class
    2.2.6: Including the Sacado Speed Tests
    2.2.5: Including the cppad_ipopt Library and Tests
    2.2.4: Including the FADBAD Speed Tests
    2.2.3: Including the Eigen Examples and Tests
    2.2.2: Including the ColPack Sparsity Calculations
    2.2.1: Including the ADOL-C Examples and Tests
    2.2.a: Using CMake to Configure CppAD: The CMake Program
    2.1: Download The CppAD Source Code
theorem 12.3.3.c: An Important Reverse Mode Identity: Theorem
theory 12.8.11.4.b: Old Tan and Tanh as User Atomic Operations: Example and Test: Theory
       12.8.11.1.b: Old Atomic Operation Reciprocal: Example and Test: Theory
       12.3.2.9: Error Function Reverse Mode Theory
       12.3.2.8: Tangent and Hyperbolic Tangent Reverse Mode Theory
       12.3.2.7: Inverse Cosine and Hyperbolic Cosine Reverse Mode Theory
       12.3.2.6: Inverse Sine and Hyperbolic Sine Reverse Mode Theory
       12.3.2.5: Inverse Tangent and Hyperbolic Tangent Reverse Mode Theory
       12.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
       12.3.2.3: Square Root Function Reverse Mode Theory
       12.3.2.2: Logarithm Function Reverse Mode Theory
       12.3.2.1: Exponential Function Reverse Mode Theory
       12.3.2: The Theory of Reverse Mode
       12.3.1.9: Error Function Forward Taylor Polynomial Theory
       12.3.1.8: Tangent and Hyperbolic Tangent Forward Taylor Polynomial Theory
       12.3.1.7: Inverse Cosine and Hyperbolic Cosine Forward Mode Theory
       12.3.1.6: Inverse Sine and Hyperbolic Sine Forward Mode Theory
       12.3.1.5: Inverse Tangent and Hyperbolic Tangent Forward Mode Theory
       12.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
       12.3.1.3: Square Root Function Forward Mode Theory
       12.3.1.2: Logarithm Function Forward Mode Theory
       12.3.1.1: Exponential Function Forward Mode Theory
       12.3.1: The Theory of Forward Mode
       12.3: The Theory of Derivative Calculations
       8.21.w: An Error Controller for Gear's Ode Solvers: Theory
       8.20.n: An Arbitrary Order Gear Method: Theory
       8.19.v: An Error Controller for ODE Solvers: Theory
       4.4.7.2.18.1: AD Theory for Cholesky Factorization
       4.4.7.2.17.1.c: Atomic Eigen Matrix Inversion Class: Theory
       4.4.7.2.16.1.d: Atomic Eigen Matrix Multiply Class: Theory
       4.4.7.2.15.a: Tan and Tanh as User Atomic Operations: Example and Test: Theory
       4.4.7.2.13.a: Reciprocal as an Atomic Operation: Example and Test: Theory
       4.4.7.2.12.a: Atomic Euclidean Norm Squared: Example and Test: Theory
theta 4.4.3.1.e: AD Two Argument Inverse Tangent Function: theta
third 5.4.3.1: Third Order Reverse Mode: Example and Test
this 12.7.b: Changes and Additions to CppAD: This Year
     8.4.1.a: Example Use of SpeedTest: Running This Program
thread 12.8.7.d: Memory Leak Detection: thread
       12.8.6.13: OpenMP Memory Allocator: Example and Test
       12.8.6.11.g: Check If A Memory Allocation is Efficient for Another Use: Thread
       12.8.6.10.f: Return A Raw Array to The Available Memory for a Thread: Thread
       12.8.6.10: Return A Raw Array to The Available Memory for a Thread
       12.8.6.8.d: Amount of Memory Available for Quick Use by a Thread: thread
       12.8.6.8: Amount of Memory Available for Quick Use by a Thread
       12.8.6.7.d: Amount of Memory a Thread is Currently Using: thread
       12.8.6.7: Amount of Memory a Thread is Currently Using
       12.8.6.6.d: Free Memory Currently Available for Quick Use by a Thread: thread
       12.8.6.6: Free Memory Currently Available for Quick Use by a Thread
       12.8.6.5.e: Return Memory to omp_alloc: Thread
       12.8.6.3.d: Get the Current OpenMP Thread Number: thread
       12.8.6.3: Get the Current OpenMP Thread Number
       12.8.5: Routines That Track Use of New and Delete
       8.23.13.e: Deallocate An Array and Call Destructor for its Elements: Thread
       8.23.11.c: Amount of Memory Available for Quick Use by a Thread: thread
       8.23.11: Amount of Memory Available for Quick Use by a Thread
       8.23.10.c: Amount of Memory a Thread is Currently Using: thread
       8.23.10: Amount of Memory a Thread is Currently Using
       8.23.9: Control When Thread Alloc Retains Memory For Future Use
       8.23.8.c: Free Memory Currently Available for Quick Use by a Thread: thread
       8.23.8: Free Memory Currently Available for Quick Use by a Thread
       8.23.7.d: Return Memory to thread_alloc: Thread
       8.23.5.c: Get the Current Thread Number: thread
       8.23.5: Get the Current Thread Number
       7.2.11.2: Boost Thread Implementation of a Team of AD Threads
       7.2.10.6.c: Timing Test of Multi-Threaded Newton Method: Thread
       7.2.10.5.c: A Multi-Threaded Newton's Method: Thread
       7.2.10.4.c: Take Down Multi-threaded Newton Method: Thread
       7.2.10.2.c: Set Up Multi-Threaded Newton Method: Thread
       7.2.9.7.b: Timing Test for Multi-Threaded User Atomic Calculation: Thread
       7.2.9.6.b: Run Multi-Threaded User Atomic Calculation: Thread
       7.2.9.5.c: Multi-Threaded User Atomic Take Down: Thread
       7.2.9.3.c: Multi-Threaded User Atomic Set Up: Thread
       7.2.8.6.c: Timing Test of Multi-Threaded Summation of 1/i: Thread
       7.2.8.5.c: Multi-Threaded Implementation of Summation of 1/i: Thread
       7.2.8.4.c: Take Down Multi-threading Sum of 1/i: Thread
       7.2.8.2.c: Set Up Multi-threading Sum of 1/i: Thread
       7.2.7: Using a Team of AD Threads: Example and Test
       7.2.5: A Simple Boost Threading AD: Example and Test
       7.2.3: A Simple Parallel Pthread Example and Test
       7.2.2: A Simple Boost Thread Example and Test
       7.2.1: A Simple OpenMP Example and Test
       7.g: Using CppAD in a Multi-Threading Environment: Same Thread
thread's 7.2.10.3: Do One Thread's Work for Multi-Threaded Newton Method
         7.2.8.3: Do One Thread's Work for Sum of 1/i
thread_alloc 8.23.14: Free All Memory That Was Allocated for Use by thread_alloc
             8.23.7: Return Memory to thread_alloc
             8.23.2: Setup thread_alloc For Use in Multi-Threading Environment
             8.22: The CppAD::vector Template Class
thread_num 8.23.2.f: Setup thread_alloc For Use in Multi-Threading Environment: thread_num
           7.2.10.3.e: Do One Thread's Work for Multi-Threaded Newton Method: thread_num
           7.2.8.3.e: Do One Thread's Work for Sum of 1/i: thread_num
thread_team 7.2.7.b: Using a Team of AD Threads: Example and Test: thread_team
threading 7.2.5: A Simple Boost Threading AD: Example and Test
          7.2.c: Run Multi-Threading Examples and Speed Tests: threading
threads 12.8.6.12: Set Maximum Number of Threads for omp_alloc Allocator
        12.8.6.1: Set and Get Maximum Number of Threads for omp_alloc Allocator
        8.23.3: Get Number of Threads
        7.2.11.3: Pthread Implementation of a Team of AD Threads
        7.2.11.2: Boost Thread Implementation of a Team of AD Threads
        7.2.11.1: OpenMP Implementation of a Team of AD Threads
        7.2.11: Specifications for A Team of AD Threads
        2.2: Using CMake to Configure CppAD
threads: 7.2.7: Using a Team of AD Threads: Example and Test
three 4.4.7.2.17.1.c.b: Atomic Eigen Matrix Inversion Class: Theory.Product of Three Matrices
ti 8.21.h: An Error Controller for Gear's Ode Solvers: ti
   8.19.g: An Error Controller for ODE Solvers: ti
   8.18.g: A 3rd and 4th Order Rosenbrock ODE Solver: ti
   8.17.h: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: ti
time 12.9.7.d: Determine Amount of Time to Execute det_by_minor: time
     12.9.7: Determine Amount of Time to Execute det_by_minor
     12.8.10.2.1.e.a: An ODE Inverse Problem Example: Trapezoidal Approximation.Trapezoidal Time Grid
     8.5.1: Returns Elapsed Number of Seconds
     8.5.h: Determine Amount of Time to Execute a Test: time
     8.5: Determine Amount of Time to Execute a Test
time_min 12.9.7.c: Determine Amount of Time to Execute det_by_minor: time_min
         8.5.f: Determine Amount of Time to Execute a Test: time_min
         8.3.h: Run One Speed Test and Return Results: time_min
time_out 7.2.10.6.e: Timing Test of Multi-Threaded Newton Method: time_out
         7.2.9.7.c: Timing Test for Multi-Threaded User Atomic Calculation: time_out
         7.2.8.6.e: Timing Test of Multi-Threaded Summation of 1/i: time_out
time_test 8.5.2: time_test: Example and test
          8.5: Determine Amount of Time to Execute a Test
time_test: 8.5.2: time_test: Example and test
timer 8.5.1.1: Elapsed Seconds: Example and Test
times 12.9.5: Repeat det_by_minor Routine A Specified Number of Times
      11.2.6: Sum Elements of a Matrix Times Itself
      5.4.2.2: Hessian Times Direction: Example and Test
      5.4.2.i: Second Order Reverse Mode: Hessian Times Direction
      4.4.1.4: AD Compound Assignment Operators
      4.4.1.3.3: AD Binary Multiplication: Example and Test
      4.4.1.3: AD Binary Arithmetic Operators
timing 8.5.i: Determine Amount of Time to Execute a Test: Timing
       8.3.j: Run One Speed Test and Return Results: Timing
       7.2.10.6: Timing Test of Multi-Threaded Newton Method
       7.2.9.7: Timing Test for Multi-Threaded User Atomic Calculation
       7.2.8.6: Timing Test of Multi-Threaded Summation of 1/i
to_string 8.d.e: Some General Purpose Utilities: Miscellaneous.to_string
          4.7.9.6.o: Enable use of AD<Base> where Base is std::complex<double>: to_string
          4.7.9.5.l: Enable use of AD<Base> where Base is double: to_string
          4.7.9.4.l: Enable use of AD<Base> where Base is float: to_string
          4.7.9.3.q: Enable use of AD<Base> where Base is Adolc's adouble Type: to_string
          4.7.9.1.t: Example AD<Base> Where Base Constructor Allocates Memory: to_string
          4.7.7: Extending to_string To Another Floating Point Type
to_string: 8.25.1: to_string: Example and Test
tracing 12.6.p: The CppAD Wish List: Tracing
track 12.8.5: Routines That Track Use of New and Delete
track_count 12.8.7.i: Memory Leak Detection: TRACK_COUNT
trackcount 12.8.5.n: Routines That Track Use of New and Delete: TrackCount
trackdelvec 12.8.5.l: Routines That Track Use of New and Delete: TrackDelVec
trackextend 12.8.5.m: Routines That Track Use of New and Delete: TrackExtend
tracking 12.8.5.1: Tracking Use of New and Delete: Example and Test
tracknewvec 12.8.5.k: Routines That Track Use of New and Delete: TrackNewVec
transpose 5.5.11.j: Subgraph Dependency Sparsity Patterns: transpose
          5.5.6.i.b: Hessian Sparsity Pattern: Reverse Mode: h.transpose true
          5.5.6.i.a: Hessian Sparsity Pattern: Reverse Mode: h.transpose false
          5.5.6.f: Hessian Sparsity Pattern: Reverse Mode: transpose
          5.5.5.i: Reverse Mode Hessian Sparsity Patterns: transpose
          5.5.4.i.b: Jacobian Sparsity Pattern: Reverse Mode: s.transpose true
          5.5.4.i.a: Jacobian Sparsity Pattern: Reverse Mode: s.transpose false
          5.5.4.h.b: Jacobian Sparsity Pattern: Reverse Mode: r.transpose true
          5.5.4.h.a: Jacobian Sparsity Pattern: Reverse Mode: r.transpose false
          5.5.4.f: Jacobian Sparsity Pattern: Reverse Mode: transpose
          5.5.3.g: Reverse Mode Jacobian Sparsity Patterns: transpose
          5.5.2.i.b: Jacobian Sparsity Pattern: Forward Mode: s.transpose true
          5.5.2.i.a: Jacobian Sparsity Pattern: Forward Mode: s.transpose false
          5.5.2.h.b: Jacobian Sparsity Pattern: Forward Mode: r.transpose true
          5.5.2.h.a: Jacobian Sparsity Pattern: Forward Mode: r.transpose false
          5.5.2.f: Jacobian Sparsity Pattern: Forward Mode: transpose
          5.5.1.g: Forward Mode Jacobian Sparsity Patterns: transpose
trapezoidal 12.8.10.2.3.d: ODE Fitting Using Fast Representation: Trapezoidal Approximation
            12.8.10.2.2.e: ODE Fitting Using Simple Representation: Trapezoidal Approximation Constraint
            12.8.10.2.1.e.a: An ODE Inverse Problem Example: Trapezoidal Approximation.Trapezoidal Time Grid
            12.8.10.2.1.e: An ODE Inverse Problem Example: Trapezoidal Approximation
            9.3.e: ODE Inverse Problem Definitions: Source Code: Trapezoidal Approximation
triangular 4.4.7.2.18.1.b.c: AD Theory for Cholesky Factorization: Notation.Lower Triangular Part
trigonometric 12.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
              12.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
true 5.5.6.i.b: Hessian Sparsity Pattern: Reverse Mode: h.transpose true
     5.5.4.i.b: Jacobian Sparsity Pattern: Reverse Mode: s.transpose true
     5.5.4.h.b: Jacobian Sparsity Pattern: Reverse Mode: r.transpose true
     5.5.2.i.b: Jacobian Sparsity Pattern: Forward Mode: s.transpose true
     5.5.2.h.b: Jacobian Sparsity Pattern: Forward Mode: r.transpose true
     4.4.2.20.d.a: The Logarithm of One Plus Argument: log1p: CPPAD_USE_CPLUSPLUS_2011.true
     4.4.2.19.d.a: The Exponential Function Minus One: expm1: CPPAD_USE_CPLUSPLUS_2011.true
     4.4.2.18.d.a: The Error Function: CPPAD_USE_CPLUSPLUS_2011.true
     4.4.2.17.d.a: The Inverse Hyperbolic Tangent Function: atanh: CPPAD_USE_CPLUSPLUS_2011.true
     4.4.2.16.d.a: The Inverse Hyperbolic Sine Function: asinh: CPPAD_USE_CPLUSPLUS_2011.true
     4.4.2.15.d.a: The Inverse Hyperbolic Cosine Function: acosh: CPPAD_USE_CPLUSPLUS_2011.true
tvector 12.8.11.e.a: User Defined Atomic AD Functions: CPPAD_USER_ATOMIC.Tvector
two 12.8.10.2.1.f.a: An ODE Inverse Problem Example: Black Box Method.Two levels of Iteration
    8.2: Determine if Two Values Are Nearly Equal
    5.3.3: Second Order Forward Mode: Derivative Values
    4.5.5: Check if Two Values are Identically Equal
    4.4.7.2.16.1.d.b: Atomic Eigen Matrix Multiply Class: Theory.Product of Two Matrices
    4.4.3.1: AD Two Argument Inverse Tangent Function
tx 12.8.11.k: User Defined Atomic AD Functions: tx
   4.4.7.2.5.e: Atomic Reverse Mode: tx
   4.4.7.2.4.h: Atomic Forward Mode: tx
ty 12.8.11.l: User Defined Atomic AD Functions: ty
   4.4.7.2.5.f: Atomic Reverse Mode: ty
   4.4.7.2.4.i: Atomic Forward Mode: ty
type 12.8.12.e: zdouble: An AD Base Type With Absolute Zero: Base Type Requirements
     12.8.12: zdouble: An AD Base Type With Absolute Zero
     12.8.6.10.d: Return A Raw Array to The Available Memory for a Thread: Type
     12.8.6.9.d: Allocate Memory and Create A Raw Array: Type
     12.4.e: Glossary: Base Type
     12.4.c: Glossary: AD Type Above Base
     8.23.13.c: Deallocate An Array and Call Destructor for its Elements: Type
     8.23.12.c: Allocate An Array and Call Default Constructor for its Elements: Type
     8.22.m.e: The CppAD::vector Template Class: vectorBool.Element Type
     8.13.h: Evaluate a Polynomial or its Derivative: Type
     8.12.h: The Integer Power Function: Type
     8.9.j: Definition of a Simple Vector: Value Type
     8.9.b: Definition of a Simple Vector: Elements of Specified Type
     8.7.a: Definition of a Numeric Type: Type Requirements
     8.7: Definition of a Numeric Type
     8.2.h: Determine if Two Values Are Nearly Equal: Type
     4.7.9.3: Enable use of AD<Base> where Base is Adolc's adouble Type
     4.7.9.2: Using a User Defined AD Base Type: Example and Test
     4.7.8: Base Type Requirements for Hash Coding Values
     4.7.7: Extending to_string To Another Floating Point Type
     4.7.6: Base Type Requirements for Numeric Limits
     4.7.5: Base Type Requirements for Standard Math Functions
     4.7.4.b: Base Type Requirements for Ordered Comparisons: Ordered Type
     4.7.4: Base Type Requirements for Ordered Comparisons
     4.7.3: Base Type Requirements for Identically Equal Comparisons
     4.7.2.c.a: Base Type Requirements for Conditional Expressions: CondExpTemplate.Ordered Type
     4.7.2: Base Type Requirements for Conditional Expressions
     4.7.f: AD<Base> Requirements for a CppAD Base Type: Numeric Type
     4.7: AD<Base> Requirements for a CppAD Base Type
     4.5.2.h: Compare AD and Base Objects for Nearly Equal: Type
     4.4.4.d: AD Conditional Expressions: Type
     4.3.3: Convert An AD or Base Type to String
     4.3.1: Convert From an AD Type to its Base Type
     4.b: AD Objects: Base Type Requirements
     3.2.h: An Epsilon Accurate Exponential Approximation: Type
     3.1.g: Second Order Exponential Approximation: Type
type: 4.7.9.2: Using a User Defined AD Base Type: Example and Test
      4.3.1.1: Convert From AD to its Base Type: Example and Test
types 12.8.8: Machine Epsilon For AD Types
      12.1.d: Frequently Asked Questions and Answers: Complex Types
      8.25: Convert Certain Types to a String
      4.7.9: Example AD Base Types That are not AD<OtherBase>
      4.7.d: AD<Base> Requirements for a CppAD Base Type: Standard Base Types
      4.4.7.2.18.2.c.a: Atomic Eigen Cholesky Factorization Class: Public.Types
      4.4.7.2.17.1.e.a: Atomic Eigen Matrix Inversion Class: Public.Types
      4.4.7.2.16.1.f.a: Atomic Eigen Matrix Multiply Class: Public.Types
      4.4.6: Numeric Limits For an AD and Base Types
      4.4.2.14.d: AD Absolute Value Functions: abs, fabs: Complex Types
      4.4.2.c: The Unary Standard Math Functions: Possible Types
      4.3.2.d.c: Convert From AD to Integer: x.AD Types
      4.3.2.d.b: Convert From AD to Integer: x.Complex Types
      4.3.2.d.a: Convert From AD to Integer: x.Real Types
U
u 12.10.3.h.d: LU Factorization of A Square Matrix and Stability Calculation: LU.U
  12.8.11.r.f: User Defined Atomic AD Functions: rev_hes_sparse.u
  8.14.3.g.b: Invert an LU Factored Equation: LU.U
  8.14.2.h.d: LU Factorization of A Square Matrix: LU.U
  4.5.3.d: AD Boolean Functions: u
  4.4.7.2.9.e: Atomic Reverse Hessian Sparsity Patterns: u
u) 5.8.1.d.b: Create An Abs-normal Representation of a Function: g.y(x, u)
   5.8.1.d.a: Create An Abs-normal Representation of a Function: g.z(x, u)
   5.4.3.c.c: Any Order Reverse Mode: Notation.Y(t, u)
   5.4.3.c.b: Any Order Reverse Mode: Notation.X(t, u)
u^(k) 5.4.3.c.a: Any Order Reverse Mode: Notation.u^(k)
unary 4.7.9.6.l: Enable use of AD<Base> where Base is std::complex<double>: Invalid Unary Math
      4.7.9.6.k: Enable use of AD<Base> where Base is std::complex<double>: Valid Unary Math
      4.7.9.5.h: Enable use of AD<Base> where Base is double: Unary Standard Math
      4.7.9.4.h: Enable use of AD<Base> where Base is float: Unary Standard Math
      4.7.9.3.k: Enable use of AD<Base> where Base is Adolc's adouble Type: Unary Standard Math
      4.7.9.1.o: Example AD<Base> Where Base Constructor Allocates Memory: Unary Standard Math
      4.7.5.b: Base Type Requirements for Standard Math Functions: Unary Standard Math
      4.7.1.e: Required Base Class Member Functions: Unary Operators
      4.5.3.g: AD Boolean Functions: Create Unary
      4.4.2: The Unary Standard Math Functions
      4.4.1.2.1: AD Unary Minus Operator: Example and Test
      4.4.1.2: AD Unary Minus Operator
      4.4.1.1.1: AD Unary Plus Operator: Example and Test
      4.4.1.1: AD Unary Plus Operator
unary_name 4.5.3.c: AD Boolean Functions: unary_name
uniform 12.9.3: Simulate a [0,1] Uniform Random Variate
        11.2.10: Simulate a [0,1] Uniform Random Variate
uniform_01 11.2.10.1: Source: uniform_01
           11.2.10: Simulate a [0,1] Uniform Random Variate
           11.1: Running the Speed Test Program
union 12.8.11.5.1.h: Define Matrix Multiply as a User Atomic Operation: Set Union
      8.26: Union of Standard Sets
union: 8.26.1: Set Union: Example and Test
unix 12.8.13: Autotools Unix Test and Installation
     2.2.1.e: Including the ADOL-C Examples and Tests: Unix
unknown 8.1.2.f: CppAD Assertions During Execution: Unknown
up 7.2.10.3.d: Do One Thread's Work for Multi-Threaded Newton Method: up
   7.2.10.2: Set Up Multi-Threaded Newton Method
   7.2.9.3: Multi-Threaded User Atomic Set Up
   7.2.8.2: Set Up Multi-threading Sum of 1/i
usage 12.8.11.r.a: User Defined Atomic AD Functions: rev_hes_sparse.Usage
      12.8.11.q.a: User Defined Atomic AD Functions: rev_jac_sparse.Usage
      12.8.11.p.a: User Defined Atomic AD Functions: for_jac_sparse.Usage
      12.8.11.o.a: User Defined Atomic AD Functions: reverse.Usage
      12.8.11.n.a: User Defined Atomic AD Functions: forward.Usage
      2.4.b: CppAD pkg-config Files: Usage
      2.2: Using CMake to Configure CppAD
use 12.8.11.t.b: User Defined Atomic AD Functions: Example.Use AD
    12.8.11.b.a: User Defined Atomic AD Functions: Syntax Function.Use Function
    12.8.6.11: Check If A Memory Allocation is Efficient for Another Use
    12.8.6.8: Amount of Memory Available for Quick Use by a Thread
    12.8.6.6: Free Memory Currently Available for Quick Use by a Thread
    12.8.5.1: Tracking Use of New and Delete: Example and Test
    12.8.5: Routines That Track Use of New and Delete
    10.2.4: Enable Use of Eigen Linear Algebra Package with CppAD
    9: Use Ipopt to Solve a Nonlinear Programming Problem
    8.23.14: Free All Memory That Was Allocated for Use by thread_alloc
    8.23.11: Amount of Memory Available for Quick Use by a Thread
    8.23.9: Control When Thread Alloc Retains Memory For Future Use
    8.23.8: Free Memory Currently Available for Quick Use by a Thread
    8.23.2: Setup thread_alloc For Use in Multi-Threading Environment
    8.4.1: Example Use of SpeedTest
    7.2.11.i: Specifications for A Team of AD Threads: Example Use
    7.2.10.1: Common Variables Used by Multi-Threaded Newton Method
    4.7.9.6: Enable use of AD<Base> where Base is std::complex<double>
    4.7.9.5: Enable use of AD<Base> where Base is double
    4.7.9.4: Enable use of AD<Base> where Base is float
    4.7.9.3: Enable use of AD<Base> where Base is Adolc's adouble Type
    4.4.7.2.19.c: User Atomic Matrix Multiply: Example and Test: Use Atomic Function
    4.4.7.2.18.c: Atomic Eigen Cholesky Factorization: Example and Test: Use Atomic Function
    4.4.7.2.17.c: Atomic Eigen Matrix Inverse: Example and Test: Use Atomic Function
    4.4.7.2.16.c: Atomic Eigen Matrix Multiply: Example and Test: Use Atomic Function
    4.4.7.2.15.k: Tan and Tanh as User Atomic Operations: Example and Test: Use Atomic Function
    4.4.7.2.13.k: Reciprocal as an Atomic Operation: Example and Test: Use Atomic Function
    4.4.7.2.12.k: Atomic Euclidean Norm Squared: Example and Test: Use Atomic Function
    4.4.7.2.11.f: Getting Started with Atomic Operations: Example and Test: Use Atomic Function
    4.4.7.2.9.1.i: Atomic Reverse Hessian Sparsity: Example and Test: Use Atomic Function
    4.4.7.2.8.1.i: Atomic Forward Hessian Sparsity: Example and Test: Use Atomic Function
    4.4.7.2.7.1.g: Atomic Reverse Jacobian Sparsity: Example and Test: Use Atomic Function
    4.4.7.2.6.1.g: Atomic Forward Jacobian Sparsity: Example and Test: Use Atomic Function
    4.4.7.2.5.1.g: Atomic Reverse: Example and Test: Use Atomic Function
    4.4.7.2.4.1.f: Atomic Forward: Example and Test: Use Atomic Function
    4.4.7.2.10.c: Free Static Variables: Future Use
    4.4.7.2.1.d.b: Atomic Function Constructor: Example.Use Constructor
use_ad 7.2.10.6.k: Timing Test of Multi-Threaded Newton Method: use_ad
       7.2.j.f: Run Multi-Threading Examples and Speed Tests: multi_newton.use_ad
use_vecad 12.8.2.h: ADFun Object Deprecated Member Functions: use_VecAD
used 12.8.6: A Quick OpenMP Memory Allocator Used by CppAD
     10.3: Utility Routines used by CppAD Examples
     7.2.8.1: Common Variables Used by Multi-threading Sum of 1/i
user 12.8.11.5.1.i: Define Matrix Multiply as a User Atomic Operation: CppAD User Atomic Callback Functions
     12.8.11.5.1: Define Matrix Multiply as a User Atomic Operation
     12.8.11.5: Old Matrix Multiply as a User Atomic Operation: Example and Test
     12.8.11.4: Old Tan and Tanh as User Atomic Operations: Example and Test
     12.8.11.3: Using AD to Compute Atomic Function Derivatives
     12.8.11.2: Using AD to Compute Atomic Function Derivatives
     12.8.11: User Defined Atomic AD Functions
     7.2.9.7: Timing Test for Multi-Threaded User Atomic Calculation
     7.2.9.6: Run Multi-Threaded User Atomic Calculation
     7.2.9.5: Multi-Threaded User Atomic Take Down
     7.2.9.4: Multi-Threaded User Atomic Worker
     7.2.9.3: Multi-Threaded User Atomic Set Up
     7.2.9.2: Multi-Threaded User Atomic Common Information
     7.2.9.1: Defines a User Atomic Operation that Computes Square Root
     7.2.9: Multi-Threading User Atomic Example / Test
     4.7.9.2: Using a User Defined AD Base Type: Example and Test
     4.4.7.2.19: User Atomic Matrix Multiply: Example and Test
     4.4.7.2.15: Tan and Tanh as User Atomic Operations: Example and Test
     4.4.7.2: User Defined Atomic AD Functions
uses 5.9.k: Check an ADFun Sequence of Operations: FunCheck Uses Forward
     5.6.5.i: Compute Sparse Jacobians Using Subgraphs: Uses Forward
     5.6.4.n: Sparse Hessian: Uses Forward
     5.6.3.m: Computing Sparse Hessians: Uses Forward
     5.6.2.m: Sparse Jacobian: Uses Forward
     5.6.1.o: Computing Sparse Jacobians: Uses Forward
     5.2.6.j: Reverse Mode Second Partial Derivative Driver: RevTwo Uses Forward
     5.2.5.j: Forward Mode Second Partial Derivative Driver: ForTwo Uses Forward
     5.2.4.h: First Order Derivative: Driver Routine: RevOne Uses Forward
     5.2.3.h: First Order Partial Derivative: Driver Routine: ForOne Uses Forward
     5.2.2.i: Hessian: Easy Driver: Hessian Uses Forward
using 12.9.2: Compute Determinant using Expansion by Minors
      12.8.11.3: Using AD to Compute Atomic Function Derivatives
      12.8.11.2: Using AD to Compute Atomic Function Derivatives
      12.8.10.2.3.1: ODE Fitting Using Fast Representation
      12.8.10.2.2.1: ODE Fitting Using Simple Representation
      12.8.10.2.3: ODE Fitting Using Fast Representation
      12.8.10.2.2: ODE Fitting Using Simple Representation
      12.8.10.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
      12.8.10: Nonlinear Programming Using the CppAD Interface to Ipopt
      12.8.6.7: Amount of Memory a Thread is Currently Using
      11.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization
      11.7: Speed Test Derivatives Using Sacado
      11.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization
      11.6: Speed Test Derivatives Using Fadbad
      11.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization
      11.5: Speed Test Derivatives Using CppAD
      11.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization
      11.4: Speed Test of Derivatives Using Adolc
      11.3.2: Double Speed: Determinant Using Lu Factorization
      11.2.3.1: Determinant Using Expansion by Minors: Example and Test
      11.2.3: Determinant Using Expansion by Minors
      11.2.1.1: Determinant Using Lu Factorization: Example and Test
      11.2.1: Determinant Using Expansion by Lu Factorization
      11.1.1: Speed Testing Gradient of Determinant Using Lu Factorization
      10.5: Using The CppAD Test Vector Template Class
      10.2.13.e: Taylor's Ode Solver: A Multi-Level Adolc Example and Test: Taylor's Method Using AD
      10.2.12.e: Taylor's Ode Solver: A Multi-Level AD Example and Test: Taylor's Method Using AD
      10.2.10: Using Multiple Levels of AD
      10.2.9: Gradient of Determinant Using Lu Factorization: Example and Test
      10.2.8: Gradient of Determinant Using Expansion by Minors: Example and Test
      10.2.6: Gradient of Determinant Using LU Factorization: Example and Test
      10.2.5: Gradient of Determinant Using Expansion by Minors: Example and Test
      10.2.4.3: Using Eigen To Compute Determinant: Example and Test
      10.2.4.2: Using Eigen Arrays: Example and Test
      10.1: Getting Started Using CppAD to Compute Derivatives
      9.1: Nonlinear Programming Using CppAD and Ipopt: Example and Test
      8.23.10: Amount of Memory a Thread is Currently Using
      8.19.2: OdeErrControl: Example and Test Using Maxabs Argument
      8.9.k.a: Definition of a Simple Vector: Element Access.Using Value
      7.2.7: Using a Team of AD Threads: Example and Test
      7: Using CppAD in a Multi-Threading Environment
      5.8.11: Non-Smooth Optimization Using Abs-normal Quadratic Approximations
      5.8.8: Solve a Quadratic Program Using Interior Point Method
      5.8.7: Non-Smooth Optimization Using Abs-normal Linear Approximations
      5.8.4: abs_normal: Solve a Linear Program Using Simplex Method
      5.6.5.2: Sparse Hessian Using Subgraphs and Jacobian: Example and Test
      5.6.5.1: Computing Sparse Jacobian Using Reverse Mode: Example and Test
      5.6.5: Compute Sparse Jacobians Using Subgraphs
      5.6.1.2: Computing Sparse Jacobian Using Reverse Mode: Example and Test
      5.6.1.1: Computing Sparse Jacobian Using Forward Mode: Example and Test
      5.4.4: Reverse Mode Using Subgraphs
      4.7.9.3.1: Using Adolc with Multiple Levels of Taping: Example and Test
      4.7.9.2: Using a User Defined AD Base Type: Example and Test
      4.4.7.2.3: Using AD Version of Atomic Function
      2.2: Using CMake to Configure CppAD
utilities 12.10: Some Numerical AD Utilities
          11.2: Speed Testing Utilities
          8: Some General Purpose Utilities
utility 11.2.b: Speed Testing Utilities: Speed Utility Routines
        10.3: Utility Routines used by CppAD Examples
utility: 11.4.8: Adolc Test Utility: Allocate and Free Memory For a Matrix
V
Value 4.3.7.1: Convert an AD Variable to a Parameter: Example and Test
      4.3.1.1: Convert From AD to its Base Type: Example and Test
      4.3.1: Convert From an AD Type to its Base Type
Var2Par 4.3.7.1: Convert an AD Variable to a Parameter: Example and Test
        4.3.7: Convert an AD Variable to a Parameter
VecAD 4.6.1: AD Vectors that Record Index Operations: Example and Test
      4.6: AD Vectors that Record Index Operations
      4.2: AD Assignment Operator
      4.1: AD Constructors
VecAD<Base> 4.6: AD Vectors that Record Index Operations
v 12.8.11.r.g: User Defined Atomic AD Functions: rev_hes_sparse.v
  8.28.k.b: Sparse Matrix Row, Column, Value Representation: set.v
  8.11.e.a: Obtain Nan or Determine if a Value is Nan: hasnan.v
  4.6.e.a: AD Vectors that Record Index Operations: Constructor.v
  4.5.3.i: AD Boolean Functions: v
  4.4.7.2.9.e.a: Atomic Reverse Hessian Sparsity Patterns: u.v
v_ptr 12.8.6.11.d: Check If A Memory Allocation is Efficient for Another Use: v_ptr
      12.8.6.5.d: Return Memory to omp_alloc: v_ptr
      12.8.6.4.f: Get At Least A Specified Amount of Memory: v_ptr
      8.23.7.c: Return Memory to thread_alloc: v_ptr
      8.23.6.e: Get At Least A Specified Amount of Memory: v_ptr
val 8.28.n: Sparse Matrix Row, Column, Value Representation: val
valid 4.7.9.6.k: Enable use of AD<Base> where Base is std::complex<double>: Valid Unary Math
value 11.1.5.d: Speed Testing Second Derivative of a Polynomial: Return Value
      11.1.4.e: Speed Testing the Jacobian of Ode Solution: Return Value
      11.1.3.c: Speed Testing Derivative of Matrix Multiply: Return Value
      11.1.2.d: Speed Testing Gradient of Determinant by Minor Expansion: Return Value
      11.1.1.d: Speed Testing Gradient of Determinant Using Lu Factorization: Return Value
      10.1.d: Getting Started Using CppAD to Compute Derivatives: Value
      8.28: Sparse Matrix Row, Column, Value Representation
      8.25.d: Convert Certain Types to a String: value
      8.23.9.c: Control When Thread Alloc Retains Memory For Future Use: value
      8.11: Obtain Nan or Determine if a Value is Nan
      8.9.k.a: Definition of a Simple Vector: Element Access.Using Value
      8.9.j: Definition of a Simple Vector: Value Type
      4.5.5: Check if Two Values are Identically Equal
      4.4.2.14.1: AD Absolute Value Function: Example and Test
      4.4.2.14: AD Absolute Value Functions: abs, fabs
      4.3.3.c: Convert An AD or Base Type to String: value
      3.2.6.e: exp_eps: Second Order Forward Mode: Return Value
      3.2.4.d: exp_eps: First Order Forward Sweep: Return Value
      3.2.3.c: exp_eps: Operation Sequence and Zero Order Forward Sweep: Return Value
      3.1.6.e: exp_2: Second Order Forward Mode: Return Value
      3.1.4.e: exp_2: First Order Forward Mode: Return Value
      3.1.3.d: exp_2: Operation Sequence and Zero Order Forward Mode: Return Value
value_ 4.3.7: Convert an AD Variable to a Parameter
value_type 8.9: Definition of a Simple Vector
valued 4.5: Bool Valued Operations and Functions with AD Arguments
       4.4: AD Valued Operations and Functions
values 12.10.2: Jacobian and Hessian of Optimal Values
       12.8.10.2.1.c.c: An ODE Inverse Problem Example: Measurements.Simulated Measurement Values
       12.8.10.2.1.c.b: An ODE Inverse Problem Example: Measurements.Simulation Parameter Values
       9.3.c.c: ODE Inverse Problem Definitions: Source Code: Measurements.Simulated Measurement Values
       9.3.c.b: ODE Inverse Problem Definitions: Source Code: Measurements.Simulation Parameter Values
       8.2: Determine if Two Values Are Nearly Equal
       5.3.4.b.b: Multiple Order Forward Mode: Purpose.Derivative Values
       5.3.4.b.a: Multiple Order Forward Mode: Purpose.Function Values
       5.3.3: Second Order Forward Mode: Derivative Values
       5.3.2: First Order Forward Mode: Derivative Values
       5.3.1: Zero Order Forward Mode: Function Values
       4.7.8: Base Type Requirements for Hash Coding Values
       4.4.7.2.15.k.h: Tan and Tanh as User Atomic Operations: Example and Test: Use Atomic Function.Large x Values
       4.3.6: Printing AD Values During Forward Mode
valuevector 8.28.c: Sparse Matrix Row, Column, Value Representation: ValueVector
var 4.3.6.f: Printing AD Values During Forward Mode: var
variable 12.4.m: Glossary: Variable
         12.4.k.c: Glossary: Tape.Independent Variable
         4.5.4.1: AD Parameter and Variable Functions: Example and Test
         4.5.4: Is an AD Object a Parameter or Variable
         4.4.7.2.9.1.j: Atomic Reverse Hessian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
         4.4.7.2.8.1.j: Atomic Forward Hessian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
         4.4.7.2.7.1.h: Atomic Reverse Jacobian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
         4.4.7.2.6.1.h: Atomic Forward Jacobian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
         4.3.7.1: Convert an AD Variable to a Parameter: Example and Test
         4.3.7: Convert an AD Variable to a Parameter
         3.2.3.b.a: exp_eps: Operation Sequence and Zero Order Forward Sweep: Operation Sequence.Variable
variables 12.4.k.d: Glossary: Tape.Dependent Variables
          12.1.f: Frequently Asked Questions and Answers: Independent Variables
          7.2.10.1: Common Variables Used by Multi-Threaded Newton Method
          7.2.8.1: Common Variables Used by Multi-threading Sum of 1/i
          5.6.4.2: Computing Sparse Hessian for a Subset of Variables
          5.3.9.1: Number of Variables That Can be Skipped: Example and Test
          5.3.9: Number of Variables that Can be Skipped
          5.1.1: Declare Independent Variables and Start Recording
          4.4.7.2.18.2.d.a: Atomic Eigen Cholesky Factorization Class: Private.Variables
          4.4.7.2.17.1.f.a: Atomic Eigen Matrix Inversion Class: Private.Variables
          4.4.7.2.16.1.g.a: Atomic Eigen Matrix Multiply Class: Private.Variables
          4.4.7.2.10: Free Static Variables
variables: 5.5.6.2: Sparsity Patterns For a Subset of Variables: Example and Test
variate 12.9.3: Simulate a [0,1] Uniform Random Variate
        11.2.10: Simulate a [0,1] Uniform Random Variate
vec 5.10.g.a: Check an ADFun Object For Nan Results: get_check_for_nan.vec
vec_ad.cpp 4.6.1: AD Vectors that Record Index Operations: Example and Test
vecad<base> 4.4.3.3.e: Absolute Zero Multiplication: VecAD<Base>
            4.4.2.c.c: The Unary Standard Math Functions: Possible Types.VecAD<Base>
vecad<base>::reference 4.6.d: AD Vectors that Record Index Operations: VecAD<Base>::reference
vector 12.8.10.2.2.b: ODE Fitting Using Simple Representation: Argument Vector
       12.8.10.f.a: Nonlinear Programming Using the CppAD Interface to Ipopt: fg(x).Index Vector
       12.8.9: Choosing The Vector Testing Template Class
       12.4.j.c: Glossary: Sparsity Pattern.Vector of Sets
       12.4.j.b: Glossary: Sparsity Pattern.Boolean Vector
       12.4.f: Glossary: Elementary Vector
       12.1.i.a: Frequently Asked Questions and Answers: Namespace.Test Vector Preprocessor Symbol
       11.2.10.g: Simulate a [0,1] Uniform Random Variate: Vector
       11.2.6.h: Sum Elements of a Matrix Times Itself: Vector
       11.2.5.f: Check Gradient of Determinant of 3 by 3 matrix: Vector
       11.2.4.f: Check Determinant of 3 by 3 matrix: Vector
       11.2.3.g: Determinant Using Expansion by Minors: Vector
       11.2.1.g: Determinant Using Expansion by Lu Factorization: Vector
       10.5: Using The CppAD Test Vector Template Class
       10.d: Examples: The CppAD Test Vector Template Class
       8.24: Returns Indices that Sort a Vector
       8.22.1: CppAD::vector Template Class: Example and Test
       8.22: The CppAD::vector Template Class
       8.21.u: An Error Controller for Gear's Ode Solvers: Vector
       8.20.k: An Arbitrary Order Gear Method: Vector
       8.19.t: An Error Controller for ODE Solvers: Vector
       8.18.l: A 3rd and 4th Order Rosenbrock ODE Solver: Vector
       8.17.m: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: Vector
       8.13.i: Evaluate a Polynomial or its Derivative: Vector
       8.11.h: Obtain Nan or Determine if a Value is Nan: Vector
       8.10: Check Simple Vector Concept
       8.9.1: Simple Vector Template Class: Example and Test
       8.9: Definition of a Simple Vector
       8.3.e: Run One Speed Test and Return Results: Vector
       8.d.b: Some General Purpose Utilities: Miscellaneous.The CppAD Vector Template Class
       5.9.j: Check an ADFun Sequence of Operations: Vector
       5.8.9.f: abs_normal: Solve a Quadratic Program With Box Constraints: Vector
       5.8.8.f: Solve a Quadratic Program Using Interior Point Method: Vector
       5.8.5.e: abs_normal: Solve a Linear Program With Box Constraints: Vector
       5.8.4.e: abs_normal: Solve a Linear Program Using Simplex Method: Vector
       5.8.3.e: abs_normal: Evaluate First Order Approximation: Vector
       5.8.2: abs_normal: Print a Vector or Matrix
       5.4.3.j: Any Order Reverse Mode: Vector
       5.4.2.h: Second Order Reverse Mode: Vector
       5.4.1.g: First Order Reverse Mode: Vector
       5.3.5.n: Multiple Directions Forward Mode: Vector
       5.3.4.l: Multiple Order Forward Mode: Vector
       5.3.3.h: Second Order Forward Mode: Derivative Values: Vector
       5.3.2.f: First Order Forward Mode: Derivative Values: Vector
       5.3.1.g: Zero Order Forward Mode: Function Values: Vector
       5.2.4.g: First Order Derivative: Driver Routine: Vector
       5.2.3.g: First Order Partial Derivative: Driver Routine: Vector
       5.2.2.h: Hessian: Easy Driver: Vector
       5.2.1.f: Jacobian: Driver Routine: Vector
       4.4.7.2.e.c: User Defined Atomic AD Functions: Examples.Vector Range
       2.2.7: Choosing the CppAD Test Vector Template Class
       2.2.3.d: Including the Eigen Examples and Tests: Test Vector
       2.2: Using CMake to Configure CppAD
vector_size 5.10.f.a: Check an ADFun Object For Nan Results: Error Message.vector_size
vectorad 5.1.2.e: Construct an ADFun Object and Stop Recording: VectorAD
         5.1.1.g: Declare Independent Variables and Start Recording: VectorAD
vectorBool 8.22.2: CppAD::vectorBool Class: Example and Test
vectorbase 5.6.4.k: Sparse Hessian: VectorBase
           5.6.2.j: Sparse Jacobian: VectorBase
           5.2.6.h: Reverse Mode Second Partial Derivative Driver: VectorBase
           5.2.5.h: Forward Mode Second Partial Derivative Driver: VectorBase
vectorbool 8.22.m: The CppAD::vector Template Class: vectorBool
vectors 12.4.j.a: Glossary: Sparsity Pattern.Row and Column Index Vectors
        10.5.g: Using The CppAD Test Vector Template Class: Eigen Vectors
        4.6.1: AD Vectors that Record Index Operations: Example and Test
        4.6: AD Vectors that Record Index Operations
vectorset 5.6.4.l: Sparse Hessian: VectorSet
          5.6.2.k: Sparse Jacobian: VectorSet
          5.5.8.h: Hessian Sparsity Pattern: Forward Mode: VectorSet
          5.5.6.j: Hessian Sparsity Pattern: Reverse Mode: VectorSet
          5.5.4.j: Jacobian Sparsity Pattern: Reverse Mode: VectorSet
          5.5.2.j: Jacobian Sparsity Pattern: Forward Mode: VectorSet
vectorsize 5.6.4.m: Sparse Hessian: VectorSize
           5.6.2.l: Sparse Jacobian: VectorSize
vectorsize_t 5.2.6.i: Reverse Mode Second Partial Derivative Driver: VectorSize_t
             5.2.5.i: Forward Mode Second Partial Derivative Driver: VectorSize_t
verification 3.2.7.k: exp_eps: Second Order Reverse Sweep: Verification
             3.2.6.f: exp_eps: Second Order Forward Mode: Verification
             3.2.5.k: exp_eps: First Order Reverse Sweep: Verification
             3.2.4.e: exp_eps: First Order Forward Sweep: Verification
             3.2.3.e: exp_eps: Operation Sequence and Zero Order Forward Sweep: Verification
             3.1.7.h: exp_2: Second Order Reverse Mode: Verification
             3.1.6.f: exp_2: Second Order Forward Mode: Verification
             3.1.5.h: exp_2: First Order Reverse Mode: Verification
             3.1.4.f: exp_2: First Order Forward Mode: Verification
             3.1.3.e: exp_2: Operation Sequence and Zero Order Forward Mode: Verification
verify 3.2.7.1: exp_eps: Verify Second Order Reverse Sweep
       3.2.6.1: exp_eps: Verify Second Order Forward Sweep
       3.2.5.1: exp_eps: Verify First Order Reverse Sweep
       3.2.4.1: exp_eps: Verify First Order Forward Sweep
       3.2.3.1: exp_eps: Verify Zero Order Forward Sweep
       3.1.7.1: exp_2: Verify Second Order Reverse Sweep
       3.1.6.1: exp_2: Verify Second Order Forward Sweep
       3.1.5.1: exp_2: Verify First Order Reverse Sweep
       3.1.4.1: exp_2: Verify First Order Forward Sweep
       3.1.3.1: exp_2: Verify Zero Order Forward Sweep
version 11.1.8: Microsoft Version of Elapsed Number of Seconds
        4.4.7.2.3: Using AD Version of Atomic Function
        4.4.5.i: Discrete AD Functions: Create AD Version
        2.1.c: Download The CppAD Source Code: Version
        : cppad-20171217: A Package for Differentiation of C++ Algorithms
version) 11.3.3: CppAD Speed: Matrix Multiplication (Double Version)
versions 2.1.h: Download The CppAD Source Code: Monthly Versions
virtual 4.4.7.2.4: Atomic Forward Mode
        4.4.7.2.c: User Defined Atomic AD Functions: Virtual Functions
vx 12.8.11.n.b: User Defined Atomic AD Functions: forward.vx
   4.4.7.2.9.d.a: Atomic Reverse Hessian Sparsity Patterns: Implementation.vx
   4.4.7.2.8.d.a: Atomic Forward Hessian Sparsity Patterns: Implementation.vx
   4.4.7.2.4.f: Atomic Forward Mode: vx
vy 12.8.11.n.c: User Defined Atomic AD Functions: forward.vy
   4.4.7.2.4.g: Atomic Forward Mode: vy
W
w 5.6.4.e: Sparse Hessian: w
  5.6.3.g: Computing Sparse Hessians: w
  5.4.3.f: Any Order Reverse Mode: w
  5.4.2.f: Second Order Reverse Mode: w
  5.4.2.d: Second Order Reverse Mode: W
  5.4.1.e: First Order Reverse Mode: w
  5.2.2.f: Hessian: Easy Driver: w
w(u) 5.4.3.c.e: Any Order Reverse Mode: Notation.W(u)
w^(k) 5.4.3.c.d: Any Order Reverse Mode: Notation.w^(k)
warning 8.21.f.e: An Error Controller for Gear's Ode Solvers: Fun.Warning
        8.20.d.e: An Arbitrary Order Gear Method: Fun.Warning
        8.18.e.g: A 3rd and 4th Order Rosenbrock ODE Solver: Fun.Warning
        8.17.f.d: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: Fun.Warning
        4.7.c: AD<Base> Requirements for a CppAD Base Type: API Warning
warnings 10.6: Suppress Suspect Implicit Conversion Warnings
was 8.23.14: Free All Memory That Was Allocated for Use by thread_alloc
when 8.23.9: Control When Thread Alloc Retains Memory For Future Use
where 4.7.9.6: Enable use of AD<Base> where Base is std::complex<double>
      4.7.9.5: Enable use of AD<Base> where Base is double
      4.7.9.4: Enable use of AD<Base> where Base is float
      4.7.9.3: Enable use of AD<Base> where Base is Adolc's adouble Type
      4.7.9.1: Example AD<Base> Where Base Constructor Allocates Memory
width 8.6.d: Object that Runs a Group of Tests: width
windows 12.8.9.d: Choosing The Vector Testing Template Class: MS Windows
        2.3.b.a: Checking the CppAD Examples and Tests: Check All.Windows
        2.1.i: Download The CppAD Source Code: Windows File Extraction and Testing
wish 12.8.10.v: Nonlinear Programming Using the CppAD Interface to Ipopt: Wish List
     12.6: The CppAD Wish List
with 12.8.12: zdouble: An AD Base Type With Absolute Zero
     10.3.3.1: Lu Factor and Solve With Recorded Pivoting: Example and Test
     10.3.3: Lu Factor and Solve with Recorded Pivoting
     10.2.10.2: Computing a Jacobian With Constants that Change
     10.2.4: Enable Use of Eigen Linear Algebra Package with CppAD
     8.14.1.1: LuSolve With Complex Arguments: Example and Test
     5.8.9: abs_normal: Solve a Quadratic Program With Box Constraints
     5.8.5: abs_normal: Solve a Linear Program With Box Constraints
     4.7.9.3.1: Using Adolc with Multiple Levels of Taping: Example and Test
     4.5.2.1: Compare AD with Base Objects: Example and Test
     4.5.2: Compare AD and Base Objects for Nearly Equal
     4.5: Bool Valued Operations and Functions with AD Arguments
     4.4.7.2.14: Atomic Sparsity with Set Patterns: Example and Test
     4.4.7.2.11: Getting Started with Atomic Operations: Example and Test
     4.4.7.2.9.1.j: Atomic Reverse Hessian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
     4.4.7.2.8.1.j: Atomic Forward Hessian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
     4.4.7.2.7.1.h: Atomic Reverse Jacobian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
     4.4.7.2.6.1.h: Atomic Forward Jacobian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
     4.4.5.3: Interpolation With Retaping: Example and Test
     4.4.5.2: Interpolation Without Retaping: Example and Test
work 7.2.10.3: Do One Thread's Work for Multi-Threaded Newton Method
     7.2.8.3: Do One Thread's Work for Sum of 1/i
     5.6.4.i: Sparse Hessian: work
     5.6.4.f.b: Sparse Hessian: p.work
     5.6.3.k: Computing Sparse Hessians: work
     5.6.2.h: Sparse Jacobian: work
     5.6.1.m: Computing Sparse Jacobians: work
worker 7.2.9.4: Multi-Threaded User Atomic Worker
write 4.3.4: AD Output Stream Operator
X
x 12.10.2.f: Jacobian and Hessian of Optimal Values: x
  12.10.1.e: Computing Jacobian and Hessian of Bender's Reduced Objective: x
  12.8.10.t.b: Nonlinear Programming Using the CppAD Interface to Ipopt: solution.x
  11.2.10.f: Simulate a [0,1] Uniform Random Variate: x
  11.2.9.g: Evaluate a Function That Has a Sparse Hessian: x
  11.2.8.h: Evaluate a Function That Has a Sparse Jacobian: x
  11.2.7.e: Evaluate a Function Defined in Terms of an ODE: x
  11.2.6.e: Sum Elements of a Matrix Times Itself: x
  11.2.5.d: Check Gradient of Determinant of 3 by 3 matrix: x
  11.2.4.d: Check Determinant of 3 by 3 matrix: x
  11.1.7.h: Speed Testing Sparse Jacobian: x
  11.1.6.e: Speed Testing Sparse Hessian: x
  11.1.4.h: Speed Testing the Jacobian of Ode Solution: x
  11.1.3.f: Speed Testing Derivative of Matrix Multiply: x
  9.m.b: Use Ipopt to Solve a Nonlinear Programming Problem: solution.x
  9.l.b: Use Ipopt to Solve a Nonlinear Programming Problem: fg_eval.x
  8.21.f.b: An Error Controller for Gear's Ode Solvers: Fun.x
  8.20.h: An Arbitrary Order Gear Method: X
  8.20.d.b: An Arbitrary Order Gear Method: Fun.x
  8.18.e.b: A 3rd and 4th Order Rosenbrock ODE Solver: Fun.x
  8.17.f.b: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: Fun.x
  8.14.3.h: Invert an LU Factored Equation: X
  8.14.1.k: Compute Determinant and Solve Linear Equations: X
  8.12.e: The Integer Power Function: x
  8.10.c: Check Simple Vector Concept: x, y
  8.2.c: Determine if Two Values Are Nearly Equal: x
  5.9.f: Check an ADFun Sequence of Operations: x
  5.9.d.a: Check an ADFun Sequence of Operations: g.x
  5.6.5.h: Compute Sparse Jacobians Using Subgraphs: x
  5.6.4.d: Sparse Hessian: x
  5.6.3.f: Computing Sparse Hessians: x
  5.6.2.d: Sparse Jacobian: x
  5.6.1.i: Computing Sparse Jacobians: x
  5.5.8.d: Hessian Sparsity Pattern: Forward Mode: x
  5.5.7.c: Forward Mode Hessian Sparsity Patterns: x
  5.5.6.d: Hessian Sparsity Pattern: Reverse Mode: x
  5.5.5.c: Reverse Mode Hessian Sparsity Patterns: x
  5.5.4.d: Jacobian Sparsity Pattern: Reverse Mode: x
  5.5.3.c: Reverse Mode Jacobian Sparsity Patterns: x
  5.5.2.d: Jacobian Sparsity Pattern: Forward Mode: x
  5.5.1.c: Forward Mode Jacobian Sparsity Patterns: x
  5.4.1.d: First Order Reverse Mode: x
  5.2.6.d: Reverse Mode Second Partial Derivative Driver: x
  5.2.5.d: Forward Mode Second Partial Derivative Driver: x
  5.2.4.d: First Order Derivative: Driver Routine: x
  5.2.3.d: First Order Partial Derivative: Driver Routine: x
  5.2.2.d: Hessian: Easy Driver: x
  5.2.1.d: Jacobian: Driver Routine: x
  5.1.3.d: Stop Recording and Store Operation Sequence: x
  5.1.2.c: Construct an ADFun Object and Stop Recording: x
  5.1.1.e: Declare Independent Variables and Start Recording: x
  4.7.8.d: Base Type Requirements for Hash Coding Values: x
  4.6.i.a: AD Vectors that Record Index Operations: AD Indexing.x
  4.5.5.d: Check if Two Values are Identically Equal: x
  4.5.4.c: Is an AD Object a Parameter or Variable: x
  4.5.3.e: AD Boolean Functions: x
  4.5.2.c: Compare AD and Base Objects for Nearly Equal: x
  4.5.1.d: AD Binary Comparison Operators: x
  4.4.7.2.15.k.h: Tan and Tanh as User Atomic Operations: Example and Test: Use Atomic Function.Large x Values
  4.4.7.2.9.e.b: Atomic Reverse Hessian Sparsity Patterns: u.x
  4.4.7.2.8.d.e: Atomic Forward Hessian Sparsity Patterns: Implementation.x
  4.4.7.2.7.d.d: Atomic Reverse Jacobian Sparsity Patterns: Implementation.x
  4.4.7.2.6.d.d: Atomic Forward Jacobian Sparsity Patterns: Implementation.x
  4.4.5.e: Discrete AD Functions: x
  4.4.3.2.d: The AD Power Function: x
  4.4.3.1.d: AD Two Argument Inverse Tangent Function: x
  4.4.2.21.c: The Sign: sign: x, y
  4.4.2.20.c: The Logarithm of One Plus Argument: log1p: x, y
  4.4.2.19.c: The Exponential Function Minus One: expm1: x, y
  4.4.2.18.c: The Error Function: x, y
  4.4.2.17.c: The Inverse Hyperbolic Tangent Function: atanh: x, y
  4.4.2.16.c: The Inverse Hyperbolic Sine Function: asinh: x, y
  4.4.2.15.c: The Inverse Hyperbolic Cosine Function: acosh: x, y
  4.4.2.14.b: AD Absolute Value Functions: abs, fabs: x, y
  4.4.2.13.b: The Hyperbolic Tangent Function: tanh: x, y
  4.4.2.12.b: The Tangent Function: tan: x, y
  4.4.2.11.b: The Square Root Function: sqrt: x, y
  4.4.2.10.b: The Hyperbolic Sine Function: sinh: x, y
  4.4.2.9.b: The Sine Function: sin: x, y
  4.4.2.8.b: The Base 10 Logarithm Function: log10: x, y
  4.4.2.7.b: The Logarithm Function: log: x, y
  4.4.2.6.b: The Exponential Function: exp: x, y
  4.4.2.5.b: The Hyperbolic Cosine Function: cosh: x, y
  4.4.2.4.b: The Cosine Function: cos: x, y
  4.4.2.3.b: Inverse Tangent Function: atan: x, y
  4.4.2.2.b: Inverse Sine Function: asin: x, y
  4.4.2.1.b: Inverse Cosine Function: acos: x, y
  4.4.1.4.e: AD Compound Assignment Operators: x
  4.4.1.3.e: AD Binary Arithmetic Operators: x
  4.4.1.2.d: AD Unary Minus Operator: x
  4.4.1.1.c: AD Unary Plus Operator: x
  4.3.7.d: Convert an AD Variable to a Parameter: x
  4.3.5.e: AD Output Stream Operator: x
  4.3.4.d: AD Output Stream Operator: x
  4.3.2.d: Convert From AD to Integer: x
  4.3.1.d: Convert From an AD Type to its Base Type: x
  4.2.c: AD Assignment Operator: x
  4.1.c: AD Constructors: x
  3.2.e: An Epsilon Accurate Exponential Approximation: x
  3.1.e: Second Order Exponential Approximation: x
x(t, 5.4.3.c.b: Any Order Reverse Mode: Notation.X(t, u)
x(t) 5.3.5.k: Multiple Directions Forward Mode: X(t)
     5.3.4.i: Multiple Order Forward Mode: X(t)
x) 4.3.6.c: Printing AD Values During Forward Mode: f.Forward(0, x)
x0 5.3.3.d: Second Order Forward Mode: Derivative Values: x0
   5.3.2.d: First Order Forward Mode: Derivative Values: x0
   5.3.1.d: Zero Order Forward Mode: Function Values: x0
x1 5.3.3.e: Second Order Forward Mode: Derivative Values: x1
   5.3.2.e: First Order Forward Mode: Derivative Values: x1
x2 5.3.3.f: Second Order Forward Mode: Derivative Values: x2
x^(k) 5.4.2.c: Second Order Reverse Mode: x^(k)
x_1 4.4.7.2.9.1.j: Atomic Reverse Hessian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
    4.4.7.2.8.1.j: Atomic Forward Hessian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
    4.4.7.2.7.1.h: Atomic Reverse Jacobian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
    4.4.7.2.6.1.h: Atomic Forward Jacobian Sparsity: Example and Test: Test with x_1 Both a Variable and a Parameter
x_i 12.8.10.n: Nonlinear Programming Using the CppAD Interface to Ipopt: x_i
x_in 5.8.11.n: Non-Smooth Optimization Using Abs-normal Quadratic Approximations: x_in
     5.8.7.n: Non-Smooth Optimization Using Abs-normal Linear Approximations: x_in
x_l 12.8.10.o: Nonlinear Programming Using the CppAD Interface to Ipopt: x_l
x_out 5.8.11.o: Non-Smooth Optimization Using Abs-normal Quadratic Approximations: x_out
      5.8.7.o: Non-Smooth Optimization Using Abs-normal Linear Approximations: x_out
x_u 12.8.10.p: Nonlinear Programming Using the CppAD Interface to Ipopt: x_u
xf 8.21.e: An Error Controller for Gear's Ode Solvers: xf
   8.19.e: An Error Controller for ODE Solvers: xf
   8.18.d: A 3rd and 4th Order Rosenbrock ODE Solver: xf
   8.17.e: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: xf
xi 9.g: Use Ipopt to Solve a Nonlinear Programming Problem: xi
   8.21.j: An Error Controller for Gear's Ode Solvers: xi
   8.19.i: An Error Controller for ODE Solvers: xi
   8.18.i: A 3rd and 4th Order Rosenbrock ODE Solver: xi
   8.17.j: An Embedded 4th and 5th Order Runge-Kutta ODE Solver: xi
xin 5.8.9.p: abs_normal: Solve a Quadratic Program With Box Constraints: xin
    5.8.8.n: Solve a Quadratic Program Using Interior Point Method: xin
xl 9.h: Use Ipopt to Solve a Nonlinear Programming Problem: xl
xlow 7.2.10.5.i: A Multi-Threaded Newton's Method: xlow
     7.2.10.2.e: Set Up Multi-Threaded Newton Method: xlow
xout 7.2.10.5.f: A Multi-Threaded Newton's Method: xout
     7.2.10.4.d: Take Down Multi-threaded Newton Method: xout
     5.8.9.q: abs_normal: Solve a Quadratic Program With Box Constraints: xout
     5.8.8.o: Solve a Quadratic Program Using Interior Point Method: xout
     5.8.5.l: abs_normal: Solve a Linear Program With Box Constraints: xout
     5.8.4.k: abs_normal: Solve a Linear Program Using Simplex Method: xout
xq 5.3.5.h: Multiple Directions Forward Mode: xq
   5.3.4.g: Multiple Order Forward Mode: xq
xu 9.i: Use Ipopt to Solve a Nonlinear Programming Problem: xu
xup 7.2.10.5.j: A Multi-Threaded Newton's Method: xup
    7.2.10.2.f: Set Up Multi-Threaded Newton Method: xup
Y
y(t, 5.4.3.c.c: Any Order Reverse Mode: Notation.Y(t, u)
y(t) 12.3.2.8.b: Tangent and Hyperbolic Tangent Reverse Mode Theory: Eliminating Y(t)
     5.3.5.l: Multiple Directions Forward Mode: Y(t)
     5.3.4.j: Multiple Order Forward Mode: Y(t)
y(x, 5.8.1.d.b: Create An Abs-normal Representation of a Function: g.y(x, u)
y0 5.3.1.f: Zero Order Forward Mode: Function Values: y0
y2 5.3.3.g: Second Order Forward Mode: Derivative Values: y2
y_initial 7.2.9.1.c.b: Defines a User Atomic Operation that Computes Square Root: au.y_initial
y_squared 7.2.9.6.c: Run Multi-Threaded User Atomic Calculation: y_squared
          7.2.9.3.d: Multi-Threaded User Atomic Set Up: y_squared
          7.2.9.1.c.c: Defines a User Atomic Operation that Computes Square Root: au.y_squared
year 12.7.b: Changes and Additions to CppAD: This Year
years 12.7.c: Changes and Additions to CppAD: Previous Years
your 12.12: Your License for the CppAD Software
     10.2.1: Creating Your Own Interface to an ADFun Object
yout 5.8.8.p: Solve a Quadratic Program Using Interior Point Method: yout
yq 5.3.5.m: Multiple Directions Forward Mode: yq
   5.3.4.k: Multiple Order Forward Mode: yq
Z
z 11.2.6.g: Sum Elements of a Matrix Times Itself: z
  11.1.5.h: Speed Testing Second Derivative of a Polynomial: z
  11.1.3.g: Speed Testing Derivative of Matrix Multiply: z
  8.13.f: Evaluate a Polynomial or its Derivative: z
  8.12.g: The Integer Power Function: z
  8.11.f.c: Obtain Nan or Determine if a Value is Nan: nan(zero).z
  4.4.3.2.f: The AD Power Function: z
  4.4.1.3.g: AD Binary Arithmetic Operators: z
z(t) 12.3.2.9.c: Error Function Reverse Mode Theory: Order Zero Z(t)
     12.3.2.9.b: Error Function Reverse Mode Theory: Positive Orders Z(t)
     12.3.2.8.d: Tangent and Hyperbolic Tangent Reverse Mode Theory: Order Zero Z(t)
     12.3.2.8.c: Tangent and Hyperbolic Tangent Reverse Mode Theory: Positive Orders Z(t)
z(x, 5.8.1.d.a: Create An Abs-normal Representation of a Function: g.z(x, u)
z_l 12.8.10.t.c: Nonlinear Programming Using the CppAD Interface to Ipopt: solution.z_l
z_u 12.8.10.t.d: Nonlinear Programming Using the CppAD Interface to Ipopt: solution.z_u
zdouble: 12.8.12.1: zdouble: Example and Test
         12.8.12: zdouble: An AD Base Type With Absolute Zero
zero 12.8.12.b: zdouble: An AD Base Type With Absolute Zero: Absolute Zero
     12.8.12: zdouble: An AD Base Type With Absolute Zero
     12.8.3: Comparison Changes During Zero Order Forward Mode
     12.3.2.9.c: Error Function Reverse Mode Theory: Order Zero Z(t)
     12.3.2.8.d: Tangent and Hyperbolic Tangent Reverse Mode Theory: Order Zero Z(t)
     5.3.7: Comparison Changes Between Taping and Zero Order Forward
     5.3.5.i: Multiple Directions Forward Mode: Zero Order
     5.3.4.m: Multiple Order Forward Mode: Zero Order
     5.3.1: Zero Order Forward Mode: Function Values
     4.7.i: AD<Base> Requirements for a CppAD Base Type: Absolute Zero, azmul
     4.4.3.3.1: AD Absolute Zero Multiplication: Example and Test
     4.4.3.3: Absolute Zero Multiplication
     4.3.6.2: Print During Zero Order Forward Mode: Example and Test
     3.2.3.1: exp_eps: Verify Zero Order Forward Sweep
     3.2.6.d.b: exp_eps: Second Order Forward Mode: Operation Sequence.Zero
     3.2.4.c.c: exp_eps: First Order Forward Sweep: Operation Sequence.Zero Order
     3.2.3.b.f: exp_eps: Operation Sequence and Zero Order Forward Sweep: Operation Sequence.Zero Order
     3.2.3: exp_eps: Operation Sequence and Zero Order Forward Sweep
     3.1.3.1: exp_2: Verify Zero Order Forward Sweep
     3.1.6.d.b: exp_2: Second Order Forward Mode: Operation Sequence.Zero
     3.1.4.d.c: exp_2: First Order Forward Mode: Operation Sequence.Zero Order
     3.1.3.c.d: exp_2: Operation Sequence and Zero Order Forward Mode: Operation Sequence.Zero Order
     3.1.3.b: exp_2: Operation Sequence and Zero Order Forward Mode: Zero Order Expansion
     3.1.3: exp_2: Operation Sequence and Zero Order Forward Mode
zeta 5.8.1.c.a: Create An Abs-normal Representation of a Function: a.zeta
zl 9.m.c: Use Ipopt to Solve a Nonlinear Programming Problem: solution.zl
zu 9.m.d: Use Ipopt to Solve a Nonlinear Programming Problem: solution.zu

15: External Internet References
Reference Location
_printable.htm: CppAD
_printable.xml: CppAD
cppad.htm: CppAD
cppad.xml: CppAD
http://cscapes.cs.purdue.edu/dox/ColPack/html 2.2.2.a: colpack_prefix#Purpose
http://cscapes.cs.purdue.edu/dox/ColPack/html/ 2.2.2.5.b: get_colpack.sh#Purpose
http://cygwin.com/setup.html#naming 12.7.12.t: whats_new_06#11-30
http://eigen.tuxfamily.org 2.2.3.a: eigen_prefix#Purpose
http://eigen.tuxfamily.org 2.2.3.1.b: get_eigen.sh#Purpose
http://eigen.tuxfamily.org 12.7.5.ag: whats_new_13#04-28
http://eigen.tuxfamily.org 12.7.6.ar: whats_new_12#06-16
http://eigen.tuxfamily.org 12.8.13.p: autotools#eigen_dir
http://eigen.tuxfamily.org/dox/TopicCustomizingEigen.html 12.7.6.ap: whats_new_12#07-01
http://en.wikipedia.org/wiki/Automatic_differentiation b: CppAD#Introduction
http://en.wikipedia.org/wiki/Automatic_differentiation b: CppAD#Introduction
http://list.coin-or.org/pipermail/cppad/2006-February/000020.html 12.7.12.cs: whats_new_06#02-21
http://list.coin-or.org/pipermail/cppad/2006q4/000076.html 12.7.12.o: whats_new_06#12-07
http://list.coin-or.org/pipermail/cppad/2010q2/000166.html 12.7.8.i: whats_new_10#06-01
http://lists.fedoraproject.org/pipermail/devel/2011-January/147915.html 12.7.7.bm: whats_new_11#01-19
http://moby.ihme.washington.edu/bradbell/cppad_mixed 12.11.a: addon#Name
http://msdn.microsoft.com/en-us/library/bh44f2cb(v=vs.71).aspx 12.7.7.ay: whats_new_11#04-29
http://people.freedesktop.org/~dbn/pkg-config-guide.html 2.4.a: pkgconfig#Purpose
http://projects.coin-or.org/CppAD/browser 12.7.13.m: whats_new_05#12-05
http://pubs.opengroup.org/onlinepubs/009695399/utilities/xcu_chap02.html 12.7.6.bt: whats_new_12#03-17
http://subversion.tigris.org/ 2.1.g.b: download#Source Code Control.Subversion
http://trilinos.sandia.gov/packages/sacado 2.2.6.a: sacado_prefix#Purpose
http://trilinos.sandia.gov/packages/sacado 2.2.6.1.b: get_sacado.sh#Purpose
http://trilinos.sandia.gov/packages/sacado 12.7.5.ag: whats_new_13#04-28
http://trilinos.sandia.gov/packages/sacado/ b: CppAD#Introduction
http://trilinos.sandia.gov/packages/sacado/ 11.a: speed#Purpose
http://trilinos.sandia.gov/packages/sacado/ 12.7.11.v: whats_new_07#10-22
http://trilinos.sandia.gov/packages/sacado/ 12.8.13.s: autotools#sacado_dir
http://valgrind.org/ 12.7.9.ar: whats_new_09#06-06
http://valgrind.org/ 12.7.11.bj: whats_new_07#02-03
http://valgrind.org/ 12.7.12.ap: whats_new_06#08-17
http://www.7-zip.org 2.1.i: download#Windows File Extraction and Testing
http://www.7-zip.org 12.7.14.ae: whats_new_04#09-02
http://www.autodiff.org b: CppAD#Introduction
http://www.autodiff.org b: CppAD#Introduction
http://www.boost.org/development/requirements.html#Guidelines 12.6.o: wish_list#Software Guidelines
http://www.boost.org/doc/libs/1_47_0/doc/html/thread.html 12.7.7.v: whats_new_11#09-06
http://www.boost.org/doc/libs/1_52_0/libs/numeric/ublas/doc/vector.htm 2.2.7.d: cppad_testvector#boost
http://www.boost.org/libs/numeric/ublas/doc/index.htm b: CppAD#Introduction
http://www.cmake.org 12.7.6.v: whats_new_12#11-06
http://www.cmake.org/cmake/help/cmake2.6docs.html#module:FindBoost 2.2.7.d: cppad_testvector#boost
http://www.cmake.org/cmake/help/cmake2.6docs.html#section_Generators 2.2.e: cmake#generator
http://www.cmake.org/cmake/help/install.html 2.2.a: cmake#The CMake Program
http://www.coin-or.org/CppAD/ b: CppAD#Introduction
http://www.coin-or.org/CppAD/ 11.a: speed#Purpose
http://www.coin-or.org/CppAD/ 12.7.13.l: whats_new_05#12-06
http://www.coin-or.org/CppAD/ 12.8.10.c: cppad_ipopt_nlp#Purpose
http://www.coin-or.org/download/source/CppAD/ 2.1.f.a: download#Compressed Archives.Coin
http://www.coin-or.org/download/source/CppAD/ 2.1.h: download#Monthly Versions
http://www.coin-or.org/download/source/CppAD/ 12.7.8.l: whats_new_10#04-24
http://www.coin-or.org/download/source/CppAD/ 12.7.9.an: whats_new_09#06-25
http://www.coin-or.org/foundation.html b: CppAD#Introduction
http://www.coin-or.org/projects/Ipopt.xml 2.2.5.b: ipopt_prefix#ipopt_prefix
http://www.coin-or.org/projects/Ipopt.xml 2.2.5.1.b: get_ipopt.sh#Purpose
http://www.coin-or.org/projects/Ipopt.xml 9.b: ipopt_solve#Purpose
http://www.coin-or.org/projects/Ipopt.xml 12.8.10.c: cppad_ipopt_nlp#Purpose
http://www.coin-or.org/projects/Ipopt.xml 12.8.13.r: autotools#ipopt_dir
http://www.fadbad.com 2.2.4.a: fadbad_prefix#Purpose
http://www.fadbad.com 2.2.4.1.b: get_fadbad.sh#Purpose
http://www.fadbad.com 12.7.5.ai: whats_new_13#04-26
http://www.fadbad.com/ b: CppAD#Introduction
http://www.fadbad.com/ 11.a: speed#Purpose
http://www.fadbad.com/ 12.8.13.q: autotools#fadbad_dir
http://www.microsoft.com/en-us/download/confirmation.aspx?id=44914 2.1.i: download#Windows File Extraction and Testing
http://www.mingw.org 12.7.15.c: whats_new_03#12-22
http://www.opensource.org/licenses/AGPL-3.0 b: CppAD#Introduction
http://www.opensource.org/licenses/EPL-1.0 b: CppAD#Introduction
http://www.rpm.org/max-rpm/ch-rpm-file-format.html 12.7.12.t: whats_new_06#11-30
http://www.seanet.com/~bradbell/cppad_swig 12.7.1.bl: whats_new_17#01-27
http://www.seanet.com/~bradbell/cppad_swig 12.11.a: addon#Name
http://www.seanet.com/~bradbell/omhelp/ 2.1.k: download#Building Documentation
http://www.seanet.com/~bradbell/pycppad/pycppad.htm 12.11.a: addon#Name
http://www.winzip.com 12.7.14.ae: whats_new_04#09-02
https://cran.r-project.org/web/packages/TMB/index.html 12.7.1.k: whats_new_17#11-19
https://cran.r-project.org/web/packages/TMB/index.html 12.7.1.o: whats_new_17#11-08
https://eigen.tuxfamily.org/dox/TopicCustomizing_CustomScalar.html 10.2.4.b: cppad_eigen.hpp#Purpose
https://en.wikipedia.org/wiki/Git_%28software%29 2.1.g.a: download#Source Code Control.Git
https://github.com/coin-or/CppAD/issues 12.1.b: Faq#Bugs
https://github.com/coin-or/CppAD/issues/16 12.7.2.m: whats_new_16#09-29
https://github.com/coin-or/CppAD/releases 2.1.f.b: download#Compressed Archives.Github
https://github.com/joaoleal/CppADCodeGen/ 12.11.a: addon#Name
https://github.com/kaskr/adcomp 12.7.3.bc: whats_new_15#03-06
https://github.com/kaskr/adcomp 12.11.a: addon#Name
https://projects.coin-or.org/ADOL-C b: CppAD#Introduction
https://projects.coin-or.org/ADOL-C 2.2.1.a: adolc_prefix#Purpose
https://projects.coin-or.org/ADOL-C 2.2.1.1.b: get_adolc.sh#Purpose
https://projects.coin-or.org/ADOL-C 11.a: speed#Purpose
https://projects.coin-or.org/ADOL-C 12.7.5.n: whats_new_13#10-14
https://projects.coin-or.org/ADOL-C 12.7.9.ay: whats_new_09#01-18
https://projects.coin-or.org/ADOL-C 12.7.14.cs: whats_new_04#01-22
https://projects.coin-or.org/ADOL-C 12.8.13.n: autotools#adolc_dir
https://www.coin-or.org/CppAD/Doc/doxydoc/html/ 12.7.1.bn: whats_new_17#01-18
https://www.cygwin.com/ 12.7.1.aw: whats_new_17#02-26
mailto:Jean-Pierre.Dussault@Usherbrooke.ca 12.7.13.bz: whats_new_05#02-24
mailto:magister@u.washington.edu 12.7.14.bl: whats_new_04#04-19
mailto:magister@u.washington.edu 12.7.15.y: whats_new_03#10-05