cppad-20090131.0: A Package for Differentiation of C++ Algorithms

a: Syntax
# include <cppad/cppad.hpp>


b: Introduction
We refer to the step-by-step conversion from an algorithm that computes function values to an algorithm that computes derivative values as Algorithmic Differentiation (often referred to as Automatic Differentiation). Given a C++ algorithm that computes function values, CppAD generates an algorithm that computes its derivative values. A brief introduction to Algorithmic Differentiation can be found in wikipedia (http://en.wikipedia.org/wiki/Automatic_differentiation). The web site autodiff.org (http://www.autodiff.org) is dedicated to research about, and promoting the use of, AD.
  1. CppAD (http://www.coin-or.org/CppAD/) uses operator overloading to compute derivatives of algorithms defined in C++. It is distributed by the COIN-OR Foundation (http://www.coin-or.org/foundation.html) with the Common Public License CPL (http://www.opensource.org/licenses/cpl1.0.php) or the GNU General Public License GPL (http://www.opensource.org/licenses/gpl-license.php) . Installation procedures are provided for both 2.1: Unix and 2.2: MS Windows operating systems. Extensive user and developer documentation is included.
  2. An AD of Base 9.4.g.b: operation sequence is stored as a 5: AD function object which can evaluate function values and derivatives. Arbitrary order 5.6.1: forward and 5.6.2: reverse mode derivative calculations can be performed on the operation sequence. Logical comparisons can be included in an operation sequence using AD 4.4.4: conditional expressions . Evaluation of user defined unary 4.4.5: discrete functions can also be included in the sequence of operations; i.e., functions that depend on the 9.4.j.c: independent variables but which have identically zero derivatives (e.g., a step function).
  3. Derivatives of functions that are defined in terms of other derivatives can be computed using multiple levels of AD; see 8.1.11.1: mul_level.cpp for a simple example and 8.1.8: ode_taylor.cpp for a more realistic example. To this end, CppAD can also be used with other AD types; for example see 8.1.9: ode_taylor_adolc.cpp .
  4. A set of programs for doing 9.2: speed comparisons between Adolc (http://www.math.tu-dresden.de/~adol-c/) , CppAD, Fadbad (http://www.imm.dtu.dk/fadbad.html/) , and Sacado (http://trilinos.sandia.gov/packages/sacado/) are included.
  5. Includes a C++ 6: library that is useful for general operator overloaded numerical methods. The 8.4: test_vector template vector class, which is used for extensive testing, can be replaced; for example, you can do your testing with the uBlas (http://www.boost.org/libs/numeric/ublas/doc/index.htm) template vector class.
  6. See 9.8: whats_new for a list of recent extensions and bug fixes.
You can find out about other algorithmic differentiation tools and about algorithmic differentiation in general at the following web sites: wikipedia (http://en.wikipedia.org/wiki/Automatic_differentiation) , autodiff.org (http://www.autodiff.org) .

c: Example
The file 3.1: get_started.cpp contains an example and test of using CppAD to compute the derivative of a polynomial. There are many other 8: examples .

d: Include File
The following include directive
     # include <cppad/cppad.hpp>
includes the CppAD package for the rest of the current compilation unit.

e: Preprocessor Symbols
All the preprocessor symbols used by CppAD begin with either CppAD or CPPAD_; see 7: preprocessor .

f: Namespace
All of the functions and objects defined by CppAD are in the CppAD namespace; for example, you can access the 4: AD types as
     size_t n = 2;
     CppAD::vector< CppAD::AD<Base> > x(n);
You can abbreviate access to one object or function using a command of the form
     using CppAD::AD;
     CppAD::vector< AD<Base> > x(n);
You can abbreviate access to all CppAD objects and functions with a command of the form
     using namespace CppAD;
     vector< AD<Base> > x(n);
If you include other namespaces in a similar manner, this can cause naming conflicts.

g: Contents
_contents: 1: Table of Contents
Install: 2: CppAD Download, Test, and Installation Instructions
Introduction: 3: An Introduction by Example to Algorithmic Differentiation
AD: 4: AD Objects
ADFun: 5: ADFun Objects
library: 6: The CppAD General Purpose Library
preprocessor: 7: Preprocessor Definitions Used by CppAD
Example: 8: Examples
Appendix: 9: Appendix
_reference: 10: Alphabetic Listing of Cross Reference Tags
_index: 11: Keyword Index
_external: 12: External Internet References

% --------------------------------------------------------------------
% Latex macros defined here and used throughout the CppAD documentation
\newcommand{\T}{ {\rm T} }
\newcommand{\R}{ {\bf R} }
\newcommand{\C}{ {\bf C} }
\newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} }
\newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} }
\newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial  {#2}^{#1}} }
\newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }
% --------------------------------------------------------------------

Input File: doc.omh
1: Table of Contents
cppad-20090131.0: A Package for Differentiation of C++ Algorithms: : CppAD
    Table of Contents: 1: _contents
    CppAD Download, Test, and Installation Instructions: 2: Install
        Unix Download, Test and Installation: 2.1: InstallUnix
            Using Subversion To Download Source Code: 2.1.1: subversion
        Windows Download and Test: 2.2: InstallWindows
    An Introduction by Example to Algorithmic Differentiation: 3: Introduction
        A Simple Program Using CppAD to Compute Derivatives: 3.1: get_started.cpp
        Second Order Exponential Approximation: 3.2: exp_2
            exp_2: Implementation: 3.2.1: exp_2.hpp
            exp_2: Test: 3.2.2: exp_2.cpp
            exp_2: Operation Sequence and Zero Order Forward Mode: 3.2.3: exp_2_for0
                exp_2: Verify Zero Order Forward Sweep: 3.2.3.1: exp_2_for0.cpp
            exp_2: First Order Forward Mode: 3.2.4: exp_2_for1
                exp_2: Verify First Order Forward Sweep: 3.2.4.1: exp_2_for1.cpp
            exp_2: First Order Reverse Mode: 3.2.5: exp_2_rev1
                exp_2: Verify First Order Reverse Sweep: 3.2.5.1: exp_2_rev1.cpp
            exp_2: Second Order Forward Mode: 3.2.6: exp_2_for2
                exp_2: Verify Second Order Forward Sweep: 3.2.6.1: exp_2_for2.cpp
            exp_2: Second Order Reverse Mode: 3.2.7: exp_2_rev2
                exp_2: Verify Second Order Reverse Sweep: 3.2.7.1: exp_2_rev2.cpp
            exp_2: CppAD Forward and Reverse Sweeps: 3.2.8: exp_2_cppad
        An Epsilon Accurate Exponential Approximation: 3.3: exp_eps
            exp_eps: Implementation: 3.3.1: exp_eps.hpp
            exp_eps: Test of exp_eps: 3.3.2: exp_eps.cpp
            exp_eps: Operation Sequence and Zero Order Forward Sweep: 3.3.3: exp_eps_for0
                exp_eps: Verify Zero Order Forward Sweep: 3.3.3.1: exp_eps_for0.cpp
            exp_eps: First Order Forward Sweep: 3.3.4: exp_eps_for1
                exp_eps: Verify First Order Forward Sweep: 3.3.4.1: exp_eps_for1.cpp
            exp_eps: First Order Reverse Sweep: 3.3.5: exp_eps_rev1
                exp_eps: Verify First Order Reverse Sweep: 3.3.5.1: exp_eps_rev1.cpp
            exp_eps: Second Order Forward Mode: 3.3.6: exp_eps_for2
                exp_eps: Verify Second Order Forward Sweep: 3.3.6.1: exp_eps_for2.cpp
            exp_eps: Second Order Reverse Sweep: 3.3.7: exp_eps_rev2
                exp_eps: Verify Second Order Reverse Sweep: 3.3.7.1: exp_eps_rev2.cpp
            exp_eps: CppAD Forward and Reverse Sweeps: 3.3.8: exp_eps_cppad
        Run the exp_2 and exp_eps Tests: 3.4: exp_apx_main.cpp
    AD Objects: 4: AD
        AD Default Constructor: 4.1: Default
            Default AD Constructor: Example and Test: 4.1.1: Default.cpp
        AD Copy Constructor and Assignment Operator: 4.2: ad_copy
            AD Copy Constructor: Example and Test: 4.2.1: CopyAD.cpp
            AD Constructor From Base Type: Example and Test: 4.2.2: CopyBase.cpp
            AD Assignment Operator: Example and Test: 4.2.3: Eq.cpp
        Conversion and Printing of AD Objects: 4.3: Convert
            Convert From an AD Type to its Base Type: 4.3.1: Value
                Convert From AD to its Base Type: Example and Test: 4.3.1.1: Value.cpp
            Convert From AD to Integer: 4.3.2: Integer
                Convert From AD to Integer: Example and Test: 4.3.2.1: Integer.cpp
            AD Output Stream Operator: 4.3.3: Output
                AD Output Operator: Example and Test: 4.3.3.1: Output.cpp
            Printing AD Values During Forward Mode: 4.3.4: PrintFor
                Printing During Forward Mode: Example and Test: 4.3.4.1: PrintFor.cpp
            Convert an AD Variable to a Parameter: 4.3.5: Var2Par
                Convert an AD Variable to a Parameter: Example and Test: 4.3.5.1: Var2Par.cpp
        AD Valued Operations and Functions: 4.4: ADValued
            AD Arithmetic Operators and Computed Assignments: 4.4.1: Arithmetic
                AD Unary Plus Operator: 4.4.1.1: UnaryPlus
                    AD Unary Plus Operator: Example and Test: 4.4.1.1.1: UnaryPlus.cpp
                AD Unary Minus Operator: 4.4.1.2: UnaryMinus
                    AD Unary Minus Operator: Example and Test: 4.4.1.2.1: UnaryMinus.cpp
                AD Binary Arithmetic Operators: 4.4.1.3: ad_binary
                    AD Binary Addition: Example and Test: 4.4.1.3.1: Add.cpp
                    AD Binary Subtraction: Example and Test: 4.4.1.3.2: Sub.cpp
                    AD Binary Multiplication: Example and Test: 4.4.1.3.3: Mul.cpp
                    AD Binary Division: Example and Test: 4.4.1.3.4: Div.cpp
                AD Computed Assignment Operators: 4.4.1.4: compute_assign
                    AD Computed Assignment Addition: Example and Test: 4.4.1.4.1: AddEq.cpp
                    AD Computed Assignment Subtraction: Example and Test: 4.4.1.4.2: SubEq.cpp
                    AD Computed Assignment Multiplication: Example and Test: 4.4.1.4.3: MulEq.cpp
                    AD Computed Assignment Division: Example and Test: 4.4.1.4.4: DivEq.cpp
            AD Standard Math Unary Functions: 4.4.2: std_math_ad
                The AD acos Function: Example and Test: 4.4.2.1: Acos.cpp
                The AD asin Function: Example and Test: 4.4.2.2: Asin.cpp
                The AD atan Function: Example and Test: 4.4.2.3: Atan.cpp
                The AD cos Function: Example and Test: 4.4.2.4: Cos.cpp
                The AD cosh Function: Example and Test: 4.4.2.5: Cosh.cpp
                The AD exp Function: Example and Test: 4.4.2.6: Exp.cpp
                The AD log Function: Example and Test: 4.4.2.7: Log.cpp
                The AD log10 Function: Example and Test: 4.4.2.8: Log10.cpp
                The AD sin Function: Example and Test: 4.4.2.9: Sin.cpp
                The AD sinh Function: Example and Test: 4.4.2.10: Sinh.cpp
                The AD sqrt Function: Example and Test: 4.4.2.11: Sqrt.cpp
                The AD tan Function: Example and Test: 4.4.2.12: Tan.cpp
                The AD tanh Function: Example and Test: 4.4.2.13: Tanh.cpp
            Other AD Math Functions: 4.4.3: MathOther
                AD Absolute Value Function: 4.4.3.1: abs
                    AD Absolute Value Function: Example and Test: 4.4.3.1.1: Abs.cpp
                AD Two Argument Inverse Tangent Function: 4.4.3.2: atan2
                    The AD atan2 Function: Example and Test: 4.4.3.2.1: Atan2.cpp
                The AD Error Function: 4.4.3.3: erf
                    The AD erf Function: Example and Test: 4.4.3.3.1: Erf.cpp
                The AD Power Function: 4.4.3.4: pow
                    The AD Power Function: Example and Test: 4.4.3.4.1: Pow.cpp
                    The Pow Integer Exponent: Example and Test: 4.4.3.4.2: pow_int.cpp
            AD Conditional Expressions: 4.4.4: CondExp
                Conditional Expressions: Example and Test: 4.4.4.1: CondExp.cpp
            Discrete AD Functions: 4.4.5: Discrete
                Taping Array Index Operation: Example and Test: 4.4.5.1: TapeIndex.cpp
                Interpolation Without Retaping: Example and Test: 4.4.5.2: interp_onetape.cpp
                Interpolation With Retaping: Example and Test: 4.4.5.3: interp_retape.cpp
        Bool Valued Operations and Functions with AD Arguments: 4.5: BoolValued
            AD Binary Comparison Operators: 4.5.1: Compare
                AD Binary Comparison Operators: Example and Test: 4.5.1.1: Compare.cpp
            Compare AD and Base Objects for Nearly Equal: 4.5.2: NearEqualExt
                Compare AD with Base Objects: Example and Test: 4.5.2.1: NearEqualExt.cpp
            AD Boolean Functions: 4.5.3: BoolFun
                AD Boolean Functions: Example and Test: 4.5.3.1: BoolFun.cpp
            Is an AD Object a Parameter or Variable: 4.5.4: ParVar
                AD Parameter and Variable Functions: Example and Test: 4.5.4.1: ParVar.cpp
            Check if Equal and Correspond to Same Operation Sequence: 4.5.5: EqualOpSeq
                EqualOpSeq: Example and Test: 4.5.5.1: EqualOpSeq.cpp
        AD Vectors that Record Index Operations: 4.6: VecAD
            AD Vectors that Record Index Operations: Example and Test: 4.6.1: VecAD.cpp
        AD<Base> Requirements for Base Type: 4.7: base_require
            Enable use of AD<Base> where Base is std::complex<double>: 4.7.1: base_complex.hpp
                Complex Polynomial: Example and Test: 4.7.1.1: ComplexPoly.cpp
                Not Complex Differentiable: Example and Test: 4.7.1.2: not_complex_ad.cpp
            Enable use of AD<Base> where Base is Adolc's adouble Type: 4.7.2: base_adolc.hpp
                Using Adolc with Multiple Levels of Taping: Example and Test: 4.7.2.1: mul_level_adolc.cpp
    ADFun Objects: 5: ADFun
        Declare Independent Variables and Start Recording: 5.1: Independent
            Independent and ADFun Constructor: Example and Test: 5.1.1: Independent.cpp
        Construct an ADFun Object and Stop Recording: 5.2: FunConstruct
        Stop Recording and Store Operation Sequence: 5.3: Dependent
        Abort Recording of an Operation Sequence: 5.4: abort_recording
            Abort Current Recording: Example and Test: 5.4.1: abort_recording.cpp
        ADFun Sequence Properties: 5.5: SeqProperty
            ADFun Sequence Properties: Example and Test: 5.5.1: SeqProperty.cpp
        Evaluate ADFun Functions, Derivatives, and Sparsity Patterns: 5.6: FunEval
            Forward Mode: 5.6.1: Forward
                Zero Order Forward Mode: Function Values: 5.6.1.1: ForwardZero
                First Order Forward Mode: Derivative Values: 5.6.1.2: ForwardOne
                Any Order Forward Mode: 5.6.1.3: ForwardAny
                Number Taylor Coefficients, Per Variable, Currently Stored: 5.6.1.4: size_taylor
                Comparison Changes During Zero Order Forward Mode: 5.6.1.5: CompareChange
                    CompareChange and Re-Tape: Example and Test: 5.6.1.5.1: CompareChange.cpp
                Controlling taylor_ Coefficients Memory Allocation: 5.6.1.6: capacity_taylor
                Forward Mode: Example and Test: 5.6.1.7: Forward.cpp
            Reverse Mode: 5.6.2: Reverse
                First Order Reverse Mode: 5.6.2.1: reverse_one
                    First Order Reverse Mode: Example and Test: 5.6.2.1.1: reverse_one.cpp
                Second Order Reverse Mode: 5.6.2.2: reverse_two
                    Second Order Reverse Mode: Example and Test: 5.6.2.2.1: reverse_two.cpp
                    Hessian Times Direction: Example and Test: 5.6.2.2.2: HesTimesDir.cpp
                Any Order Reverse Mode: 5.6.2.3: reverse_any
                    Any Order Reverse Mode: Example and Test: 5.6.2.3.1: reverse_any.cpp
            Calculating Sparsity Patterns: 5.6.3: Sparse
                Jacobian Sparsity Pattern: Forward Mode: 5.6.3.1: ForSparseJac
                    Forward Mode Jacobian Sparsity: Example and Test: 5.6.3.1.1: ForSparseJac.cpp
                Jacobian Sparsity Pattern: Reverse Mode: 5.6.3.2: RevSparseJac
                    Reverse Mode Jacobian Sparsity: Example and Test: 5.6.3.2.1: RevSparseJac.cpp
                Hessian Sparsity Pattern: Reverse Mode: 5.6.3.3: RevSparseHes
                    Reverse Mode Hessian Sparsity: Example and Test: 5.6.3.3.1: RevSparseHes.cpp
        First and Second Derivatives: Easy Drivers: 5.7: Drivers
            Jacobian: Driver Routine: 5.7.1: Jacobian
                Jacobian: Example and Test: 5.7.1.1: Jacobian.cpp
            First Order Partial Derivative: Driver Routine: 5.7.2: ForOne
                First Order Partial Driver: Example and Test: 5.7.2.1: ForOne.cpp
            First Order Derivative: Driver Routine: 5.7.3: RevOne
                First Order Derivative Driver: Example and Test: 5.7.3.1: RevOne.cpp
            Hessian: Easy Driver: 5.7.4: Hessian
                Hessian: Example and Test: 5.7.4.1: Hessian.cpp
                Hessian of Lagrangian and ADFun Default Constructor: Example and Test: 5.7.4.2: HesLagrangian.cpp
            Forward Mode Second Partial Derivative Driver: 5.7.5: ForTwo
                Subset of Second Order Partials: Example and Test: 5.7.5.1: ForTwo.cpp
            Reverse Mode Second Partial Derivative Driver: 5.7.6: RevTwo
                Second Partials Reverse Driver: Example and Test: 5.7.6.1: RevTwo.cpp
            Sparse Jacobian: Easy Driver: 5.7.7: sparse_jacobian
                Sparse Jacobian: Example and Test: 5.7.7.1: sparse_jacobian.cpp
            Sparse Hessian: Easy Driver: 5.7.8: sparse_hessian
                Sparse Hessian: Example and Test: 5.7.8.1: sparse_hessian.cpp
        Check an ADFun Sequence of Operations: 5.8: FunCheck
            ADFun Check and Re-Tape: Example and Test: 5.8.1: FunCheck.cpp
        OpenMP Maximum Thread Number: 5.9: omp_max_thread
            Compile and Run the OpenMP Test: 5.9.1: openmp_run.sh
                A Simple Parallel Loop: 5.9.1.1: example_a11c.cpp
                Multi-Threaded Newton's Method Main Program: 5.9.1.2: multi_newton.cpp
                    Multi-Threaded Newton's Method Routine: 5.9.1.2.1: multi_newton
                    OpenMP Multi-Threading Newton's Method Source Code: 5.9.1.2.2: multi_newton.hpp
                Sum of 1/i Main Program: 5.9.1.3: sum_i_inv.cpp
        ADFun Object Deprecated Member Functions: 5.10: FunDeprecated
    The CppAD General Purpose Library: 6: library
        Replacing the CppAD Error Handler: 6.1: ErrorHandler
            Replacing The CppAD Error Handler: Example and Test: 6.1.1: ErrorHandler.cpp
            CppAD Assertions During Execution: 6.1.2: cppad_assert
        Determine if Two Values Are Nearly Equal: 6.2: NearEqual
            NearEqual Function: Example and Test: 6.2.1: Near_Equal.cpp
        Run One Speed Test and Return Results: 6.3: speed_test
            speed_test: Example and test: 6.3.1: speed_test.cpp
        Run One Speed Test and Print Results: 6.4: SpeedTest
            Example Use of SpeedTest: 6.4.1: speed_program.cpp
        Definition of a Numeric Type: 6.5: NumericType
            The NumericType: Example and Test: 6.5.1: NumericType.cpp
        Check NumericType Class Concept: 6.6: CheckNumericType
            The CheckNumericType Function: Example and Test: 6.6.1: CheckNumericType.cpp
        Definition of a Simple Vector: 6.7: SimpleVector
            Simple Vector Template Class: Example and Test: 6.7.1: SimpleVector.cpp
        Check Simple Vector Concept: 6.8: CheckSimpleVector
            The CheckSimpleVector Function: Example and Test: 6.8.1: CheckSimpleVector.cpp
        Obtain Nan and Determine if a Value is Nan: 6.9: nan
            nan: Example and Test: 6.9.1: nan.cpp
        The Integer Power Function: 6.10: pow_int
        Evaluate a Polynomial or its Derivative: 6.11: Poly
            Polynomial Evaluation: Example and Test: 6.11.1: Poly.cpp
            Source: Poly: 6.11.2: poly.hpp
        Compute Determinants and Solve Equations by LU Factorization: 6.12: LuDetAndSolve
            Compute Determinant and Solve Linear Equations: 6.12.1: LuSolve
                LuSolve With Complex Arguments: Example and Test: 6.12.1.1: LuSolve.cpp
                Source: LuSolve: 6.12.1.2: lu_solve.hpp
            LU Factorization of A Square Matrix: 6.12.2: LuFactor
                LuFactor: Example and Test: 6.12.2.1: LuFactor.cpp
                Source: LuFactor: 6.12.2.2: lu_factor.hpp
            Invert an LU Factored Equation: 6.12.3: LuInvert
                LuInvert: Example and Test: 6.12.3.1: LuInvert.cpp
                Source: LuInvert: 6.12.3.2: lu_invert.hpp
        One Dimensional Romberg Integration: 6.13: RombergOne
            One Dimensional Romberg Integration: Example and Test: 6.13.1: RombergOne.cpp
        Multi-dimensional Romberg Integration: 6.14: RombergMul
            Multi-dimensional Romberg Integration: Example and Test: 6.14.1: RombergMul.cpp
        An Embedded 4th and 5th Order Runge-Kutta ODE Solver: 6.15: Runge45
            Runge45: Example and Test: 6.15.1: Runge45.cpp
        A 3rd and 4th Order Rosenbrock ODE Solver: 6.16: Rosen34
            Rosen34: Example and Test: 6.16.1: Rosen34.cpp
        An Error Controller for ODE Solvers: 6.17: OdeErrControl
            OdeErrControl: Example and Test: 6.17.1: OdeErrControl.cpp
            OdeErrControl: Example and Test Using Maxabs Argument: 6.17.2: OdeErrMaxabs.cpp
        An Arbitrary Order Gear Method: 6.18: OdeGear
            OdeGear: Example and Test: 6.18.1: OdeGear.cpp
        An Error Controller for Gear's Ode Solvers: 6.19: OdeGearControl
            OdeGearControl: Example and Test: 6.19.1: OdeGearControl.cpp
        Computing Jacobian and Hessian of Bender's Reduced Objective: 6.20: BenderQuad
            BenderQuad: Example and Test: 6.20.1: BenderQuad.cpp
        LU Factorization of A Square Matrix and Stability Calculation: 6.21: LuRatio
            LuRatio: Example and Test: 6.21.1: LuRatio.cpp
        Float and Double Standard Math Unary Functions: 6.22: std_math_unary
        The CppAD::vector Template Class: 6.23: CppAD_vector
            CppAD::vector Template Class: Example and Test: 6.23.1: CppAD_vector.cpp
            CppAD::vectorBool Class: Example and Test: 6.23.2: vectorBool.cpp
        Routines That Track Use of New and Delete: 6.24: TrackNewDel
            Tracking Use of New and Delete: Example and Test: 6.24.1: TrackNewDel.cpp
    Preprocessor Definitions Used by CppAD: 7: preprocessor
    Examples: 8: Example
        General Examples: 8.1: General
            Nonlinear Programming Using the CppAD Interface to Ipopt: 8.1.1: ipopt_cppad_nlp
                Linking the CppAD Interface to Ipopt in Visual Studio 9.0: 8.1.1.1: ipopt_cppad_windows
                Nonlinear Programming Using CppAD and Ipopt: Example and Test: 8.1.1.2: ipopt_cppad_simple.cpp
                Example Simultaneous Solution of Forward and Inverse Problem: 8.1.1.3: ipopt_cppad_ode
                    An ODE Forward Problem Example: 8.1.1.3.1: ipopt_cppad_ode_forward
                    An ODE Inverse Problem Example: 8.1.1.3.2: ipopt_cppad_ode_inverse
                    Simulating ODE Measurement Values: 8.1.1.3.3: ipopt_cppad_ode_simulate
                    ipopt_cppad_nlp ODE Problem Representation: 8.1.1.3.4: ipopt_cppad_ode_represent
                    ipopt_cppad_nlp ODE Example Source Code: 8.1.1.3.5: ipopt_cppad_ode.cpp
            Interfacing to C: Example and Test: 8.1.2: Interface2C.cpp
            Gradient of Determinant Using Expansion by Minors: Example and Test: 8.1.3: JacMinorDet.cpp
            Gradient of Determinant Using Lu Factorization: Example and Test: 8.1.4: JacLuDet.cpp
            Gradient of Determinant Using Expansion by Minors: Example and Test: 8.1.5: HesMinorDet.cpp
            Gradient of Determinant Using LU Factorization: Example and Test: 8.1.6: HesLuDet.cpp
            A Stiff Ode: Example and Test: 8.1.7: OdeStiff.cpp
            Taylor's Ode Solver: An Example and Test: 8.1.8: ode_taylor.cpp
            Using Adolc with Taylor's Ode Solver: An Example and Test: 8.1.9: ode_taylor_adolc.cpp
            Example Differentiating a Stack Machine Interpreter: 8.1.10: StackMachine.cpp
            Using Multiple Levels of AD: 8.1.11: mul_level
                Multiple Tapes: Example and Test: 8.1.11.1: mul_level.cpp
        Utility Routines used by CppAD Examples: 8.2: ExampleUtility
            Program That Runs the CppAD Examples: 8.2.1: Example.cpp
            Program That Runs the Speed Examples: 8.2.2: speed_example.cpp
            Lu Factor and Solve with Recorded Pivoting: 8.2.3: LuVecAD
                Lu Factor and Solve With Recorded Pivoting: Example and Test: 8.2.3.1: LuVecADOk.cpp
        List of All the CppAD Examples: 8.3: ListAllExamples
        Choosing The Vector Testing Template Class: 8.4: test_vector
    Appendix: 9: Appendix
        Frequently Asked Questions and Answers: 9.1: Faq
        Speed Test Routines: 9.2: speed
            Speed Testing Main Program: 9.2.1: speed_main
                Speed Testing Gradient of Determinant Using Lu Factorization: 9.2.1.1: link_det_lu
                Speed Testing Gradient of Determinant by Minor Expansion: 9.2.1.2: link_det_minor
                Speed Testing Second Derivative of a Polynomial: 9.2.1.3: link_poly
                Speed Testing Sparse Hessian: 9.2.1.4: link_sparse_hessian
                Speed Testing Gradient of Ode Solution: 9.2.1.5: link_ode
            Speed Testing Utilities: 9.2.2: speed_utility
                Simulate a [0,1] Uniform Random Variate: 9.2.2.1: uniform_01
                    Source: uniform_01: 9.2.2.1.1: uniform_01.hpp
                Determinant of a Minor: 9.2.2.2: det_of_minor
                    Determinant of a Minor: Example and Test: 9.2.2.2.1: det_of_minor.cpp
                    Source: det_of_minor: 9.2.2.2.2: det_of_minor.hpp
                Determinant Using Expansion by Minors: 9.2.2.3: det_by_minor
                    Determinant Using Expansion by Minors: Example and Test: 9.2.2.3.1: det_by_minor.cpp
                    Source: det_by_minor: 9.2.2.3.2: det_by_minor.hpp
                Determinant Using Expansion by Lu Factorization: 9.2.2.4: det_by_lu
                    Determinant Using Lu Factorization: Example and Test: 9.2.2.4.1: det_by_lu.cpp
                    Source: det_by_lu: 9.2.2.4.2: det_by_lu.hpp
                Check Determinant of 3 by 3 matrix: 9.2.2.5: det_33
                    Source: det_33: 9.2.2.5.1: det_33.hpp
                Check Gradient of Determinant of 3 by 3 matrix: 9.2.2.6: det_grad_33
                    Source: det_grad_33: 9.2.2.6.1: det_grad_33.hpp
                Evaluate a Function Defined in Terms of an ODE: 9.2.2.7: ode_evaluate
                    ode_evaluate: Example and test: 9.2.2.7.1: ode_evaluate.cpp
                    Source: ode_evaluate: 9.2.2.7.2: ode_evaluate.hpp
                Evaluate a Function That Has a Sparse Hessian: 9.2.2.8: sparse_evaluate
                    sparse_evaluate: Example and test: 9.2.2.8.1: sparse_evaluate.cpp
                    Source: sparse_evaluate: 9.2.2.8.2: sparse_evaluate.hpp
            Speed Test Functions in Double: 9.2.3: speed_double
                Double Speed: Determinant by Minor Expansion: 9.2.3.1: double_det_minor.cpp
                Double Speed: Determinant Using Lu Factorization: 9.2.3.2: double_det_lu.cpp
                Double Speed: Ode Solution: 9.2.3.3: double_ode.cpp
                Double Speed: Evaluate a Polynomial: 9.2.3.4: double_poly.cpp
                Double Speed: Sparse Hessian: 9.2.3.5: double_sparse_hessian.cpp
            Speed Test Derivatives Using Adolc: 9.2.4: speed_adolc
                Adolc Speed: Gradient of Determinant by Minor Expansion: 9.2.4.1: adolc_det_minor.cpp
                Adolc Speed: Gradient of Determinant Using Lu Factorization: 9.2.4.2: adolc_det_lu.cpp
                Adolc Speed: Ode: 9.2.4.3: adolc_ode.cpp
                Adolc Speed: Second Derivative of a Polynomial: 9.2.4.4: adolc_poly.cpp
                Adolc Speed: Sparse Hessian: 9.2.4.5: adolc_sparse_hessian.cpp
            Speed Test Derivatives Using CppAD: 9.2.5: speed_cppad
                CppAD Speed: Gradient of Determinant by Minor Expansion: 9.2.5.1: cppad_det_minor.cpp
                CppAD Speed: Gradient of Determinant Using Lu Factorization: 9.2.5.2: cppad_det_lu.cpp
                CppAD Speed: Gradient of Ode Solution: 9.2.5.3: cppad_ode.cpp
                CppAD Speed: Second Derivative of a Polynomial: 9.2.5.4: cppad_poly.cpp
                CppAD Speed: Sparse Hessian: 9.2.5.5: cppad_sparse_hessian.cpp
            Speed Test Derivatives Using Fadbad: 9.2.6: speed_fadbad
                Fadbad Speed: Gradient of Determinant by Minor Expansion: 9.2.6.1: fadbad_det_minor.cpp
                Fadbad Speed: Gradient of Determinant Using Lu Factorization: 9.2.6.2: fadbad_det_lu.cpp
                Fadbad Speed: Ode: 9.2.6.3: fadbad_ode.cpp
                Fadbad Speed: Second Derivative of a Polynomial: 9.2.6.4: fadbad_poly.cpp
                Fadbad Speed: Sparse Hessian: 9.2.6.5: fadbad_sparse_hessian.cpp
            Speed Test Derivatives Using Sacado: 9.2.7: speed_sacado
                Sacado Speed: Gradient of Determinant by Minor Expansion: 9.2.7.1: sacado_det_minor.cpp
                Sacado Speed: Gradient of Determinant Using Lu Factorization: 9.2.7.2: sacado_det_lu.cpp
                Sacado Speed: Gradient of Ode Solution: 9.2.7.3: sacado_ode.cpp
                Sacado Speed: Second Derivative of a Polynomial: 9.2.7.4: sacado_poly.cpp
                Sacado Speed: Sparse Hessian: 9.2.7.5: sacado_sparse_hessian.cpp
        The Theory of Derivative Calculations: 9.3: Theory
            The Theory of Forward Mode: 9.3.1: ForwardTheory
                Exponential Function Forward Taylor Polynomial Theory: 9.3.1.1: ExpForward
                Logarithm Function Forward Taylor Polynomial Theory: 9.3.1.2: LogForward
                Square Root Function Forward Taylor Polynomial Theory: 9.3.1.3: SqrtForward
                Trigonometric and Hyperbolic Sine and Cosine Forward Theory: 9.3.1.4: SinCosForward
                Arctangent Function Forward Taylor Polynomial Theory: 9.3.1.5: AtanForward
                Arcsine Function Forward Taylor Polynomial Theory: 9.3.1.6: AsinForward
                Arccosine Function Forward Taylor Polynomial Theory: 9.3.1.7: AcosForward
            The Theory of Reverse Mode: 9.3.2: ReverseTheory
                Exponential Function Reverse Mode Theory: 9.3.2.1: ExpReverse
                Logarithm Function Reverse Mode Theory: 9.3.2.2: LogReverse
                Square Root Function Reverse Mode Theory: 9.3.2.3: SqrtReverse
                Trigonometric and Hyperbolic Sine and Cosine Reverse Theory: 9.3.2.4: SinCosReverse
                Arctangent Function Reverse Mode Theory: 9.3.2.5: AtanReverse
                Arcsine Function Reverse Mode Theory: 9.3.2.6: AsinReverse
                Arccosine Function Reverse Mode Theory: 9.3.2.7: AcosReverse
            An Important Reverse Mode Identity: 9.3.3: reverse_identity
        Glossary: 9.4: glossary
        Bibliography: 9.5: Bib
        Known Bugs and Problems Using CppAD: 9.6: Bugs
        The CppAD Wish List: 9.7: WishList
        Changes and Additions to CppAD: 9.8: whats_new
            Changes and Additions to CppAD During 2009: 9.8.1: whats_new_09
            Changes and Additions to CppAD During 2008: 9.8.2: whats_new_08
            Changes and Additions to CppAD During 2007: 9.8.3: whats_new_07
            Changes and Additions to CppAD During 2006: 9.8.4: whats_new_06
            Changes and Additions to CppAD During 2005: 9.8.5: whats_new_05
            Changes and Additions to CppAD During 2004: 9.8.6: whats_new_04
            Changes and Additions to CppAD During 2003: 9.8.7: whats_new_03
        Deprecated Include Files: 9.9: include_deprecated
        Your License for the CppAD Software: 9.10: License
    Alphabetic Listing of Cross Reference Tags: 10: _reference
    Keyword Index: 11: _index
    External Internet References: 12: _external

2: CppAD Download, Test, and Installation Instructions

2.a: Contents
2.1: Unix Download, Test and Installation
2.2: Windows Download and Test

Input File: omh/install.omh
2.1: Unix Download, Test and Installation

2.1.a: Fedora
CppAD is available through yum on the Fedora operating system, starting with Fedora version 7. You can download and install CppAD with the command yum install cppad-devel (in Fedora, devel is used for program development tools). You can download and install the corresponding version of the documentation with the command yum install cppad-doc

2.1.b: RPM
If you want to use the Fedora cppad.spec file to build an RPM for some other operating system, it can be found at
https://projects.coin-or.org/CppAD/browser/trunk/cppad.spec

2.1.c: Download

2.1.c.a: Subversion
If you are familiar with subversion, you may want to follow the more complicated CppAD download instructions; see the following 2.1.1: subversion instructions .

2.1.c.b: Web Link
If you are not using the subversion download instructions, make sure you are reading the web version of this documentation by following the link web version (http://www.coin-or.org/CppAD/Doc/installunix.htm) . Then proceed with the instructions that appear below this point.

2.1.c.c: Unix Tar Files
The download files below were first archived with tar and then compressed with gzip. The ascii files for these downloads are in Unix format; i.e., each line ends with just a line feed.
CPL License    cppad-20090131.0.cpl.tgz
GPL License    cppad-20090131.0.gpl.tgz

2.1.c.d: Tar File Extraction
Use the unix command
     tar -xvzf cppad-20090131.0.license.tgz
(where license is cpl or gpl) to decompress and extract the unix format version into the distribution directory
     cppad-20090131.0
To see if this has been done correctly, check for the following file:
     cppad-20090131.0/cppad/cppad.hpp

2.1.d: Configure
Enter the directory created by the extraction and execute the command:
     ./configure                            \
     --prefix=PrefixDir                     \
     --with-Documentation                   \
     --with-Introduction                    \
     --with-Example                         \
     --with-TestMore                        \
     --with-Speed                           \
     --with-PrintFor                        \
     --with-stdvector                       \
     POSTFIX_DIR=PostfixDir                 \
     ADOLC_DIR=AdolcDir                     \
     FADBAD_DIR=FadbadDir                   \
     SACADO_DIR=SacadoDir                   \
     BOOST_DIR=BoostDir                     \
     IPOPT_DIR=IpoptDir                     \
     CXX_FLAGS=CompilerFlags
where only the configure command need appear. The entries on each of the other lines are optional, and the text in italics is replaced by values that you choose.

2.1.e: Testing Return Status
All of the test programs mentioned below (with the exception of 2.1.l: print_for ) return a status of zero if all correctness tests pass, and one if an error occurs. For example, if 2.1.i: --with-Example is included in the configure command,
 
	if ! example/example
	then
		echo "example failed its test"
		exit 1
	fi
could be used to abort a bash shell script when the test fails.

2.1.f: PrefixDir
The default value for the prefix directory is $HOME; i.e., by default the CppAD include files will 2.1.v: install below $HOME. If you want to install elsewhere, you will have to use this option. As an example of using the --prefix=PrefixDir option, if you specify
 
	./configure --prefix=/usr/local
the CppAD include files will be installed in the directory
     /usr/local/include/cppad
If 2.1.g: --with-Documentation is specified, the CppAD documentation files will be installed in the directory
     /usr/local/share/doc/cppad-20090131.0

2.1.g: --with-Documentation
If the command line argument --with-Documentation is specified, the CppAD documentation HTML and XML files are copied to the directory
     PrefixDir/share/doc/PostfixDir/cppad-20090131.0
(see 2.1.n: PostfixDir ). The top of the CppAD HTML documentation tree (with mathematics displayed using LaTeX) will be located at
     PrefixDir/share/doc/PostfixDir/cppad-20090131.0/cppad.htm
and the top of the XML documentation tree (with mathematics displayed as MathML) will be located at
     PrefixDir/share/doc/PostfixDir/cppad-20090131.0/cppad.xml

2.1.h: --with-Introduction

2.1.h.a: get_started
If the command line argument --with-Introduction is specified, the 3.1: get_started.cpp example will be built. Once the make command has been executed, you can run this example by executing the command
 
	introduction/get_started/get_started


2.1.h.b: exp_apx
If the command line argument --with-Introduction is specified, the 3.4: exp_apx_main.cpp program (verifies calculations in the 3: Introduction exp_apx example) will be built. Once the make command has been executed, you can run these examples by executing the command
 
	introduction/exp_apx/exp_apx


2.1.i: --with-Example
If the command line argument --with-Example is specified, the 8.2.1: Example.cpp program (an extensive set of examples and correctness tests) will be built. Once the make command has been executed, you can run these examples by executing the command
 
	example/example


2.1.j: --with-TestMore
If the command line argument --with-TestMore is specified, another extensive set of correctness tests will be compiled by the 2.1.u: make command. Once the make command has been executed, you can run these tests by executing the command
 
	test_more/test_more


2.1.k: --with-Speed
If the command line argument --with-Speed is specified, a set of speed tests will be built.

2.1.k.a: cppad
After you execute the 2.1.u: make command, you can run the 9.2.1: speed_main program with the command
     speed/cppad/cppad option seed

2.1.k.b: double
After you execute the 2.1.u: make command, you can run the 9.2.1: speed_main program with the command
     speed/double/double option seed

2.1.k.c: profile
The C++ compiler flags used to build the profile speed tests are
 
     AM_CXXFLAGS   = -pg -O2 $(CXX_FLAGS) -DPROFILE
After you execute the 2.1.u: make command, you can run the 9.2.1: speed_main program with the command
     speed/profile/profile option seed
You can then obtain the profiling results with
     gprof -b speed/profile/profile
If you are using a windows operating system with Cygwin or MinGW, you may have to replace profile by profile.exe in the gprof command above; i.e.,
 
	gprof -b speed/profile/profile.exe
In C++, template parameters and argument types become part of a routine's name. This can make the gprof output hard to read (the routine names can be very long). You can remove the template parameters and argument types from the routine names by executing the following command
 
	gprof -b speed/profile/profile | sed -f speed/profile/gprof.sed
If you are using a windows operating system with Cygwin or MinGW, you would need to use
 
	gprof -b speed/profile/profile.exe | sed -f speed/profile/gprof.sed


2.1.k.d: example
After you execute the 2.1.u: make command, you can run the 9.2.2: speed_utility examples with the command
     speed/example/example

2.1.l: --with-PrintFor
If the command line argument --with-PrintFor is specified, the 4.3.4.1: PrintFor.cpp example will be built. Once the make command has been executed, you can run this example by executing the command
 
	print_for/print_for
Unlike the other programs listed in this section, print_for does not automatically check for correctness and return a corresponding 2.1.e: status . Instead, it displays what its output should be.

2.1.m: --with-stdvector
The 8.4: CPPAD_TEST_VECTOR template class is used for extensive examples and testing of CppAD. If the command line argument --with-stdvector is specified, the default definition of this template class is replaced by
 
	std::vector
(In this case BoostDir must not also be specified.)

2.1.n: PostfixDir
By default, the postfix directory is empty; i.e., there is no postfix directory. As an example of using the POSTFIX_DIR=PostfixDir option, if you specify
 
	./configure --prefix=/usr/local POSTFIX_DIR=coin
the CppAD include files will be 2.1.v: installed in the directory
     /usr/local/include/coin/cppad
If 2.1.g: --with-Documentation is specified, the CppAD documentation files will be installed in the directory
     /usr/local/share/doc/coin/cppad-20090131.0

2.1.o: AdolcDir
If you have Adolc 1.10.2 (http://www.math.tu-dresden.de/~adol-c/) installed on your system, you can specify a value for AdolcDir in the 2.1.d: configure command line. The value of AdolcDir must be such that
     AdolcDir/include/adolc/adouble.h
is a valid way to reference adouble.h. If 2.1.k: --with-Speed is also specified, after you execute the 2.1.u: make command, you can run the 9.2.1: speed_main program with the command
     speed/package/package option seed
where package is equal to adolc.

2.1.o.a: Fix Adolc
Some compilers will complain about Adolc's use of a const char* as a char*. This can be fixed by changing the following routine in the file adolc/avector.h:
 
	class ADOLC_DLL_EXPORT err_retu {
	char* message;
	public:
		err_retu(char* x) {
		printf("%s \n",x);
		};
	};
to be as follows
 
	class ADOLC_DLL_EXPORT err_retu {
	char* message;
	public:
		err_retu(const char* x) {
		printf("%s \n",x);
		};
	};


2.1.o.b: Linux
If you are using Linux, you will have to add the following lines to the file .bash_profile in your home directory:
     LD_LIBRARY_PATH=AdolcDir/lib:${LD_LIBRARY_PATH}
     export LD_LIBRARY_PATH
in order for Adolc to run properly.

2.1.o.c: Cygwin
If you are using Cygwin, you will have to add the following lines to the file .bash_profile in your home directory:
     PATH=AdolcDir/bin:${PATH}
     export PATH
in order for Adolc to run properly. If AdolcDir begins with a disk specification, you must use the Cygwin format for the disk specification. For example, if d:/adolc_base is the proper directory, /cygdrive/d/adolc_base should be used for AdolcDir.

2.1.p: FadbadDir
If you have Fadbad 2.1 (http://www.fadbad.com/) installed on your system, you can specify a value for FadbadDir. It must be such that
     FadbadDir/FADBAD++/badiff.h
is a valid reference to badiff.h. If 2.1.k: --with-Speed is also specified, after you execute the 2.1.u: make command, you can run the 9.2.1: speed_main program with the command
     speed/package/package option seed
where package is equal to fadbad.

2.1.q: SacadoDir
If you have Sacado (http://trilinos.sandia.gov/packages/sacado/) installed on your system, you can specify a value for SacadoDir. It must be such that
     SacadoDir/include/Sacado.hpp
is a valid reference to Sacado.hpp. If 2.1.k: --with-Speed is also specified, after you execute the 2.1.u: make command, you can run the 9.2.1: speed_main program with the command
     speed/package/package option seed
where package is equal to sacado.

2.1.r: BoostDir
The 8.4: CPPAD_TEST_VECTOR template class is used for extensive examples and testing of CppAD. The default definition for CPPAD_TEST_VECTOR is 6.23: CppAD::vector . If the command line argument
     BOOST_DIR=BoostDir
is present, it must be such that
     BoostDir/boost/numeric/ublas/vector.hpp
is a valid reference to the file vector.hpp. In this case, the default definition of CPPAD_TEST_VECTOR is replaced by
 
	boost::numeric::ublas::vector
(see boost (http://www.boost.org) ). If BoostDir is present, the argument --with-stdvector must not be present.

2.1.s: IpoptDir
If you have Ipopt (http://www.coin-or.org/projects/Ipopt.xml) installed on your system, you can specify a value for IpoptDir. It must be such that
     IpoptDir/include/IpIpoptApplication.hpp
is a valid reference to IpIpoptApplication.hpp. In this case, 8.1.1.2: ipopt_cppad_simple.cpp will be included in the example testing.

2.1.t: CompilerFlags
If the command line argument CompilerFlags is present, it specifies compiler flags. For example,
     CXX_FLAGS="-Wall -ansi"
would specify that warning flags -Wall and -ansi should be included in all the C++ compile commands. The error and warning flags chosen must be valid options for the C++ compiler. The default value for CompilerFlags is the empty string.

2.1.u: make
The command
 
	make
will compile all of the examples and tests. An extensive set of examples and tests can be run as described under the headings 2.1.h: --with-Introduction , 2.1.i: --with-Example , 2.1.j: --with-TestMore , 2.1.k: --with-Speed , 2.1.l: --with-PrintFor , 2.1.o: AdolcDir , 2.1.p: FadbadDir , 2.1.q: SacadoDir , and 2.1.s: IpoptDir above.

2.1.v: make install
Once you are satisfied that the tests are giving correct results, you can install CppAD into easy to use directories by executing the command
 
	make install
This will install CppAD in the location specified by 2.1.f: PrefixDir . You must have permission to write in the PrefixDir directory to execute this command. You may optionally specify a destination directory for the install; i.e.,
     make install DESTDIR=DestinationDirectory

Input File: omh/install_unix.omh
2.1.1: Using Subversion To Download Source Code

2.1.1.a: File Format
The files corresponding to this download procedure are in Unix format; i.e., each line ends with just a line feed.

2.1.1.b: Subversion
You must have subversion (http://subversion.tigris.org/) installed to use this download procedure. In Unix, you can check if subversion is already installed in your path by entering the command
 
	which svn


2.1.1.c: OMhelp
The documentation for CppAD is built from the source code files using OMhelp (http://www.seanet.com/~bradbell/omhelp/) . In Unix, you can check if OMhelp is already installed in your path by entering the command
 
	which omhelp


2.1.1.d: Current Version
The following command will download the current version of the CppAD source code:
     svn co https://projects.coin-or.org/svn/CppAD/dir dir
where dir is replaced by trunk. To see if this has been done correctly, check for the following file:
     dir/cppad/cppad.hpp
Since you downloaded the current version, you should set the version of CppAD to the current date. Using an editor of your choice, open the file
     dir/configure
(if you plan to use the 2.2: Windows install instructions, edit dir/cppad/config.h instead of dir/configure). Search this file for text of the form yyyymmdd where yyyy are four decimal digits representing a year, mm is two decimal digits representing a month, and dd is two decimal digits representing a day. Replace each occurrence of this text with the decimal digits for the current year, month, and day (i.e., the eight decimal digits representing the current date).

2.1.1.e: Stable Version
Subversion downloads are available for the following stable versions of CppAD:
dir yyyymmdd
1.0    20060913
2.0    20071016
2.1    20071124
2.2    20071210
The following command will download a stable version of the CppAD source code:
     svn co https://projects.coin-or.org/svn/CppAD/stable/dir dir
To see if this has been done correctly, check for the following file: if dir is 1.0 check for
     1.0/CppAD/CppAD.h
otherwise check for
     dir/cppad/cppad.hpp
Since you downloaded a stable version, the version number in the CppAD configure file, and in all the other relevant files, is already correct.

2.1.1.f: Build the Documentation
Now build the documentation for this version using the commands
     mkdir dir/doc
     cd dir/doc
     omhelp ../doc.omh -noframe -debug -l http://www.coin-or.org/CppAD/
     omhelp ../doc.omh -noframe -debug -l http://www.coin-or.org/CppAD/ -xml

2.1.1.g: Continue with Installation
Once the steps above are completed, you can proceed with the install instructions in the documentation you just built. Start by opening one of the two files
     dir/doc/index.xml
     dir/doc/index.htm
in a web browser and proceeding to the 2.1: Unix or 2.2: Windows install instructions.
Input File: omh/subversion.omh
2.2: Windows Download and Test

2.2.a: Cygwin
If you are using Cygwin, or MinGW with MSYS, follow the 2.1: unix install instructions.

2.2.b: Download

2.2.b.a: Subversion
If you are familiar with subversion, you may want to follow the more complicated 2.1.1: subversion download instructions instead of the ones below.

2.2.b.b: Web Link
If you are not using the subversion download instructions, make sure you are reading the web based version of this documentation by following the link web version (http://www.coin-or.org/CppAD/Doc/installwindows.htm) . Then proceed with the instructions that appear below this point.

2.2.b.c: Unix Tar Files
The download files below were first archived with tar and then compressed with gzip. The ascii files are in Unix format; i.e., each line ends with a line feed (instead of a carriage return and line feed, as is standard for windows formatted files). Visual Studio can handle this formatting just fine, but you may want to convert the files to the windows standard if you are using an editor that has trouble viewing them in Unix format.
CPL License    cppad-20090131.0.cpl.tgz
GPL License    cppad-20090131.0.gpl.tgz
The following steps will decompress and extract the files using Winzip (http://www.winzip.com/index.htm) version 7.0 (other versions of Winzip and other decompression programs will be similar).
  1. Download your choice of the two licenses listed above and store the result in a file on disk.
  2. Open the file using Winzip (using All archives as the file type in the Open browser).
  3. Winzip will ask if it should decompress the file into a temporary folder and open it. Respond by selecting the Yes button.
  4. Now select the Extract button from the main menu.
  5. Place the name of the directory where you want the distribution in the Extract to field and then select the Extract button in the pop-up dialog. Winzip will create a subdirectory called cppad-20090131.0 where the files are placed.


2.2.c: Getting Started
The following steps will build the 3.1: get_started.cpp example. Using Microsoft Visual C++, open the workspace
 
	cppad-20090131.0\introduction\get_started\get_started.sln
in Visual Studio and then select Build | Build get_started.exe. Then in a Dos box, and in the cppad-20090131.0 directory, execute the following command
 
	introduction\get_started\Debug\get_started


2.2.d: Introduction
The following steps will build the routines that verify the exp_apx calculations in the 3: Introduction section. Using Microsoft Visual C++, open the workspace
 
	cppad-20090131.0\introduction\exp_apx\exp_apx.sln
in Visual Studio. Then select Build | Build exp_apx.exe. Then in a Dos box, and in the cppad-20090131.0 directory, execute the following command
 
	introduction\exp_apx\Debug\exp_apx


2.2.e: Examples and Testing
The following steps will build an extensive set of examples and correctness tests. Using Microsoft Visual C++, open the workspace
 
	cppad-20090131.0\example\example.sln
in Visual Studio. Then select Build | Build Example.exe. Then in a Dos box, and in the cppad-20090131.0 directory, execute the following command
 
	example\Debug\example


2.2.f: More Correctness Testing
Using Microsoft Visual C++, open the workspace
 
	cppad-20090131.0\test_more\test_more.sln
in Visual Studio and then select Build | Build test_more.exe. Then in a Dos box, and in the cppad-20090131.0 directory, execute the following command
 
	test_more\Debug\test_more


2.2.g: Printing During Forward Mode
Using Microsoft Visual C++, open the workspace
 
	cppad-20090131.0\print_for\print_for.sln
in Visual Studio. Then select Build | Build print_for.exe. Then in a Dos box, and in the cppad-20090131.0 directory, execute the following command
 
	print_for\Debug\print_for


2.2.h: CppAD Speed Test
Using Microsoft Visual C++, open the workspace
 
	cppad-20090131.0\speed\cppad\cppad.sln
in Visual Studio. Then select Build | Build cppad.exe. Then in a Dos box, and in the cppad-20090131.0 directory, execute the following commands
 
	speed\cppad\Debug\cppad correct 123
	speed\cppad\Debug\cppad speed 123


2.2.i: Double Speed Test
Using Microsoft Visual C++, open the workspace
 
	cppad-20090131.0\speed\double\double.sln
in Visual Studio. Then select Build | Build double.exe. Then in a Dos box, and in the cppad-20090131.0 directory, execute the following commands
 
	speed\double\Debug\double correct 123
	speed\double\Debug\double speed 123


2.2.j: Speed Utility Example
Using Microsoft Visual C++, open the workspace cppad-20090131.0\speed\example\example.sln in Visual Studio. Then select Build | Build example.exe. Then in a Dos box, and in the cppad-20090131.0 directory, execute the following command
 
	speed\example\Debug\example

Input File: omh/install_windows.omh
3: An Introduction by Example to Algorithmic Differentiation

3.a: Preface

3.a.a: Algorithmic Differentiation
Algorithmic Differentiation (often referred to as Automatic Differentiation or just AD) uses the software representation of a function to obtain an efficient method for calculating its derivatives. These derivatives can be of arbitrary order and are analytic in nature (do not have any truncation error).

3.a.b: Forward Mode
A forward mode sweep computes the partial derivative of all the dependent variables with respect to one independent variable (or independent variable direction).

3.a.c: Reverse Mode
A reverse mode sweep computes the derivative of one dependent variable (or one dependent variable direction) with respect to all the independent variables.

3.a.d: Operation Count
The number of floating point operations for either a forward or reverse mode sweep is a small multiple of the number required to evaluate the original function. Thus, using reverse mode, you can evaluate the derivative of a scalar valued function with respect to thousands of variables in a small multiple of the work to evaluate the original function.

3.a.e: Efficiency
AD automatically takes advantage of the speed of your algorithmic representation of a function. For example, if you calculate a determinant using LU factorization, AD will use the LU representation for the derivative of the determinant (which is faster than using the definition of the determinant).

3.b: Purpose
This is an introduction by example to Algorithmic Differentiation. Its purpose is to aid in understanding what AD calculates, how the calculations are performed, and the amount of computation and memory required for a forward or reverse sweep.

3.c: Outline
  1. Demonstrate the use of CppAD to calculate derivatives of a polynomial: 3.1: get_started.cpp .
  2. Present two algorithms that approximate the exponential function. The first algorithm 3.2.1: exp_2.hpp is simpler and does not include any logical variables or loops. The second algorithm 3.3.1: exp_eps.hpp includes logical operations and a while loop. For each of these algorithms, do the following:
    1. Define the mathematical function corresponding to the algorithm (3.2: exp_2 and 3.3: exp_eps ).
    2. Write out the floating point operation sequence, and corresponding values, that correspond to executing the algorithm for a specific input (3.2.3: exp_2_for0 and 3.3.3: exp_eps_for0 ).
    3. Compute a forward sweep derivative of the operation sequence (3.2.4: exp_2_for1 and 3.3.4: exp_eps_for1 ).
    4. Compute a reverse sweep derivative of the operation sequence (3.2.5: exp_2_rev1 and 3.3.5: exp_eps_rev1 ).
    5. Use CppAD to compute both a forward and reverse sweep of the operation sequence (3.2.8: exp_2_cppad and 3.3.8: exp_eps_cppad ).
  3. The program 3.4: exp_apx_main.cpp runs all of the test routines that validate the calculations in the 3.2: exp_2 and 3.3: exp_eps presentation.


3.d: Reference
An in-depth review of AD theory and methods can be found in the book Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation , Andreas Griewank, SIAM Frontiers in Applied Mathematics, 2000.

3.e: Contents
get_started.cpp: 3.1: A Simple Program Using CppAD to Compute Derivatives
exp_2: 3.2: Second Order Exponential Approximation
exp_eps: 3.3: An Epsilon Accurate Exponential Approximation
exp_apx_main.cpp: 3.4: Run the exp_2 and exp_eps Tests

Input File: omh/introduction.omh
3.1: A Simple Program Using CppAD to Compute Derivatives

3.1.a: Purpose
Demonstrate the use of CppAD by computing the derivative of a simple example function.

3.1.b: Function
The example function  f : \R \rightarrow \R is defined by  \[
      f(x) = a_0 + a_1 * x^1 + \cdots + a_{k-1} * x^{k-1}
\] 
where a is a fixed vector of length k.

3.1.c: Derivative
The derivative of  f(x) is given by  \[
      f' (x) = a_1 + 2 * a_2 * x +  \cdots + (k-1) * a_{k-1} * x^{k-2} 
\] 


3.1.d: Value
For the particular case in this example,  k is equal to 5,  a = (1, 1, 1, 1, 1) , and  x = 3 . It follows that  \[
      f' ( 3 ) = 1 + 2 * 3 + 3 * 3^2 + 4 * 3^3 = 142
\] 


3.1.e: Poly
The routine Poly is defined below for this particular application. A general purpose polynomial evaluation routine is documented and distributed with CppAD (see 6.11: Poly ).

3.1.f: Exercises
Modify the program below to accomplish the following tasks using CppAD:
  1. Compute and print the derivative of  f(x) = 1 + x + x^2 + x^3 + x^4 at the point  x = 2 .
  2. Compute and print the derivative of  f(x) = 1 + x + x^2 / 2 at the point  x = .5 .
  3. Compute and print the derivative of  f(x) = \exp (x) - 1 - x - x^2 / 2 at the point  x = .5 .


3.1.g: Program
 
#include <iostream>      // standard input/output 
#include <vector>        // standard vector
#include <cppad/cppad.hpp> // the CppAD package http://www.coin-or.org/CppAD/

namespace { 
      // define y(x) = Poly(a, x) in the empty namespace
      template <class Type>
      Type Poly(const std::vector<double> &a, const Type &x)
      {     size_t k  = a.size();
            Type y   = 0.;  // initialize summation
            Type x_i = 1.;  // initialize x^i
            size_t i;
            for(i = 0; i < k; i++)
            {     y   += a[i] * x_i;  // y   = y + a_i * x^i
                  x_i *= x;           // x_i = x_i * x
            }
            return y;
      }
}
// main program
int main(void)
{     using CppAD::AD;           // use AD as abbreviation for CppAD::AD
      using std::vector;         // use vector as abbreviation for std::vector
      size_t i;                  // a temporary index

      // vector of polynomial coefficients
      size_t k = 5;              // number of polynomial coefficients
      vector<double> a(k);       // vector of polynomial coefficients
      for(i = 0; i < k; i++)
            a[i] = 1.;           // value of polynomial coefficients

      // domain space vector
      size_t n = 1;              // number of domain space variables
      vector< AD<double> > X(n); // vector of domain space variables
      X[0] = 3.;                 // value corresponding to operation sequence

      // declare independent variables and start recording operation sequence
      CppAD::Independent(X);

      // range space vector
      size_t m = 1;              // number of ranges space variables
      vector< AD<double> > Y(m); // vector of ranges space variables
      Y[0] = Poly(a, X[0]);      // value during recording of operations

      // store operation sequence in f: X -> Y and stop recording
      CppAD::ADFun<double> f(X, Y);

      // compute derivative using operation sequence stored in f
      vector<double> jac(m * n); // Jacobian of f (m by n matrix)
      vector<double> x(n);       // domain space vector
      x[0] = 3.;                 // argument value for derivative
      jac  = f.Jacobian(x);      // Jacobian for operation sequence

      // print the results
      std::cout << "f'(3) computed by CppAD = " << jac[0] << std::endl;

      // check if the derivative is correct
      int error_code;
      if( jac[0] == 142. )
            error_code = 0;      // return code for correct case
      else  error_code = 1;      // return code for incorrect case

      return error_code;
}


3.1.h: Output
Executing the program above will generate the following output:
 
	f'(3) computed by CppAD = 142

Input File: introduction/get_started/get_started.cpp
3.2: Second Order Exponential Approximation

3.2.a: Syntax
# include "exp_2.hpp"
y = exp_2(x)

3.2.b: Purpose
This is a simple example algorithm that is used to demonstrate Algorithmic Differentiation (see 3.3: exp_eps for a more complex example).

3.2.c: Mathematical Form
The exponential function can be defined by  \[
     \exp (x) = 1 + x^1 / 1 ! + x^2 / 2 ! + \cdots 
\] 
The second order approximation for the exponential function is  \[
{\rm exp\_2} (x) =  1 + x + x^2 / 2 
\] 


3.2.d: include
The include command in the syntax is relative to
     cppad-yy-mm-dd/introduction/exp_apx
where cppad-yy-mm-dd is the distribution directory created during the beginning steps of the 2: installation of CppAD.

3.2.e: x
The argument x has prototype
     const Type &x
(see Type below). It specifies the point at which to evaluate the second order exponential approximation.

3.2.f: y
The result y has prototype
     Type y
It is the value of the exponential function approximation defined above.

3.2.g: Type
If u and v are Type objects and i is an int:
Operation    Result Type    Description
Type(i)      Type           object with value equal to i
u = v        Type           new u (and result) is value of v
u * v        Type           result is value of  u * v
u / v        Type           result is value of  u / v
u + v        Type           result is value of  u + v

3.2.h: Contents
exp_2.hpp: 3.2.1: exp_2: Implementation
exp_2.cpp: 3.2.2: exp_2: Test
exp_2_for0: 3.2.3: exp_2: Operation Sequence and Zero Order Forward Mode
exp_2_for1: 3.2.4: exp_2: First Order Forward Mode
exp_2_rev1: 3.2.5: exp_2: First Order Reverse Mode
exp_2_for2: 3.2.6: exp_2: Second Order Forward Mode
exp_2_rev2: 3.2.7: exp_2: Second Order Reverse Mode
exp_2_cppad: 3.2.8: exp_2: CppAD Forward and Reverse Sweeps

3.2.i: Implementation
The file 3.2.1: exp_2.hpp contains a C++ implementation of this function.

3.2.j: Test
The file 3.2.2: exp_2.cpp contains a test of this implementation. It returns true for success and false for failure.

3.2.k: Exercises
  1. Suppose that we make the call
     
    	double x = .1;
    	double y = exp_2(x);
    
    What is the value assigned to v1, v2, ... ,v5 in 3.2.1: exp_2.hpp ?
  2. Extend the routine exp_2.hpp to a routine exp_3.hpp that computes  \[
         1 + x + x^2 / 2 ! + x^3 / 3 !
    \] 
    Do this in a way that only assigns one value to each variable (as exp_2 does).
  3. Suppose that we make the call
     
    	double x = .5;
    	double y = exp_3(x);
    
    using exp_3 created in the previous problem. What is the value assigned to the new variables in exp_3 (variables that are in exp_3 and not in exp_2) ?

Input File: introduction/exp_apx/exp_2.hpp
3.2.1: exp_2: Implementation
 
template <class Type>
Type exp_2(const Type &x) 
{       Type v1  = x;                // v1 = x
        Type v2  = Type(1) + v1;     // v2 = 1 + x
        Type v3  = v1 * v1;          // v3 = x^2
        Type v4  = v3 / Type(2);     // v4 = x^2 / 2 
        Type v5  = v2 + v4;          // v5 = 1 + x + x^2 / 2
        return v5;                   // exp_2(x) = 1 + x + x^2 / 2
}

Input File: introduction/exp_apx/exp_2.omh
3.2.2: exp_2: Test
 
# include <cmath>           // define fabs function
# include "exp_2.hpp"       // definition of exp_2 algorithm
bool exp_2(void)
{	double x     = .5;
	double check = 1 + x + x * x / 2.; 
	bool   ok    = std::fabs( exp_2(x) - check ) <= 1e-10; 
	return ok;
}

Input File: introduction/exp_apx/exp_2.omh
3.2.3: exp_2: Operation Sequence and Zero Order Forward Mode

3.2.3.a: Mathematical Form
The operation sequence (see below) corresponding to the algorithm 3.2.1: exp_2.hpp is the same for all values of x. The mathematical form for the corresponding function is  \[
     f(x) = 1 + x + x^2 / 2
\] 
An algorithmic differentiation package does not operate on the mathematical function  f(x) but rather on the particular algorithm used to compute the function (in this case 3.2.1: exp_2.hpp ).

3.2.3.b: Zero Order Expansion
In general, a zero order forward sweep is given a vector  x^{(0)} \in \R^n and it returns the corresponding vector  y^{(0)} \in \R^m given by  \[
     y^{(0)} = f( x^{(0)} )
\]
The superscript  (0) denotes zero order derivative; i.e., it is equal to the value of the corresponding variable. For the example we are considering here, both  n and  m are equal to one.
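For this example, a zero order sweep amounts to nothing more than executing the algorithm with Type equal to double at  x^{(0)} . The following minimal sketch (it reproduces exp_2 from 3.2.1: exp_2.hpp so that it is self-contained) makes this concrete:

```cpp
# include <cassert>
# include <cmath>

// exp_2 algorithm from exp_2.hpp, reproduced here so the sketch is self-contained
template <class Type>
Type exp_2(const Type &x)
{	Type v1  = x;                // v1 = x
	Type v2  = Type(1) + v1;     // v2 = 1 + x
	Type v3  = v1 * v1;          // v3 = x^2
	Type v4  = v3 / Type(2);     // v4 = x^2 / 2
	Type v5  = v2 + v4;          // v5 = 1 + x + x^2 / 2
	return v5;                   // exp_2(x) = 1 + x + x^2 / 2
}
```

Evaluating exp_2(0.5) executes the operation sequence with  x^{(0)} = 0.5 and returns  v_5^{(0)} = 1.625 , the last entry in the sweep table below.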

3.2.3.c: Operation Sequence
An atomic Type operation is an operation that has a Type result and is not made up of other more basic operations. A sequence of atomic Type operations is called a Type operation sequence. Given a C++ algorithm and its inputs, there is a corresponding Type operation sequence for each type. If Type is clear from the context, we drop it and just refer to the operation sequence.

We consider the case where 3.2.1: exp_2.hpp is executed with  x^{(0)} = .5 . The table below contains the corresponding operation sequence and the results of a zero order sweep.

3.2.3.c.a: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation and variable. A Forward sweep starts with the first operation and ends with the last.

3.2.3.c.b: Code
The Code column contains the C++ source code for the corresponding atomic operation in the sequence.

3.2.3.c.c: Operation
The Operation column contains the mathematical function corresponding to each atomic operation in the sequence.

3.2.3.c.d: Zero Order
The Zero Order column contains the zero order derivative for the corresponding variable in the operation sequence. Forward mode refers to the fact that these coefficients are computed in the same order as the original algorithm; i.e., in order of increasing index in the operation sequence.

3.2.3.c.e: Sweep
Index    Code    Operation    Zero Order
1    Type v1 = x;  v_1 = x   v_1^{(0)} = 0.5
2    Type v2 = Type(1) + v1;  v_2 = 1 + v_1   v_2^{(0)} = 1.5
3    Type v3 = v1 * v1;  v_3 = v_1 * v_1   v_3^{(0)} = 0.25
4    Type v4 = v3 / Type(2);  v_4 = v_3 / 2  v_4^{(0)} = 0.125
5    Type v5 = v2 + v4;  v_5 = v_2 + v_4   v_5^{(0)} = 1.625
3.2.3.d: Return Value
The return value for this case is  \[
     1.625 =
     v_5^{(0)} =
     f( x^{(0)} )
\] 


3.2.3.e: Verification
The file 3.2.3.1: exp_2_for0.cpp contains a routine that verifies the values computed above. It returns true for success and false for failure.

3.2.3.f: Exercises
  1. Suppose that  x^{(0)} = .2 . What is the result of a zero order forward sweep for the operation sequence above; i.e., what are the corresponding values for  \[
         v_1^{(0)} , v_2^{(0)} , \cdots , v_5^{(0)}
    \]
  2. Create a modified version of 3.2.3.1: exp_2_for0.cpp that verifies the values you obtained for the previous exercise.
  3. Create and run a main program that reports the result of calling the modified version of 3.2.3.1: exp_2_for0.cpp in the previous exercise.

Input File: introduction/exp_apx/exp_2.omh
3.2.3.1: exp_2: Verify Zero Order Forward Sweep
 # include <cmath>            // for fabs function
bool exp_2_for0(double *v0)  // double v0[6]
{	bool  ok = true;
	double x = .5;

	v0[1] = x;                                  // v1 = x
	ok  &= std::fabs( v0[1] - 0.5) < 1e-10;

	v0[2] = 1. + v0[1];                         // v2 = 1 + v1
	ok  &= std::fabs( v0[2] - 1.5) < 1e-10;

	v0[3] = v0[1] * v0[1];                      // v3 = v1 * v1
	ok  &= std::fabs( v0[3] - 0.25) < 1e-10;

	v0[4] = v0[3] / 2.;                         // v4 = v3 / 2
	ok  &= std::fabs( v0[4] - 0.125) < 1e-10;

	v0[5] = v0[2] + v0[4];                      // v5  = v2 + v4
	ok  &= std::fabs( v0[5] - 1.625) < 1e-10;

	return ok;
}
bool exp_2_for0(void)
{	double v0[6];
	return exp_2_for0(v0);
}

Input File: introduction/exp_apx/exp_2_for0.cpp
3.2.4: exp_2: First Order Forward Mode

3.2.4.a: First Order Expansion
We define  x(t) near  t = 0 by the first order expansion  \[
     x(t) = x^{(0)} + x^{(1)} * t
\]
It follows that  x^{(0)} is the zero order derivative, and  x^{(1)} the first order derivative, of  x(t) at  t = 0 .

3.2.4.b: Purpose
In general, a first order forward sweep is given the 3.2.3.b: zero order derivative for all of the variables in an operation sequence, and the first order derivatives for the independent variables. It uses these to compute the first order derivatives, and thereby obtain the first order expansion, for all the other variables in the operation sequence.
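The propagation of first order derivatives can be sketched with a small value-derivative pair type; this illustrates the idea only, and is not how CppAD itself is implemented:

```cpp
# include <cassert>
# include <cmath>

// First order forward mode: carry the pair (v^(0), v^(1)) through each
// atomic operation, applying the corresponding derivative rule.
struct Dual { double v0, v1; };

Dual operator+(Dual u, Dual w)    // (u + w)^(1) = u^(1) + w^(1)
{	return Dual{ u.v0 + w.v0, u.v1 + w.v1 }; }
Dual operator*(Dual u, Dual w)    // (u * w)^(1) = u^(1) * w^(0) + u^(0) * w^(1)
{	return Dual{ u.v0 * w.v0, u.v1 * w.v0 + u.v0 * w.v1 }; }
Dual operator/(Dual u, double c)  // (u / c)^(1) = u^(1) / c for a constant c
{	return Dual{ u.v0 / c, u.v1 / c }; }

// the exp_2 operation sequence, one assignment per variable
Dual exp_2_dual(Dual x)
{	Dual v1 = x;
	Dual v2 = Dual{1., 0.} + v1;  // v2 = 1 + v1
	Dual v3 = v1 * v1;            // v3 = v1 * v1
	Dual v4 = v3 / 2.;            // v4 = v3 / 2
	Dual v5 = v2 + v4;            // v5 = v2 + v4
	return v5;
}
```

Starting from the pair  ( x^{(0)} , x^{(1)} ) = (0.5, 1) , exp_2_dual returns  (1.625, 1.5) , matching the sweep table below.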

3.2.4.c: Mathematical Form
Suppose that we use the algorithm 3.2.1: exp_2.hpp to compute  \[
     f(x) = 1 + x + x^2 / 2
\] 
The corresponding derivative function is  \[
     \partial_x f (x) =   1 + x
\] 
An algorithmic differentiation package does not operate on the mathematical form of the function, or its derivative, but rather on the 3.2.3.c: operation sequence for the algorithm that is used to evaluate the function.

3.2.4.d: Operation Sequence
We consider the case where 3.2.1: exp_2.hpp is executed with  x = .5 . The corresponding operation sequence and zero order forward mode values (see 3.2.3.c.e: zero order sweep ) are inputs and are used by a first order forward sweep.

3.2.4.d.a: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation. A Forward sweep starts with the first operation and ends with the last.

3.2.4.d.b: Operation
The Operation column contains the mathematical function corresponding to each atomic operation in the sequence.

3.2.4.d.c: Zero Order
The Zero Order column contains the zero order derivatives for the corresponding variable in the operation sequence (see 3.2.3.c.e: zero order sweep ).

3.2.4.d.d: Derivative
The Derivative column contains the mathematical function corresponding to the derivative with respect to  t , at  t = 0 , for each variable in the sequence.

3.2.4.d.e: First Order
The First Order column contains the first order derivatives for the corresponding variable in the operation sequence; i.e.,  \[
     v_j (t) = v_j^{(0)} + v_j^{(1)} t
\] 
We use  x^{(1)} = 1 so that differentiation with respect to  t , at  t = 0 , is the same as partial differentiation with respect to  x at  x = x^{(0)} .

3.2.4.d.f: Sweep
Index    Operation    Zero Order    Derivative    First Order
1     v_1 = x  0.5  v_1^{(1)} = x^{(1)}    v_1^{(1)} = 1
2     v_2 = 1 + v_1 1.5  v_2^{(1)} = v_1^{(1)}  v_2^{(1)} = 1
3     v_3 = v_1 * v_1 0.25  v_3^{(1)} = 2 * v_1^{(0)} * v_1^{(1)}  v_3^{(1)} = 1
4     v_4 = v_3 / 2 0.125  v_4^{(1)} = v_3^{(1)} / 2  v_4^{(1)} = 0.5
5    v_5 = v_2 + v_4 1.625  v_5^{(1)} = v_2^{(1)} + v_4^{(1)}  v_5^{(1)} = 1.5
3.2.4.e: Return Value
The derivative of the return value for this case is  \[
\begin{array}{rcl}
     1.5 
     & = &
     v_5^{(1)} =
     \left[ \D{v_5}{t} \right]_{t=0} = 
     \left[ \D{}{t} f ( x^{(0)} + x^{(1)} t ) \right]_{t=0}
     \\
     & = &
     f^{(1)} ( x^{(0)} ) * x^{(1)} = 
     f^{(1)} ( x^{(0)} )
\end{array}
\] 
(We have used the fact that  x^{(1)} = 1 .)

3.2.4.f: Verification
The file 3.2.4.1: exp_2_for1.cpp contains a routine which verifies the values computed above. It returns true for success and false for failure.

3.2.4.g: Exercises
  1. Which statement in the routine defined by 3.2.4.1: exp_2_for1.cpp uses the values that are calculated by the routine defined by 3.2.3.1: exp_2_for0.cpp ?
  2. Suppose that  x = .1 . What are the results of a zero and first order forward sweep for the operation sequence above; i.e., what are the corresponding values for  v_1^{(0)}, v_2^{(0)}, \cdots , v_5^{(0)} and  v_1^{(1)}, v_2^{(1)}, \cdots , v_5^{(1)} ?
  3. Create a modified version of 3.2.4.1: exp_2_for1.cpp that verifies the derivative values from the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.2.4.1: exp_2_for1.cpp .

Input File: introduction/exp_apx/exp_2.omh
3.2.4.1: exp_2: Verify First Order Forward Sweep
 # include <cmath>                   // prototype for fabs
extern bool exp_2_for0(double *v0); // computes zero order forward sweep
bool exp_2_for1(double *v1)         // double v1[6]
{	bool ok = true;
	double v0[6];

	// set the value of v0[j] for j = 1 , ... , 5
	ok &= exp_2_for0(v0);

	v1[1] = 1.;                                     // v1 = x
	ok    &= std::fabs( v1[1] - 1. ) <= 1e-10;

	v1[2] = v1[1];                                  // v2 = 1 + v1
	ok    &= std::fabs( v1[2] - 1. ) <= 1e-10;

	v1[3] = v1[1] * v0[1] + v0[1] * v1[1];          // v3 = v1 * v1
	ok    &= std::fabs( v1[3] - 1. ) <= 1e-10;

	v1[4] = v1[3] / 2.;                             // v4 = v3 / 2
	ok    &= std::fabs( v1[4] - 0.5) <= 1e-10;

	v1[5] = v1[2] + v1[4];                          // v5 = v2 + v4
	ok    &= std::fabs( v1[5] - 1.5) <= 1e-10;

	return ok;
}
bool exp_2_for1(void)
{	double v1[6];
	return exp_2_for1(v1);
}

Input File: introduction/exp_apx/exp_2_for1.cpp
3.2.5: exp_2: First Order Reverse Mode

3.2.5.a: Purpose
First order reverse mode uses the 3.2.3.c: operation sequence , and zero order forward sweep values, to compute the first order derivative of one dependent variable with respect to all the independent variables. The computations are done in reverse of the order of the computations in the original algorithm.

3.2.5.b: Mathematical Form
Suppose that we use the algorithm 3.2.1: exp_2.hpp to compute  \[
     f(x) = 1 + x + x^2 / 2
\] 
The corresponding derivative function is  \[
     \partial_x f (x) =   1 + x
\] 


3.2.5.c: f_5
For our example, we chose to compute the derivative of the value returned by 3.2.1: exp_2.hpp which is equal to the symbol  v_5 in the 3.2.3.c: exp_2 operation sequence . We begin with the function  f_5  where  v_5 is both an argument and the value of the function; i.e.,  \[
\begin{array}{rcl}
f_5 ( v_1 , v_2 , v_3 , v_4 , v_5 ) & = & v_5 
\\
\D{f_5}{v_5} & = & 1
\end{array}
\] 
All the other partial derivatives of  f_5  are zero.

3.2.5.d: Index 5: f_4
Reverse mode starts with the last operation in the sequence. For the case in question, this is the operation with index 5,  \[
     v_5 = v_2 + v_4
\] 
We define the function  f_4 ( v_1 , v_2 , v_3 , v_4 )  as equal to  f_5  except that  v_5  is eliminated using this operation; i.e.  \[
f_4  = 
f_5 [  v_1 , v_2 , v_3 , v_4 , v_5 ( v_2 , v_4 ) ]
\] 
It follows that  \[
\begin{array}{rcll}
\D{f_4}{v_2} 
& = & \D{f_5}{v_2} + 
     \D{f_5}{v_5} * \D{v_5}{v_2} 
& = 1
\\
\D{f_4}{v_4} 
& = & \D{f_5}{v_4} + 
     \D{f_5}{v_5} * \D{v_5}{v_4} 
& = 1
\end{array}
\] 
All the other partial derivatives of  f_4 are zero.

3.2.5.e: Index 4: f_3
The next operation has index 4,  \[
     v_4 = v_3 / 2
\] 
We define the function  f_3 (  v_1 , v_2 , v_3 )  as equal to  f_4  except that  v_4  is eliminated using this operation; i.e.,  \[
f_3 = 
f_4 [ v_1 , v_2 , v_3 , v_4 ( v_3 ) ]
\] 
It follows that  \[
\begin{array}{rcll}
\D{f_3}{v_1}
& = & \D{f_4}{v_1}
& = 0
\\
\D{f_3}{v_2} 
& = & \D{f_4}{v_2}
& = 1
\\
\D{f_3}{v_3} 
& = & \D{f_4}{v_3} + 
     \D{f_4}{v_4} * \D{v_4}{v_3} 
& = 0.5
\end{array}
\] 


3.2.5.f: Index 3: f_2
The next operation has index 3,  \[
     v_3 = v_1 * v_1
\] 
We define the function  f_2 ( v_1 , v_2 )  as equal to  f_3  except that  v_3  is eliminated using this operation; i.e.,  \[
f_2 = 
f_3 [ v_1 , v_2 , v_3 ( v_1 ) ]
\] 
Note that the value of  v_1 is equal to  x which is .5 for this evaluation. It follows that  \[
\begin{array}{rcll}
\D{f_2}{v_1}
& = & \D{f_3}{v_1} +
     \D{f_3}{v_3} * \D{v_3}{v_1} 
& = 0.5
\\
\D{f_2}{v_2} 
& = & \D{f_3}{v_2}
& = 1
\end{array}
\] 


3.2.5.g: Index 2: f_1
The next operation has index 2,  \[
     v_2 = 1 + v_1
\] 
We define the function  f_1 ( v_1 )  as equal to  f_2  except that  v_2  is eliminated using this operation; i.e.,  \[
f_1 = 
f_2 [ v_1 , v_2 ( v_1 ) ]
\] 
It follows that  \[
\begin{array}{rcll}
\D{f_1}{v_1}
& = & \D{f_2}{v_1} +
     \D{f_2}{v_2} * \D{v_2}{v_1} 
& = 1.5
\end{array}
\] 
Note that  v_1 is equal to  x , so the derivative of this is the derivative of the function defined by 3.2.1: exp_2.hpp at  x = .5 .
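Since the mathematical form  f(x) = 1 + x + x^2 / 2 is known for this example, the reverse mode result can be spot-checked with a central difference. The sketch below is a check for this presentation only and is not part of the CppAD sources:

```cpp
# include <cassert>
# include <cmath>

// f(x) = 1 + x + x^2 / 2, the mathematical form computed by exp_2.hpp
double f(double x)
{	return 1. + x + x * x / 2.; }

// central difference approximation of f'(x)
double f_prime_approx(double x, double h)
{	return ( f(x + h) - f(x - h) ) / (2. * h); }
```

For a quadratic, the central difference is exact up to rounding, so f_prime_approx(0.5, 1e-4) agrees with the reverse mode value 1.5.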

3.2.5.h: Verification
The file 3.2.5.1: exp_2_rev1.cpp contains a routine which verifies the values computed above. It returns true for success and false for failure. It only tests the partial derivatives of  f_j that might not be equal to the corresponding partials of  f_{j+1} ; i.e., the other partials of  f_j must be equal to the corresponding partials of  f_{j+1} .

3.2.5.i: Exercises
  1. Which statement in the routine defined by 3.2.5.1: exp_2_rev1.cpp uses the values that are calculated by the routine defined by 3.2.3.1: exp_2_for0.cpp ?
  2. Consider the case where  x = .1 and we first perform a zero order forward sweep for the operation sequence used above. What are the results of a first order reverse sweep; i.e., what are the corresponding derivatives of  f_5 , f_4 , \ldots , f_1 ?
  3. Create a modified version of 3.2.5.1: exp_2_rev1.cpp that verifies the values you obtained for the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.2.5.1: exp_2_rev1.cpp .

Input File: introduction/exp_apx/exp_2.omh
3.2.5.1: exp_2: Verify First Order Reverse Sweep
 # include <cstddef>                 // define size_t
# include <cmath>                   // prototype for fabs
extern bool exp_2_for0(double *v0); // computes zero order forward sweep
bool exp_2_rev1(void)
{	bool ok = true;

	// set the value of v0[j] for j = 1 , ... , 5
	double v0[6];
	ok &= exp_2_for0(v0);

	// initialize all partial derivatives as zero
	double f_v[6];
	size_t j;
	for(j = 0; j < 6; j++)
		f_v[j] = 0.;

	// set partial derivative for f5
	f_v[5] = 1.;
	ok &= std::fabs( f_v[5] - 1. ) <= 1e-10; // f5_v5

	// f4 = f5( v1 , v2 , v3 , v4 , v2 + v4 )
	f_v[2] += f_v[5] * 1.;
	f_v[4] += f_v[5] * 1.;
	ok &= std::fabs( f_v[2] - 1. ) <= 1e-10; // f4_v2
	ok &= std::fabs( f_v[4] - 1. ) <= 1e-10; // f4_v4

	// f3 = f4( v1 , v2 , v3 , v3 / 2 )
	f_v[3] += f_v[4] / 2.;
	ok &= std::fabs( f_v[3] - 0.5) <= 1e-10; // f3_v3

	// f2 = f3( v1 , v2 , v1 * v1 )
	f_v[1] += f_v[3] * 2. * v0[1];
	ok &= std::fabs( f_v[1] - 0.5) <= 1e-10; // f2_v1

	// f1 = f2( v1 , 1 + v1 )
	f_v[1] += f_v[2] * 1.;
	ok &= std::fabs( f_v[1] - 1.5) <= 1e-10; // f1_v1

	return ok;
}

Input File: introduction/exp_apx/exp_2_rev1.cpp
3.2.6: exp_2: Second Order Forward Mode

3.2.6.a: Second Order Expansion
We define  x(t) near  t = 0 by the second order expansion  \[
     x(t) = x^{(0)} + x^{(1)} * t + x^{(2)} * t^2 / 2
\]
It follows that for  k = 0 , 1 , 2 ,  \[
     x^{(k)} = \dpow{k}{t} x (0)
\] 


3.2.6.b: Purpose
In general, a second order forward sweep is given the 3.2.4.a: first order expansion for all of the variables in an operation sequence, and the second order derivatives for the independent variables. It uses these to compute the second order derivative, and thereby obtain the second order expansion, for all the variables in the operation sequence.

3.2.6.c: Mathematical Form
Suppose that we use the algorithm 3.2.1: exp_2.hpp to compute  \[
     f(x) = 1 + x + x^2 / 2
\] 
The corresponding second derivative function is  \[
     \Dpow{2}{x} f (x) =   1
\] 


3.2.6.d: Operation Sequence
We consider the case where 3.2.1: exp_2.hpp is executed with  x = .5 . The corresponding operation sequence, zero order forward sweep values, and first order forward sweep values are inputs and are used by a second order forward sweep.

3.2.6.d.a: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation. A Forward sweep starts with the first operation and ends with the last.

3.2.6.d.b: Zero
The Zero column contains the zero order sweep results for the corresponding variable in the operation sequence (see 3.2.3.c.e: zero order sweep ).

3.2.6.d.c: Operation
The Operation column contains the first order sweep operation for this variable.

3.2.6.d.d: First
The First column contains the first order sweep results for the corresponding variable in the operation sequence (see 3.2.4.d.f: first order sweep ).

3.2.6.d.e: Derivative
The Derivative column contains the mathematical function corresponding to the second derivative with respect to  t , at  t = 0 , for each variable in the sequence.

3.2.6.d.f: Second
The Second column contains the second order derivatives for the corresponding variable in the operation sequence; i.e., the second order expansion for the i-th variable is given by  \[
     v_i (t) = v_i^{(0)} + v_i^{(1)} * t +  v_i^{(2)} * t^2 / 2
\] 
We use  x^{(1)} = 1 and  x^{(2)} = 0 so that second order differentiation with respect to  t , at  t = 0 , is the same as second partial differentiation with respect to  x at  x = x^{(0)} .

3.2.6.d.g: Sweep
Index    Zero    Operation    First    Derivative    Second
1 0.5     v_1^{(1)} = x^{(1)}  1  v_1^{(2)} = x^{(2)}  v_1^{(2)} = 0
2 1.5     v_2^{(1)} = v_1^{(1)} 1  v_2^{(2)} = v_1^{(2)}  v_2^{(2)} = 0
3 0.25     v_3^{(1)} = 2 * v_1^{(0)} * v_1^{(1)} 1  v_3^{(2)} = 2 * (v_1^{(1)} * v_1^{(1)} + v_1^{(0)} * v_1^{(2)} )  v_3^{(2)} = 2
4 0.125     v_4^{(1)} = v_3^{(1)} / 2 .5  v_4^{(2)} = v_3^{(2)} / 2  v_4^{(2)} = 1
5 1.625    v_5^{(1)} = v_2^{(1)} + v_4^{(1)} 1.5  v_5^{(2)} = v_2^{(2)} + v_4^{(2)}  v_5^{(2)} = 1
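The sweep above can be reproduced by carrying the triple  ( v^{(0)} , v^{(1)} , v^{(2)} ) through each atomic operation. The following sketch (an illustration only, not how CppAD is implemented) uses the Taylor coefficient convention  v(t) = v^{(0)} + v^{(1)} t + v^{(2)} t^2 / 2 :

```cpp
# include <cassert>
# include <cmath>

// Taylor coefficients: v(t) = v0 + v1 * t + v2 * t^2 / 2
struct Taylor { double v0, v1, v2; };

Taylor add(Taylor u, Taylor w)   // u + w: coefficients add term by term
{	return Taylor{ u.v0 + w.v0, u.v1 + w.v1, u.v2 + w.v2 }; }
Taylor mul(Taylor u, Taylor w)   // u * w: Leibniz rule for each coefficient
{	return Taylor{ u.v0 * w.v0,
	               u.v1 * w.v0 + u.v0 * w.v1,
	               u.v2 * w.v0 + 2. * u.v1 * w.v1 + u.v0 * w.v2 }; }
Taylor div(Taylor u, double c)   // u / c for a constant c
{	return Taylor{ u.v0 / c, u.v1 / c, u.v2 / c }; }

// the exp_2 operation sequence on second order Taylor coefficients
Taylor exp_2_taylor(Taylor x)
{	Taylor v1 = x;
	Taylor v2 = add(Taylor{1., 0., 0.}, v1);  // v2 = 1 + v1
	Taylor v3 = mul(v1, v1);                  // v3 = v1 * v1
	Taylor v4 = div(v3, 2.);                  // v4 = v3 / 2
	Taylor v5 = add(v2, v4);                  // v5 = v2 + v4
	return v5;
}
```

With  x = (0.5, 1, 0) , exp_2_taylor returns  (1.625, 1.5, 1) , matching the Zero, First, and Second columns for  v_5 .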
3.2.6.e: Return Value
The second derivative of the return value for this case is  \[
\begin{array}{rcl}
     1 
     & = &
     v_5^{(2)} =
     \left[ \Dpow{2}{t} v_5 \right]_{t=0} =
     \left[ \Dpow{2}{t} f( x^{(0)} + x^{(1)} * t ) \right]_{t=0}
     \\
     & = &
     x^{(1)} * \Dpow{2}{x} f ( x^{(0)} ) * x^{(1)} = 
     \Dpow{2}{x} f ( x^{(0)} )
\end{array}
\] 
(We have used the fact that  x^{(1)} = 1 and  x^{(2)} = 0 .)

3.2.6.f: Verification
The file 3.2.6.1: exp_2_for2.cpp contains a routine which verifies the values computed above. It returns true for success and false for failure.

3.2.6.g: Exercises
  1. Which statement in the routine defined by 3.2.6.1: exp_2_for2.cpp uses the values that are calculated by the routine defined by 3.2.4.1: exp_2_for1.cpp ?
  2. Suppose that  x = .1 . What are the results of a zero, first, and second order forward sweep for the operation sequence above; i.e., what are the corresponding values of  v_i^{(k)} for  i = 1, \ldots , 5 and  k = 0, 1, 2 ?
  3. Create a modified version of 3.2.6.1: exp_2_for2.cpp that verifies the derivative values from the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.2.6.1: exp_2_for2.cpp .

Input File: introduction/exp_apx/exp_2.omh
3.2.6.1: exp_2: Verify Second Order Forward Sweep
 # include <cmath>                   // prototype for fabs
extern bool exp_2_for0(double *v0); // computes zero order forward sweep
extern bool exp_2_for1(double *v1); // computes first order forward sweep
bool exp_2_for2(void)
{	bool ok = true;
	double v0[6], v1[6], v2[6];

	// set the value of v0[j], v1[j], for j = 1 , ... , 5
	ok &= exp_2_for0(v0);
	ok &= exp_2_for1(v1);

	v2[1] = 0.;                                     // v1 = x
	ok    &= std::fabs( v2[1] - 0. ) <= 1e-10;

	v2[2] = v2[1];                                  // v2 = 1 + v1
	ok    &= std::fabs( v2[2] - 0. ) <= 1e-10;

	v2[3] = 2.*(v0[1]*v2[1] + v1[1]*v1[1]);         // v3 = v1 * v1
	ok    &= std::fabs( v2[3] - 2. ) <= 1e-10;

	v2[4] = v2[3] / 2.;                             // v4 = v3 / 2
	ok    &= std::fabs( v2[4] - 1. ) <= 1e-10;

	v2[5] = v2[2] + v2[4];                          // v5 = v2 + v4
	ok    &= std::fabs( v2[5] - 1. ) <= 1e-10;

	return ok;
}

Input File: introduction/exp_apx/exp_2_for2.cpp
3.2.7: exp_2: Second Order Reverse Mode

3.2.7.a: Purpose
In general, a second order reverse sweep is given the 3.2.4.a: first order expansion for all of the variables in an operation sequence. Given a choice of a particular variable, it computes the derivative of that variable's first order expansion coefficient with respect to all of the independent variables.

3.2.7.b: Mathematical Form
Suppose that we use the algorithm 3.2.1: exp_2.hpp to compute  \[
     f(x) = 1 + x + x^2 / 2
\] 
The corresponding second derivative is  \[
     \Dpow{2}{x} f (x) =   1
\] 


3.2.7.c: f_5
For our example, we chose to compute the derivative of  v_5^{(1)} with respect to all the independent variables. For the case computed in the 3.2.4.d.f: first order sweep ,  v_5^{(1)} is the derivative of the value returned by 3.2.1: exp_2.hpp . Thus the value computed will be the second derivative of the value returned by 3.2.1: exp_2.hpp . We begin with the function  f_5  where  v_5^{(1)} is both an argument and the value of the function; i.e.,  \[
\begin{array}{rcl}
f_5 \left( 
     v_1^{(0)}, v_1^{(1)} , \ldots  , v_5^{(0)} , v_5^{(1)} 
\right) 
& = & v_5^{(1)} 
\\
\D{f_5}{v_5^{(1)}} & = & 1
\end{array}
\] 
All the other partial derivatives of  f_5  are zero.

3.2.7.d: Index 5: f_4
Second order reverse mode starts with the last operation in the sequence. For the case in question, this is the operation with index 5. The zero and first order sweep representations of this operation are  \[
\begin{array}{rcl}
     v_5^{(0)} & = & v_2^{(0)} + v_4^{(0)}
     \\
     v_5^{(1)} & = & v_2^{(1)} + v_4^{(1)}
\end{array}
\] 
We define the function  f_4 \left( v_1^{(0)} , \ldots , v_4^{(1)} \right)  as equal to  f_5  except that  v_5^{(0)}  and  v_5^{(1)} are eliminated using this operation; i.e.  \[
f_4  = 
f_5 \left[   v_1^{(0)} , \ldots , v_4^{(1)} , 
     v_5^{(0)} \left( v_2^{(0)} , v_4^{(0)} \right) ,  
     v_5^{(1)} \left( v_2^{(1)} , v_4^{(1)} \right) 
\right] 
\] 
It follows that  \[
\begin{array}{rcll}
\D{f_4}{v_2^{(1)}} 
& = & \D{f_5}{v_2^{(1)}} + 
     \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_2^{(1)}} 
& = 1
\\
\D{f_4}{v_4^{(1)}} 
& = & \D{f_5}{v_4^{(1)}} + 
     \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_4^{(1)}} 
& = 1
\end{array}
\] 
All the other partial derivatives of  f_4 are zero.

3.2.7.e: Index 4: f_3
The next operation has index 4,  \[
\begin{array}{rcl}
     v_4^{(0)} & = & v_3^{(0)} / 2
     \\
     v_4^{(1)} & = & v_3^{(1)} / 2
\end{array}
\] 
We define the function  f_3 \left(  v_1^{(0)} , \ldots , v_3^{(1)} \right)  as equal to  f_4  except that  v_4^{(0)} and  v_4^{(1)} are eliminated using this operation; i.e.,  \[
f_3 = 
f_4 \left[ v_1^{(0)} , \ldots , v_3^{(1)} ,
     v_4^{(0)} \left( v_3^{(0)} \right) ,
     v_4^{(1)} \left( v_3^{(1)} \right)
\right]
\] 
It follows that  \[
\begin{array}{rcll}
\D{f_3}{v_2^{(1)}} 
& = & \D{f_4}{v_2^{(1)}}
& = 1
\\
\D{f_3}{v_3^{(1)}} 
& = & \D{f_4}{v_3^{(1)}} + 
     \D{f_4}{v_4^{(1)}} * \D{v_4^{(1)}}{v_3^{(1)}} 
& = 0.5
\end{array}
\] 
All the other partial derivatives of  f_3 are zero.

3.2.7.f: Index 3: f_2
The next operation has index 3,  \[
\begin{array}{rcl}
     v_3^{(0)} & = & v_1^{(0)} * v_1^{(0)}
     \\
     v_3^{(1)} & = & 2 * v_1^{(0)} * v_1^{(1)}
\end{array}
\] 
We define the function  f_2 \left(  v_1^{(0)} , \ldots , v_2^{(1)} \right)  as equal to  f_3  except that  v_3^{(0)}  and  v_3^{(1)} are eliminated using this operation; i.e.,  \[
f_2 = 
f_3 \left[ v_1^{(0)} , \ldots , v_2^{(1)} ,
     v_3^{(0)} ( v_1^{(0)} ) ,
     v_3^{(1)} ( v_1^{(0)} , v_1^{(1)} ) 
\right]
\] 
Note that, from the 3.2.4.d.f: first order forward sweep , the value of  v_1^{(0)} is equal to  .5 and  v_1^{(1)} is equal to 1. It follows that  \[
\begin{array}{rcll}
\D{f_2}{v_1^{(0)}}
& = & 
\D{f_3}{v_1^{(0)}} +
     \D{f_3}{v_3^{(0)}} * \D{v_3^{(0)}}{v_1^{(0)}}  +
     \D{f_3}{v_3^{(1)}} * \D{v_3^{(1)}}{v_1^{(0)}}  
& = 1
\\
\D{f_2}{v_1^{(1)}}
& = & 
\D{f_3}{v_1^{(1)}} +
     \D{f_3}{v_3^{(1)}} * \D{v_3^{(1)}}{v_1^{(1)}}  
& = 0.5 
\\
\D{f_2}{v_2^{(0)}} 
& = & \D{f_3}{v_2^{(0)}}
& = 0
\\
\D{f_2}{v_2^{(1)}} 
& = & \D{f_3}{v_2^{(1)}}
& = 1
\end{array}
\] 


3.2.7.g: Index 2: f_1
The next operation has index 2,  \[
\begin{array}{rcl}
     v_2^{(0)} & = & 1 + v_1^{(0)}
     \\
     v_2^{(1)} & = & v_1^{(1)}
\end{array}
\] 
We define the function  f_1 ( v_1^{(0)} , v_1^{(1)} )  as equal to  f_2  except that  v_2^{(0)}  and  v_2^{(1)}  are eliminated using this operation; i.e.,  \[
f_1 = 
f_2 \left[ v_1^{(0)} , v_1^{(1)} , 
     v_2^{(0)} ( v_1^{(0)} )  ,
     v_2^{(1)} ( v_1^{(1)} )
\right]
\] 
It follows that  \[
\begin{array}{rcll}
\D{f_1}{v_1^{(0)}}
& = & \D{f_2}{v_1^{(0)}} +
     \D{f_2}{v_2^{(0)}} * \D{v_2^{(0)}}{v_1^{(0)}} 
& = 1
\\
\D{f_1}{v_1^{(1)}}
& = & \D{f_2}{v_1^{(1)}} +
     \D{f_2}{v_2^{(1)}} * \D{v_2^{(1)}}{v_1^{(1)}} 
& = 1.5 
\end{array}
\] 
Note that  v_1 is equal to  x , so the second derivative of the function defined by 3.2.1: exp_2.hpp at  x = .5 is given by  \[
\Dpow{2}{x} v_5^{(0)} 
=
\D{ v_5^{(1)} }{x} 
=
\D{ v_5^{(1)} }{v_1^{(0)}} 
=
\D{f_1}{v_1^{(0)}} = 1
\] 
There is a theorem about Algorithmic Differentiation that explains why the other partial of  f_1 is equal to the first derivative of the function defined by 3.2.1: exp_2.hpp at  x = .5 .

3.2.7.h: Verification
The file 3.2.7.1: exp_2_rev2.cpp contains a routine which verifies the values computed above. It returns true for success and false for failure. It only tests the partial derivatives of  f_j that might not be equal to the corresponding partials of  f_{j+1} ; i.e., the other partials of  f_j must be equal to the corresponding partials of  f_{j+1} .

3.2.7.i: Exercises
  1. Which statement in the routine defined by 3.2.7.1: exp_2_rev2.cpp uses the values that are calculated by the routine defined by 3.2.3.1: exp_2_for0.cpp ? Which statements use values that are calculated by the routine defined in 3.2.4.1: exp_2_for1.cpp ?
  2. Consider the case where  x = .1 and we first perform a zero order forward sweep, then a first order sweep, for the operation sequence used above. What are the results of a second order reverse sweep; i.e., what are the corresponding derivatives of  f_5 , f_4 , \ldots , f_1 ?
  3. Create a modified version of 3.2.7.1: exp_2_rev2.cpp that verifies the values you obtained for the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.2.7.1: exp_2_rev2.cpp .

Input File: introduction/exp_apx/exp_2.omh
3.2.7.1: exp_2: Verify Second Order Reverse Sweep
 # include <cstddef>                 // define size_t
# include <cmath>                   // prototype for fabs
extern bool exp_2_for0(double *v0); // computes zero order forward sweep
extern bool exp_2_for1(double *v1); // computes first order forward sweep
bool exp_2_rev2(void)
{	bool ok = true;

	// set the value of v0[j], v1[j] for j = 1 , ... , 5
	double v0[6], v1[6];
	ok &= exp_2_for0(v0);
	ok &= exp_2_for1(v1);

	// initialize all partial derivatives as zero
	double f_v0[6], f_v1[6];
	size_t j;
	for(j = 0; j < 6; j++)
	{	f_v0[j] = 0.;
		f_v1[j] = 0.;
	}

	// set partial derivative for f_5
	f_v1[5] = 1.;
	ok &= std::fabs( f_v1[5] - 1. ) <= 1e-10; // partial f_5 w.r.t v_5^1

	// f_4 = f_5( v_1^0 , ... , v_4^1 , v_2^0 + v_4^0 , v_2^1 + v_4^1 )
	f_v0[2] += f_v0[5] * 1.;
	f_v0[4] += f_v0[5] * 1.;
	f_v1[2] += f_v1[5] * 1.;
	f_v1[4] += f_v1[5] * 1.;
	ok &= std::fabs( f_v0[2] - 0. ) <= 1e-10; // partial f_4 w.r.t. v_2^0
	ok &= std::fabs( f_v0[4] - 0. ) <= 1e-10; // partial f_4 w.r.t. v_4^0
	ok &= std::fabs( f_v1[2] - 1. ) <= 1e-10; // partial f_4 w.r.t. v_2^1
	ok &= std::fabs( f_v1[4] - 1. ) <= 1e-10; // partial f_4 w.r.t. v_4^1

	// f_3 = f_4( v_1^0 , ... , v_3^1, v_3^0 / 2 , v_3^1 / 2 )
	f_v0[3] += f_v0[4] / 2.;
	f_v1[3] += f_v1[4] / 2.;
	ok &= std::fabs( f_v0[3] - 0.  ) <= 1e-10; // partial f_3 w.r.t. v_3^0
	ok &= std::fabs( f_v1[3] - 0.5 ) <= 1e-10; // partial f_3 w.r.t. v_3^1

	// f_2 = f_3(  v_1^0 , ... , v_2^1, v_1^0 * v_1^0 , 2 * v_1^0 * v_1^1 )
	f_v0[1] += f_v0[3] * 2. * v0[1];
	f_v0[1] += f_v1[3] * 2. * v1[1];
	f_v1[1] += f_v1[3] * 2. * v0[1];
	ok &= std::fabs( f_v0[1] - 1.  ) <= 1e-10; // partial f_2 w.r.t. v_1^0
	ok &= std::fabs( f_v1[1] - 0.5 ) <= 1e-10; // partial f_2 w.r.t. v_1^1

	// f_1 = f_2( v_1^0 , v_1^1 , 1 + v_1^0 , v_1^1 )
	f_v0[1] += f_v0[2] * 1.;
	f_v1[1] += f_v1[2] * 1.;
	ok &= std::fabs( f_v0[1] - 1. ) <= 1e-10; // partial f_1 w.r.t. v_1^0
	ok &= std::fabs( f_v1[1] - 1.5) <= 1e-10; // partial f_1 w.r.t. v_1^1

	return ok;
}

Input File: introduction/exp_apx/exp_2_rev2.cpp
3.2.8: exp_2: CppAD Forward and Reverse Sweeps

3.2.8.a: Purpose
Use CppAD forward and reverse modes to compute the partial derivative with respect to  x , at the point  x = .5 , of the function exp_2(x) as defined by the 3.2.1: exp_2.hpp include file.

3.2.8.b: Exercises
  1. Create and test a modified version of the routine below that computes the same derivatives with respect to  x , but at the point  x = .1 , of the function exp_2(x).
  2. Create a routine called exp_3(x) that evaluates the function  \[
         f(x) = 1 + x + x^2 / 2 + x^3 / 6
    \] 
    Test a modified version of the routine below that computes the derivative of  f(x) at the point  x = .5 .
 

# include <cppad/cppad.hpp>  // http://www.coin-or.org/CppAD/ 
# include "exp_2.hpp"        // second order exponential approximation
bool exp_2_cppad(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::vector;    // can use any simple vector template class
	using CppAD::NearEqual; // checks if values are nearly equal

	// domain space vector
	size_t n = 1; // dimension of the domain space
	vector< AD<double> > X(n);
	X[0] = .5;    // value of x for this operation sequence

	// declare independent variables and start recording operation sequence
	CppAD::Independent(X);

	// evaluate our exponential approximation
	AD<double> x   = X[0];
	AD<double> apx = exp_2(x);  

	// range space vector
	size_t m = 1;  // dimension of the range space
	vector< AD<double> > Y(m);
	Y[0] = apx;    // variable that represents only range space component

	// Create f: X -> Y corresponding to this operation sequence
	// and stop recording. This also executes a zero order forward 
	// sweep using values in X for x.
	CppAD::ADFun<double> f(X, Y);

	// first order forward sweep that computes
	// partial of exp_2(x) with respect to x
	vector<double> dx(n);  // differential in domain space
	vector<double> dy(m);  // differential in range space
	dx[0] = 1.;            // direction for partial derivative
	dy    = f.Forward(1, dx);
	double check = 1.5;
	ok   &= NearEqual(dy[0], check, 1e-10, 1e-10);

	// first order reverse sweep that computes the derivative
	vector<double>  w(m);   // weights for components of the range
	vector<double> dw(n);   // derivative of the weighted function
	w[0] = 1.;              // there is only one weight
	dw   = f.Reverse(1, w); // derivative of w[0] * exp_2(x)
	check = 1.5;            // partial of exp_2(x) with respect to x
	ok   &= NearEqual(dw[0], check, 1e-10, 1e-10);

	// second order forward sweep that computes
	// second partial of exp_2(x) with respect to x
	vector<double> x2(n);     // second order Taylor coefficients 
	vector<double> y2(m);  
	x2[0] = 0.;               // evaluate second partial w.r.t. x
	y2    = f.Forward(2, x2);
	check = 0.5 * 1.;         // Taylor coef is 1/2 second derivative 
	ok   &= NearEqual(y2[0], check, 1e-10, 1e-10);

	// second order reverse sweep that computes
	// derivative of partial of exp_2(x) w.r.t. x
	dw.resize(2 * n);         // space for first and second derivatives
	dw    = f.Reverse(2, w);
	check = 1.;               // result should be second derivative
	ok   &= NearEqual(dw[0*2+1], check, 1e-10, 1e-10);

	return ok;
}


Input File: introduction/exp_apx/exp_2_cppad.cpp
3.3: An Epsilon Accurate Exponential Approximation

3.3.a: Syntax
# include "exp_eps.hpp"
y = exp_eps(x, epsilon)

3.3.b: Purpose
This is an example algorithm that is used to demonstrate how Algorithmic Differentiation works with loops and boolean decision variables (see 3.2: exp_2 for a simpler example).

3.3.c: Mathematical Function
The exponential function can be defined by  \[
     \exp (x) = 1 + x^1 / 1 ! + x^2 / 2 ! + \cdots 
\] 
We define  k ( x, \varepsilon )   as the smallest non-negative integer such that  \varepsilon \geq x^k / k ! ; i.e.,  \[
k( x, \varepsilon ) = 
     \min \{ k \in {\rm Z}_+ \; | \; \varepsilon \geq x^k / k ! \}
\] 
The mathematical form for our approximation of the exponential function is  \[
\begin{array}{rcl}
{\rm exp\_eps} (x , \varepsilon ) & = & \left\{
\begin{array}{ll}
\frac{1}{ {\rm exp\_eps} (-x , \varepsilon ) } 
     & {\rm if} \; x < 0 
\\
1 + x^1 / 1 ! + \cdots + x^{k( x, \varepsilon)} / k( x, \varepsilon ) !
     & {\rm otherwise}
\end{array}
\right.
\end{array}
\] 


3.3.d: include
The include command in the syntax is relative to
     cppad-yy-mm-dd/introduction/exp_apx
where cppad-yy-mm-dd is the distribution directory created during the beginning steps of the 2: installation of CppAD.

3.3.e: x
The argument x has prototype
     const Type &x
(see Type below). It specifies the point at which to evaluate the approximation for the exponential function.

3.3.f: epsilon
The argument epsilon has prototype
     const Type &epsilon
It specifies the accuracy with which to approximate the exponential function value; i.e., it is the value of  \varepsilon in the exponential function approximation defined above.

3.3.g: y
The result y has prototype
     Type y
It is the value of the exponential function approximation defined above.

3.3.h: Type
If u and v are Type objects and i is an int:

Operation    Result Type    Description
Type(i)      Type           object with value equal to i
u > v        bool           true if u is greater than v, false otherwise
u = v        Type           new u (and result) is value of v
u * v        Type           result is value of  u * v
u / v        Type           result is value of  u / v
u + v        Type           result is value of  u + v
-u           Type           result is value of  - u

3.3.i: Implementation
The file 3.3.1: exp_eps.hpp contains a C++ implementation of this function.

3.3.j: Test
The file 3.3.2: exp_eps.cpp contains a test of this implementation. It returns true for success and false for failure.

3.3.k: Exercises
  1. Using the definition of  k( x, \varepsilon ) above, what is the value of  k(.5, 1) ,  k(.5, .1) , and  k(.5, .01) ?
  2. Suppose that we make the following call to exp_eps:
     
    	double x       = 1.;
    	double epsilon = .01;
    	double y = exp_eps(x, epsilon);
    
    What is the value assigned to k, temp, term, and sum the first time through the while loop in 3.3.1: exp_eps.hpp ?
  3. Continuing the previous exercise, what is the value assigned to k, temp, term, and sum the second time through the while loop in 3.3.1: exp_eps.hpp ?

Input File: introduction/exp_apx/exp_eps.hpp
3.3.1: exp_eps: Implementation
 
template <class Type>
Type exp_eps(const Type &x, const Type &epsilon)
{	// abs_x = |x|
	Type abs_x = x;
	if( Type(0) > x )
		abs_x = - x;
	// initialize
	int  k    = 0;          // initial order 
	Type term = 1.;         // term = |x|^k / k !
	Type sum  = term;       // initial sum
	while(term > epsilon)
	{	k         = k + 1;          // order for next term
		Type temp = term * abs_x;   // term = |x|^k / (k-1)!
		term      = temp / Type(k); // term = |x|^k / k !
		sum       = sum + term;     // sum  = 1 + ... + |x|^k / k !
	}
	// In the case where x is negative, use exp(x) = 1 / exp(-|x|)
	if( Type(0) > x ) 
		sum = Type(1) / sum;
	return sum;
}

Input File: introduction/exp_apx/exp_eps.omh
3.3.2: exp_eps: Test of exp_eps
 
# include <cmath>             // for fabs function
# include "exp_eps.hpp"       // definition of exp_eps algorithm
bool exp_eps(void)
{	double x       = .5;
	double epsilon = .2;
	double check   = 1 + .5 + .125; // include 1 term less than epsilon
	bool   ok      = std::fabs( exp_eps(x, epsilon) - check ) <= 1e-10; 
	return ok;
}

Input File: introduction/exp_apx/exp_eps.omh
3.3.3: exp_eps: Operation Sequence and Zero Order Forward Sweep

3.3.3.a: Mathematical Form
Suppose that we use the algorithm 3.3.1: exp_eps.hpp to compute exp_eps(x, epsilon) with x equal to .5 and epsilon equal to .2. For this case, the mathematical form for the operation sequence corresponding to exp_eps is  \[
     f( x , \varepsilon ) = 1 + x + x^2 / 2  
\] 
Note that, for these particular values of x and epsilon, this is the same as the mathematical form for 3.2.3.a: exp_2 .

3.3.3.b: Operation Sequence
We consider the 9.4.g.b: operation sequence corresponding to the algorithm 3.3.1: exp_eps.hpp with the argument x equal to .5 and epsilon equal to .2.

3.3.3.b.a: Variable
We refer to values that depend on the input variables x and epsilon as variables.

3.3.3.b.b: Parameter
We refer to values that do not depend on the input variables x or epsilon as parameters. Operations where the result is a parameter are not included in the zero order sweep below.

3.3.3.b.c: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation and variable. A Forward sweep starts with the first operation and ends with the last.

3.3.3.b.d: Code
The Code column contains the C++ source code corresponding to the corresponding atomic operation in the sequence.

3.3.3.b.e: Operation
The Operation column contains the mathematical function corresponding to each atomic operation in the sequence.

3.3.3.b.f: Zero Order
The Zero Order column contains the 3.2.3.b: zero order derivative for the corresponding variable in the operation sequence. Forward mode refers to the fact that these coefficients are computed in the same order as the original algorithm; i.e., in order of increasing index.

3.3.3.b.g: Sweep
Index    Code    Operation    Zero Order
1    abs_x = x;  v_1 = x   v_1^{(0)} = 0.5
2    temp = term * abs_x;  v_2 = 1 * v_1   v_2^{(0)} = 0.5
3    term = temp / Type(k);  v_3 = v_2 / 1  v_3^{(0)} = 0.5
4    sum = sum + term;  v_4 = 1 + v_3   v_4^{(0)} = 1.5
5    temp = term * abs_x;  v_5 = v_3 * v_1   v_5^{(0)} = 0.25
6    term = temp / Type(k);  v_6 = v_5 / 2  v_6^{(0)} = 0.125
7    sum = sum + term;  v_7 = v_4 + v_6   v_7^{(0)} = 1.625
3.3.3.c: Return Value
The return value for this case is  \[
     1.625 =
     v_7^{(0)} =
     f ( x^{(0)} , \varepsilon^{(0)} )
\] 


3.3.3.d: Comparisons
If x were negative, or if epsilon were a much smaller or much larger value, the results of the following comparisons could be different:
 
	if( Type(0) > x ) 
	while(term > epsilon)
This in turn would result in a different operation sequence. Thus the operation sequence above corresponds to 3.3.1: exp_eps.hpp only for values of x and epsilon within a certain range. Note that there is a neighborhood of  x = 0.5 within which the comparisons have the same result and hence the operation sequence is the same.

3.3.3.e: Verification
The file 3.3.3.1: exp_eps_for0.cpp contains a routine that verifies the values computed above. It returns true for success and false for failure.

3.3.3.f: Exercises
  1. Suppose that  x^{(0)} = .1 , what is the result of a zero order forward sweep for the operation sequence above; i.e., what are the corresponding values for  v_1^{(0)} , v_2^{(0)} , \ldots , v_7^{(0)} .
  2. Create a modified version of 3.3.3.1: exp_eps_for0.cpp that verifies the values you obtained for the previous exercise.
  3. Create and run a main program that reports the result of calling the modified version of 3.3.3.1: exp_eps_for0.cpp in the previous exercise.

Input File: introduction/exp_apx/exp_eps.omh
3.3.3.1: exp_eps: Verify Zero Order Forward Sweep
 # include <cmath>                // for fabs function
bool exp_eps_for0(double *v0)    // double v0[8]
{	bool  ok = true;
	double x = .5;

	v0[1] = x;                                  // abs_x = x;
	ok  &= std::fabs( v0[1] - 0.5) < 1e-10;

	v0[2] = 1. * v0[1];                         // temp = term * abs_x;
	ok  &= std::fabs( v0[2] - 0.5) < 1e-10;

	v0[3] = v0[2] / 1.;                         // term = temp / Type(k);
	ok  &= std::fabs( v0[3] - 0.5) < 1e-10;

	v0[4] = 1. + v0[3];                         // sum = sum + term;
	ok  &= std::fabs( v0[4] - 1.5) < 1e-10;

	v0[5] = v0[3] * v0[1];                      // temp = term * abs_x;
	ok  &= std::fabs( v0[5] - 0.25) < 1e-10;

	v0[6] = v0[5] / 2.;                         // term = temp / Type(k);
	ok  &= std::fabs( v0[6] - 0.125) < 1e-10;

	v0[7] = v0[4] + v0[6];                      // sum = sum + term;
	ok  &= std::fabs( v0[7] - 1.625) < 1e-10;

	return ok;
}
bool exp_eps_for0(void)
{	double v0[8];
	return exp_eps_for0(v0);
}

Input File: introduction/exp_apx/exp_eps_for0.cpp
3.3.4: exp_eps: First Order Forward Sweep

3.3.4.a: First Order Expansion
We define  x(t) and  \varepsilon(t) near  t = 0 by the first order expansions  \[
\begin{array}{rcl}
     x(t) & = & x^{(0)} + x^{(1)} * t 
     \\
     \varepsilon(t) & = & \varepsilon^{(0)} + \varepsilon^{(1)} * t
\end{array}
\]
It follows that  x^{(0)} and  \varepsilon^{(0)} are the zero order derivatives, and  x^{(1)} and  \varepsilon^{(1)} the first order derivatives, of  x(t) and  \varepsilon (t) at  t = 0 .

3.3.4.b: Mathematical Form
Suppose that we use the algorithm 3.3.1: exp_eps.hpp to compute exp_eps(x, epsilon) with x equal to .5 and epsilon equal to .2. For this case, the mathematical function for the operation sequence corresponding to exp_eps is  \[
f ( x , \varepsilon ) =   1 + x + x^2 / 2  
\] 
The corresponding partial derivative with respect to  x , and the value of the derivative, are  \[
\partial_x f ( x , \varepsilon ) =   1 + x  = 1.5
\] 


3.3.4.c: Operation Sequence

3.3.4.c.a: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation. A Forward sweep starts with the first operation and ends with the last.

3.3.4.c.b: Operation
The Operation column contains the mathematical function corresponding to each atomic operation in the sequence.

3.3.4.c.c: Zero Order
The Zero Order column contains the zero order derivatives for the corresponding variable in the operation sequence (see 3.2.4.d.f: zero order sweep ).

3.3.4.c.d: Derivative
The Derivative column contains the mathematical function corresponding to the derivative with respect to  t , at  t = 0 , for each variable in the sequence.

3.3.4.c.e: First Order
The First Order column contains the first order derivatives for the corresponding variable in the operation sequence; i.e.,  \[
     v_j (t) = v_j^{(0)} + v_j^{(1)} t
\] 
We use  x^{(1)} = 1 and  \varepsilon^{(1)} = 0 , so that differentiation with respect to  t , at  t = 0 , is the same as partial differentiation with respect to  x at  x = x^{(0)} .

3.3.4.c.f: Sweep
Index    Operation    Zero Order    Derivative    First Order
1     v_1 = x  0.5  v_1^{(1)} = x^{(1)}    v_1^{(1)} = 1
2     v_2 = 1 * v_1 0.5  v_2^{(1)} = 1 * v_1^{(1)}  v_2^{(1)} = 1
3     v_3 = v_2 / 1 0.5  v_3^{(1)} = v_2^{(1)} / 1  v_3^{(1)} = 1
4     v_4 = 1 + v_3 1.5  v_4^{(1)} = v_3^{(1)}   v_4^{(1)} = 1
5     v_5 = v_3 * v_1 0.25  v_5^{(1)} = v_3^{(1)} * v_1^{(0)} + v_3^{(0)} * v_1^{(1)}  v_5^{(1)} = 1
6     v_6 = v_5 / 2 0.125  v_6^{(1)} = v_5^{(1)} / 2  v_6^{(1)} = 0.5
7     v_7 = v_4 + v_6 1.625   v_7^{(1)} = v_4^{(1)} + v_6^{(1)}   v_7^{(1)} = 1.5
3.3.4.d: Return Value
The derivative of the return value for this case is  \[
\begin{array}{rcl}
1.5 
& = & 
v_7^{(1)} =
\left[ \D{v_7}{t} \right]_{t=0} =
\left[ \D{}{t} f( x^{(0)} + x^{(1)} * t , \varepsilon^{(0)} ) \right]_{t=0}
\\
& = &
\partial_x f ( x^{(0)} , \varepsilon^{(0)} ) * x^{(1)} = 
\partial_x f ( x^{(0)} , \varepsilon^{(0)} )
\end{array}
\] 
(We have used the fact that  x^{(1)} = 1 and  \varepsilon^{(1)} = 0 .)

3.3.4.e: Verification
The file 3.3.4.1: exp_eps_for1.cpp contains a routine that verifies the values computed above. It returns true for success and false for failure.

3.3.4.f: Exercises
  1. Suppose that  x = .1 , what are the results of a zero and first order forward mode sweep for the operation sequence above; i.e., what are the corresponding values for  v_1^{(0)}, v_2^{(0)}, \cdots , v_7^{(0)} and  v_1^{(1)}, v_2^{(1)}, \cdots , v_7^{(1)} ?
  2. Create a modified version of 3.3.4.1: exp_eps_for1.cpp that verifies the derivative values from the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.3.4.1: exp_eps_for1.cpp .
  3. Suppose that  x = .1 and  \varepsilon = .2 , what is the operation sequence corresponding to exp_eps(x, epsilon)?

Input File: introduction/exp_apx/exp_eps.omh
3.3.4.1: exp_eps: Verify First Order Forward Sweep
 # include <cmath>                     // for fabs function
extern bool exp_eps_for0(double *v0); // computes zero order forward sweep
bool exp_eps_for1(double *v1)         // double v[8]
{	bool ok = true;
	double v0[8];

	// set the value of v0[j] for j = 1 , ... , 7
	ok &= exp_eps_for0(v0);

	v1[1] = 1.;                                      // v1 = x
	ok    &= std::fabs( v1[1] - 1. ) <= 1e-10;

	v1[2] = 1. * v1[1];                              // v2 = 1 * v1
	ok    &= std::fabs( v1[2] - 1. ) <= 1e-10;

	v1[3] = v1[2] / 1.;                              // v3 = v2 / 1
	ok    &= std::fabs( v1[3] - 1. ) <= 1e-10;

	v1[4] = v1[3];                                   // v4 = 1 + v3
	ok    &= std::fabs( v1[4] - 1. ) <= 1e-10;

	v1[5] = v1[3] * v0[1] + v0[3] * v1[1];           // v5 = v3 * v1
	ok    &= std::fabs( v1[5] - 1. ) <= 1e-10;

	v1[6] = v1[5] / 2.;                              // v6 = v5 / 2
	ok    &= std::fabs( v1[6] - 0.5 ) <= 1e-10;

	v1[7] = v1[4] + v1[6];                           // v7 = v4 + v6
	ok    &= std::fabs( v1[7] - 1.5 ) <= 1e-10;

	return ok;
}
bool exp_eps_for1(void)
{	double v1[8];
	return exp_eps_for1(v1);
}

Input File: introduction/exp_apx/exp_eps_for1.cpp
3.3.5: exp_eps: First Order Reverse Sweep

3.3.5.a: Purpose
First order reverse mode uses the 3.3.3.b: operation sequence , and zero order forward sweep values, to compute the first order derivative of one dependent variable with respect to all the independent variables. The computations are done in reverse of the order of the computations in the original algorithm.

3.3.5.b: Mathematical Form
Suppose that we use the algorithm 3.3.1: exp_eps.hpp to compute exp_eps(x, epsilon) with x equal to .5 and epsilon equal to .2. For this case, the mathematical function for the operation sequence corresponding to exp_eps is  \[
f ( x , \varepsilon ) =   1 + x + x^2 / 2  
\] 
The corresponding partial derivatives, and the value of the derivatives, are  \[
\begin{array}{rcl}
\partial_x f ( x , \varepsilon ) & = &  1 + x  = 1.5
\\
\partial_\varepsilon f ( x , \varepsilon ) & = &  0
\end{array}
\] 


3.3.5.c: epsilon
Since  \varepsilon is an independent variable, it could be included as an argument to all of the  f_j functions below. The result would be that all the partials with respect to  \varepsilon would be zero and hence we drop it to simplify the presentation.

3.3.5.d: f_7
In reverse mode we choose one dependent variable and compute its derivative with respect to all the independent variables. For our example, we chose the value returned by 3.3.1: exp_eps.hpp which is  v_7 . We begin with the function  f_7 where  v_7 is both an argument and the value of the function; i.e.,  \[
\begin{array}{rcl}
f_7 ( v_1 , v_2 , v_3 , v_4 , v_5 , v_6 , v_7 ) & = & v_7 
\\
\D{f_7}{v_7} & = & 1
\end{array}
\] 
All the other partial derivatives of  f_7 are zero.

3.3.5.e: Index 7: f_6
The last operation has index 7,  \[
     v_7 =   v_4 + v_6  
\] 
We define the function  f_6 ( v_1 , v_2 , v_3 , v_4 , v_5 , v_6 )  as equal to  f_7 except that  v_7 is eliminated using this operation; i.e.  \[
f_6  = 
f_7 [ v_1 , v_2 , v_3 , v_4 , v_5 , v_6 , v_7 ( v_4 , v_6 ) ]
\] 
It follows that  \[
\begin{array}{rcll}
\D{f_6}{v_4} 
& = & \D{f_7}{v_4} + 
     \D{f_7}{v_7} * \D{v_7}{v_4} 
& = 1
\\
\D{f_6}{v_6} 
& = & \D{f_7}{v_6} + 
     \D{f_7}{v_7} * \D{v_7}{v_6} 
& = 1
\end{array}
\] 
All the other partial derivatives of  f_6 are zero.

3.3.5.f: Index 6: f_5
The previous operation has index 6,  \[
     v_6 = v_5 / 2
\] 
We define the function  f_5 (  v_1 , v_2 , v_3 , v_4 , v_5 )  as equal to  f_6  except that  v_6  is eliminated using this operation; i.e.,  \[
f_5 = 
f_6 [  v_1 , v_2 , v_3 , v_4 , v_5 , v_6 ( v_5 ) ]
\] 
It follows that  \[
\begin{array}{rcll}
\D{f_5}{v_4} 
& = & \D{f_6}{v_4} 
& = 1
\\
\D{f_5}{v_5} 
& = & \D{f_6}{v_5} + 
     \D{f_6}{v_6} * \D{v_6}{v_5} 
& = 0.5
\end{array}
\] 
All the other partial derivatives of  f_5 are zero.

3.3.5.g: Index 5: f_4
The previous operation has index 5,  \[
     v_5 = v_3 * v_1
\] 
We define the function  f_4 (  v_1 , v_2 , v_3 , v_4 )  as equal to  f_5  except that  v_5  is eliminated using this operation; i.e.,  \[
f_4 = 
f_5 [  v_1 , v_2 , v_3 , v_4 , v_5 ( v_3 , v_1 )  ]
\] 
Given the information from the forward sweep, we have  v_3 =  0.5  and  v_1 = 0.5  . It follows that  \[
\begin{array}{rcll}
\D{f_4}{v_1} 
& = & \D{f_5}{v_1} + 
     \D{f_5}{v_5} * \D{v_5}{v_1} 
& =  0.25
\\
\D{f_4}{v_2} & = & \D{f_5}{v_2}  & = 0
\\
\D{f_4}{v_3} 
& = & \D{f_5}{v_3} + 
     \D{f_5}{v_5} * \D{v_5}{v_3} 
& =  0.25
\\
\D{f_4}{v_4}
& = & \D{f_5}{v_4} 
& = 1
\end{array}
\] 


3.3.5.h: Index 4: f_3
The previous operation has index 4,  \[
     v_4 = 1 + v_3
\] 
We define the function  f_3 (  v_1 , v_2 , v_3 )  as equal to  f_4  except that  v_4  is eliminated using this operation; i.e.,  \[
f_3 = 
f_4 [  v_1 , v_2 , v_3 , v_4 ( v_3 )  ]
\] 
It follows that  \[
\begin{array}{rcll}
\D{f_3}{v_1} 
& = & \D{f_4}{v_1} 
& =  0.25
\\
\D{f_3}{v_2} & = & \D{f_4}{v_2}  & = 0
\\
\D{f_3}{v_3} 
& = & \D{f_4}{v_3} + 
     \D{f_4}{v_4} * \D{v_4}{v_3} 
& =  1.25
\end{array}
\] 


3.3.5.i: Index 3: f_2
The previous operation has index 3,  \[
     v_3 = v_2 / 1
\] 
We define the function  f_2 (  v_1 , v_2 )  as equal to  f_3  except that  v_3  is eliminated using this operation; i.e.,  \[
f_2 = 
f_3 [  v_1 , v_2 , v_3 ( v_2 )  ]
\] 
It follows that  \[
\begin{array}{rcll}
\D{f_2}{v_1} 
& = & \D{f_3}{v_1} 
& =  0.25
\\
\D{f_2}{v_2} & = & \D{f_3}{v_2}  +
     \D{f_3}{v_3} * \D{v_3}{v_2}
& = 1.25
\end{array}
\] 


3.3.5.j: Index 2: f_1
The previous operation has index 2,  \[
     v_2 = 1 * v_1
\] 
We define the function  f_1 (  v_1 )  as equal to  f_2  except that  v_2  is eliminated using this operation; i.e.,  \[
f_1 = 
f_2 [  v_1 , v_2 ( v_1 )  ]
\] 
It follows that  \[
\begin{array}{rcll}
\D{f_1}{v_1} & = & \D{f_2}{v_1}  +
     \D{f_2}{v_2} * \D{v_2}{v_1}
& = 1.5
\end{array}
\] 
Note that  v_1 is equal to  x , so the derivative of exp_eps(x, epsilon) at x equal to .5 and epsilon equal to .2 is 1.5 in the x direction and zero in the epsilon direction. We also note that 3.3.4: forward mode gave the same result for the partial in the x direction.

3.3.5.k: Verification
The file 3.3.5.1: exp_eps_rev1.cpp contains a routine that verifies the values computed above. It returns true for success and false for failure. It only tests the partial derivatives of  f_j that might not be equal to the corresponding partials of  f_{j+1} ; i.e., the other partials of  f_j must be equal to the corresponding partials of  f_{j+1} .

3.3.5.l: Exercises
  1. Consider the case where  x = .1 and we first perform a zero order forward mode sweep for the operation sequence used above. What are the results of a first order reverse mode sweep; i.e., what are the corresponding values of  \D{f_j}{v_k} for all  j, k such that  \D{f_j}{v_k} \neq 0 ?
  2. Create a modified version of 3.3.5.1: exp_eps_rev1.cpp that verifies the values you obtained for the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.3.5.1: exp_eps_rev1.cpp .

Input File: introduction/exp_apx/exp_eps.omh
3.3.5.1: exp_eps: Verify First Order Reverse Sweep
 # include <cstddef>                     // define size_t
# include <cmath>                       // for fabs function
extern bool exp_eps_for0(double *v0);   // computes zero order forward sweep
bool exp_eps_rev1(void)
{	bool ok = true;

	// set the value of v0[j] for j = 1 , ... , 7
	double v0[8];
	ok &= exp_eps_for0(v0);

	// initialize all partial derivatives as zero
	double f_v[8];
	size_t j;
	for(j = 0; j < 8; j++)
		f_v[j] = 0.;

	// set partial derivative for f7
	f_v[7] = 1.;
	ok    &= std::fabs( f_v[7] - 1. ) <= 1e-10;     // f7_v7

	// f6( v1 , v2 , v3 , v4 , v5 , v6 )
	f_v[4] += f_v[7] * 1.;
	f_v[6] += f_v[7] * 1.;
	ok     &= std::fabs( f_v[4] - 1.  ) <= 1e-10;   // f6_v4
	ok     &= std::fabs( f_v[6] - 1.  ) <= 1e-10;   // f6_v6

	// f5( v1 , v2 , v3 , v4 , v5 )
	f_v[5] += f_v[6] / 2.;
	ok     &= std::fabs( f_v[5] - 0.5 ) <= 1e-10;   // f5_v5

	// f4( v1 , v2 , v3 , v4 )
	f_v[1] += f_v[5] * v0[3];
	f_v[3] += f_v[5] * v0[1];
	ok     &= std::fabs( f_v[1] - 0.25) <= 1e-10;   // f4_v1
	ok     &= std::fabs( f_v[3] - 0.25) <= 1e-10;   // f4_v3

	// f3( v1 , v2 , v3 )
	f_v[3] += f_v[4] * 1.;
	ok     &= std::fabs( f_v[3] - 1.25) <= 1e-10;   // f3_v3

	// f2( v1 , v2 )
	f_v[2] += f_v[3] / 1.;
	ok     &= std::fabs( f_v[2] - 1.25) <= 1e-10;   // f2_v2

	// f1( v1 )
	f_v[1] += f_v[2] * 1.;
	ok     &= std::fabs( f_v[1] - 1.5 ) <= 1e-10;   // f1_v1

	return ok;
}

Input File: introduction/exp_apx/exp_eps_rev1.cpp
3.3.6: exp_eps: Second Order Forward Mode

3.3.6.a: Second Order Expansion
We define  x(t) and  \varepsilon(t) near  t = 0 by the second order expansions  \[
\begin{array}{rcl}
     x(t) & = & x^{(0)} + x^{(1)} * t  + x^{(2)} * t^2 / 2
     \\
     \varepsilon(t) & = & \varepsilon^{(0)} + \varepsilon^{(1)} * t
                      +   \varepsilon^{(2)} * t^2 / 2
\end{array}
\]
It follows that for  k = 0 , 1 , 2 ,  \[
\begin{array}{rcl}
     x^{(k)}  & = & \dpow{k}{t} x (0)
     \\
     \varepsilon^{(k)}  & = & \dpow{k}{t} \varepsilon (0)
\end{array}
\] 


3.3.6.b: Purpose
In general, a second order forward sweep is given the 3.2.4.a: first order expansion for all of the variables in an operation sequence, and the second order derivatives for the independent variables. It uses these to compute the second order derivative, and thereby obtain the second order expansion, for all the variables in the operation sequence.

3.3.6.c: Mathematical Form
Suppose that we use the algorithm 3.3.1: exp_eps.hpp to compute exp_eps(x, epsilon) with x equal to .5 and epsilon equal to .2. For this case, the mathematical function for the operation sequence corresponding to exp_eps is  \[
f ( x , \varepsilon ) =   1 + x + x^2 / 2  
\] 
The corresponding second partial derivative with respect to  x , and the value of the derivative, are  \[
\Dpow{2}{x} f ( x , \varepsilon ) =   1.
\] 


3.3.6.d: Operation Sequence

3.3.6.d.a: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation. A Forward sweep starts with the first operation and ends with the last.

3.3.6.d.b: Zero
The Zero column contains the zero order sweep results for the corresponding variable in the operation sequence (see 3.2.3.c.e: zero order sweep ).

3.3.6.d.c: Operation
The Operation column contains the first order sweep operation for this variable.

3.3.6.d.d: First
The First column contains the first order sweep results for the corresponding variable in the operation sequence (see 3.2.4.d.f: first order sweep ).

3.3.6.d.e: Derivative
The Derivative column contains the mathematical function corresponding to the second derivative with respect to  t , at  t = 0 , for each variable in the sequence.

3.3.6.d.f: Second
The Second column contains the second order derivatives for the corresponding variable in the operation sequence; i.e., the second order expansion for the i-th variable is given by  \[
     v_i (t) = v_i^{(0)} + v_i^{(1)} * t +  v_i^{(2)} * t^2 / 2
\] 
We use  x^{(1)} = 1 ,  x^{(2)} = 0 ,  \varepsilon^{(1)} = 0 , and  \varepsilon^{(2)} = 0 so that second order differentiation with respect to  t , at  t = 0 , is the same as the second partial differentiation with respect to  x at  x = x^{(0)} .

3.3.6.d.g: Sweep
Index    Zero    Operation    First    Derivative    Second
1 0.5  v_1^{(1)} = x^{(1)}   1  v_1^{(2)} = x^{(2)}   0
2 0.5  v_2^{(1)} = 1 * v_1^{(1)} 1  v_2^{(2)} = 1 * v_1^{(2)} 0
3 0.5  v_3^{(1)} = v_2^{(1)} / 1 1  v_3^{(2)} = v_2^{(2)} / 1 0
4 1.5  v_4^{(1)} = v_3^{(1)}  1  v_4^{(2)} = v_3^{(2)}  0
5 0.25  v_5^{(1)} = v_3^{(1)} * v_1^{(0)} + v_3^{(0)} * v_1^{(1)} 1  v_5^{(2)} = v_3^{(2)} * v_1^{(0)} + 2 * v_3^{(1)} * v_1^{(1)} + v_3^{(0)} * v_1^{(2)} 2
6 0.125  v_6^{(1)} = v_5^{(1)} / 2 0.5  v_6^{(2)} = v_5^{(2)} / 2 1
7 1.625   v_7^{(1)} = v_4^{(1)} + v_6^{(1)} 1.5   v_7^{(2)} = v_4^{(2)} + v_6^{(2)} 1
3.3.6.e: Return Value
The second derivative of the return value for this case is  \[
\begin{array}{rcl}
1
& = & 
v_7^{(2)} =
\left[ \Dpow{2}{t} v_7 \right]_{t=0} =
\left[ \Dpow{2}{t} f( x^{(0)} + x^{(1)} * t , \varepsilon^{(0)} ) \right]_{t=0}
\\
& = &
x^{(1)} * \Dpow{2}{x} f ( x^{(0)} ,  \varepsilon^{(0)} ) * x^{(1)} =
\Dpow{2}{x} f ( x^{(0)} ,  \varepsilon^{(0)} )
\end{array}
\] 
(We have used the fact that  x^{(1)} = 1 ,  x^{(2)} = 0 ,  \varepsilon^{(1)} = 0 , and  \varepsilon^{(2)} = 0 .)

3.3.6.f: Verification
The file 3.3.6.1: exp_eps_for2.cpp contains a routine which verifies the values computed above. It returns true for success and false for failure.

3.3.6.g: Exercises
  1. Which statement in the routine defined by 3.3.6.1: exp_eps_for2.cpp uses the values that are calculated by the routine defined by 3.3.4.1: exp_eps_for1.cpp ?
  2. Suppose that  x = .1 , what are the results of a zero, first, and second order forward sweep for the operation sequence above; i.e., what are the corresponding values for  v_i^{(k)} for  i = 1, \ldots , 7 and  k = 0, 1, 2 .
  3. Create a modified version of 3.3.6.1: exp_eps_for2.cpp that verifies the derivative values from the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.3.6.1: exp_eps_for2.cpp .

Input File: introduction/exp_apx/exp_eps.omh
3.3.6.1: exp_eps: Verify Second Order Forward Sweep
 # include <cmath>                     // for fabs function
extern bool exp_eps_for0(double *v0); // computes zero order forward sweep
extern bool exp_eps_for1(double *v1); // computes first order forward sweep
bool exp_eps_for2(void)
{	bool ok = true;
	double v0[8], v1[8], v2[8];

	// set the value of v0[j], v1[j] for j = 1 , ... , 7
	ok &= exp_eps_for0(v0);
	ok &= exp_eps_for1(v1);

	v2[1] = 0.;                                      // v1 = x
	ok    &= std::fabs( v2[1] - 0. ) <= 1e-10;

	v2[2] = 1. * v2[1];                              // v2 = 1 * v1
	ok    &= std::fabs( v2[2] - 0. ) <= 1e-10;

	v2[3] = v2[2] / 1.;                              // v3 = v2 / 1
	ok    &= std::fabs( v2[3] - 0. ) <= 1e-10;

	v2[4] = v2[3];                                   // v4 = 1 + v3
	ok    &= std::fabs( v2[4] - 0. ) <= 1e-10;

	v2[5] = v2[3] * v0[1] + 2. * v1[3] * v1[1]       // v5 = v3 * v1
	      + v0[3] * v2[1];
	ok    &= std::fabs( v2[5] - 2. ) <= 1e-10;

	v2[6] = v2[5] / 2.;                              // v6 = v5 / 2
	ok    &= std::fabs( v2[6] - 1. ) <= 1e-10;

	v2[7] = v2[4] + v2[6];                           // v7 = v4 + v6
	ok    &= std::fabs( v2[7] - 1. ) <= 1e-10;

	return ok;
}

Input File: introduction/exp_apx/exp_eps_for2.cpp
3.3.7: exp_eps: Second Order Reverse Sweep

3.3.7.a: Purpose
In general, a second order reverse sweep is given the 3.3.4.a: first order expansion for all of the variables in an operation sequence. Given a choice of a particular variable, it computes the derivative, of that variable's first order expansion coefficient, with respect to all of the independent variables.

3.3.7.b: Mathematical Form
Suppose that we use the algorithm 3.3.1: exp_eps.hpp to compute exp_eps(x, epsilon) with x equal to .5 and epsilon equal to .2. For this case, the mathematical function for the operation sequence corresponding to exp_eps is  \[
f ( x , \varepsilon ) =   1 + x + x^2 / 2  
\] 
The corresponding derivative of the partial derivative with respect to  x is  \[
\begin{array}{rcl}
\Dpow{2}{x} f ( x , \varepsilon ) & = &  1
\\
\partial_\varepsilon \partial_x f ( x , \varepsilon ) & = &  0
\end{array}
\] 


3.3.7.c: epsilon
Since  \varepsilon is an independent variable, it could be included as an argument to all of the  f_j functions below. The result would be that all the partials with respect to  \varepsilon would be zero and hence we drop it to simplify the presentation.

3.3.7.d: f_7
In reverse mode we choose one dependent variable and compute its derivative with respect to all the independent variables. For our example, we chose the value returned by 3.3.1: exp_eps.hpp which is  v_7 . We begin with the function  f_7 where  v_7 is both an argument and the value of the function; i.e.,  \[
\begin{array}{rcl}
f_7 \left( 
     v_1^{(0)} , v_1^{(1)} , \ldots , v_7^{(0)} , v_7^{(1)} 
\right) 
& = & v_7^{(1)} 
\\
\D{f_7}{v_7^{(1)}} & = & 1
\end{array}
\] 
All the other partial derivatives of  f_7 are zero.

3.3.7.e: Index 7: f_6
The last operation has index 7,  \[
\begin{array}{rcl}
     v_7^{(0)} & = &   v_4^{(0)} + v_6^{(0)}  
     \\
     v_7^{(1)} & = &   v_4^{(1)} + v_6^{(1)}  
\end{array}
\] 
We define the function  f_6 \left( v_1^{(0)} , \ldots , v_6^{(1)} \right)  as equal to  f_7 except that  v_7^{(0)} and  v_7^{(1)} are eliminated using this operation; i.e.  \[
f_6  = 
f_7 \left[ v_1^{(0)} , \ldots , v_6^{(1)} , 
     v_7^{(0)} \left( v_4^{(0)} , v_6^{(0)} \right)  ,
     v_7^{(1)} \left( v_4^{(1)} , v_6^{(1)} \right)  
\right]
\] 
It follows that  \[
\begin{array}{rcll}
\D{f_6}{v_4^{(1)}} 
& = & \D{f_7}{v_4^{(1)}} + 
     \D{f_7}{v_7^{(1)}} * \D{v_7^{(1)}}{v_4^{(1)}} 
& = 1
\\
\D{f_6}{v_6^{(1)}} 
& = & \D{f_7}{v_6^{(1)}} + 
     \D{f_7}{v_7^{(1)}} * \D{v_7^{(1)}}{v_6^{(1)}} 
& = 1
\end{array}
\] 
All the other partial derivatives of  f_6 are zero.

3.3.7.f: Index 6: f_5
The previous operation has index 6,  \[
\begin{array}{rcl}
     v_6^{(0)} & = & v_5^{(0)} / 2
     \\
     v_6^{(1)} & = & v_5^{(1)} / 2
\end{array}
\] 
We define the function  f_5 \left( v_1^{(0)} , \ldots , v_5^{(1)} \right)  as equal to  f_6  except that  v_6^{(0)} and  v_6^{(1)} are eliminated using this operation; i.e.  \[
f_5 = 
f_6 \left[ v_1^{(0)} , \ldots , v_5^{(1)} , 
     v_6^{(0)} \left( v_5^{(0)} \right) ,
     v_6^{(1)} \left( v_5^{(1)} \right) 
\right]
\] 
It follows that  \[
\begin{array}{rcll}
\D{f_5}{v_4^{(1)}} 
& = & \D{f_6}{v_4^{(1)}} 
& = 1
\\
\D{f_5}{v_5^{(1)}} 
& = & \D{f_6}{v_5^{(1)}} + 
     \D{f_6}{v_6^{(1)}} * \D{v_6^{(1)}}{v_5^{(1)}} 
& = 0.5
\end{array}
\] 
All the other partial derivatives of  f_5 are zero.

3.3.7.g: Index 5: f_4
The previous operation has index 5,  \[
\begin{array}{rcl}
     v_5^{(0)} & = & v_3^{(0)} * v_1^{(0)}
     \\
     v_5^{(1)} & = & v_3^{(1)} * v_1^{(0)} + v_3^{(0)} * v_1^{(1)}
\end{array}
\] 
We define the function  f_4 \left( v_1^{(0)} , \ldots , v_4^{(1)} \right)  as equal to  f_5  except that  v_5^{(0)} and  v_5^{(1)} are eliminated using this operation; i.e.  \[
f_4 = 
f_5 \left[  v_1^{(0)} , \ldots , v_4^{(1)} ,
     v_5^{(0)} \left( v_1^{(0)}, v_3^{(0)} \right) ,
     v_5^{(1)} \left( v_1^{(0)}, v_1^{(1)}, v_3^{(0)} , v_3^{(1)} \right)
\right]
\] 
Using the information from the forward sweep,  v_1^{(0)} =  0.5 ,  v_3^{(0)} =  0.5 ,  v_1^{(1)} =  1 ,  v_3^{(1)} =  1 , together with the fact that the partial of  f_5 with respect to  v_5^{(0)} is zero, it follows that  \[
\begin{array}{rcll}
\D{f_4}{v_1^{(0)}}
& = & \D{f_5}{v_1^{(0)}}
  +   \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_1^{(0)}} 
& = 0.5
\\
\D{f_4}{v_1^{(1)}}
& = & \D{f_5}{v_1^{(1)}}
  +   \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_1^{(1)}} 
& = 0.25
\\
\D{f_4}{v_3^{(0)}}
& = & \D{f_5}{v_3^{(0)}}
  +   \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_3^{(0)}} 
& = 0.5
\\
\D{f_4}{v_3^{(1)}}
& = & \D{f_5}{v_3^{(1)}}
  +   \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_3^{(1)}} 
& = 0.25
\\
\D{f_4}{v_4^{(1)}}
& = & \D{f_5}{v_4^{(1)}}
& = 1
\end{array}
\] 
All the other partial derivatives of  f_4 are zero.
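The multiplication step is the first one that mixes the two Taylor orders. A sketch of the accumulation, using the forward sweep values above (the function rev_mul_step and its local names are ours, written in the style of this manual's verification routines):

```cpp
# include <cassert>

// verify the nonzero new partials of f_4 at x = .5, epsilon = .2
bool rev_mul_step(void)
{	bool ok = true;

	// values from the forward sweeps
	double v0_1 = 0.5, v0_3 = 0.5;  // v_1^0 , v_3^0
	double v1_1 = 1.0, v1_3 = 1.0;  // v_1^1 , v_3^1

	// partials of f_5 with respect to v_5^0 and v_5^1
	double f_v0_5 = 0.0, f_v1_5 = 0.5;

	// partials of f_4 start at the corresponding partials of f_5 (zero here)
	double f_v0_1 = 0.0, f_v0_3 = 0.0, f_v1_1 = 0.0, f_v1_3 = 0.0;

	// reverse v_5^0 = v_3^0 * v_1^0 and v_5^1 = v_3^1 * v_1^0 + v_3^0 * v_1^1
	f_v0_1 += f_v0_5 * v0_3 + f_v1_5 * v1_3;  // partial f_4 w.r.t. v_1^0
	f_v0_3 += f_v0_5 * v0_1 + f_v1_5 * v1_1;  // partial f_4 w.r.t. v_3^0
	f_v1_1 += f_v1_5 * v0_3;                  // partial f_4 w.r.t. v_1^1
	f_v1_3 += f_v1_5 * v0_1;                  // partial f_4 w.r.t. v_3^1

	ok &= (f_v0_1 == 0.5 ) && (f_v0_3 == 0.5 );
	ok &= (f_v1_1 == 0.25) && (f_v1_3 == 0.25);
	return ok;
}
```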

3.3.7.h: Index 4: f_3
The previous operation has index 4,  \[
\begin{array}{rcl}
     v_4^{(0)} & = & 1 + v_3^{(0)}
     \\
     v_4^{(1)} & = & v_3^{(1)}
\end{array}
\] 
We define the function  f_3 \left( v_1^{(0)} , \ldots , v_3^{(1)} \right)  as equal to  f_4  except that  v_4^{(0)} and  v_4^{(1)} are eliminated using this operation; i.e.  \[
f_3 = 
f_4 \left[ v_1^{(0)} , \ldots , v_3^{(1)} ,
     v_4^{(0)} \left( v_3^{(0)} \right) ,
     v_4^{(1)} \left( v_3^{(1)} \right)  
\right]
\] 
It follows that  \[
\begin{array}{rcll}
\D{f_3}{v_1^{(0)}} 
& = & \D{f_4}{v_1^{(0)}}
& =  0.5
\\
\D{f_3}{v_1^{(1)}} 
& = & \D{f_4}{v_1^{(1)}}
& =  0.25
\\
\D{f_3}{v_2^{(0)}} 
& = & \D{f_4}{v_2^{(0)}}  
& = 0
\\
\D{f_3}{v_2^{(1)}} 
& = & \D{f_4}{v_2^{(1)}}  
& = 0
\\
\D{f_3}{v_3^{(0)}} 
& = & \D{f_4}{v_3^{(0)}} 
  +   \D{f_4}{v_4^{(0)}} * \D{v_4^{(0)}}{v_3^{(0)}} 
& = 0.5
\\
\D{f_3}{v_3^{(1)}} 
& = & \D{f_4}{v_3^{(1)}} 
  +   \D{f_4}{v_4^{(1)}} * \D{v_4^{(1)}}{v_3^{(1)}} 
& = 1.25
\end{array}
\] 


3.3.7.i: Index 3: f_2
The previous operation has index 3,  \[
\begin{array}{rcl}
     v_3^{(0)} & = & v_2^{(0)} / 1
     \\
     v_3^{(1)} & = & v_2^{(1)} / 1
\end{array}
\] 
We define the function  f_2 \left( v_1^{(0)} , \ldots , v_2^{(1)} \right)  as equal to  f_3  except that  v_3^{(0)} and  v_3^{(1)} are eliminated using this operation; i.e.  \[
f_2 = 
f_3 \left[ v_1^{(0)} , \ldots , v_2^{(1)} ,
     v_3^{(0)} \left( v_2^{(0)} \right) ,
     v_3^{(1)} \left( v_2^{(1)} \right)
\right]
\] 
It follows that  \[
\begin{array}{rcll}
\D{f_2}{v_1^{(0)}} 
& = & \D{f_3}{v_1^{(0)}}
& =  0.5
\\
\D{f_2}{v_1^{(1)}} 
& = & \D{f_3}{v_1^{(1)}}
& =  0.25
\\
\D{f_2}{v_2^{(0)}} 
& = & \D{f_3}{v_2^{(0)}}  
  +   \D{f_3}{v_3^{(0)}} * \D{v_3^{(0)}}{v_2^{(0)}}
& = 0.5
\\
\D{f_2}{v_2^{(1)}} 
& = & \D{f_3}{v_2^{(1)}}  
  +   \D{f_3}{v_3^{(1)}} * \D{v_3^{(1)}}{v_2^{(1)}}
& = 1.25
\end{array}
\] 


3.3.7.j: Index 2: f_1
The previous operation has index 2,  \[
\begin{array}{rcl}
     v_2^{(0)} & = & 1 * v_1^{(0)}
     \\
     v_2^{(1)} & = & 1 * v_1^{(1)}
\end{array}
\] 
We define the function  f_1 \left( v_1^{(0)} , v_1^{(1)} \right)  as equal to  f_2  except that  v_2^{(0)} and  v_2^{(1)} are eliminated using this operation; i.e.  \[
f_1 = 
f_2 \left[  v_1^{(0)} , v_1^{(1)} , 
     v_2^{(0)} \left( v_1^{(0)} \right)  ,
     v_2^{(1)} \left( v_1^{(1)} \right)  
\right]
\] 
It follows that  \[
\begin{array}{rcll}
\D{f_1}{v_1^{(0)}} 
& = & \D{f_2}{v_1^{(0)}}
  +   \D{f_2}{v_2^{(0)}} * \D{v_2^{(0)}}{v_1^{(0)}}
& =  1
\\
\D{f_1}{v_1^{(1)}} 
& = & \D{f_2}{v_1^{(1)}}
  +   \D{f_2}{v_2^{(1)}} * \D{v_2^{(1)}}{v_1^{(1)}}
& = 1.5
\end{array}
\] 
Note that  v_1 is equal to  x , so the second partial derivative of exp_eps(x, epsilon) at  x equal to .5 and  epsilon equal to .2 is  \[
\Dpow{2}{x} v_7^{(0)}
= \D{v_7^{(1)}}{x} 
= \D{f_1}{v_1^{(0)}}
= 1
\] 
There is a theorem about algorithmic differentiation that explains why the other partial of  f_1 is equal to the first partial of exp_eps(x, epsilon) with respect to  x .

3.3.7.k: Verification
The file 3.3.7.1: exp_eps_rev2.cpp contains a routine that verifies the values computed above. It returns true for success and false for failure. It only tests the partial derivatives of  f_j that might not be equal to the corresponding partials of  f_{j+1} ; i.e., the other partials of  f_j must be equal to the corresponding partials of  f_{j+1} .

3.3.7.l: Exercises
  1. Consider the case where  x = .1 and we first perform a zero order forward mode sweep, then a first order forward mode sweep, for the operation sequence used above. What are the results of a second order reverse mode sweep; i.e., what are the corresponding values for  \D{f_j}{v_k} for all  j, k such that  \D{f_j}{v_k} \neq 0 ?
  2. Create a modified version of 3.3.7.1: exp_eps_rev2.cpp that verifies the values you obtained for the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.3.7.1: exp_eps_rev2.cpp .

Input File: introduction/exp_apx/exp_eps.omh
3.3.7.1: exp_eps: Verify Second Order Reverse Sweep
 # include <cstddef>                     // define size_t
# include <cmath>                       // for fabs function
extern bool exp_eps_for0(double *v0);   // computes zero order forward sweep
extern bool exp_eps_for1(double *v1);   // computes first order forward sweep
bool exp_eps_rev2(void)
{	bool ok = true;

	// set the value of v0[j], v1[j] for j = 1 , ... , 7
	double v0[8], v1[8];
	ok &= exp_eps_for0(v0);
	ok &= exp_eps_for1(v1);

	// initialize all partial derivatives as zero
	double f_v0[8], f_v1[8];
	size_t j;
	for(j = 0; j < 8; j++)
	{	f_v0[j] = 0.;
		f_v1[j] = 0.;
	}

	// set partial derivative for f_7
	f_v1[7] = 1.;
	ok &= std::fabs( f_v1[7] - 1.  ) <= 1e-10; // partial f_7 w.r.t. v_7^1

	// f_6 = f_7( v_1^0 , ... , v_6^1 , v_4^0 + v_6^0 , v_4^1 + v_6^1 )
	f_v0[4] += f_v0[7];
	f_v0[6] += f_v0[7];
	f_v1[4] += f_v1[7];
	f_v1[6] += f_v1[7];
	ok &= std::fabs( f_v0[4] - 0.  ) <= 1e-10; // partial f_6 w.r.t. v_4^0
	ok &= std::fabs( f_v0[6] - 0.  ) <= 1e-10; // partial f_6 w.r.t. v_6^0
	ok &= std::fabs( f_v1[4] - 1.  ) <= 1e-10; // partial f_6 w.r.t. v_4^1
	ok &= std::fabs( f_v1[6] - 1.  ) <= 1e-10; // partial f_6 w.r.t. v_6^1

	// f_5 = f_6( v_1^0 , ... , v_5^1 , v_5^0 / 2 , v_5^1 / 2 )
	f_v0[5] += f_v0[6] / 2.;
	f_v1[5] += f_v1[6] / 2.;
	ok &= std::fabs( f_v0[5] - 0.  ) <= 1e-10; // partial f_5 w.r.t. v_5^0
	ok &= std::fabs( f_v1[5] - 0.5 ) <= 1e-10; // partial f_5 w.r.t. v_5^1

	// f_4 = f_5( v_1^0 , ... , v_4^1 , v_3^0 * v_1^0 , 
	//            v_3^1 * v_1^0 + v_3^0 * v_1^1 )
	f_v0[1] += f_v0[5] * v0[3] + f_v1[5] * v1[3];
	f_v0[3] += f_v0[5] * v0[1] + f_v1[5] * v1[1];
	f_v1[1] += f_v1[5] * v0[3];
	f_v1[3] += f_v1[5] * v0[1];
	ok &= std::fabs( f_v0[1] - 0.5  ) <= 1e-10; // partial f_4 w.r.t. v_1^0
	ok &= std::fabs( f_v0[3] - 0.5  ) <= 1e-10; // partial f_4 w.r.t. v_3^0
	ok &= std::fabs( f_v1[1] - 0.25 ) <= 1e-10; // partial f_4 w.r.t. v_1^1
	ok &= std::fabs( f_v1[3] - 0.25 ) <= 1e-10; // partial f_4 w.r.t. v_3^1

	// f_3 = f_4(  v_1^0 , ... , v_3^1 , 1 + v_3^0 , v_3^1 )
	f_v0[3] += f_v0[4];
	f_v1[3] += f_v1[4];
	ok &= std::fabs( f_v0[3] - 0.5 ) <= 1e-10;  // partial f_3 w.r.t. v_3^0
	ok &= std::fabs( f_v1[3] - 1.25) <= 1e-10;  // partial f_3 w.r.t. v_3^1

	// f_2 = f_3( v_1^0 , ... , v_2^1 , v_2^0 , v_2^1 )
	f_v0[2] += f_v0[3];
	f_v1[2] += f_v1[3];
	ok &= std::fabs( f_v0[2] - 0.5 ) <= 1e-10;  // partial f_2 w.r.t. v_2^0
	ok &= std::fabs( f_v1[2] - 1.25) <= 1e-10;  // partial f_2 w.r.t. v_2^1

	// f_1 = f_2( v_1^0 , v_1^1 , 1 * v_1^0 , 1 * v_1^1 )
	f_v0[1] += f_v0[2];
	f_v1[1] += f_v1[2];
	ok &= std::fabs( f_v0[1] - 1.  ) <= 1e-10;  // partial f_1 w.r.t. v_1^0
	ok &= std::fabs( f_v1[1] - 1.5 ) <= 1e-10;  // partial f_1 w.r.t. v_1^1

	return ok;
}

Input File: introduction/exp_apx/exp_eps_rev2.cpp
3.3.8: exp_eps: CppAD Forward and Reverse Sweeps

3.3.8.a: Purpose
Use CppAD forward and reverse modes to compute the partial derivative with respect to  x , at the point  x = .5 and  \varepsilon = .2 , of the function
     exp_eps(x, epsilon)
as defined by the 3.3.1: exp_eps.hpp include file.

3.3.8.b: Exercises
  1. Create and test a modified version of the routine below that computes the same order derivatives with respect to  x , at the point  x = .1 and  \varepsilon = .2 , of the function
         exp_eps(x, epsilon)
  2. Create and test a modified version of the routine below that computes partial derivative with respect to  x , at the point  x = .1 and  \varepsilon = .2 , of the function corresponding to the operation sequence for  x = .5 and  \varepsilon = .2 . Hint: you could define a vector u with two components and use
         
    f.Forward(0, u)
    to run zero order forward mode at a point different from the point where the operation sequence corresponding to f was recorded.
 
# include <cppad/cppad.hpp>  // http://www.coin-or.org/CppAD/ 
# include "exp_eps.hpp"      // our example exponential function approximation
bool exp_eps_cppad(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::vector;    // can use any simple vector template class
	using CppAD::NearEqual; // checks if values are nearly equal

	// domain space vector
	size_t n = 2; // dimension of the domain space
	vector< AD<double> > U(n);
	U[0] = .5;    // value of x for this operation sequence
	U[1] = .2;    // value of e for this operation sequence

	// declare independent variables and start recording operation sequence
	CppAD::Independent(U);

	// evaluate our exponential approximation
	AD<double> x       = U[0];
	AD<double> epsilon = U[1];
	AD<double> apx = exp_eps(x, epsilon);  

	// range space vector
	size_t m = 1;  // dimension of the range space
	vector< AD<double> > Y(m);
	Y[0] = apx;    // variable that represents only range space component

	// Create f: U -> Y corresponding to this operation sequence
	// and stop recording. This also executes a zero order forward 
	// mode sweep using values in U for x and e.
	CppAD::ADFun<double> f(U, Y);

	// first order forward mode sweep that computes partial w.r.t x
	vector<double> du(n);      // differential in domain space
	vector<double> dy(m);      // differential in range space
	du[0] = 1.;                // x direction in domain space
	du[1] = 0.;
	dy    = f.Forward(1, du);  // partial w.r.t. x
	double check = 1.5;
	ok   &= NearEqual(dy[0], check, 1e-10, 1e-10);

	// first order reverse mode sweep that computes the derivative
	vector<double>  w(m);     // weights for components of the range
	vector<double> dw(n);     // derivative of the weighted function
	w[0] = 1.;                // there is only one weight 
	dw   = f.Reverse(1, w);   // derivative of w[0] * exp_eps(x, epsilon)
	check = 1.5;              // partial w.r.t. x
	ok   &= NearEqual(dw[0], check, 1e-10, 1e-10);
	check = 0.;               // partial w.r.t. epsilon
	ok   &= NearEqual(dw[1], check, 1e-10, 1e-10);

	// second order forward sweep that computes
	// second partial of exp_eps(x, epsilon) w.r.t. x
	vector<double> x2(n);     // second order Taylor coefficients
	vector<double> y2(m);
	x2[0] = 0.;               // evaluate partial w.r.t x
	x2[1] = 0.;
	y2    = f.Forward(2, x2);
	check = 0.5 * 1.;         // Taylor coef is 1/2 second derivative
	ok   &= NearEqual(y2[0], check, 1e-10, 1e-10);

	// second order reverse sweep that computes
	// derivative of partial of exp_eps(x, epsilon) w.r.t. x
	dw.resize(2 * n);         // space for first and second derivative
	dw    = f.Reverse(2, w);
	check = 1.;               // result should be second derivative
	ok   &= NearEqual(dw[0*2+1], check, 1e-10, 1e-10);

	return ok;
}

Input File: introduction/exp_apx/exp_eps_cppad.cpp
3.4: Run the exp_2 and exp_eps Tests
 

// system include files used for I/O
# include <iostream>

// external compiled tests
extern bool exp_2(void);
extern bool exp_2_cppad(void);
extern bool exp_2_for1(void);
extern bool exp_2_for2(void);
extern bool exp_2_rev1(void);
extern bool exp_2_rev2(void);
extern bool exp_2_for0(void);
extern bool exp_eps(void);
extern bool exp_eps_cppad(void);
extern bool exp_eps_for1(void);
extern bool exp_eps_for2(void);
extern bool exp_eps_for0(void);
extern bool exp_eps_rev1(void);
extern bool exp_eps_rev2(void);

namespace {
	// function that runs one test
	static size_t Run_ok_count    = 0;
	static size_t Run_error_count = 0;
	bool Run(bool TestOk(void), const char *name)
	{	bool ok = true;
		ok &= TestOk();
		if( ok )
		{	std::cout << "Ok:    " << name << std::endl;
			Run_ok_count++;
		}
		else
		{	std::cout << "Error: " << name << std::endl;
			Run_error_count++;
		}
		return ok;
	}
}

// main program that runs all the tests
int main(void)
{	bool ok = true;
	using namespace std;

	// This comment is used by OneTest 

	// external compiled tests
	ok &= Run( exp_2,           "exp_2"          );
	ok &= Run( exp_2_cppad,     "exp_2_cppad"    );
	ok &= Run( exp_2_for0,      "exp_2_for0"     );
	ok &= Run( exp_2_for1,      "exp_2_for1"     );
	ok &= Run( exp_2_for2,      "exp_2_for2"     );
	ok &= Run( exp_2_rev1,      "exp_2_rev1"     );
	ok &= Run( exp_2_rev2,      "exp_2_rev2"     );
	ok &= Run( exp_eps,         "exp_eps"        );
	ok &= Run( exp_eps_cppad,   "exp_eps_cppad"  );
	ok &= Run( exp_eps_for0,    "exp_eps_for0"   );
	ok &= Run( exp_eps_for1,    "exp_eps_for1"   );
	ok &= Run( exp_eps_for2,    "exp_eps_for2"   );
	ok &= Run( exp_eps_rev1,    "exp_eps_rev1"   );
	ok &= Run( exp_eps_rev2,    "exp_eps_rev2"   );
	if( ok )
		cout << "All " << int(Run_ok_count) << " tests passed." << endl;
	else	cout << int(Run_error_count) << " tests failed." << endl;

	return static_cast<int>( ! ok );
}

Input File: introduction/exp_apx/main.cpp
4: AD Objects

4.a: Purpose
The sections listed below describe the operations that are available to 9.4.b: AD of Base objects. These objects are used to 9.4.j: tape an AD of Base 9.4.g.b: operation sequence . This operation sequence can be transferred to an 5: ADFun object where it can be used to evaluate the corresponding function and derivative values.

4.b: Base Type Requirements
The Base requirements are provided by the CppAD package for the following base types: float, double, std::complex<float>, std::complex<double>, and AD<Other>. Otherwise, see 4.7: base_require .

4.c: Contents
Default: 4.1AD Default Constructor
ad_copy: 4.2AD Copy Constructor and Assignment Operator
Convert: 4.3Conversion and Printing of AD Objects
ADValued: 4.4AD Valued Operations and Functions
BoolValued: 4.5Bool Valued Operations and Functions with AD Arguments
VecAD: 4.6AD Vectors that Record Index Operations
base_require: 4.7AD<Base> Requirements for Base Type

Input File: cppad/local/user_ad.hpp
4.1: AD Default Constructor

4.1.a: Syntax
AD<Base> x;

4.1.b: Purpose
Constructs an AD object with an unspecified value. Directly after this construction, the object is a 9.4.h: parameter .

4.1.c: Example
The file 4.1.1: Default.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/local/default.hpp
4.1.1: Default AD Constructor: Example and Test
 

# include <cppad/cppad.hpp>

bool Default(void)
{	bool ok = true;
	using CppAD::AD;

	// default AD constructor
	AD<double> x, y;

	// check that they are parameters
	ok &= Parameter(x);
	ok &= Parameter(y);

	// assign them values
	x = 3.; 
	y = 4.;

	// just check a simple operation
	ok &= (x + y == 7.);

	return ok;
}


Input File: example/default.cpp
4.2: AD Copy Constructor and Assignment Operator

4.2.a: Syntax

4.2.a.a: Constructor
AD<Base> y(x)
AD<Base> y = x

4.2.a.b: Assignment
y = x

4.2.b: Purpose
The constructor creates a new AD<Base> object y and the assignment operator uses an existing y. In either case, y has the same value as x, and the same dependence on the 9.4.j.c: independent variables (y is a 9.4.l: variable if and only if x is a variable).

4.2.c: x
The argument x has prototype
     const Type &x
where Type is VecAD<Base>::reference, AD<Base>, Base, or double.

4.2.d: y
The target y has prototype
     AD<Base> &y

4.2.e: Example
The following files contain examples and tests of these operations. Each test returns true if it succeeds and false otherwise.
4.2.1: CopyAD.cpp AD Copy Constructor: Example and Test
4.2.2: CopyBase.cpp AD Constructor From Base Type: Example and Test
4.2.3: Eq.cpp AD Assignment Operator: Example and Test

Input File: cppad/local/ad_copy.hpp
4.2.1: AD Copy Constructor: Example and Test
 

# include <cppad/cppad.hpp>

bool CopyAD(void)
{	bool ok = true;   // initialize test result flag
	using CppAD::AD;  // so can use AD in place of CppAD::AD

	// domain space vector
	size_t n = 1;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]     = 2.;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// create an AD<double> that does not depend on x
	AD<double> b = 3.;   

	// use copy constructor 
	AD<double> u(x[0]);    
	AD<double> v = b;

	// check which are parameters
	ok &= Variable(u);
	ok &= Parameter(v);

	// range space vector
	size_t m = 2;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0]  = u;
	y[1]  = v;

	// create f: x -> y and vectors used for derivative calculations
	CppAD::ADFun<double> f(x, y);
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
 
 	// check parameters flags
 	ok &= ! f.Parameter(0);
 	ok &=   f.Parameter(1);

	// check function values
	ok &= ( y[0] == 2. );
	ok &= ( y[1] == 3. );

	// forward computation of partials w.r.t. x[0]
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= ( dy[0] == 1. );   // du / dx
	ok   &= ( dy[1] == 0. );   // dv / dx

	return ok;
}


Input File: example/copy_ad.cpp
4.2.2: AD Constructor From Base Type: Example and Test
 

# include <cppad/cppad.hpp>

bool CopyBase(void)
{	bool ok = true;    // initialize test result flag
	using CppAD::AD;   // so can use AD in place of CppAD::AD

	// construct directly from Base where Base is double 
	AD<double> x(1.); 

	// construct from a type that converts to Base where Base is double
	AD<double> y = 2;

	// construct from a type that converts to Base where Base = AD<double>
	AD< AD<double> > z(3); 

	// check that resulting objects are parameters
	ok &= Parameter(x);
	ok &= Parameter(y);
	ok &= Parameter(z);

	// check values of objects (compare AD<double> with double)
	ok &= ( x == 1.);
	ok &= ( y == 2.);
	ok &= ( Value(z) == 3.);

	// use the constructor through the static_cast template function
	x   = static_cast < AD<double> >( 4 );
	z  = static_cast < AD< AD<double> > >( 5 );

	ok &= ( x == 4. );
	ok &= ( Value(z) == 5. );

	return ok;
}


Input File: example/copy_base.cpp
4.2.3: AD Assignment Operator: Example and Test
 

# include <cppad/cppad.hpp>

bool Eq(void)
{	bool ok = true;
	using CppAD::AD;

	// domain space vector
	size_t n = 3;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]     = 2;      // AD<double> = int
	x[1]     = 3.;     // AD<double> = double
	x[2]     = x[1];   // AD<double> = AD<double>

	// declare independent variables and start tape recording
	CppAD::Independent(x);
	
	// range space vector 
	size_t m = 3;
	CPPAD_TEST_VECTOR< AD<double> > y(m);

	// assign an AD<Base> object equal to an independent variable
	// (choose the first independent variable to check a special case)
	// use the value returned by the assignment (for another assignment)
	y[0] = y[1] = x[0];  

	// assign an AD<Base> object equal to an expression 
	y[1] = x[1] + 1.;
	y[2] = x[2] + 2.;

	// check that all the resulting components of y depend on x
	ok &= Variable(y[0]);  // y[0] = x[0]
	ok &= Variable(y[1]);  // y[1] = x[1] + 1
	ok &= Variable(y[2]);  // y[2] = x[2] + 2

	// construct f : x -> y and stop the tape recording
	CppAD::ADFun<double> f(x, y);

	// check variable values
	ok &= ( y[0] == 2.);
	ok &= ( y[1] == 4.);
	ok &= ( y[2] == 5.);

	// compute partials w.r.t x[1]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 0.;
	dx[1] = 1.;
	dx[2] = 0.;
	dy   = f.Forward(1, dx);
	ok  &= (dy[0] == 0.);  // dy[0] / dx[1]
	ok  &= (dy[1] == 1.);  // dy[1] / dx[1]
	ok  &= (dy[2] == 0.);  // dy[2] / dx[1]

	// compute the derivative y[2]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0] = 0.;
	w[1] = 0.;
	w[2] = 1.;
	dw   = f.Reverse(1, w);
	ok  &= (dw[0] == 0.);  // dy[2] / dx[0]
	ok  &= (dw[1] == 0.);  // dy[2] / dx[1]
	ok  &= (dw[2] == 1.);  // dy[2] / dx[2]

	// assign a VecAD<Base>::reference
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero] = 5.;
	ok     &= (v[0] == 5.);

	return ok;
}


Input File: example/eq.cpp
4.3: Conversion and Printing of AD Objects
4.3.1: Value Convert From an AD Type to its Base Type
4.3.2: Integer Convert From AD to Integer
4.3.3: Output AD Output Stream Operator
4.3.4: PrintFor Printing AD Values During Forward Mode
4.3.5: Var2Par Convert an AD Variable to a Parameter

Input File: cppad/local/convert.hpp
4.3.1: Convert From an AD Type to its Base Type

4.3.1.a: Syntax
b = Value(x)

4.3.1.b: Purpose
Converts from an AD type to the corresponding 9.4.e: base type .

4.3.1.c: x
The argument x has prototype
     const AD<Base> &x

4.3.1.d: b
The return value b has prototype
     Base b

4.3.1.e: Operation Sequence
The result of this operation is not an 9.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 9.4.g.b: operation sequence .

4.3.1.f: Restriction
If the argument x is a 9.4.l: variable , its dependency information would not be included in the Value result (see above). For this reason, the argument x must be a 9.4.h: parameter ; i.e., it cannot depend on the current 9.4.j.c: independent variables .

4.3.1.g: Example
The file 4.3.1.1: Value.cpp contains an example and test of this operation.
Input File: cppad/local/value.hpp
4.3.1.1: Convert From AD to its Base Type: Example and Test
 

# include <cppad/cppad.hpp>

bool Value(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::Value;

	// domain space vector
	size_t n = 2;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0] = 3.;
	x[1] = 4.;

	// check value before recording
	ok &= (Value(x[0]) == 3.);
	ok &= (Value(x[1]) == 4.);

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = - x[1];

	// cannot call Value(x[j]) or Value(y[0]) here (currently variables)
	AD<double> p = 5.;        // p is a parameter (does not depend on x)
	ok &= (Value(p) == 5.);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y);

	// can call Value(x[j]) or Value(y[0]) here (currently parameters)
	ok &= (Value(x[0]) ==  3.);
	ok &= (Value(x[1]) ==  4.);
	ok &= (Value(y[0]) == -4.);

	return ok;
}

Input File: example/value.cpp
4.3.2: Convert From AD to Integer

4.3.2.a: Syntax
i = Integer(x)

4.3.2.b: Purpose
Converts from an AD type to the corresponding integer value.

4.3.2.c: i
The result i has prototype
     int i

4.3.2.d: x

4.3.2.d.a: Real Types
If the argument x has either of the following prototypes:
     const float  &x
     const double &x
the fractional part is dropped to form the integer value. For example, if x is 1.5, i is 1. In general, if  x \geq 0 , i is the greatest integer less than or equal to x. If  x \leq 0 , i is the smallest integer greater than or equal to x.
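This is the same truncation toward zero performed by C++'s built-in conversion from a floating point type to int; a minimal illustration (integer_like is our stand-in for the behavior described, not CppAD's implementation):

```cpp
# include <cassert>

// drop the fractional part, truncating toward zero:
// greatest integer <= x when x >= 0, smallest integer >= x when x <= 0
int integer_like(double x)
{	return static_cast<int>(x); }
```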

4.3.2.d.b: Complex Types
If the argument x has either of the following prototypes:
     const std::complex<float>  &x
     const std::complex<double> &x
The result i is given by
     i = Integer(x.real())

4.3.2.d.c: AD Types
If the argument x has either of the following prototypes:
     const AD<Base>               &x
     const VecAD<Base>::reference &x
Base must support the Integer function and the conversion has the same meaning as for Base.

4.3.2.e: Operation Sequence
The result of this operation is not an 9.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 9.4.g.b: operation sequence .

4.3.2.f: Example
The file 4.3.2.1: Integer.cpp contains an example and test of this operation.
Input File: cppad/local/integer.hpp
4.3.2.1: Convert From AD to Integer: Example and Test
 

# include <cppad/cppad.hpp>

bool Integer(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::Integer;

	// domain space vector
	size_t n = 2;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0] = 3.5;
	x[1] = 4.5;

	// check integer before recording
	ok &= (Integer(x[0]) == 3);
	ok &= (Integer(x[1]) == 4);

	// start recording

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// check integer during recording
	ok &= (Integer(x[0]) == 3);
	ok &= (Integer(x[1]) == 4);

	// check integer for VecAD element
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero] = 2;
	ok &= (Integer(v[zero]) == 2);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = - x[1];

	// create f: x -> y and stop recording
	CppAD::ADFun<double> f(x, y);

	// check integer after recording
	ok &= (Integer(x[0]) ==  3.);
	ok &= (Integer(x[1]) ==  4.);
	ok &= (Integer(y[0]) == -4.);

	return ok;
}

Input File: example/integer.cpp
4.3.3: AD Output Stream Operator

4.3.3.a: Syntax
os << x

4.3.3.b: Purpose
Writes the Base value, corresponding to x, to the output stream os.

4.3.3.c: os
The operand os has prototype
     std::ostream &os

4.3.3.d: x
The operand x has one of the following prototypes
     const AD<Base>               &x
     const VecAD<Base>::reference &x

4.3.3.e: Result
The result of this operation can be used as a reference to os. For example, if the operand y has prototype
     AD<Base> y
then the syntax
     os << x << y
will output the value corresponding to x followed by the value corresponding to y.

4.3.3.f: Operation Sequence
The result of this operation is not an 9.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 9.4.g.b: operation sequence .

4.3.3.g: Example
The file 4.3.3.1: Output.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/local/output.hpp
4.3.3.1: AD Output Operator: Example and Test
 

# include <cppad/cppad.hpp>

# include <sstream>  // std::ostringstream
# include <string>   // std::string
# include <iomanip>  // std::setprecision, setw, setfill, right

namespace {
	template <class S>
	void set_ostream(S &os)
	{	os 
		<< std::setprecision(4) // 4 digits of precision
		<< std::setw(6)         // 6 characters per field
		<< std::setfill(' ')    // fill with spaces
		<< std::right;          // adjust value to the right
	}
}

bool Output(void)
{	bool ok = true;

	// This output stream is an ostringstream for testing purposes.
	// You can use << with other types of streams; i.e., std::cout.
	std::ostringstream stream;

	// output an AD<double> object
	CppAD::AD<double>  pi = 4. * atan(1.); // 3.1415926536
	set_ostream(stream);
	stream << pi;

	// output a VecAD<double>::reference object
	CppAD::VecAD<double> v(1);
	CppAD::AD<double> zero(0);
	v[zero]   = exp(1.);                  // 2.7182818285
	set_ostream(stream); 
	stream << v[zero];

	// convert output from stream to string
	std::string str = stream.str();

	// check the output
	ok      &= (str == " 3.142 2.718");

	return ok;
}

Input File: example/output.cpp
4.3.4: Printing AD Values During Forward Mode

4.3.4.a: Syntax
PrintFor(text, y)
f.Forward(0, x)

4.3.4.b: Purpose
The current value of an AD<Base> object y is the result of an AD of Base operation. This operation may be part of the 9.4.g.b: operation sequence that is transferred to an 5: ADFun object f. The ADFun object can be evaluated at different values for the 9.4.j.c: independent variables . This may result in a corresponding value for y that is different from when the operation sequence was recorded. The routine PrintFor requests a printing, when f.Forward(0, x) is executed, of the value for y that corresponds to the independent variable values specified by x.

4.3.4.c: text
The argument text has prototype
     const char *text
The corresponding text is written to std::cout before the value of y.

4.3.4.d: y
The argument y has one of the following prototypes
     const AD<Base>               &y
     const VecAD<Base>::reference &y
The value of y that corresponds to x is written to std::cout during the execution of
     f.Forward(0, x)

4.3.4.e: f.Forward(0, x)
The objects f, x, and the purpose for this operation, are documented in 5.6.1: Forward .

4.3.4.f: Discussion
This can be helpful for understanding why tape evaluations have trouble; for example, when the result of a tape calculation is the IEEE code for not a number (nan).

4.3.4.g: Alternative
The 4.3.3: Output section describes the normal printing of values; i.e., printing when the corresponding code is executed.

4.3.4.h: Example
The program 4.3.4.1: PrintFor.cpp is an example and test of this operation. The output of this program states the conditions for passing and failing the test.
Input File: cppad/local/print_for.hpp
4.3.4.1: Printing During Forward Mode: Example and Test

4.3.4.1.a: Program
 
# include <cppad/cppad.hpp>

int main(void)
{
	using namespace CppAD;
	using std::cout;
	using std::endl;

	// independent variable vector
	size_t n = 2;
	CPPAD_TEST_VECTOR< AD<double> > X(n);
	X[0] = 0.; 
	X[1] = 1.;
	Independent(X);

	// print a VecAD<double>::reference object
	VecAD<double> V(1);
	AD<double> Zero(0);
	V[Zero] = X[0];
	PrintFor("x[0] = ", V[Zero]); 

	// dependent variable vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > Y(m);
	Y[0] = V[Zero] + X[1];

	// First print a newline to separate this from previous output,
	// then print an AD<double> object.
	PrintFor(  "\nx[0] + x[1] = ", Y[0]); 

	// define f: x -> y and stop tape recording
	ADFun<double> f(X, Y); 

	// zero order forward with x[0] = 1 and x[1] = 1
	CPPAD_TEST_VECTOR<double> x(n);
	x[0] = 1.;
	x[1] = 1.;

	cout << "x[0] = 1" << endl; 
	cout << "x[0] + x[1] = 2" << endl; 
	cout << "Test passes if two lines above repeat below:" << endl;
	f.Forward(0, x);	

	// print a new line after output
	std::cout << std::endl;

	return 0;
}


4.3.4.1.b: Output
Executing the program above generates the following output:
 
	x[0] = 1
	x[0] + x[1] = 2
	Test passes if two lines above repeat below:
	x[0] = 1
	x[0] + x[1] = 2

Input File: print_for/print_for.cpp
4.3.5: Convert an AD Variable to a Parameter

4.3.5.a: Syntax
y = Var2Par(x)

4.3.5.b: Purpose
Returns a 9.4.h: parameter y with the same value as the 9.4.l: variable x.

4.3.5.c: x
The argument x has prototype
     const AD<Base> &x
The argument x may be a variable or parameter.

4.3.5.d: y
The result y has prototype
     AD<Base> &y
The return value y will be a parameter.

4.3.5.e: Example
The file 4.3.5.1: Var2Par.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/local/var2par.hpp
4.3.5.1: Convert an AD Variable to a Parameter: Example and Test
 

# include <cppad/cppad.hpp>


bool Var2Par(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::Value;
	using CppAD::Var2Par;

	// domain space vector
	size_t n = 2;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0] = 3.;
	x[1] = 4.;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = - x[1] * Var2Par(x[0]);    // same as y[0] = -x[1] * 3.;

	// cannot call Value(x[j]) or Value(y[0]) here (currently variables)
	ok &= ( Value( Var2Par(x[0]) ) == 3. );
	ok &= ( Value( Var2Par(x[1]) ) == 4. ); 
	ok &= ( Value( Var2Par(y[0]) ) == -12. ); 

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y);

	// can call Value(x[j]) or Value(y[0]) here (currently parameters)
	ok &= (Value(x[0]) ==  3.);
	ok &= (Value(x[1]) ==  4.);
	ok &= (Value(y[0]) == -12.);

	// evaluate derivative of y w.r.t x
	CPPAD_TEST_VECTOR<double> w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0] = 1.;
	dw   = f.Reverse(1, w);
	ok  &= (dw[0] == 0.);  // derivative of y[0] w.r.t x[0] is zero
	ok  &= (dw[1] == -3.); // derivative of y[0] w.r.t x[1] is -3

	return ok;
}

Input File: example/var_2par.cpp
4.4: AD Valued Operations and Functions

4.4.a: Contents
Arithmetic: 4.4.1AD Arithmetic Operators and Computed Assignments
std_math_ad: 4.4.2AD Standard Math Unary Functions
MathOther: 4.4.3Other AD Math Functions
CondExp: 4.4.4AD Conditional Expressions
Discrete: 4.4.5Discrete AD Functions

Input File: cppad/local/ad_valued.hpp
4.4.1: AD Arithmetic Operators and Computed Assignments

4.4.1.a: Contents
UnaryPlus: 4.4.1.1AD Unary Plus Operator
UnaryMinus: 4.4.1.2AD Unary Minus Operator
ad_binary: 4.4.1.3AD Binary Arithmetic Operators
compute_assign: 4.4.1.4AD Computed Assignment Operators

Input File: cppad/local/arithmetic.hpp
4.4.1.1: AD Unary Plus Operator

4.4.1.1.a: Syntax
y = + x

4.4.1.1.b: Purpose
Performs the unary plus operation (the result y is equal to the operand x).

4.4.1.1.c: x
The operand x has one of the following prototypes
     const AD<Base>               &x
     const VecAD<Base>::reference &x

4.4.1.1.d: y
The result y has type
     AD<Base> y
It is equal to the operand x.

4.4.1.1.e: Operation Sequence
This is an AD of Base 9.4.g.a: atomic operation and hence is part of the current AD of Base 9.4.g.b: operation sequence .

4.4.1.1.f: Derivative
If  f is a 9.4.d: Base function ,  \[
     \D{[ + f(x) ]}{x} = \D{f(x)}{x}
\] 


4.4.1.1.g: Example
The file 4.4.1.1.1: UnaryPlus.cpp contains an example and test of this operation.
Input File: cppad/local/unary_plus.hpp
4.4.1.1.1: AD Unary Plus Operator: Example and Test
 

# include <cppad/cppad.hpp>

bool UnaryPlus(void)
{	bool ok = true;
	using CppAD::AD;


	// domain space vector
	size_t n = 1;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = 3.;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = + x[0];

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y);

	// check values
	ok &= ( y[0] == 3. );

	// forward computation of partials w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	size_t p = 1;
	dx[0]    = 1.;
	dy       = f.Forward(p, dx);
	ok      &= ( dy[0] == 1. );   // dy[0] / dx[0]

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0] = 1.;
	dw   = f.Reverse(p, w);
	ok &= ( dw[0] == 1. );       // dy[0] / dx[0]

	// use a VecAD<Base>::reference object with unary plus
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero] = x[0];
	AD<double> result = + v[zero];
	ok     &= (result == y[0]);
	 
	return ok;
}

Input File: example/unary_plus.cpp
4.4.1.2: AD Unary Minus Operator

4.4.1.2.a: Syntax
y = - x

4.4.1.2.b: Purpose
Computes the negative of x.

4.4.1.2.c: Base
The operation in the syntax above must be supported for the case where the operand is a const Base object.

4.4.1.2.d: x
The operand x has one of the following prototypes
     const AD<Base>               &x
     const VecAD<Base>::reference &x

4.4.1.2.e: y
The result y has type
     AD<Base> y
It is equal to the negative of the operand x.

4.4.1.2.f: Operation Sequence
This is an AD of Base 9.4.g.a: atomic operation and hence is part of the current AD of Base 9.4.g.b: operation sequence .

4.4.1.2.g: Derivative
If  f is a 9.4.d: Base function ,  \[
     \D{[ - f(x) ]}{x} = - \D{f(x)}{x}
\] 


4.4.1.2.h: Example
The file 4.4.1.2.1: UnaryMinus.cpp contains an example and test of this operation.
Input File: cppad/local/unary_minus.hpp
4.4.1.2.1: AD Unary Minus Operator: Example and Test
 

# include <cppad/cppad.hpp>

bool UnaryMinus(void)
{	bool ok = true;
	using CppAD::AD;


	// domain space vector
	size_t n = 1;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = 3.;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = - x[0];

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y);

	// check values
	ok &= ( y[0] == -3. );

	// forward computation of partials w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	size_t p = 1;
	dx[0]    = 1.;
	dy       = f.Forward(p, dx);
	ok      &= ( dy[0] == -1. );   // dy[0] / dx[0]

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0] = 1.;
	dw   = f.Reverse(p, w);
	ok &= ( dw[0] == -1. );       // dy[0] / dx[0]

	// use a VecAD<Base>::reference object with unary minus
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero] = x[0];
	AD<double> result = - v[zero];
	ok     &= (result == y[0]);
	 
	return ok;
}

Input File: example/unary_minus.cpp
4.4.1.3: AD Binary Arithmetic Operators

4.4.1.3.a: Syntax
z = x Op y

4.4.1.3.b: Purpose
Performs arithmetic operations where either x or y has type AD<Base> or 4.6.d: VecAD<Base>::reference .

4.4.1.3.c: Op
The operator Op is one of the following
Op Meaning
+ z is x plus y
- z is x minus y
* z is x times y
/ z is x divided by y

4.4.1.3.d: Base
The type Base is determined by the operand that has type AD<Base> or VecAD<Base>::reference.

4.4.1.3.e: x
The operand x has the following prototype
     const Type &x
where Type is VecAD<Base>::reference, AD<Base>, Base, or double.

4.4.1.3.f: y
The operand y has the following prototype
     const Type &y
where Type is VecAD<Base>::reference, AD<Base>, Base, or double.

4.4.1.3.g: z
The result z has the following prototype
     Type z
where Type is AD<Base>.

4.4.1.3.h: Operation Sequence
This is an 9.4.g.a: atomic 9.4.b: AD of Base operation and hence it is part of the current AD of Base 9.4.g.b: operation sequence .

4.4.1.3.i: Example
The following files contain examples and tests of these functions. Each test returns true if it succeeds and false otherwise.
4.4.1.3.1: Add.cpp AD Binary Addition: Example and Test
4.4.1.3.2: Sub.cpp AD Binary Subtraction: Example and Test
4.4.1.3.3: Mul.cpp AD Binary Multiplication: Example and Test
4.4.1.3.4: Div.cpp AD Binary Division: Example and Test

4.4.1.3.j: Derivative
If  f and  g are 9.4.d: Base functions

4.4.1.3.j.a: Addition
 \[
     \D{[ f(x) + g(x) ]}{x} = \D{f(x)}{x} + \D{g(x)}{x}
\] 


4.4.1.3.j.b: Subtraction
 \[
     \D{[ f(x) - g(x) ]}{x} = \D{f(x)}{x} - \D{g(x)}{x}
\] 


4.4.1.3.j.c: Multiplication
 \[
     \D{[ f(x) * g(x) ]}{x} = g(x) * \D{f(x)}{x} + f(x) * \D{g(x)}{x}
\] 


4.4.1.3.j.d: Division
 \[
     \D{[ f(x) / g(x) ]}{x} = 
          [1/g(x)] * \D{f(x)}{x} - [f(x)/g(x)^2] * \D{g(x)}{x}
\] 

Input File: cppad/local/ad_binary.hpp
4.4.1.3.1: AD Binary Addition: Example and Test
 
# include <cppad/cppad.hpp>

bool Add(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0; 

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// some binary addition operations
	AD<double> a = x[0] + 1.; // AD<double> + double
	AD<double> b = a    + 2;  // AD<double> + int
	AD<double> c = 3.   + b;  // double     + AD<double> 
	AD<double> d = 4    + c;  // int        + AD<double> 

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = d + x[0];          // AD<double> + AD<double> 

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	ok &= NearEqual(y[0] , 2. * x0 + 10,  1e-10 , 1e-10);

	// forward computation of partials w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], 2., 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], 2., 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with addition
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero] = a;
	AD<double> result = v[zero] + 2;
	ok     &= (result == b);

	return ok;
}


Input File: example/add.cpp
4.4.1.3.2: AD Binary Subtraction: Example and Test
 
# include <cppad/cppad.hpp>

bool Sub(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t  n =  1;
	double x0 = .5;
	CPPAD_TEST_VECTOR< AD<double> > x(1);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	AD<double> a = 2. * x[0] - 1.; // AD<double> - double
	AD<double> b = a  - 2;         // AD<double> - int
	AD<double> c = 3. - b;         // double     - AD<double> 
	AD<double> d = 4  - c;         // int        - AD<double> 

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = x[0] - d;              // AD<double> - AD<double>

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y);

	// check value 
	ok &= NearEqual(y[0], x0-4.+3.+2.-2.*x0+1.,  1e-10 , 1e-10);

	// forward computation of partials w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], -1., 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], -1., 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with subtraction
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero] = b;
	AD<double> result = 3. - v[zero];
	ok     &= (result == c);

	return ok;
}


Input File: example/sub.cpp
4.4.1.3.3: AD Binary Multiplication: Example and Test
 
# include <cppad/cppad.hpp>

bool Mul(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n  = 1;
	double x0 = .5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// some binary multiplication operations
	AD<double> a = x[0] * 1.; // AD<double> * double
	AD<double> b = a    * 2;  // AD<double> * int
	AD<double> c = 3.   * b;  // double     * AD<double> 
	AD<double> d = 4    * c;  // int        * AD<double> 

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = x[0] * d;          // AD<double> * AD<double>

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	ok &= NearEqual(y[0] , x0*(4.*3.*2.*1.)*x0,  1e-10 , 1e-10);

	// forward computation of partials w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], (4.*3.*2.*1.)*2.*x0, 1e-10 , 1e-10); 

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n); 
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], (4.*3.*2.*1.)*2.*x0, 1e-10 , 1e-10); 

	// use a VecAD<Base>::reference object with multiplication
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero] = c;
	AD<double> result = 4 * v[zero];
	ok     &= (result == d);

	return ok;
}


Input File: example/mul.cpp
4.4.1.3.4: AD Binary Division: Example and Test
 
# include <cppad/cppad.hpp>

bool Div(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;


	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// some binary division operations
	AD<double> a = x[0] / 1.; // AD<double> / double
	AD<double> b = a  / 2;    // AD<double> / int
	AD<double> c = 3. / b;    // double     / AD<double> 
	AD<double> d = 4  / c;    // int        / AD<double> 

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = (x[0] * x[0]) / d;   // AD<double> / AD<double>

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	ok &= NearEqual(y[0], x0*x0*3.*2.*1./(4.*x0),  1e-10 , 1e-10);

	// forward computation of partials w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], 3.*2.*1./4., 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], 3.*2.*1./4., 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with division
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero] = d;
	AD<double> result = (x[0] * x[0]) / v[zero];
	ok     &= (result == y[0]);

	return ok;
}


Input File: example/div.cpp
4.4.1.4: AD Computed Assignment Operators

4.4.1.4.a: Syntax
x Op y

4.4.1.4.b: Purpose
Performs computed assignment operations where x has type AD<Base>.

4.4.1.4.c: Op
The operator Op is one of the following
Op Meaning
+= x is assigned x plus y
-= x is assigned x minus y
*= x is assigned x times y
/= x is assigned x divided by y

4.4.1.4.d: Base
The type Base is determined by the operand x.

4.4.1.4.e: x
The operand x has the following prototype
     AD<Base> &x

4.4.1.4.f: y
The operand y has the following prototype
     const Type &y
where Type is VecAD<Base>::reference, AD<Base>, Base, or double.

4.4.1.4.g: Result
The result of this assignment can be used as a reference to x. For example, if z has the following type
     AD<Base> z
then the syntax
     z = x += y
will compute x plus y and then assign this value to both x and z.

4.4.1.4.h: Operation Sequence
This is an 9.4.g.a: atomic 9.4.b: AD of Base operation and hence it is part of the current AD of Base 9.4.g.b: operation sequence .

4.4.1.4.i: Example
The following files contain examples and tests of these functions. Each test returns true if it succeeds and false otherwise.
4.4.1.4.1: AddEq.cpp AD Computed Assignment Addition: Example and Test
4.4.1.4.2: SubEq.cpp AD Computed Assignment Subtraction: Example and Test
4.4.1.4.3: MulEq.cpp AD Computed Assignment Multiplication: Example and Test
4.4.1.4.4: DivEq.cpp AD Computed Assignment Division: Example and Test

4.4.1.4.j: Derivative
If  f and  g are 9.4.d: Base functions

4.4.1.4.j.a: Addition
 \[
     \D{[ f(x) + g(x) ]}{x} = \D{f(x)}{x} + \D{g(x)}{x}
\] 


4.4.1.4.j.b: Subtraction
 \[
     \D{[ f(x) - g(x) ]}{x} = \D{f(x)}{x} - \D{g(x)}{x}
\] 


4.4.1.4.j.c: Multiplication
 \[
     \D{[ f(x) * g(x) ]}{x} = g(x) * \D{f(x)}{x} + f(x) * \D{g(x)}{x}
\] 


4.4.1.4.j.d: Division
 \[
     \D{[ f(x) / g(x) ]}{x} = 
          [1/g(x)] * \D{f(x)}{x} - [f(x)/g(x)^2] * \D{g(x)}{x}
\] 

Input File: cppad/local/compute_assign.hpp
4.4.1.4.1: AD Computed Assignment Addition: Example and Test
 
# include <cppad/cppad.hpp>

bool AddEq(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t  n = 1;
	double x0 = .5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0; 

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector 
	size_t m = 2;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = x[0];         // initial value
	y[0] += 2;           // AD<double> += int
	y[0] += 4.;          // AD<double> += double
	y[1] = y[0] += x[0]; // use the result of a computed assignment

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	ok &= NearEqual(y[0] , x0+2.+4.+x0,  1e-10 , 1e-10);
	ok &= NearEqual(y[1] ,        y[0],  1e-10 , 1e-10);

	// forward computation of partials w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], 2., 1e-10, 1e-10);
	ok   &= NearEqual(dy[1], 2., 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	w[1]  = 0.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], 2., 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with computed addition
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	AD<double> result = 1;
	v[zero] = 2;
	result += v[zero];
	ok     &= (result == 3);

	return ok;
}


Input File: example/add_eq.cpp
4.4.1.4.2: AD Computed Assignment Subtraction: Example and Test
 
# include <cppad/cppad.hpp>

bool SubEq(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t  n = 1;
	double x0 = .5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0; 

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector 
	size_t m = 2;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = 3. * x[0];    // initial value
	y[0] -= 2;           // AD<double> -= int
	y[0] -= 4.;          // AD<double> -= double
	y[1] = y[0] -= x[0]; // use the result of a computed assignment

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	ok &= NearEqual(y[0] , 3.*x0-(2.+4.+x0),  1e-10 , 1e-10);
	ok &= NearEqual(y[1] ,             y[0],  1e-10 , 1e-10);

	// forward computation of partials w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], 2., 1e-10, 1e-10);
	ok   &= NearEqual(dy[1], 2., 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	w[1]  = 0.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], 2., 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with computed subtraction
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	AD<double> result = 1;
	v[zero] = 2;
	result -= v[zero];
	ok     &= (result == -1);

	return ok;
}


Input File: example/sub_eq.cpp
4.4.1.4.3: AD Computed Assignment Multiplication: Example and Test
 
# include <cppad/cppad.hpp>

bool MulEq(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t  n = 1;
	double x0 = .5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0; 

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector 
	size_t m = 2;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = x[0];         // initial value
	y[0] *= 2;           // AD<double> *= int
	y[0] *= 4.;          // AD<double> *= double
	y[1] = y[0] *= x[0]; // use the result of a computed assignment

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	ok &= NearEqual(y[0] , x0*2.*4.*x0,  1e-10 , 1e-10);
	ok &= NearEqual(y[1] ,        y[0],  1e-10 , 1e-10);

	// forward computation of partials w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], 8.*2.*x0, 1e-10, 1e-10);
	ok   &= NearEqual(dy[1], 8.*2.*x0, 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	w[1]  = 0.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], 8.*2.*x0, 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with computed multiplication
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	AD<double> result = 1;
	v[zero] = 2;
	result *= v[zero];
	ok     &= (result == 2);

	return ok;
}


Input File: example/mul_eq.cpp
4.4.1.4.4: AD Computed Assignment Division: Example and Test
 
# include <cppad/cppad.hpp>

bool DivEq(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t  n = 1;
	double x0 = .5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0; 

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector 
	size_t m = 2;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = x[0] * x[0];  // initial value
	y[0] /= 2;           // AD<double> /= int
	y[0] /= 4.;          // AD<double> /= double
	y[1] = y[0] /= x[0]; // use the result of a computed assignment

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	ok &= NearEqual(y[0] , x0*x0/(2.*4.*x0),  1e-10 , 1e-10);
	ok &= NearEqual(y[1] ,             y[0],  1e-10 , 1e-10);

	// forward computation of partials w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], 1./8., 1e-10, 1e-10);
	ok   &= NearEqual(dy[1], 1./8., 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	w[1]  = 0.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], 1./8., 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with computed division
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	AD<double> result = 2;
	v[zero] = 1;
	result /= v[zero];
	ok     &= (result == 2);

	return ok;
}


Input File: example/div_eq.cpp
4.4.2: AD Standard Math Unary Functions

4.4.2.a: Syntax
y = fun(x)

4.4.2.b: Purpose
Evaluates the one argument standard math function fun where its argument is an 9.4.b: AD of Base object.

4.4.2.c: x
The argument x has one of the following prototypes
     const AD<Base>               &x
     const VecAD<Base>::reference &x

4.4.2.d: y
The result y has prototype
     AD<Base> y

4.4.2.e: Operation Sequence
Most of these functions are AD of Base 9.4.g.a: atomic operations . In all cases, the AD of Base operation sequence used to calculate y is 9.4.g.d: independent of x.

4.4.2.f: fun
A definition of fun is included for each of the following functions: acos, asin, atan, cos, cosh, exp, log, log10, sin, sinh, sqrt, tan, tanh.

4.4.2.g: Examples
The following files contain examples and tests of these functions. Each test returns true if it succeeds and false otherwise.
4.4.2.1: Acos.cpp The AD acos Function: Example and Test
4.4.2.2: Asin.cpp The AD asin Function: Example and Test
4.4.2.3: Atan.cpp The AD atan Function: Example and Test
4.4.2.4: Cos.cpp The AD cos Function: Example and Test
4.4.2.5: Cosh.cpp The AD cosh Function: Example and Test
4.4.2.6: Exp.cpp The AD exp Function: Example and Test
4.4.2.7: Log.cpp The AD log Function: Example and Test
4.4.2.8: Log10.cpp The AD log10 Function: Example and Test
4.4.2.9: Sin.cpp The AD sin Function: Example and Test
4.4.2.10: Sinh.cpp The AD sinh Function: Example and Test
4.4.2.11: Sqrt.cpp The AD sqrt Function: Example and Test
4.4.2.12: Tan.cpp The AD tan Function: Example and Test
4.4.2.13: Tanh.cpp The AD tanh Function: Example and Test

4.4.2.h: Derivatives
Each of these functions satisfies a standard math function differential equation. Calculating derivatives using this differential equation is discussed for both 9.3.1.c: forward and 9.3.2.c: reverse mode. The exact form of the differential equation for each of these functions is listed below:

4.4.2.h.a: acos
 \[
\begin{array}{lcr}
     \D{[ {\rm acos} (x) ]}{x} & = & - (1 - x * x)^{-1/2}
\end{array}
\] 


4.4.2.h.b: asin
 \[
\begin{array}{lcr}
     \D{[ {\rm asin} (x) ]}{x} & = & (1 - x * x)^{-1/2}
\end{array}
\] 


4.4.2.h.c: atan
 \[
\begin{array}{lcr}
        \D{[ {\rm atan} (x) ]}{x} & = & \frac{1}{1 + x^2}
\end{array}
\] 


4.4.2.h.d: cos
 \[
\begin{array}{lcr}
        \D{[ \cos (x) ]}{x} & = & - \sin (x)  \\
        \D{[ \sin (x) ]}{x} & = & \cos (x)
\end{array}
\] 


4.4.2.h.e: cosh
 \[
\begin{array}{lcr}
        \D{[ \cosh (x) ]}{x} & = & \sinh (x)  \\
        \D{[ \sinh (x) ]}{x} & = & \cosh (x)
\end{array}
\] 


4.4.2.h.f: exp
 \[
\begin{array}{lcr}
        \D{[ \exp (x) ]}{x} & = & \exp (x)
\end{array}
\] 


4.4.2.h.g: log
 \[
\begin{array}{lcr}
        \D{[ \log (x) ]}{x} & = & \frac{1}{x}
\end{array}
\] 


4.4.2.h.h: log10
This function is special in that its derivatives are calculated using the relation  \[
\begin{array}{lcr}
        {\rm log10} (x) & = & \log(x) / \log(10)
\end{array}
\] 


4.4.2.h.i: sin
 \[
\begin{array}{lcr}
        \D{[ \sin (x) ]}{x} & = & \cos (x) \\
        \D{[ \cos (x) ]}{x} & = & - \sin (x) 
\end{array}
\] 


4.4.2.h.j: sinh
 \[
\begin{array}{lcr}
        \D{[ \sinh (x) ]}{x} & = & \cosh (x)   \\
        \D{[ \cosh (x) ]}{x} & = & \sinh (x)
\end{array}
\] 


4.4.2.h.k: sqrt
 \[
\begin{array}{lcr}
        \D{[ {\rm sqrt} (x) ]}{x} & = & \frac{1}{2 {\rm sqrt} (x) }
\end{array}
\] 


4.4.2.h.l: tan
This function is special in that its derivatives are calculated using the relation  \[
\begin{array}{lcr}
        \tan (x) & = & \sin(x) / \cos(x)
\end{array}
\] 


4.4.2.h.m: tanh
This function is also special in that its derivatives are calculated using the relation  \[
\begin{array}{lcr}
        \tanh (x) & = & \sinh(x) / \cosh(x)
\end{array}
\] 

Input File: cppad/local/std_math_ad.hpp
4.4.2.1: The AD acos Function: Example and Test
 

# include <cppad/cppad.hpp>

bool Acos(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// a temporary value
	AD<double> cos_of_x0 = CppAD::cos(x[0]);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = CppAD::acos(cos_of_x0);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	ok &= NearEqual(y[0] , x0,  1e-10 , 1e-10);

	// forward computation of first partial w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], 1., 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], 1., 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with acos
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero] = cos_of_x0;
	AD<double> result = CppAD::acos(v[zero]);
	ok     &= NearEqual(result, x0, 1e-10, 1e-10);

	return ok;
}


Input File: example/acos.cpp
4.4.2.2: The AD asin Function: Example and Test
 

# include <cppad/cppad.hpp>

bool Asin(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// a temporary value
	AD<double> sin_of_x0 = CppAD::sin(x[0]);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = CppAD::asin(sin_of_x0);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	ok &= NearEqual(y[0] , x0,  1e-10 , 1e-10);

	// forward computation of first partial w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], 1., 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], 1., 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with asin
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero] = sin_of_x0;
	AD<double> result = CppAD::asin(v[zero]);
	ok     &= NearEqual(result, x0, 1e-10, 1e-10);

	return ok;
}


Input File: example/asin.cpp
4.4.2.3: The AD atan Function: Example and Test
 

# include <cppad/cppad.hpp>

bool Atan(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// a temporary value
	AD<double> tan_of_x0 = CppAD::tan(x[0]);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = CppAD::atan(tan_of_x0);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	ok &= NearEqual(y[0] , x0,  1e-10 , 1e-10);

	// forward computation of first partial w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], 1., 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], 1., 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with atan
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero] = tan_of_x0;
	AD<double> result = CppAD::atan(v[zero]);
	ok     &= NearEqual(result, x0, 1e-10, 1e-10);

	return ok;
}


Input File: example/atan.cpp
4.4.2.4: The AD cos Function: Example and Test
 

# include <cppad/cppad.hpp>
# include <cmath>

bool Cos(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = CppAD::cos(x[0]);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	double check = std::cos(x0);
	ok &= NearEqual(y[0] , check,  1e-10 , 1e-10);

	// forward computation of first partial w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	check = - std::sin(x0);
	ok   &= NearEqual(dy[0], check, 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], check, 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with cos
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero]           = x0;
	AD<double> result = CppAD::cos(v[zero]);
	check = std::cos(x0);
	ok   &= NearEqual(result, check, 1e-10, 1e-10);

	return ok;
}


Input File: example/cos.cpp
4.4.2.5: The AD cosh Function: Example and Test
 

# include <cppad/cppad.hpp>
# include <cmath>

bool Cosh(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = CppAD::cosh(x[0]);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	double check = std::cosh(x0);
	ok &= NearEqual(y[0] , check,  1e-10 , 1e-10);

	// forward computation of first partial w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	check = std::sinh(x0);
	ok   &= NearEqual(dy[0], check, 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], check, 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with cosh
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero]           = x0;
	AD<double> result = CppAD::cosh(v[zero]);
	check = std::cosh(x0);
	ok   &= NearEqual(result, check, 1e-10, 1e-10);

	return ok;
}


Input File: example/cosh.cpp
4.4.2.6: The AD exp Function: Example and Test
 

# include <cppad/cppad.hpp>
# include <cmath>

bool Exp(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = CppAD::exp(x[0]);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	double check = std::exp(x0);
	ok &= NearEqual(y[0] , check,  1e-10 , 1e-10);

	// forward computation of first partial w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	check = std::exp(x0);
	ok   &= NearEqual(dy[0], check, 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], check, 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with exp
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero]           = x0;
	AD<double> result = CppAD::exp(v[zero]);
	ok   &= NearEqual(result, check, 1e-10, 1e-10);

	return ok;
}


Input File: example/exp.cpp
4.4.2.7: The AD log Function: Example and Test
 

# include <cppad/cppad.hpp>

bool Log(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// a temporary value
	AD<double> exp_of_x0 = CppAD::exp(x[0]);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = CppAD::log(exp_of_x0);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	ok &= NearEqual(y[0] , x0,  1e-10 , 1e-10);

	// forward computation of first partial w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], 1., 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], 1., 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with log
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero]           = exp_of_x0;
	AD<double> result = CppAD::log(v[zero]);
	ok   &= NearEqual(result, x0, 1e-10, 1e-10);

	return ok;
}


Input File: example/log.cpp
4.4.2.8: The AD log10 Function: Example and Test
 

# include <cppad/cppad.hpp>

bool Log10(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// ten raised to the x0 power
	AD<double> ten = 10.;
	AD<double> pow_10_x0 = CppAD::pow(ten, x[0]); 

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = CppAD::log10(pow_10_x0);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	ok &= NearEqual(y[0] , x0,  1e-10 , 1e-10);

	// forward computation of first partial w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], 1., 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], 1., 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with log10
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero]           = pow_10_x0;
	AD<double> result = CppAD::log10(v[zero]);
	ok   &= NearEqual(result, x0, 1e-10, 1e-10);

	return ok;
}


Input File: example/log_10.cpp
4.4.2.9: The AD sin Function: Example and Test
 

# include <cppad/cppad.hpp>
# include <cmath>

bool Sin(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = CppAD::sin(x[0]);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	double check = std::sin(x0);
	ok &= NearEqual(y[0] , check,  1e-10 , 1e-10);

	// forward computation of first partial w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	check = std::cos(x0);
	ok   &= NearEqual(dy[0], check, 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], check, 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with sin
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero]           = x0;
	AD<double> result = CppAD::sin(v[zero]);
	check = std::sin(x0);
	ok   &= NearEqual(result, check, 1e-10, 1e-10);

	return ok;
}


Input File: example/sin.cpp
4.4.2.10: The AD sinh Function: Example and Test
 

# include <cppad/cppad.hpp>
# include <cmath>

bool Sinh(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = CppAD::sinh(x[0]);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	double check = std::sinh(x0);
	ok &= NearEqual(y[0] , check,  1e-10 , 1e-10);

	// forward computation of first partial w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	check = std::cosh(x0);
	ok   &= NearEqual(dy[0], check, 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], check, 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with sinh
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero]           = x0;
	AD<double> result = CppAD::sinh(v[zero]);
	check = std::sinh(x0);
	ok   &= NearEqual(result, check, 1e-10, 1e-10);

	return ok;
}


Input File: example/sinh.cpp
4.4.2.11: The AD sqrt Function: Example and Test
 

# include <cppad/cppad.hpp>
# include <cmath>

bool Sqrt(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = CppAD::sqrt(x[0]);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	double check = std::sqrt(x0);
	ok &= NearEqual(y[0] , check,  1e-10 , 1e-10);

	// forward computation of first partial w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	check = 1. / (2. * std::sqrt(x0) );
	ok   &= NearEqual(dy[0], check, 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], check, 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with sqrt
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero]           = x0;
	AD<double> result = CppAD::sqrt(v[zero]);
	check = std::sqrt(x0);
	ok   &= NearEqual(result, check, 1e-10, 1e-10);

	return ok;
}


Input File: example/sqrt.cpp
4.4.2.12: The AD tan Function: Example and Test
 

# include <cppad/cppad.hpp>
# include <cmath>
# include <limits>

bool Tan(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;
	double eps = 10. * std::numeric_limits<double>::epsilon();

	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = CppAD::tan(x[0]);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	double check = std::tan(x0);
	ok &= NearEqual(y[0] , check,  eps, eps);

	// forward computation of first partial w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	check = 1. + std::tan(x0) * std::tan(x0); 
	ok   &= NearEqual(dy[0], check, eps, eps);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], check, eps, eps);

	// use a VecAD<Base>::reference object with tan
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero]           = x0;
	AD<double> result = CppAD::tan(v[zero]);
	check = std::tan(x0);
	ok   &= NearEqual(result, check, eps, eps);

	return ok;
}


Input File: example/tan.cpp
4.4.2.13: The AD tanh Function: Example and Test
 

# include <cppad/cppad.hpp>
# include <cmath>
# include <limits>

bool Tanh(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;
	double eps = 10. * std::numeric_limits<double>::epsilon();

	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = CppAD::tanh(x[0]);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	double check = std::tanh(x0);
	ok &= NearEqual(y[0] , check,  eps, eps);

	// forward computation of first partial w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	check = 1. - std::tanh(x0) * std::tanh(x0); 
	ok   &= NearEqual(dy[0], check, eps, eps);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], check, eps, eps);

	// use a VecAD<Base>::reference object with tanh
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero]           = x0;
	AD<double> result = CppAD::tanh(v[zero]);
	check = std::tanh(x0);
	ok   &= NearEqual(result, check, eps, eps);

	return ok;
}


Input File: example/tanh.cpp
4.4.3: Other AD Math Functions
4.4.3.1: abs AD Absolute Value Function
4.4.3.2: atan2 AD Two Argument Inverse Tangent Function
4.4.3.3: erf The AD Error Function
4.4.3.4: pow The AD Power Function

Input File: cppad/local/math_other.hpp
4.4.3.1: AD Absolute Value Function

4.4.3.1.a: Syntax
y = abs(x)

4.4.3.1.b: Purpose
Evaluates the absolute value function.

4.4.3.1.c: x
The argument x has one of the following prototypes
     const AD<Base>               &x
     const VecAD<Base>::reference &x

4.4.3.1.d: y
The result y has prototype
     AD<Base> y

4.4.3.1.e: Operation Sequence
This is an AD of Base 9.4.g.a: atomic operation and hence is part of the current AD of Base 9.4.g.b: operation sequence .

4.4.3.1.f: Complex Types
The function abs is not defined for the AD type sequences above std::complex<float> or std::complex<double> because the complex abs function is not complex differentiable (see 9.1.d: complex types faq ).

4.4.3.1.g: Directional Derivative
The derivative of the absolute value function is one for  x > 0 and minus one for  x < 0 . The subtle issue is how to compute its directional derivative at  x = 0 .

The functions corresponding to the argument x and the result y are represented by their Taylor coefficients; i.e.,  \[
\begin{array}{rcl}
     X(t) & = & x^{(0)} + x^{(1)} t + \cdots + x^{(p)} t^p
     \\
     Y(t) & = & y^{(0)} + y^{(1)} t + \cdots + y^{(p)} t^p
\end{array}
\] 
Note that  x^{(0)} = X(0) is the value of x and  y^{(0)} = Y(0) is the value of y. In the equations above, the order  p is specified by a call to 5.6.1: Forward or 5.6.2: Reverse as follows:
     f.Forward(p, dx)
     f.Reverse(p+1, w)
If all of the Taylor coefficients of  X(t) are zero, we define  k = p . Otherwise, we define  k to be the minimal index such that  x^{(k)} \neq 0 . Note that if  x \neq 0 ,  k = 0 . The Taylor coefficient representation of  Y(t) (and hence its derivatives) is computed as  \[
y^{(\ell)}
=
\left\{ \begin{array}{ll} 
      x^{(\ell)}   & {\rm if} \; x^{(k)} > 0         \\
      0                    & {\rm if} \; x^{(k)} = 0 \\
     - x^{(\ell)}  & {\rm if} \; x^{(k)} < 0
\end{array} \right.
\] 
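For example, a worked instance of this definition: if  p = 1 and the Taylor coefficients of  X(t) are  x^{(0)} = 0 and  x^{(1)} = -2 , then  k = 1 and  x^{(k)} < 0 , so  \[
y^{(0)} = - x^{(0)} = 0 \; , \;\;\; y^{(1)} = - x^{(1)} = 2
\] 
i.e., the directional derivative of the absolute value at  x = 0 in the direction  -2 is  +2 .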


4.4.3.1.h: Example
The file 4.4.3.1.1: Abs.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/local/abs.hpp
4.4.3.1.1: AD Absolute Value Function: Example and Test
 

# include <cppad/cppad.hpp>

bool Abs(void)
{	bool ok = true;

	using CppAD::abs;
	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n = 1;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]     = 0.;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// range space vector
	size_t m = 3;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0]     = abs(x[0] - 1.);
	y[1]     = abs(x[0]);
	y[2]     = abs(x[0] + 1.);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y);

	// check values
	ok &= (y[0] == 1.);
	ok &= (y[1] == 0.);
	ok &= (y[2] == 1.);

	// forward computation of partials w.r.t. a positive x[0] direction
	size_t p = 1;
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(p, dx);
	ok  &= (dy[0] == - dx[0]);
	ok  &= (dy[1] == + dx[0]);
	ok  &= (dy[2] == + dx[0]);

	// forward computation of partials w.r.t. a negative x[0] direction
	dx[0] = -1.;
	dy    = f.Forward(p, dx);
	ok  &= (dy[0] == - dx[0]);
	ok  &= (dy[1] == - dx[0]);
	ok  &= (dy[2] == + dx[0]);

	// reverse computation of derivative of y[0] 
	p    = 0;
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0] = 1.; w[1] = 0.; w[2] = 0.;
	dw   = f.Reverse(p+1, w);
	ok  &= (dw[0] == -1.);

	// reverse computation of derivative of y[1] 
	w[0] = 0.; w[1] = 1.; w[2] = 0.;
	dw   = f.Reverse(p+1, w);
	ok  &= (dw[0] == 0.);

	// reverse computation of derivative of y[2] 
	w[0] = 0.; w[1] = 0.; w[2] = 1.;
	dw   = f.Reverse(p+1, w);
	ok  &= (dw[0] == 1.);

	// use a VecAD<Base>::reference object with abs
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero]           = -1;
	AD<double> result = abs(v[zero]);
	ok   &= NearEqual(result, 1., 1e-10, 1e-10);

	return ok;
}


Input File: example/abs.cpp
4.4.3.2: AD Two Argument Inverse Tangent Function

4.4.3.2.a: Syntax
theta = atan2(y, x)

4.4.3.2.b: Purpose
Determines an angle  \theta \in [ - \pi , + \pi ] such that  \[
\begin{array}{rcl}
     \sin ( \theta )  & = & y / \sqrt{ x^2 + y^2 }  \\
     \cos ( \theta )  & = & x / \sqrt{ x^2 + y^2 }
\end{array}
\] 


4.4.3.2.c: y
The argument y has one of the following prototypes
     const AD<Base>               &y
     const VecAD<Base>::reference &y

4.4.3.2.d: x
The argument x has one of the following prototypes
     const AD<Base>               &x
     const VecAD<Base>::reference &x

4.4.3.2.e: theta
The result theta has prototype
     AD<Base> theta

4.4.3.2.f: Operation Sequence
The AD of Base operation sequence used to calculate theta is 9.4.g.d: independent of x and y.

4.4.3.2.g: Example
The file 4.4.3.2.1: Atan2.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/local/atan2.hpp
4.4.3.2.1: The AD atan2 Function: Example and Test
 

# include <cppad/cppad.hpp>

bool Atan2(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// a temporary value
	AD<double> sin_of_x0 = CppAD::sin(x[0]);
	AD<double> cos_of_x0 = CppAD::cos(x[0]);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = CppAD::atan2(sin_of_x0, cos_of_x0);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	ok &= NearEqual(y[0] , x0,  1e-10 , 1e-10);

	// forward computation of first partial w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], 1., 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], 1., 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with atan2
	CppAD::VecAD<double> v(2);
	AD<double> zero(0);
	AD<double> one(1);
	v[zero]           = sin_of_x0;
	v[one]            = cos_of_x0;
	AD<double> result = CppAD::atan2(v[zero], v[one]);
	ok               &= NearEqual(result, x0, 1e-10, 1e-10);

	return ok;
}


Input File: example/atan_2.cpp
4.4.3.3: The AD Error Function

4.4.3.3.a: Syntax
y = erf(x)

4.4.3.3.b: Description
Returns the value of the error function which is defined by  \[
{\rm erf} (x) = \frac{2}{ \sqrt{\pi} } \int_0^x \exp( - t * t ) \; {\bf d} t
\] 


4.4.3.3.c: x
The argument x and the result y have one of the following pairs of prototypes:
     const float                   &x,     float     y
     const double                  &x,     double    y
     const AD<Base>                &x,     AD<Base>  y
     const VecAD<Base>::reference  &x,     AD<Base>  y

4.4.3.3.d: Operation Sequence
The AD of Base operation sequence used to calculate y is 9.4.g.d: independent of x .

4.4.3.3.e: Method
This is a fast approximation (few numerical operations) with relative error bound  4 \times 10^{-4} ; see Vedder, J.D., Simple approximations for the error function and its inverse, American Journal of Physics, v 55, n 8, 1987, p 762-3.

4.4.3.3.f: Example
The file 4.4.3.3.1: Erf.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/local/erf.hpp
4.4.3.3.1: The AD erf Function: Example and Test
 

# include <cppad/cppad.hpp>
# include <cmath>
# include <limits>

bool Erf(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;
	double eps = 10. * std::numeric_limits<double>::epsilon();

	// domain space vector
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	// a temporary value

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = CppAD::erf(x[0]);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check relative error 
	double erf_x0 = 0.5205;
	ok &= NearEqual(y[0] , erf_x0,  4e-4 , 0.);

	// value of derivative of erf at x0
	double pi     = 4. * std::atan(1.);
	double factor = 2. / sqrt(pi);
	double check  = factor * std::exp(-x0 * x0);

	// forward computation of first partial w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], check, 4e-4, 0.);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], check, 4e-4, 0.);

	// use a VecAD<Base>::reference object with erf
	CppAD::VecAD<double> v(1);
	AD<double> zero(0);
	v[zero]           = x0;
	AD<double> result = CppAD::erf(v[zero]);
	ok   &= NearEqual(result, y[0], eps, eps);

	// use a double with erf
	ok   &= NearEqual(CppAD::erf(x0), y[0], eps, eps);

	return ok;
}


Input File: example/erf.cpp
4.4.3.4: The AD Power Function

4.4.3.4.a: Syntax
z = pow(x, y)

4.4.3.4.b: Purpose
Determines the value of the power function which is defined by  \[
     {\rm pow} (x, y) = x^y
\] 
This version of the pow function may use logarithms and exponentiation to compute derivatives. This will not work if x is less than or equal to zero. If the value of y is an integer, the 6.10: pow_int function is used to compute this value using only multiplication (and division if y is negative). (This works even if x is less than or equal to zero.)

4.4.3.4.c: x
The argument x has the following prototype
     const Type &x
where Type is VecAD<Base>::reference, AD<Base>, Base, double, or int.

4.4.3.4.d: y
The argument y has the following prototype
     const Type &y
where Type is VecAD<Base>::reference, AD<Base>, Base, double, or int.

4.4.3.4.e: z
The result z has prototype
     AD<Base> z

4.4.3.4.f: Standard Types
A definition for the pow function is included in the CppAD namespace for the case where both x and y have the same type and that type is float or double.

4.4.3.4.g: Operation Sequence
This is an AD of Base 9.4.g.a: atomic operation and hence is part of the current AD of Base 9.4.g.b: operation sequence .

4.4.3.4.h: Example
The files 4.4.3.4.1: Pow.cpp and 4.4.3.4.2: pow_int.cpp contain examples and tests of this function. They return true if they succeed and false otherwise.
Input File: cppad/local/pow.hpp
4.4.3.4.1: The AD Power Function: Example and Test
 

# include <cppad/cppad.hpp>
# include <cmath>

bool Pow(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n  = 2;
	double x = 0.5;
	double y = 2.;
	CPPAD_TEST_VECTOR< AD<double> > XY(n);
	XY[0]      = x;
	XY[1]      = y;

	// declare independent variables and start tape recording
	CppAD::Independent(XY);

	// range space vector 
	size_t m = 3;
	CPPAD_TEST_VECTOR< AD<double> > Z(m);
	Z[0] = CppAD::pow(XY[0], XY[1]);  // pow(variable, variable)
	Z[1] = CppAD::pow(XY[0], y);      // pow(variable, parameter)
	Z[2] = CppAD::pow(x,     XY[1]);  // pow(parameter, variable)

	// create f: XY -> Z and stop tape recording
	CppAD::ADFun<double> f(XY, Z); 

	// check value 
	double check = std::pow(x, y);
	size_t i;
	for(i = 0; i < m; i++)
		ok &= NearEqual(Z[i] , check,  1e-10 , 1e-10);

	// forward computation of first partial w.r.t. x
	CPPAD_TEST_VECTOR<double> dxy(n);
	CPPAD_TEST_VECTOR<double> dz(m);
	dxy[0] = 1.;
	dxy[1] = 0.;
	dz    = f.Forward(1, dxy);
	check = y * std::pow(x, y-1.);
	ok   &= NearEqual(dz[0], check, 1e-10, 1e-10);
	ok   &= NearEqual(dz[1], check, 1e-10, 1e-10);
	ok   &= NearEqual(dz[2],    0., 1e-10, 1e-10);

	// forward computation of first partial w.r.t. y
	dxy[0] = 0.;
	dxy[1] = 1.;
	dz    = f.Forward(1, dxy);
	check = std::log(x) * std::pow(x, y);
	ok   &= NearEqual(dz[0], check, 1e-10, 1e-10);
	ok   &= NearEqual(dz[1],    0., 1e-10, 1e-10);
	ok   &= NearEqual(dz[2], check, 1e-10, 1e-10);

	// reverse computation of derivative of z[0] + z[1] + z[2]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	w[1]  = 1.;
	w[2]  = 1.;
	dw    = f.Reverse(1, w);
	check = y * std::pow(x, y-1.);
	ok   &= NearEqual(dw[0], 2. * check, 1e-10, 1e-10);
	check = std::log(x) * std::pow(x, y);
	ok   &= NearEqual(dw[1], 2. * check, 1e-10, 1e-10);

	// use a VecAD<Base>::reference object with pow
	CppAD::VecAD<double> v(2);
	AD<double> zero(0);
	AD<double> one(1);
	v[zero]           = XY[0];
	v[one]            = XY[1];
	AD<double> result = CppAD::pow(v[zero], v[one]);
	ok               &= NearEqual(result, Z[0], 1e-10, 1e-10);

	return ok;
}


Input File: example/pow.cpp
4.4.3.4.2: The Pow Integer Exponent: Example and Test
 

# include <cppad/cppad.hpp>
# include <cmath>

bool pow_int(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;

	// declare independent variables and start tape recording
	size_t n  = 1;
	double x0 = -0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0;
	CppAD::Independent(x);

	// dependent variable vector 
	size_t m = 7;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	int i;
	for(i = 0; i < int(m); i++) 
		y[i] = CppAD::pow(x[0], i - 3);

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	double check;
	for(i = 0; i < int(m); i++) 
	{	check = std::pow(x0, double(i - 3));
		ok &= NearEqual(y[i] , check,  1e-10 , 1e-10);
	}

	// forward computation of first partial w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	for(i = 0; i < int(m); i++) 
	{	check = double(i-3) * std::pow(x0, double(i - 4));
		ok &= NearEqual(dy[i] , check,  1e-10 , 1e-10);
	}

	// reverse computation of derivative of y[i]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	for(i = 0; i < int(m); i++) 
		w[i] = 0.;
	for(i = 0; i < int(m); i++) 
	{	w[i] = 1.;	
		dw    = f.Reverse(1, w);
		check = double(i-3) * std::pow(x0, double(i - 4));
		ok &= NearEqual(dw[0] , check,  1e-10 , 1e-10);
		w[i] = 0.;	
	}

	return ok;
}


Input File: example/pow_int.cpp
4.4.4: AD Conditional Expressions

4.4.4.a: Syntax
result = CondExpOp(left, right, trueCase, falseCase)

4.4.4.b: Purpose
Record, as part of an AD of Base 9.4.g.b: operation sequence , the conditional result
     if( left op right )
          result = trueCase
     else result = falseCase
The notation Op and op above have the following correspondence:
Op    Lt    Le    Eq    Ge    Gt
op    <     <=    ==    >=    >
If f is the 5: ADFun object corresponding to the AD operation sequence, the choice in an AD conditional expression is made each time 5.6.1: f.Forward is used to evaluate the zero order Taylor coefficients with new values for the 9.4.j.c: independent variables . This is in contrast to the 4.5.1: AD comparison operators which are boolean valued and not included in the AD operation sequence.

4.4.4.c: Op
In the syntax above, Op represents one of the following: Lt, Le, Eq, Ge, Gt. As in the table above, Op determines the comparison operator op.

4.4.4.d: Type
These functions are defined in the CppAD namespace for arguments where Type is float, double, or any type of the form AD<Base>. (Note that all four arguments must have the same type.)

4.4.4.e: left
The argument left has prototype
     const Type &left
It specifies the value for the left side of the comparison operator.

4.4.4.f: right
The argument right has prototype
     const Type &right
It specifies the value for the right side of the comparison operator.

4.4.4.g: trueCase
The argument trueCase has prototype
     const Type &trueCase
It specifies the return value if the result of the comparison is true.

4.4.4.h: falseCase
The argument falseCase has prototype
     const Type &falseCase
It specifies the return value if the result of the comparison is false.

4.4.4.i: result
The result has prototype
     Type result

4.4.4.j: CondExp
Previous versions of CppAD used
     CondExp(flag, trueCase, falseCase)
for the same meaning as
     CondExpGt(flag, Type(0), trueCase, falseCase)
Use of CondExp is deprecated, but continues to be supported.

4.4.4.k: Operation Sequence
This is an AD of Base 9.4.g.a: atomic operation and hence is part of the current AD of Base 9.4.g.b: operation sequence .

4.4.4.l: Example

4.4.4.m: Test
The file 4.4.4.1: CondExp.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.

4.4.4.n: Atan2
The following implementation of the AD 4.4.3.2: atan2 function is a more complex example of using conditional expressions:
 
template <class Base>
AD<Base> atan2 (const AD<Base> &y, const AD<Base> &x)
{	AD<Base> alpha;
	AD<Base> beta;
	AD<Base> theta;

	AD<Base> zero = 0;
	AD<Base> pi2  = 2. * atan(1.);
	AD<Base> pi   = 2. * pi2;

	AD<Base> ax = abs(x);
	AD<Base> ay = abs(y);

	// if( ax > ay )
	// 	theta = atan(ay / ax);
	// else	theta = pi2 - atan(ax / ay);
	alpha = atan(ay / ax);
	beta  = pi2 - atan(ax / ay);
	theta = CondExpGt(ax, ay, alpha, beta);         // use of CondExp

	// if( x <= 0 )
	// 	theta = pi - theta;
	theta = CondExpLe(x, zero, pi - theta, theta);  // use of CondExp
	
	// if( y <= 0 )
	// 	theta = - theta;
	theta = CondExpLe(y, zero, -theta, theta);      // use of CondExp

	return theta;
}

Input File: cppad/local/cond_exp.hpp
4.4.4.1: Conditional Expressions: Example and Test

4.4.4.1.a: Description
Use CondExp to compute  \[
     f(x) = \sum_{j=0}^{m-1} \log( | x_j | )
\] 
and its derivative at various argument values without having to re-tape; i.e., using only one 5: ADFun object.
 

# include <cppad/cppad.hpp>

namespace {
	double Infinity(double zero)
	{	return 1. / zero; }
}

bool CondExp(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;
	using CppAD::log; 
	using CppAD::abs;

	// domain space vector
	size_t n = 5;
	CPPAD_TEST_VECTOR< AD<double> > X(n);
	size_t j;
	for(j = 0; j < n; j++)
		X[j] = 1.;

	// declare independent variables and start tape recording
	CppAD::Independent(X);

	// sum with respect to j of log of absolute value of X[j]
	// should be - infinity if any of the X[j] are zero
	AD<double> MinusInfinity = - Infinity(0.);
	AD<double> Sum           = 0.;
	AD<double> Zero(0);
	for(j = 0; j < n; j++)
	{	// if X[j] > 0
		Sum += CppAD::CondExpGt(X[j], Zero, log(X[j]),     Zero);

		// if X[j] < 0
		Sum += CppAD::CondExpLt(X[j], Zero, log(-X[j]),    Zero);

		// if X[j] == 0
		Sum += CppAD::CondExpEq(X[j], Zero, MinusInfinity, Zero);
	}

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > Y(m);
	Y[0] = Sum;

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// vectors for arguments to the function object f
	CPPAD_TEST_VECTOR<double> x(n);   // argument values
	CPPAD_TEST_VECTOR<double> y(m);   // function values 
	CPPAD_TEST_VECTOR<double> w(m);   // function weights 
	CPPAD_TEST_VECTOR<double> dw(n);  // derivative of weighted function

	// a case where abs( x[j] ) > 0 for all j
	double check  = 0.;
	double sign   = 1.;
	for(j = 0; j < n; j++)
	{	sign *= -1.;
		x[j] = sign * double(j + 1); 
		check += log( abs( x[j] ) );
	}

	// function value 
	y  = f.Forward(0, x);
	ok &= ( y[0] == check );

	// compute derivative of y[0]
	w[0] = 1.;
	dw   = f.Reverse(1, w);
	for(j = 0; j < n; j++)
	{	if( x[j] > 0. )
			ok &= NearEqual(dw[j], 1./abs( x[j] ), 1e-10, 1e-10); 
		else	ok &= NearEqual(dw[j], -1./abs( x[j] ), 1e-10, 1e-10); 
	}

	// a case where x[0] is equal to zero
	sign = 1.;
	for(j = 0; j < n; j++)
	{	sign *= -1.;
		x[j] = sign * double(j); 
	}

	// function value 
	y   = f.Forward(0, x);
	ok &= ( y[0] == -Infinity(0.) );

	// compute derivative of y[0]
	w[0] = 1.;
	dw   = f.Reverse(1, w);
	for(j = 0; j < n; j++)
	{	if( x[j] > 0. )
			ok &= NearEqual(dw[j], 1./abs( x[j] ), 1e-10, 1e-10); 
		else if( x[j] < 0. )
			ok &= NearEqual(dw[j], -1./abs( x[j] ), 1e-10, 1e-10); 
		else
		{	// in this case computing dw[j] ends up multiplying 
			// -infinity * zero and hence results in Nan
		}
	}
	
	return ok;
}

Input File: example/cond_exp.cpp
4.4.5: Discrete AD Functions

4.4.5.a: Syntax
CPPAD_DISCRETE_FUNCTION(Base, name)
v = name(u)
y = name(x)

4.4.5.b: Purpose
Record the evaluation of a discrete function as part of an AD<Base> 9.4.g.b: operation sequence . The value of a discrete function can depend on the 9.4.j.c: independent variables , but its derivative is identically zero. For example, suppose that the integer part of a 9.4.l: variable x is the index into an array of values.

4.4.5.c: Base
This is the 4.7: base type corresponding to the operations sequence; i.e., use of the name with arguments of type AD<Base> can be recorded in an operation sequence.

4.4.5.d: name
This is the name of the function (as it is used in the source code). The user must provide a version of name where the argument has type Base. CppAD uses this to create a version of name where the argument has type AD<Base>.

4.4.5.e: u
The argument u has prototype
     const Base &u
It is the value at which the user provided version of name is to be evaluated.

4.4.5.f: v
The result v has prototype
     Base v
It is the return value for the user provided version of name.

4.4.5.g: x
The argument x has prototype
     const AD<Base> &x
It is the value at which the CppAD provided version of name is to be evaluated.

4.4.5.h: y
The result y has prototype
     AD<Base> y
It is the return value for the CppAD provided version of name.

4.4.5.i: Create AD Version
The preprocessor macro invocation
     CPPAD_DISCRETE_FUNCTION(Base, name)
defines the AD<Base> version of name. This invocation can be within a namespace (but not the CppAD namespace) and must be outside of any routine.

4.4.5.j: Operation Sequence
This is an AD of Base 9.4.g.a: atomic operation and hence is part of the current AD of Base 9.4.g.b: operation sequence .

4.4.5.k: Derivatives
During a zero order 5.6.1: Forward operation, an 5: ADFun object will compute the value of name using the user provided Base version of this routine. All the derivatives of name will be evaluated as zero.

4.4.5.l: Example
The file 4.4.5.1: TapeIndex.cpp contains an example and test that uses a discrete function to vary an array index during 5.6.1: Forward mode calculations. The file 4.4.5.2: interp_onetape.cpp contains an example and test that uses discrete functions to avoid retaping a calculation that requires interpolation. (The file 4.4.5.3: interp_retape.cpp shows how interpolation can be done with retaping.)

4.4.5.m: Deprecated
The preprocessor symbol CppADCreateDiscrete is defined to be the same as CPPAD_DISCRETE_FUNCTION but its use is deprecated.
Input File: cppad/local/discrete.hpp
4.4.5.1: Taping Array Index Operation: Example and Test
 
# include <cppad/cppad.hpp>

namespace {
	double Array(const double &index)
	{	static double array[] = {
			5.,
			4.,
			3.,
			2.,
			1.
		};
		static size_t number = sizeof(array) / sizeof(array[0]);
		if( index < 0. )
			return array[0];

		size_t i = static_cast<size_t>(index);
		if( i >= number )
			return array[number-1];

		return array[i];
	}
	// in empty namespace and outside any other routine
	CPPAD_DISCRETE_FUNCTION(double, Array)
}

bool TapeIndex(void)
{	bool ok = true;
	using CppAD::AD;

	// domain space vector
	size_t n = 2;
	CPPAD_TEST_VECTOR< AD<double> > X(n);
	X[0] = 2.;   // array index value
	X[1] = 3.;   // multiplier of array index value

	// declare independent variables and start tape recording
	CppAD::Independent(X);

	// range space vector
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > Y(m);
	Y[0] = X[1] * Array( X[0] );

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// vectors for arguments to the function object f
	CPPAD_TEST_VECTOR<double> x(n);   // argument values
	CPPAD_TEST_VECTOR<double> y(m);   // function values 
	CPPAD_TEST_VECTOR<double> w(m);   // function weights 
	CPPAD_TEST_VECTOR<double> dw(n);  // derivative of weighted function

	// check function value
	x[0] = Value(X[0]);
	x[1] = Value(X[1]);
	y[0] = Value(Y[0]);
	ok  &= y[0] == x[1] * Array(x[0]);

	// evaluate f where x has different values
	x[0] = x[0] + 1.;  // new array index value
	x[1] = x[1] + 1.;  // new multiplier value
	y    = f.Forward(0, x);
	ok  &= y[0] == x[1] * Array(x[0]);

	// evaluate derivative of y[0] 
	w[0] = 1.;
	dw   = f.Reverse(1, w);
	ok   &= dw[0] == 0.;              // partial w.r.t array index
	ok   &= dw[1] == Array(x[0]);     // partial w.r.t multiplier

	return ok;
}


Input File: example/tape_index.cpp
4.4.5.2: Interpolation Without Retaping: Example and Test

4.4.5.2.a: See Also
4.4.5.3: interp_retape.cpp

 
# include <cppad/cppad.hpp>
# include <cassert>
# include <cmath>

namespace {
	double ArgumentValue[] = {
		.0 ,
		.2 ,
		.4 ,
		.8 ,
		1.
	};
	double FunctionValue[] = {
		std::sin( ArgumentValue[0] ) ,
		std::sin( ArgumentValue[1] ) ,
		std::sin( ArgumentValue[2] ) ,
		std::sin( ArgumentValue[3] ) ,
		std::sin( ArgumentValue[4] )
	};
	size_t TableLength = 5;

	size_t Index(const double &x)
	{	// determine the index j such that x is between
		// ArgumentValue[j] and ArgumentValue[j+1] 
		static size_t j = 0;
		while ( x < ArgumentValue[j] && j > 0 )
			j--;
		while ( x > ArgumentValue[j+1] && j < TableLength - 2)
			j++;
		// assert conditions that must be true given logic above
		assert( j >= 0 && j < TableLength - 1 );
		return j;
	}

	double Argument(const double &x)
	{	size_t j = Index(x);
		return ArgumentValue[j];
	}
	double Function(const double &x)
	{	size_t j = Index(x);
		return FunctionValue[j];
	}

	double Slope(const double &x)
	{	size_t j  = Index(x);
		double dx = ArgumentValue[j+1] - ArgumentValue[j];
		double dy = FunctionValue[j+1] - FunctionValue[j];
		return dy / dx;
	}
	CPPAD_DISCRETE_FUNCTION(double, Argument)
	CPPAD_DISCRETE_FUNCTION(double, Function)
	CPPAD_DISCRETE_FUNCTION(double, Slope)
}


bool interp_onetape(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n = 1;
	CPPAD_TEST_VECTOR< AD<double> > X(n);
	X[0] = .4 * ArgumentValue[1] + .6 * ArgumentValue[2];

	// declare independent variables and start tape recording
	CppAD::Independent(X);

	// evaluate piecewise linear interpolant at X[0]
	AD<double> A = Argument(X[0]); 
	AD<double> F = Function(X[0]);
	AD<double> S = Slope(X[0]);
	AD<double> I = F + (X[0] - A) * S;

	// range space vector
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > Y(m);
	Y[0] = I;

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// vectors for arguments to the function object f
	CPPAD_TEST_VECTOR<double> x(n);   // argument values
	CPPAD_TEST_VECTOR<double> y(m);   // function values 
	CPPAD_TEST_VECTOR<double> dx(n);  // differentials in x space
	CPPAD_TEST_VECTOR<double> dy(m);  // differentials in y space

	// to check function value we use the fact that X[0] is between 
	// ArgumentValue[1] and ArgumentValue[2]
	x[0]          = Value(X[0]);
	double delta  = ArgumentValue[2] - ArgumentValue[1];
	double check  = FunctionValue[2] * (x[0] - ArgumentValue[1]) / delta
	              + FunctionValue[1] * (ArgumentValue[2] - x[0]) / delta; 
	ok  &= NearEqual(Y[0], check, 1e-10, 1e-10);

	// evaluate f where x has different value
	x[0]   = .7 * ArgumentValue[2] + .3 * ArgumentValue[3];
	y      = f.Forward(0, x);

	// check function value 
	delta  = ArgumentValue[3] - ArgumentValue[2];
	check  = FunctionValue[3] * (x[0] - ArgumentValue[2]) / delta
	              + FunctionValue[2] * (ArgumentValue[3] - x[0]) / delta; 
	ok  &= NearEqual(y[0], check, 1e-10, 1e-10);

	// evaluate partials w.r.t. x[0] 
	dx[0] = 1.;
	dy    = f.Forward(1, dx);

	// check that the derivative is the slope
	check = (FunctionValue[3] - FunctionValue[2])
	      / (ArgumentValue[3] - ArgumentValue[2]);
	ok   &= NearEqual(dy[0], check, 1e-10, 1e-10);

	return ok;
}


Input File: example/interp_onetape.cpp
4.4.5.3: Interpolation With Retaping: Example and Test

4.4.5.3.a: See Also
4.4.5.2: interp_onetape.cpp

 
# include <cppad/cppad.hpp>
# include <cassert>
# include <cmath>

namespace {
	double ArgumentValue[] = {
		.0 ,
		.2 ,
		.4 ,
		.8 ,
		1.
	};
	double FunctionValue[] = {
		std::sin( ArgumentValue[0] ) ,
		std::sin( ArgumentValue[1] ) ,
		std::sin( ArgumentValue[2] ) ,
		std::sin( ArgumentValue[3] ) ,
		std::sin( ArgumentValue[4] )
	};
	size_t TableLength = 5;

	size_t Index(const CppAD::AD<double> &x)
	{	// determine the index j such that x is between
		// ArgumentValue[j] and ArgumentValue[j+1] 
		static size_t j = 0;
		while ( x < ArgumentValue[j] && j > 0 )
			j--;
		while ( x > ArgumentValue[j+1] && j < TableLength - 2)
			j++;
		// assert conditions that must be true given logic above
		assert( j >= 0 && j < TableLength - 1 );
		return j;
	}
	double Argument(const CppAD::AD<double> &x)
	{	size_t j = Index(x);
		return ArgumentValue[j];
	}
	double Function(const CppAD::AD<double> &x)
	{	size_t j = Index(x);
		return FunctionValue[j];
	}
	double Slope(const CppAD::AD<double> &x)
	{	size_t j  = Index(x);
		double dx = ArgumentValue[j+1] - ArgumentValue[j];
		double dy = FunctionValue[j+1] - FunctionValue[j];
		return dy / dx;
	}
}

bool interp_retape(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n = 1;
	CPPAD_TEST_VECTOR< AD<double> > X(n);

	// loop over argument values
	size_t k;
	for(k = 0; k < TableLength - 1; k++)
	{
		X[0] = .4 * ArgumentValue[k] + .6 * ArgumentValue[k+1];

		// declare independent variables and start tape recording
		// (use a different tape for each argument value)
		CppAD::Independent(X);

		// evaluate piecewise linear interpolant at X[0]
		AD<double> A = Argument(X[0]); 
		AD<double> F = Function(X[0]);
		AD<double> S = Slope(X[0]);
		AD<double> I = F + (X[0] - A) * S;

		// range space vector
		size_t m = 1;
		CPPAD_TEST_VECTOR< AD<double> > Y(m);
		Y[0] = I;

		// create f: X -> Y and stop tape recording
		CppAD::ADFun<double> f(X, Y);

		// vectors for arguments to the function object f
		CPPAD_TEST_VECTOR<double> x(n);   // argument values
		CPPAD_TEST_VECTOR<double> y(m);   // function values 
		CPPAD_TEST_VECTOR<double> dx(n);  // differentials in x space
		CPPAD_TEST_VECTOR<double> dy(m);  // differentials in y space

		// to check function value we use the fact that X[0] is between
		// ArgumentValue[k] and ArgumentValue[k+1]
		double delta, check;
		x[0]   = Value(X[0]);
		delta  = ArgumentValue[k+1] - ArgumentValue[k];
		check  = FunctionValue[k+1] * (x[0]-ArgumentValue[k]) / delta
	               + FunctionValue[k] * (ArgumentValue[k+1]-x[0]) / delta; 
		ok    &= NearEqual(Y[0], check, 1e-10, 1e-10);

		// evaluate partials w.r.t. x[0] 
		dx[0] = 1.;
		dy    = f.Forward(1, dx);

		// check that the derivative is the slope
		check = (FunctionValue[k+1] - FunctionValue[k])
		      / (ArgumentValue[k+1] - ArgumentValue[k]);
		ok   &= NearEqual(dy[0], check, 1e-10, 1e-10);
	}
	return ok;
}


Input File: example/interp_retape.cpp
4.5: Bool Valued Operations and Functions with AD Arguments
4.5.1: Compare AD Binary Comparison Operators
4.5.2: NearEqualExt Compare AD and Base Objects for Nearly Equal
4.5.3: BoolFun AD Boolean Functions
4.5.4: ParVar Is an AD Object a Parameter or Variable
4.5.5: EqualOpSeq Check if Equal and Correspond to Same Operation Sequence

Input File: cppad/local/bool_valued.hpp
4.5.1: AD Binary Comparison Operators

4.5.1.a: Syntax
b = x Op y

4.5.1.b: Purpose
Compares two operands where one of the operands is an AD<Base> object. The comparison has the same interpretation as for the Base type.

4.5.1.c: Op
The operator Op is one of the following:
Op   Meaning
< is x less than y
<= is x less than or equal y
> is x greater than y
>= is x greater than or equal y
== is x equal to y
!= is x not equal to y

4.5.1.d: x
The operand x has prototype
     const 
Type &x
where Type is AD<Base>, Base, or int.

4.5.1.e: y
The operand y has prototype
     const 
Type &y
where Type is AD<Base>, Base, or int.

4.5.1.f: b
The result b has type
     bool 
b

4.5.1.g: Operation Sequence
The result of this operation is a bool value (not an 9.4.b: AD of Base object). Thus it will not be recorded as part of an AD of Base 9.4.g.b: operation sequence .

For example, suppose x and y are AD<Base> objects, the tape corresponding to AD<Base> is recording, b is true, and the subsequent code is
     if( 
b )
          
y = cos(x);
     else 
y = sin(x); 
only the assignment y = cos(x) is recorded on the tape (if x is a 9.4.h: parameter , nothing is recorded). The 5.6.1.5: CompareChange function can yield some information about changes in comparison operation results. You can use 4.4.4: CondExp to obtain comparison operations that depend on the 9.4.j.c: independent variable values without re-taping the AD sequence of operations.

4.5.1.h: Assumptions
If one of the Op operators listed above is used with an AD<Base> object, it is assumed that the same operator is supported by the base type Base.

4.5.1.i: Example
The file 4.5.1.1: Compare.cpp contains an example and test of these operations. It returns true if it succeeds and false otherwise.
Input File: cppad/local/compare.hpp
4.5.1.1: AD Binary Comparison Operators: Example and Test
 
# include <cppad/cppad.hpp>

bool Compare(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;

	// declare independent variables and start tape recording
	size_t n  = 2;
	double x0 = 0.5;
	double x1 = 1.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0; 
	x[1]      = x1; 
	CppAD::Independent(x);

	// some binary comparision operations
	AD<double> p;
	if( x[0] < x[1] )
		p = x[0];   // values in x choose this case
	else	p = x[1];
	if( x[0] <= x[1] )
		p *= x[0];  // values in x choose this case
	else	p *= x[1];
	if( x[0] >  x[1] )
		p *= x[0]; 
	else	p *= x[1];  // values in x choose this case
	if( x[0] >= x[1] )
		p *= x[0]; 
	else	p *= x[1];  // values in x choose this case
	if( x[0] == x[1] )
		p *= x[0]; 
	else	p *= x[1];  // values in x choose this case
	if( x[0] != x[1] )
		p *= x[0];  // values in x choose this case
	else	p *= x[1]; 

	// dependent variable vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = p;

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check value 
	ok &= NearEqual(y[0] , x0*x0*x1*x1*x1*x0,  1e-10 , 1e-10);

	// forward computation of partials w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dx[1] = 0.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], 3.*x0*x0*x1*x1*x1, 1e-10, 1e-10);

	// forward computation of partials w.r.t. x[1]
	dx[0] = 0.;
	dx[1] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], 3.*x0*x0*x1*x1*x0, 1e-10, 1e-10);

	// reverse computation of derivative of y[0]
	CPPAD_TEST_VECTOR<double>  w(m);
	CPPAD_TEST_VECTOR<double> dw(n);
	w[0]  = 1.;
	dw    = f.Reverse(1, w);
	ok   &= NearEqual(dw[0], 3.*x0*x0*x1*x1*x1, 1e-10, 1e-10);
	ok   &= NearEqual(dw[1], 3.*x0*x0*x1*x1*x0, 1e-10, 1e-10);

	return ok;
}


Input File: example/compare.cpp
4.5.2: Compare AD and Base Objects for Nearly Equal

4.5.2.a: Syntax
b = NearEqual(x, y, r, a)

4.5.2.b: Purpose
The routine 6.2: NearEqual determines if two objects of the same type are nearly equal. This routine is extended to the case where one object can have type Type while the other can have type AD<Type> or AD< std::complex<Type> >.

4.5.2.c: x
The arguments x has one of the following possible prototypes:
     const 
Type                     &x
     const AD<
Type>                 &x
     const AD< std::complex<
Type> > &x

4.5.2.d: y
The arguments y has one of the following possible prototypes:
     const 
Type                     &y
     const AD<
Type>                 &y
     const AD< std::complex<
Type> > &y

4.5.2.e: r
The relative error criteria r has prototype
     const 
Type &r
It must be greater than or equal to zero. The relative error condition is defined as:  \[
     \frac{ | x - y | } { |x| + |y| } \leq r
\] 


4.5.2.f: a
The absolute error criteria a has prototype
     const 
Type &a
It must be greater than or equal to zero. The absolute error condition is defined as:  \[
     | x - y | \leq a
\] 


4.5.2.g: b
The return value b has prototype
     bool 
b
If either x or y is infinite or not a number, the return value is false. Otherwise, if either the relative or absolute error condition (defined above) is satisfied, the return value is true. Otherwise, the return value is false.

4.5.2.h: Type
The type Type must be a 6.5: NumericType . The routine 6.6: CheckNumericType will generate an error message if this is not the case. If a and b have type Type, the following operation must be defined
Operation Description
a <= b less than or equal operator (returns a bool object)

4.5.2.i: Operation Sequence
The result of this operation is not an 9.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 9.4.g.b: operation sequence .

4.5.2.j: Example
The file 4.5.2.1: NearEqualExt.cpp contains an example and test of this extension of 6.2: NearEqual . It returns true if it succeeds and false otherwise.
Input File: cppad/local/near_equal_ext.hpp
4.5.2.1: Compare AD with Base Objects: Example and Test
 

# include <cppad/cppad.hpp>
# include <complex>

bool NearEqualExt(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;

	// double 
	double x    = 1.00000;
	double y    = 1.00001;
	double a    =  .00005;
	double r    =  .00005;
	double zero = 0.; 

	// AD<double> 
	AD<double> ax(x);
	AD<double> ay(y);

	ok &= NearEqual(ax, ay, zero, a);
	ok &= NearEqual(ax, y,  r, zero);
	ok &= NearEqual(x, ay,  r,    a);

	// std::complex<double> 
	std::complex<double> cx(x);
	std::complex<double> cy(y);

	// AD< std::complex<double> > 
	AD< std::complex<double> > acx(x);
	AD< std::complex<double> > acy(y);

	ok &= NearEqual(acx, acy, zero, a);
	ok &= NearEqual(acx,  cy, r, zero);
	ok &= NearEqual(acx,   y, r,    a);
	ok &= NearEqual( cx, acy, r,    a);
	ok &= NearEqual(  x, acy, r,    a);

	return ok;
}


Input File: example/near_equal_ext.cpp
4.5.3: AD Boolean Functions

4.5.3.a: Syntax
CPPAD_BOOL_UNARY(Base, unary_name)
b = unary_name(u)
b = unary_name(x)
CPPAD_BOOL_BINARY(Base, binary_name)
b = binary_name(u, v)
b = binary_name(x, y)

4.5.3.b: Purpose
Create a bool valued function that has AD<Base> arguments.

4.5.3.c: unary_name
This is the name of the bool valued function with one argument (as it is used in the source code). The user must provide a version of unary_name where the argument has type Base. CppAD uses this to create a version of unary_name where the argument has type AD<Base>.

4.5.3.d: u
The argument u has prototype
     const 
Base &u
It is the value at which the user provided version of unary_name is to be evaluated. It is also used for the first argument to the user provided version of binary_name.

4.5.3.e: x
The argument x has prototype
     const AD<
Base> &x
It is the value at which the CppAD provided version of unary_name is to be evaluated. It is also used for the first argument to the CppAD provided version of binary_name.

4.5.3.f: b
The result b has prototype
     bool 
b

4.5.3.g: Create Unary
The preprocessor macro invocation
     CPPAD_BOOL_UNARY(
Base, unary_name)
defines the version of unary_name with an AD<Base> argument. This can be within a namespace (not the CppAD namespace) but must be outside of any routine.

4.5.3.h: binary_name
This is the name of the bool valued function with two arguments (as it is used in the source code). The user must provide a version of binary_name where the arguments have type Base. CppAD uses this to create a version of binary_name where the arguments have type AD<Base>.

4.5.3.i: v
The argument v has prototype
     const 
Base &v
It is the second argument to the user provided version of binary_name.

4.5.3.j: y
The argument y has prototype
     const AD<
Base> &y
It is the second argument to the CppAD provided version of binary_name.

4.5.3.k: Create Binary
The preprocessor macro invocation
     CPPAD_BOOL_BINARY(
Base, binary_name)
defines the version of binary_name with AD<Base> arguments. This can be within a namespace (not the CppAD namespace) but must be outside of any routine.

4.5.3.l: Operation Sequence
The result of this operation is not an 9.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 9.4.g.b: operation sequence .

4.5.3.m: Example
The file 4.5.3.1: BoolFun.cpp contains an example and test of these operations. It returns true if it succeeds and false otherwise.

4.5.3.n: Deprecated
The preprocessor symbols CppADCreateUnaryBool and CppADCreateBinaryBool are defined to be the same as CPPAD_BOOL_UNARY and CPPAD_BOOL_BINARY respectively (but their use is deprecated).
Input File: cppad/local/bool_fun.hpp
4.5.3.1: AD Boolean Functions: Example and Test
 

# include <cppad/cppad.hpp>
# include <complex>


// define abbreviation for double precision complex 
typedef std::complex<double> Complex;

namespace {
	// a unary bool function with Complex argument
	static bool IsReal(const Complex &x)
	{	return x.imag() == 0.; }

	// a binary bool function with Complex arguments
	static bool AbsGeq(const Complex &x, const Complex &y)
	{	double axsq = x.real() * x.real() + x.imag() * x.imag();
		double aysq = y.real() * y.real() + y.imag() * y.imag();

		return axsq >= aysq;
	}

	// Create version of IsReal with AD<Complex> argument
	// inside of namespace and outside of any other function.
	CPPAD_BOOL_UNARY(Complex, IsReal)

	// Create version of AbsGeq with AD<Complex> arguments
	// inside of namespace and outside of any other function.
	CPPAD_BOOL_BINARY(Complex, AbsGeq)

}
bool BoolFun(void)
{	bool ok = true;

	CppAD::AD<Complex> x = Complex(1.,  0.);
	CppAD::AD<Complex> y = Complex(1.,  1.);

	ok &= IsReal(x);
	ok &= ! AbsGeq(x, y);

	return ok;
}


Input File: example/bool_fun.cpp
4.5.4: Is an AD Object a Parameter or Variable

4.5.4.a: Syntax
b = Parameter(x)
b = Variable(x)

4.5.4.b: Purpose
Determine if x is a 9.4.h: parameter or 9.4.l: variable .

4.5.4.c: x
The argument x has prototype
     const AD<
Base>    &x
     const VecAD<
Base> &x

4.5.4.d: b
The return value b has prototype
     bool 
b
The return value for Parameter (Variable) is true if and only if x is a parameter (variable). Note that a 4.6: VecAD<Base> object is a variable if any element of the vector depends on the independent variables.

4.5.4.e: Operation Sequence
The result of this operation is not an 9.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 9.4.g.b: operation sequence .

4.5.4.f: Example
The file 4.5.4.1: ParVar.cpp contains an example and test of these functions. It returns true if it succeeds and false otherwise.
Input File: cppad/local/par_var.hpp
4.5.4.1: AD Parameter and Variable Functions: Example and Test
 

# include <cppad/cppad.hpp>

bool ParVar(void)
{	bool ok = true;

	using CppAD::AD;
	using CppAD::VecAD;
	using CppAD::Parameter;
	using CppAD::Variable;

	// declare independent variables and start tape recording
	size_t n = 1;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]     = 0.;
	ok &= Parameter(x[0]);     // x[0] is a parameter here
	CppAD::Independent(x);
	ok &= Variable(x[0]);      // now x[0] is a variable 

	// dependent variable vector
	size_t m = 2;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = 2.;
	ok  &= Parameter(y[0]);    // y[0] does not depend on x[0]
	y[1] = abs(x[0]);
	ok  &= Variable(y[1]);     // y[1] does depend on x[0] 

	// VecAD objects
	VecAD<double> z(2);
	z[0] = 0.;
	z[1] = 1.;
	ok  &= Parameter(z);      // z does not depend on x[0]
	z[x[0]] = 2.;
	ok  &= Variable(z);       // z depends on x[0]
	

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y);

	// check that now all AD<double> objects are parameters
	ok &= Parameter(x[0]); ok &= ! Variable(x[0]);
	ok &= Parameter(y[0]); ok &= ! Variable(y[0]);
	ok &= Parameter(y[1]); ok &= ! Variable(y[1]);

	// check that the VecAD<double> object is a parameter
	ok &= Parameter(z);

	return ok;
}


Input File: example/par_var.cpp
4.5.5: Check if Equal and Correspond to Same Operation Sequence

4.5.5.a: Syntax
b = EqualOpSeq(x, y)

4.5.5.b: Purpose
Determine if x and y are equal, and if they are 9.4.l: variables , determine if they correspond to the same 9.4.g.b: operation sequence .

4.5.5.c: Motivation
Sometimes it is useful to cache information and only recalculate when a function's arguments change. In the case of AD variables, it may be important to know not only whether the argument values are equal, but also whether they are related to the 9.4.j.c: independent variables by the same operation sequence. After the assignment
     
y = x
these two AD objects would not only have equal values, but would also correspond to the same operation sequence.

4.5.5.d: x
The argument x has prototype
     const AD<
Base> &x

4.5.5.e: y
The argument y has prototype
     const AD<
Base> &y

4.5.5.f: b
The result b has prototype
     bool 
b
The result is true if and only if one of the following cases holds:
  1. Both x and y are variables and correspond to the same operation sequence.
  2. Both x and y are parameters, Base is an AD type, and EqualOpSeq( Value(x) , Value(y) ) is true.
  3. Both x and y are parameters, Base is not an AD type, and x == y is true.


4.5.5.g: Example
The file 4.5.5.1: EqualOpSeq.cpp contains an example and test of EqualOpSeq. It returns true if it succeeds and false otherwise.
Input File: cppad/local/equal_op_seq.hpp
4.5.5.1: EqualOpSeq: Example and Test
 
# include <cppad/cppad.hpp>

bool EqualOpSeq(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::EqualOpSeq;

	// domain space vector
	size_t n  = 1;
	double x0 = 1.;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0; 

	// declare independent variables and start tape recording
	CppAD::Independent(x);

	AD<double> a = 1. + x[0];  // this variable is 1 + x0
	AD<double> b = 2. * x[0];  // this variable is 2 * x0

	// both a and b are variables
	ok &= (a == b);            // 1 + 1     == 2 * 1
	ok &= ! EqualOpSeq(a, b);  // 1 + x[0]  != 2 * x[0] 

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = a;

	// both y[0] and a are variables
	ok &= EqualOpSeq(y[0], a); // 1 + x[0] == 1 + x[0]

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// both a and b are parameters (after the creation of f above)
	ok &= EqualOpSeq(a, b);    // 1 + 1 == 2 * 1

	return ok;
}


Input File: example/equal_op_seq.cpp
4.6: AD Vectors that Record Index Operations

4.6.a: Syntax
VecAD<Base> v(n)
v.size()
b = v[i]
r = v[x]

4.6.b: Purpose
If either v or x is a 9.4.l: variable , the indexing operation
     
y = v[x]
is recorded in the corresponding AD of Base 9.4.g.b: operation sequence and transferred to the corresponding 5: ADFun object f. Such an index can change each time zero order 5.6.1: f.Forward is used; i.e., f is evaluated with new value for the 9.4.j.c: independent variables . Note that the value of y depends on the value of x in a discrete fashion and CppAD computes its partial derivative with respect to x as zero.

4.6.c: Alternatives
If only the values in the vector, and not the indices, depend on the independent variables, the class Vector< AD<Base> > is much more efficient for storing AD values, where Vector is any 6.7: SimpleVector template class. If only the indices, and not the values in the vector, depend on the independent variables, the 4.4.5: Discrete functions are a much more efficient way to represent these vectors.

4.6.d: VecAD<Base>::reference
The result y has type
     VecAD<
Base>::reference
which is very much like the AD<Base> type with some notable exceptions:

4.6.d.a: Exceptions
  1. The object y cannot be used with the 4.3.1: Value function to compute the corresponding Base value. If v is not a 9.4.l: variable
         v[
    i]
    can be used to compute the corresponding Base value.
  2. The object y cannot be used with the 4.4.1: computed assignments operators +=, -=, *=, or /=. For example, the following syntax is not valid:
         
    v[x] += z;
    no matter what the types of z.
  3. Assignment to y returns a void. For example, the following syntax is not valid:
         
    z = v[x] = u;
    no matter what the types of z, and u.
  4. The 4.4.4: CondExp functions do not accept VecAD<Base>::reference arguments. For example, the following syntax is not valid:
         CondExpGt(
    y, z, u, v)
    no matter what the types of z, u, and v.
  5. The 4.5.4: Parameter and Variable functions cannot be used with VecAD<Base>::reference arguments (use the entire VecAD<Base> vector instead).
  6. The vectors passed to 5.1: Independent must have elements of type AD<Base>; i.e., 4.6: VecAD vectors cannot be passed to Independent.
  7. If one uses this type in an AD of Base 9.4.g.b: operation sequence , 9.4.i: sparsity pattern calculations (5.6.3: Sparse ) are only valid for the current independent variable values, instead of for all independent variables.


4.6.e: Constructor

4.6.e.a: v
The syntax
     VecAD<
Base> v(n)
creates a VecAD object v with n elements. The initial value of the elements of v is unspecified.

4.6.f: n
The argument n has prototype
     size_t 
n

4.6.g: size
The syntax
     
v.size()
returns the number of elements in the vector v; i.e., the value of n when it was constructed.

4.6.h: size_t Indexing
We refer to the syntax
     
b = v[i]
as size_t indexing of a VecAD object. This indexing is only valid if the vector v is a 4.5.4: parameter ; i.e., it does not depend on the independent variables.

4.6.h.a: i
The operand i has prototype
     size_t 
i
It must be greater than or equal zero and less than n; i.e., less than the number of elements in v.

4.6.h.b: b
The result b has prototype
     
Base b
and is a reference to the i-th element in the vector v. It can be used to change the element value; for example,
     
v[i] = c
is valid where c is a Base object. The reference b is no longer valid once the destructor for v is called; for example, when v falls out of scope.

4.6.i: AD Indexing
We refer to the syntax
     
r = v[x]
as AD indexing of a VecAD object.

4.6.i.a: x
The argument x has prototype
     const AD<
Base> &x
The value of x must be greater than or equal zero and less than n; i.e., less than the number of elements in v.

4.6.i.b: r
The result r has prototype
     VecAD<
Base>::reference r
The object r has an AD type and its operations are recorded as part of the same AD of Base 9.4.g.b: operation sequence as for AD<Base> objects. It acts as a reference to the element with index  {\rm floor} (x) in the vector v (  {\rm floor} (x) is the greatest integer less than or equal x). Because it is a reference, it can be used to change the element value; for example,
     
v[x] = z
is valid where z is a VecAD<Base>::reference object. As a reference, r is no longer valid once the destructor for v is called; for example, when v falls out of scope.

4.6.j: Example
The file 4.6.1: VecAD.cpp contains an example and test using VecAD vectors. It returns true if it succeeds and false otherwise.

4.6.k: Speed and Memory
The 4.6: VecAD vector type is inefficient because every time an element of a vector is accessed, a new CppAD 9.4.l: variable is created on the tape using either the Ldp or Ldv operation (unless all of the elements of the vector are 9.4.h: parameters ). The effect of this can be seen by executing the following steps:
  1. In the file cppad/local/forward0sweep.h, change the definition of CPPAD_FORWARD0SWEEP_TRACE to
     
    	# define CPPAD_FORWARD0SWEEP_TRACE 1
    
  2. In the Example directory, execute the command
     
    	./OneTest LuVecADOk "lu_vec_ad.cpp -DNDEBUG" > LuVecADOk.log
    
    This will write a trace of all the forward tape operations, for the test case 8.2.3.1: LuVecADOk.cpp , to the file LuVecADOk.log.
  3. In the Example directory execute the commands
     
    	grep "op="           LuVecADOk.log | wc -l
    	grep "op=Ld[vp]"     LuVecADOk.log | wc -l
    	grep "op=St[vp][vp]" LuVecADOk.log | wc -l
    
    The first command counts the number of operators in the tracing, the second counts the number of VecAD load operations, and the third counts the number of VecAD store operations. (For CppAD version 05-11-20 these counts were 956, 348, and 118 respectively.)

Input File: cppad/local/vec_ad.hpp
4.6.1: AD Vectors that Record Index Operations: Example and Test
 

# include <cppad/cppad.hpp>
# include <cassert>

namespace {
	// return the vector x that solves the following linear system 
	//	a[0] * x[0] + a[1] * x[1] = b[0]
	//	a[2] * x[0] + a[3] * x[1] = b[1]
	// in a way that will record pivot operations on the AD<double> tape
	typedef CPPAD_TEST_VECTOR< CppAD::AD<double> > Vector;
	Vector Solve(const Vector &a , const Vector &b)
	{	using namespace CppAD;
		assert(a.size() == 4 && b.size() == 2);	

		// copy the vector b into the VecAD object B
		VecAD<double> B(2); 
		AD<double>    u;
		for(u = 0; u < 2; u += 1.)
			B[u] = b[ Integer(u) ];

		// copy the matrix a into the VecAD object A
		VecAD<double> A(4); 
		for(u = 0; u < 4; u += 1.)
			A[u] = a [ Integer(u) ];

		// tape AD operation sequence that determines the row of A
		// with maximum absolute element in column zero
		AD<double> zero(0), one(1);
		AD<double> rmax = CondExpGt(abs(a[0]), abs(a[2]), zero, one);

		// divide row rmax by A(rmax, 0)
		A[rmax * 2 + 1]  = A[rmax * 2 + 1] / A[rmax * 2 + 0];
		B[rmax]          = B[rmax]         / A[rmax * 2 + 0];
		A[rmax * 2 + 0]  = one;

		// subtract A(other,0) times row rmax from other row
		AD<double> other   = one - rmax;
		A[other * 2 + 1]   = A[other * 2 + 1]
		                   - A[other * 2 + 0] * A[rmax * 2 + 1];
		B[other]           = B[other]
		                   - A[other * 2 + 0] * B[rmax];
		A[other * 2 + 0] = zero;

		// back substitute to compute the solution vector x
		CPPAD_TEST_VECTOR< AD<double> > x(2);
		size_t iother = Integer(other);
		size_t imax   = Integer(rmax);
		x[iother]     = B[other] / A[other * 2 + 1];
		x[imax ]      = (B[rmax] - A[rmax * 2 + other] * x[iother])
		              / A[rmax * 2 + 0];

		return x;
	}
}

bool VecAD(void)
{	bool ok = true;
	
	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n = 4;
	CPPAD_TEST_VECTOR< AD<double> > X(n);
	X[0] = 2.; X[1] = 0.;  // 2 * identity matrix (rmax in Solve will be 0)
	X[2] = 0.; X[3] = 2.; 

	// declare independent variables and start tape recording
	CppAD::Independent(X);

	// define the vector b
	CPPAD_TEST_VECTOR< AD<double> > B(2);
	B[0] = 0.;
	B[1] = 1.;

	// range space vector solves X * Y = b
	size_t m = 2;
	CPPAD_TEST_VECTOR< AD<double> > Y(m);
	Y = Solve(X, B);

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y); 

	// check value 
	ok &= NearEqual(Y[0] , B[0] / X[0],  1e-10 , 1e-10);
	ok &= NearEqual(Y[1] , B[1] / X[3],  1e-10 , 1e-10);

	// forward computation of partials w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.; dx[1] = 0.;
	dx[2] = 0.; dx[3] = 0.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0], - B[0] / (X[0] * X[0]) , 1e-10, 1e-10);
	ok   &= NearEqual(dy[1],                     0. , 1e-10, 1e-10);

	// compute the solution for a new x matrix such that pivoting
	// on the original rmax row would divide by zero
	CPPAD_TEST_VECTOR<double> x(n);  
	CPPAD_TEST_VECTOR<double> y(m);
	x[0] = 0.; x[1] = 2.;
	x[2] = 2.; x[3] = 0.;
	y    = f.Forward(0, x);
	ok &= NearEqual(y[0] , B[1] / x[2],  1e-10 , 1e-10);
	ok &= NearEqual(y[1] , B[0] / x[1],  1e-10 , 1e-10);
	
	// forward computation of partials w.r.t. x[1]
	dx[0] = 0.; dx[1] = 1.;
	dx[2] = 0.; dx[3] = 0.;
	dy    = f.Forward(1, dx);
	ok   &= NearEqual(dy[0],                     0. , 1e-10, 1e-10);
	ok   &= NearEqual(dy[1], - B[0] / (x[1] * x[1]) , 1e-10, 1e-10);

	return ok;
}


Input File: example/vec_ad.cpp
4.7: AD<Base> Requirements for Base Type

4.7.a: Purpose
This section lists the requirements for the type Base so that the type AD<Base> can be used. In the case where Base is float, double, std::complex<float>, std::complex<double>, or AD<Other>, these requirements are provided by including the file cppad/cppad.hpp.

4.7.b: Warning
This is a preliminary version of these specifications and it is subject to change in future versions of CppAD.

4.7.c: Numeric Type
The type Base must support all the operations for a 6.5: NumericType .

4.7.d: declare.hpp
The base type requirements must be included before the rest of CppAD. Before that, it is necessary to declare the enum type CompareOp (and possibly other things). This should be done with the following include command:
 
	# include <cppad/local/declare.hpp>


4.7.e: CondExp
The type Base must support the syntax
     
result = CondExpOp(cop, left, right, trueCase, falseCase)
which computes the result for the corresponding 4.4.4: CondExp function. The argument cop has prototype
     enum CppAD::CompareOp cop
The possible values for this enum type are CompareLt, CompareLe, CompareEq, CompareGe, and CompareGt. The other arguments have the prototypes
     const Base      &left ,
     const Base     &right ,
     const Base  &trueCase ,
     const Base &falseCase
The result has prototype
     Base &result

4.7.e.a: Ordered Type
If Base is a relatively simple type (does not record operations for future calculations) and it supports the <, <=, ==, >=, and > operators, its CondExpOp function can be defined by
namespace CppAD {
     inline Base CondExpOp(
          enum CppAD::CompareOp      cop ,
          const Base               &left ,
          const Base              &right ,
          const Base           &trueCase ,
          const Base          &falseCase )
     {    return CppAD::CondExpTemplate(
               cop, left, right, trueCase, falseCase);
     }
}

4.7.e.b: Not Ordered
If the type Base does not support ordering, the CondExpOp function does not make sense. In this case one might (but need not) define CondExpOp as follows:
namespace CppAD {
     inline Base CondExpOp(
          enum CompareOp      cop ,
          const Base        &left ,
          const Base       &right ,
          const Base    &trueCase ,
          const Base   &falseCase )
     {    // attempt to use CondExp with a Base argument
          assert(0);
          return Base(0);
     }
}

4.7.f: EqualOpSeq
If function 4.5.5: EqualOpSeq is used with arguments of type AD<Base>, the type Base must support the syntax
     
b = EqualOpSeq(x, y)
which returns true if and only if x is equal to y (this is used by the 4.5.5: EqualOpSeq function). The arguments x and y have prototypes
     const Base &x
     const Base &y
The return value b has prototype
     bool b

4.7.f.a: Suggestion
If Base is a relatively simple type (does not record operations for future calculations), the EqualOpSeq function can be defined by
namespace CppAD {
     inline bool EqualOpSeq(const Base &x, const Base &y)
     {    return x == y; }
}

4.7.g: Identical
If the type Base records what operations are performed by AD<Base>, CppAD must know if the Base value corresponding to an operation will always be the same. For example, suppose the current operation is between two AD<Base> objects where Base is AD<double>; some optimizations depend on one of the objects being a 9.4.h: parameter and on its corresponding Base value also being a parameter. In general, the type Base must support the following functions:
Syntax Result
b = IdenticalPar(x)    the Base value will always be the same
b = IdenticalZero(x)    x equals zero and IdenticalPar(x)
b = IdenticalOne(x)    x equals one and IdenticalPar(x)
b = IdenticalEqualPar(x, y)    x equals y, IdenticalPar(x) and IdenticalPar(y)
The argument x has prototype
     const Base x
If it is present, the argument y has prototype
     const Base y
The result b has prototype
     bool b

4.7.g.a: Suggestion
Note that false is a slow but safer option for all of these functions. If Base is a relatively simple type (does not record operations for future calculations), the IdenticalPar function can be defined by
namespace CppAD {
     inline bool IdenticalPar(const Base &x)
     {    return true; }
}
and the IdenticalZero function can be defined by
namespace CppAD {
     inline bool IdenticalZero(const Base &x)
     {    return x == Base(0); }
}
The other functions could be defined in a similar manner.

If the Base type records operations and may change the value of x or y during some future calculation, these functions should return false. If you are not sure what should be returned, false is a safer value (but makes some calculations slower).

4.7.h: Integer
The type Base must support the syntax
     
i = Integer(x)
which converts x to an int. The argument x has prototype
     const Base &x
and the return value i has prototype
     int i

4.7.h.a: Suggestion
The Base version of the Integer function might be defined by
namespace CppAD {
     inline int Integer(const Base &x)
     {    return static_cast<int>(x); }
}

4.7.i: Ordered
So that CppAD can be used with a base type that does not support the ordering operations >, >=, <, or <=, Base must support the following functions:
Syntax Result
b = GreaterThanZero(x)     x > 0
b = GreaterThanOrZero(x)     x >= 0
b = LessThanZero(x)     x < 0
b = LessThanOrZero(x)     x <= 0
The argument x has prototype
     const Base &x
and the result b has prototype
     bool b

4.7.i.a: Ordered Type
If the type Base supports ordered operations, these functions should have their corresponding definitions. For example,
namespace CppAD {
     inline bool GreaterThanZero(const Base &x)
     {    return (x > 0);
     }
}
The other functions would replace > by the corresponding operator.

4.7.i.b: Not Ordered
If the type Base does not support ordering, one might (but need not) define GreaterThanZero as follows:
namespace CppAD {
     inline bool GreaterThanZero(const Base &x)
     {    // attempt to use GreaterThanZero with a Base argument
          assert(0);
          return false;
     }
}
The other functions would have the corresponding definition.

4.7.j: pow
The type Base must support the syntax
     
z = pow(x, y)
which computes  z = x^y . The arguments x and y have prototypes
     const Base &x
     const Base &y
The return value z has prototype
     Base z

4.7.k: Standard Math Unary
The type Base must support the following 4.4.2: standard math unary functions :
Syntax Result
y = acos(x) inverse cosine
y = asin(x) inverse sine
y = atan(x) inverse tangent
y = cos(x) cosine
y = cosh(x) hyperbolic cosine
y = exp(x) exponential
y = log(x) natural logarithm
y = sin(x) sine
y = sinh(x) hyperbolic sine
y = sqrt(x) square root
y = tan(x) tangent
The argument x has prototype
     const Base &x
and the result y has prototype
     Base y

4.7.l: Example
The files 4.7.1: base_complex.hpp and 4.7.2: base_adolc.hpp contain example implementations of these requirements.
Input File: omh/base_require.omh
4.7.1: Enable use of AD<Base> where Base is std::complex<double>

4.7.1.a: Example
The file 4.7.1.1: ComplexPoly.cpp contains an example use of std::complex<double> type for a CppAD Base type. It returns true if it succeeds and false otherwise.

4.7.1.b: See Also
The file 4.7.1.2: not_complex_ad.cpp contains an example using complex arithmetic where the function is not complex differentiable.

4.7.1.c: Include File
This file is included before <cppad/cppad.hpp> so it is necessary to define the error handler in addition to including 4.7.d: declare.hpp
 
# include <complex>
# include <cppad/declare.hpp>
# include <cppad/error_handler.hpp>


4.7.1.d: CondExpOp
The conditional expression function 4.4.4: CondExp requires ordered comparisons (e.g., <), and the C++ standard complex types do not allow for ordered comparisons. Thus, we make it an error to use the conditional comparisons with complex types:
 
namespace CppAD {
	inline std::complex<double> CondExpOp(
		enum CppAD::CompareOp      cop        ,
		const std::complex<double> &left      ,
		const std::complex<double> &right     ,
		const std::complex<double> &trueCase  ,
		const std::complex<double> &falseCase )
	{	CppAD::ErrorHandler::Call(
			true     , __LINE__ , __FILE__ ,
			"std::complex<double> CondExpOp(...)",
			"Error: cannot use CondExp with a complex type"
		);
		return std::complex<double>(0);
	}
}


4.7.1.e: EqualOpSeq
Complex numbers do not carry operation sequence information. Thus they are equal in this sense if and only if their values are equal.
 
namespace CppAD {
	inline bool EqualOpSeq(
		const std::complex<double> &x , 
		const std::complex<double> &y )
	{	return x == y; 
	}
}


4.7.1.f: Identical
Complex numbers do not carry operation sequence information. Thus they are all parameters, so the Identical functions just check values.
 
namespace CppAD {
	inline bool IdenticalPar(const std::complex<double> &x)
	{	return true; }
	inline bool IdenticalZero(const std::complex<double> &x)
	{	return (x == std::complex<double>(0., 0.) ); }
	inline bool IdenticalOne(const std::complex<double> &x)
	{	return (x == std::complex<double>(1., 0.) ); }
	inline bool IdenticalEqualPar(
		const std::complex<double> &x, const std::complex<double> &y)
	{	return (x == y); }
}


4.7.1.g: Ordered
 
namespace CppAD {
	inline bool GreaterThanZero(const std::complex<double> &x)
	{	CppAD::ErrorHandler::Call(
			true     , __LINE__ , __FILE__ ,
			"GreaterThanZero(x)",
			"Error: cannot use GreaterThanZero with complex"
		);
		return false;
	}
	inline bool GreaterThanOrZero(const std::complex<double> &x)
	{	CppAD::ErrorHandler::Call(
			true     , __LINE__ , __FILE__ ,
			"GreaterThanOrZero(x)",
			"Error: cannot use GreaterThanOrZero with complex"
		);
		return false;
	}
	inline bool LessThanZero(const std::complex<double> &x)
	{	CppAD::ErrorHandler::Call(
			true     , __LINE__ , __FILE__ ,
			"LessThanZero(x)",
			"Error: cannot use LessThanZero with complex"
		);
		return false;
	}
	inline bool LessThanOrZero(const std::complex<double> &x)
	{	CppAD::ErrorHandler::Call(
			true     , __LINE__ , __FILE__ ,
			"LessThanOrZero(x)",
			"Error: cannot use LessThanOrZero with complex"
		);
		return false;
	}
}


4.7.1.h: Integer
The implementation of this function must agree with the CppAD user specifications for complex arguments to the 4.3.2.d.b: Integer function:
 
namespace CppAD {
	inline int Integer(const std::complex<double> &x)
	{	return static_cast<int>( x.real() ); }
}


4.7.1.i: Standard Functions

4.7.1.i.a: Valid Complex Functions
The following standard math functions, that are required by 4.7: base_require , are defined by std::complex: cos, cosh, exp, log, pow, sin, sinh, sqrt.
 
# define CPPAD_USER_MACRO(function)                                   \
inline std::complex<double> function(const std::complex<double> &x)   \
{	return std::function(x); }

namespace CppAD {
	CPPAD_USER_MACRO(cos)
	CPPAD_USER_MACRO(cosh)
	CPPAD_USER_MACRO(exp)
	CPPAD_USER_MACRO(log)
	inline std::complex<double> pow(
		const std::complex<double> &x , 
		const std::complex<double> &y )
	{	return std::pow(x, y); }
	CPPAD_USER_MACRO(sin)
	CPPAD_USER_MACRO(sinh)
	CPPAD_USER_MACRO(sqrt)
}
# undef CPPAD_USER_MACRO


4.7.1.i.b: Invalid Complex Functions
The other standard math functions (and abs) required by 4.7: base_require are not defined for complex types (see 4.4.3.1.f: abs ). Hence we make it an error to use them. (Note that the standard math functions are not defined in the CppAD namespace.)
 
# define CPPAD_USER_MACRO(function)                                          \
inline std::complex<double> function(const std::complex<double> &x)          \
{      CppAD::ErrorHandler::Call(                                            \
               true     , __LINE__ , __FILE__ ,                              \
               "std::complex<double>",                                       \
               "Error: cannot use " #function " with complex<double> "       \
       );                                                                    \
       return std::complex<double>(0);                                       \
}

namespace CppAD {
	CPPAD_USER_MACRO(acos)
	CPPAD_USER_MACRO(asin)
	CPPAD_USER_MACRO(atan)
}
# undef CPPAD_USER_MACRO

Input File: cppad/local/base_complex.hpp
4.7.1.1: Complex Polynomial: Example and Test

4.7.1.1.a: See Also
4.7.1.2: not_complex_ad.cpp

4.7.1.1.b: Poly
Select this link to view specifications for 6.11: Poly :
 

# include <cppad/cppad.hpp>
# include <complex>

bool complex_poly(void)
{	bool ok    = true;
	size_t deg = 4;

	using CppAD::AD;
	using CppAD::Poly;
	typedef std::complex<double> Complex; 

	// polynomial coefficients
	CPPAD_TEST_VECTOR< Complex >     a   (deg + 1); // coefficients for p(z)
	CPPAD_TEST_VECTOR< AD<Complex> > A   (deg + 1); 
	size_t i;
	for(i = 0; i <= deg; i++)
		A[i] = a[i] = Complex(i, i);

	// independent variable vector
	CPPAD_TEST_VECTOR< AD<Complex> > Z(1);
	Complex z = Complex(1., 2.);
 	Z[0]      = z;
	Independent(Z);

	// dependent variable vector and indices
	CPPAD_TEST_VECTOR< AD<Complex> > P(1);

	// dependent variable values
	P[0] = Poly(0, A, Z[0]);

	// create f: Z -> P and vectors used for derivative calculations
	CppAD::ADFun<Complex> f(Z, P);
	CPPAD_TEST_VECTOR<Complex> v( f.Domain() );
	CPPAD_TEST_VECTOR<Complex> w( f.Range() );

	// check first derivative w.r.t z
	v[0]      = 1.;
	w         = f.Forward(1, v);
	Complex p = Poly(1, a, z);
	ok &= ( w[0]  == p );

	// second derivative w.r.t z is 2 times its second order Taylor coeff
	v[0] = 0.;
	w    = f.Forward(2, v);
	p    = Poly(2, a, z);
	ok &= ( 2. * w[0]  == p );

	return ok;
}


Input File: example/complex_poly.cpp
4.7.1.2: Not Complex Differentiable: Example and Test

4.7.1.2.a: Not Complex Differentiable
If x is complex, the functions real(x), imag(x), conj(x), and abs(x) are examples of functions that are not complex differentiable.

4.7.1.2.b: See Also
4.7.1.1: ComplexPoly.cpp

4.7.1.2.c: Poly
Select this link to view specifications for 6.11: Poly :
 

# include <cppad/cppad.hpp>
# include <complex>

bool not_complex_ad(void)
{	bool ok    = true;
	size_t deg = 4;

	using CppAD::AD;
	using CppAD::Poly;
	typedef std::complex<double>              Complex; 
	typedef std::complex< CppAD::AD<double> > ComplexAD; 

	// polynomial coefficients
	CPPAD_TEST_VECTOR< Complex >   a   (deg + 1); // coefficients for p(z)
	CPPAD_TEST_VECTOR< ComplexAD > A   (deg + 1); 
	size_t i;
	for(i = 0; i <= deg; i++)
	{	a[i] = Complex(i, i);
		A[i] = ComplexAD( AD<double>(i) , AD<double>(i) );
	}

	// declare independent variables and start taping
	CPPAD_TEST_VECTOR< AD<double> > Z_real(1);
	double z_real = 1.;
 	Z_real[0]     = z_real;
	Independent(Z_real);

	// complex calculations
	double z_imag = 2.;
	ComplexAD Z = ComplexAD( Z_real[0], AD<double>(z_imag) );
	ComplexAD P = Poly(0, A, Z);

	// range space vector
	CPPAD_TEST_VECTOR< AD<double> > P_real(1);
	P_real[0] = P.real();   // real() is not complex differentiable

	// create f: Z_real -> P_real  and stop taping
	CppAD::ADFun<double> f(Z_real, P_real);

	// check first derivative w.r.t z
	CPPAD_TEST_VECTOR<double> v( f.Domain() );
	CPPAD_TEST_VECTOR<double> w( f.Range() );
	v[0]      = 1.;
	w         = f.Forward(1, v);
	Complex z = Complex(z_real, z_imag);
	Complex p = Poly(1, a, z);
	ok &= ( w[0]  == p.real() );

	// second derivative w.r.t z is 2 times its second order Taylor coeff
	v[0] = 0.;
	w    = f.Forward(2, v);
	p    = Poly(2, a, z);
	ok &= ( 2. * w[0]  == p.real() );

	return ok;
}


Input File: example/not_complex_ad.cpp
4.7.2: Enable use of AD<Base> where Base is Adolc's adouble Type

4.7.2.a: Syntax
This file is located in the example directory. It can be copied into the current working directory and included with the command:
     # include "base_adolc.hpp"

4.7.2.b: Example
The file 4.7.2.1: mul_level_adolc.cpp contains an example use of Adolc's adouble type for a CppAD Base type. It returns true if it succeeds and false otherwise. The file 8.1.9: ode_taylor_adolc.cpp contains a more realistic (and complex) example.

4.7.2.c: Include File
This file is included before <cppad/cppad.hpp> so it is necessary to define the error handler in addition to including 4.7.d: declare.hpp
 
# include <cppad/declare.hpp>
# include <cppad/error_handler.hpp>



4.7.2.d: Standard Math Functions Defined by Adolc Package
The following 4.7: required functions are defined by the Adolc package:
acos, asin, atan, cos, cosh, exp, log, pow, sin, sinh, sqrt, tan.

4.7.2.e: CondExpOp
The type adouble supports a conditional assignment function with the syntax
     condassign(a, b, c, d)
which evaluates to
     
a = (b > 0) ? c : d;
This enables one to include conditionals in the recording of adouble operations and later evaluation for different values of the independent variables (in the same spirit as the CppAD 4.4.4: CondExp function).
 
namespace CppAD {
	inline adouble CondExpOp(
		enum  CppAD::CompareOp     cop ,
		const adouble            &left ,
		const adouble           &right ,
		const adouble        &trueCase ,
		const adouble       &falseCase )
	{	adouble result;
		switch( cop )
		{
			case CompareLt: // left < right
			condassign(result, right - left, trueCase, falseCase);
			break;

			case CompareLe: // left <= right
			condassign(result, left - right, falseCase, trueCase);
			break;

			case CompareEq: // left == right
			condassign(result, left - right, falseCase, trueCase);
			condassign(result, right - left, falseCase, result);
			break;

			case CompareGe: // left >= right
			condassign(result, right - left, falseCase, trueCase);
			break;

			case CompareGt: // left > right
			condassign(result, left - right, trueCase, falseCase);
			break;

			default:
			CppAD::ErrorHandler::Call(
				true     , __LINE__ , __FILE__ ,
				"CppAD::CondExp",
				"Error: for unknown reason."
			);
			result = trueCase;
		}
		return result;
	}
}


4.7.2.f: EqualOpSeq
The Adolc user interface does not specify a way to determine if two adouble variables correspond to the same operation sequence. Make EqualOpSeq an error if it gets used:
 
namespace CppAD {
	inline bool EqualOpSeq(const adouble &x, const adouble &y)
	{	CppAD::ErrorHandler::Call(
			true     , __LINE__ , __FILE__ ,
			"CppAD::EqualOpSeq(x, y)",
			"Error: adouble does not support EqualOpSeq."
		);
		return false;
	}
}


4.7.2.g: Identical
The Adolc user interface does not specify a way to determine if an adouble depends on the independent variables. To be safe (but slow) return false in all the cases below.
 
namespace CppAD {
	inline bool IdenticalPar(const adouble &x)
	{	return false; }
	inline bool IdenticalZero(const adouble &x)
	{	return false; }
	inline bool IdenticalOne(const adouble &x)
	{	return false; }
	inline bool IdenticalEqualPar(const adouble &x, const adouble &y)
	{	return false; }
}


4.7.2.h: Ordered
 
	inline bool GreaterThanZero(const adouble &x)
	{    return (x > 0); }
	inline bool GreaterThanOrZero(const adouble &x)
	{    return (x >= 0); }
	inline bool LessThanZero(const adouble &x)
	{    return (x < 0); }
	inline bool LessThanOrZero(const adouble &x)
	{    return (x <= 0); }


4.7.2.i: Integer
 
	inline int Integer(const adouble &x)
	{    return static_cast<int>( x.getValue() ); }

Input File: example/base_adolc.hpp
4.7.2.1: Using Adolc with Multiple Levels of Taping: Example and Test

4.7.2.1.a: Purpose
This is an example and test of using Adolc's adouble type, together with CppAD's AD<adouble> type, for multiple levels of taping. The example computes  \[
     \frac{d}{dx} \left[ f^{(1)} (x) * v \right]
\] 
where  f : \R^n \rightarrow \R and  v \in \R^n . The example 5.6.2.2.2: HesTimesDir.cpp computes the same value using only one level of taping (more efficient) and the identity  \[
     \frac{d}{dx} \left[ f^{(1)} (x) * v \right] = f^{(2)} (x) * v
\] 
The example 8.1.11.1: mul_level.cpp computes the same values using AD<double> and AD< AD<double> >.

4.7.2.1.b: Tracking New and Delete
Adolc uses raw memory arrays that depend on the number of dependent and independent variables, hence new and delete are used to allocate this memory. The preprocessor macros 6.24.j: CPPAD_TRACK_NEW_VEC and 6.24.k: CPPAD_TRACK_DEL_VEC are used to check for errors in the use of new and delete when the example is compiled for debugging (when NDEBUG is not defined).

4.7.2.1.c: Configuration Requirement
This example will be compiled and tested provided that the value 2.1.o: AdolcDir is specified on the 2.1.d: configure command line.
 
# include <adolc/adouble.h>
# include <adolc/taping.h>
# include <adolc/interfaces.h>

// adouble definitions not in Adolc distribution and 
// required in order to use CppAD::AD<adouble>
# include "base_adolc.hpp"

# include <cppad/cppad.hpp>



namespace { // put this function in the empty namespace

	// f(x) = |x|^2 / 2 = .5 * ( x[0]^2 + ... + x[n-1]^2 )
	template <class Type>
	Type f(CPPAD_TEST_VECTOR<Type> &x)
	{	Type sum;

		// check assignment of AD< AD<double> > = double
		sum  = .5;
		sum += .5;

		size_t i = x.size();
		while(i--)
			sum += x[i] * x[i];

		// check computed assignment AD< AD<double> > -= int
		sum -= 1; 
	
		// check double * AD< AD<double> > 
		return .5 * sum;
	} 
}

bool mul_level_adolc(void) 
{	bool ok = true;                   // initialize test result

	typedef adouble      ADdouble;         // for first level of taping
	typedef CppAD::AD<ADdouble> ADDdouble; // for second level of taping
	size_t n = 5;                          // number independent variables

	CPPAD_TEST_VECTOR<double>       x(n);
	CPPAD_TEST_VECTOR<ADdouble>   a_x(n);
	CPPAD_TEST_VECTOR<ADDdouble> aa_x(n);

	// value of the independent variables
	int tag = 0;                         // Adolc setup
	int keep = 1;
	trace_on(tag, keep);
	size_t j;
	for(j = 0; j < n; j++)
	{	x[j] = double(j);           // x[j] = j
		a_x[j] <<= x[j];            // a_x is independent for ADdouble
	}
	for(j = 0; j < n; j++)
		aa_x[j] = a_x[j];          // track how aa_x depends on a_x
	CppAD::Independent(aa_x);          // aa_x is independent for ADDdouble

	// compute function
	CPPAD_TEST_VECTOR<ADDdouble> aa_f(1);    // scalar valued function
	aa_f[0] = f(aa_x);                 // has only one component

	// declare inner function (corresponding to ADDdouble calculation)
	CppAD::ADFun<ADdouble> a_F(aa_x, aa_f);

	// compute f'(x) 
	size_t p = 1;                        // order of derivative of a_F
	CPPAD_TEST_VECTOR<ADdouble> a_w(1);  // weight vector for a_F
	CPPAD_TEST_VECTOR<ADdouble> a_df(n); // value of derivative
	a_w[0] = 1;                          // weighted function same as a_F
	a_df   = a_F.Reverse(p, a_w);        // gradient of f

	// declare outer function 
	// (corresponding to the tape of adouble operations)
	double df_j;
	for(j = 0; j < n; j++)
		a_df[j] >>= df_j;
	trace_off();

	// compute the d/dx of f'(x) * v = f''(x) * v
	size_t m      = n;                     // # dependent in f'(x)
	double *v, *ddf_v;
	v     = CPPAD_TRACK_NEW_VEC(m, v);     // track v = new double[m]
	ddf_v = CPPAD_TRACK_NEW_VEC(n, ddf_v); // track ddf_v = new double[n]
	for(j = 0; j < n; j++)
		v[j] = double(n - j);
	fos_reverse(tag, int(m), int(n), v, ddf_v);

	// f(x)       = .5 * ( x[0]^2 + x[1]^2 + ... + x[n-1]^2 )
	// f'(x)      = (x[0], x[1], ... , x[n-1])
	// f''(x) * v = ( v[0], v[1],  ... , v[n-1] )
	for(j = 0; j < n; j++)
		ok &= CppAD::NearEqual(ddf_v[j], v[j], 1e-10, 1e-10);

	CPPAD_TRACK_DEL_VEC(v);                 // check usage of delete
	CPPAD_TRACK_DEL_VEC(ddf_v);
	return ok;
}

Input File: example/mul_level_adolc.cpp
5: ADFun Objects

5.a: Purpose
An AD of Base 9.4.g.b: operation sequence is stored in an ADFun object by its 5.2: FunConstruct . The ADFun object can then be used to calculate function values, derivative values, and other values related to the corresponding function.

5.b: Contents
Independent: 5.1 Declare Independent Variables and Start Recording
FunConstruct: 5.2 Construct an ADFun Object and Stop Recording
Dependent: 5.3 Stop Recording and Store Operation Sequence
abort_recording: 5.4 Abort Recording of an Operation Sequence
SeqProperty: 5.5 ADFun Sequence Properties
FunEval: 5.6 Evaluate ADFun Functions, Derivatives, and Sparsity Patterns
Drivers: 5.7 First and Second Derivatives: Easy Drivers
FunCheck: 5.8 Check an ADFun Sequence of Operations
omp_max_thread: 5.9 OpenMP Maximum Thread Number
FunDeprecated: 5.10 ADFun Object Deprecated Member Functions

Input File: cppad/local/ad_fun.hpp
5.1: Declare Independent Variables and Start Recording

5.1.a: Syntax
Independent(x)

5.1.b: Purpose
Start recording 9.4.b: AD of Base operations with x as the vector of independent variables. Once the AD of Base 9.4.g.b: operation sequence is completed, it must be transferred to a function object; see below.

5.1.c: Variables for a Tape
A tape is created by the call
     Independent(x)
The corresponding operation sequence is transferred to a function object, and the tape is deleted, using either (see 5.2: ADFun<Base> f(x, y) )
     ADFun<Base> f(x, y)
or using (see 5.3: f.Dependent(x, y) )
     f.Dependent(x, y)
Between when the tape is created and when it is destroyed, we refer to the elements of x, and the values that depend on the elements of x, as variables for the tape created by the call to Independent.

5.1.d: x
The vector x has prototype
     VectorAD &x
(see VectorAD below). The size of the vector x must be greater than zero, and is the number of independent variables for this AD operation sequence.

5.1.e: VectorAD
The type VectorAD must be a 6.7: SimpleVector class with 6.7.b: elements of type AD<Base>. The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.1.f: Memory Leak
A memory leak will result if a tape is created by a call to Independent and not deleted by a corresponding call to
     ADFun<Base> f(x, y)
or to
     f.Dependent(x, y)

5.1.g: OpenMP
In the case of multi-threading with OpenMP, the call to Independent and the corresponding call to
     ADFun<Base> f(x, y)
or
     f.Dependent(x, y)
must be performed by the same thread.

5.1.h: Example
The file 5.1.1: Independent.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/local/independent.hpp
5.1.1: Independent and ADFun Constructor: Example and Test
 
# include <cppad/cppad.hpp>

namespace { // --------------------------------------------------------
// define the template function Test<VectorAD>(void) in empty namespace
template <class VectorAD>
bool Test(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t  n  = 2;
	VectorAD X(n);  // VectorAD is the template parameter in call to Test
	X[0] = 0.;
	X[1] = 1.;

	// declare independent variables and start recording 
	// use the template parameter VectorAD for the vector type
	CppAD::Independent(X);

	AD<double> a = X[0] + X[1];      // first AD operation
	AD<double> b = X[0] * X[1];      // second AD operation

	// range space vector
	size_t m = 2;
	VectorAD Y(m);  // VectorAD is the template parameter in call to Test
	Y[0] = a;
	Y[1] = b;

	// create f: X -> Y and stop tape recording
	// use the template parameter VectorAD for the vector type
	CppAD::ADFun<double> f(X, Y); 

	// check value 
	ok &= NearEqual(Y[0] , 1.,  1e-10 , 1e-10);
	ok &= NearEqual(Y[1] , 0.,  1e-10 , 1e-10);

	// compute f(1, 2)
	CPPAD_TEST_VECTOR<double> x(n);
	CPPAD_TEST_VECTOR<double> y(m);
	x[0] = 1.;
	x[1] = 2.;
	y    = f.Forward(0, x);
	ok &= NearEqual(y[0] , 3.,  1e-10 , 1e-10);
	ok &= NearEqual(y[1] , 2.,  1e-10 , 1e-10);

	// compute partial of f w.r.t x[0] at (1, 2)
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dx[1] = 0.;
	dy    = f.Forward(1, dx);
	ok &= NearEqual(dy[0] ,   1.,  1e-10 , 1e-10);
	ok &= NearEqual(dy[1] , x[1],  1e-10 , 1e-10);

	// compute partial of f w.r.t x[1] at (1, 2)
	dx[0] = 0.;
	dx[1] = 1.;
	dy    = f.Forward(1, dx);
	ok &= NearEqual(dy[0] ,   1.,  1e-10 , 1e-10);
	ok &= NearEqual(dy[1] , x[0],  1e-10 , 1e-10);

	return ok;
}
} // End of empty namespace -------------------------------------------

# include <vector>
# include <valarray>
bool Independent(void)
{	bool ok = true;
	typedef CppAD::AD<double> ADdouble;
	// Run with VectorAD equal to three different cases
	// all of which are Simple Vectors with elements of type AD<double>.
	ok &= Test< CppAD::vector  <ADdouble> >();
	ok &= Test< std::vector    <ADdouble> >();
	ok &= Test< std::valarray  <ADdouble> >();
	return ok;
}


Input File: example/independent.cpp
5.2: Construct an ADFun Object and Stop Recording

5.2.a: Syntax
ADFun<Base> f
ADFun<Base> f(x, y)

5.2.b: Purpose
The ADFun<Base> object f can store an AD of Base 9.4.g.b: operation sequence . It can then be used to calculate derivatives of the corresponding 9.4.a: AD function  \[
     F : B^n \rightarrow B^m
\] 
where  B is the space corresponding to objects of type Base .

5.2.c: x
If the argument x is present, it has prototype
     const VectorAD &x
It must be the vector argument in the previous call to 5.1: Independent . Neither its size, nor any of its values, are allowed to change between calling
     Independent(x)
and
     ADFun<Base> f(x, y)

5.2.d: y
If the argument y is present, it has prototype
     const VectorAD &y
The sequence of operations that map x to y is stored in the AD function object f .

5.2.e: VectorAD
The type VectorAD must be a 6.7: SimpleVector class with 6.7.b: elements of type AD<Base> . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.2.f: Default Constructor
The default constructor
     ADFun<Base> f
creates an ADFun<Base> object with no corresponding operation sequence; i.e.,
     f.size_var()
returns the value zero (see 5.5.h: size_var ).

5.2.g: Sequence Constructor
The sequence constructor
     ADFun<Base> f(x, y)
creates the ADFun<Base> object f, stops the recording of AD of Base operations corresponding to the call
     Independent(x)
and stores the corresponding operation sequence in the object f. It then stores the zero order Taylor coefficients (corresponding to the value of x) in f. This is equivalent to the following steps using the default constructor:
  1. Create f with the default constructor
         ADFun<Base> f;
  2. Stop the tape and store the operation sequence using
         f.Dependent(x, y);
     (see 5.3: Dependent ).
  3. Calculate the zero order Taylor coefficients for all the variables in the operation sequence using
         f.Forward(p, x_p)
     with p equal to zero and the elements of x_p equal to the corresponding elements of x (see 5.6.1: Forward ).


5.2.h: OpenMP
In the case of multi-threading with OpenMP, the call to Independent and the corresponding call to
     ADFun<Base> f(x, y)
or
     f.Dependent(x, y)
must be performed by the same thread.

5.2.i: Example

5.2.i.a: Sequence Constructor
The file 5.1.1: Independent.cpp contains an example and test of the sequence constructor. It returns true if it succeeds and false otherwise.

5.2.i.b: Default Constructor
The files 5.8.1: FunCheck.cpp and 5.7.4.2: HesLagrangian.cpp contain examples and tests using the default constructor. They return true if they succeed and false otherwise.
Input File: cppad/local/fun_construct.hpp
5.3: Stop Recording and Store Operation Sequence

5.3.a: Syntax
f.Dependent(x, y)

5.3.b: Purpose
Stop recording the AD of Base 9.4.g.b: operation sequence that started with the call
     Independent(
x)
and store the operation sequence in f. The operation sequence defines an 9.4.a: AD function  \[
     F : B^n \rightarrow B^m
\] 
where  B is the space corresponding to objects of type Base. The value  n is the dimension of the 5.5.d: domain space for the operation sequence. The value  m is the dimension of the 5.5.e: range space for the operation sequence (which is determined by the size of y).

5.3.c: f
The object f has prototype
     ADFun<Base> f
The AD of Base operation sequence is stored in f; i.e., it becomes the operation sequence corresponding to f. If a previous operation sequence was stored in f, it is deleted.

5.3.d: x
The argument x must be the vector argument in a previous call to 5.1: Independent . Neither its size nor any of its values may change between calling
     Independent(
x)
and
     
f.Dependent(x, y)
.

5.3.e: y
The vector y has prototype
     const 
ADvector &y
(see 5.3.f: ADvector below). The length of y must be greater than zero and is the dimension of the range space for f.

5.3.f: ADvector
The type ADvector must be a 6.7: SimpleVector class with 6.7.b: elements of type AD<Base>. The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.3.g: Taping
The tape, that was created when Independent(x) was called, will stop recording. The AD operation sequence will be transferred from the tape to the object f and the tape will then be deleted.

5.3.h: Forward
No 5.6.1: Forward calculation is performed during this operation. Thus, directly after this operation,
     
f.size_taylor()
is zero (see 5.6.1.4: size_taylor ).

5.3.i: Example
The file 5.8.1: FunCheck.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/local/dependent.hpp
5.4: Abort Recording of an Operation Sequence

5.4.a: Syntax
AD<Base>::abort_recording()

5.4.b: Purpose
Sometimes it is necessary to abort the recording of an operation sequence that started with a call of the form
     Independent(
x)
If such a recording is currently in progress, this operation will stop the recording and delete the corresponding information.

5.4.c: Example
The file 5.4.1: abort_recording.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/local/abort_recording.hpp
5.4.1: Abort Current Recording: Example and Test
 

# include <cppad/cppad.hpp>
# include <limits>

bool abort_recording(void)
{	bool ok = true;
	double eps = 10. * std::numeric_limits<double>::epsilon();

	using CppAD::AD;

	try 
	{	// domain space vector
		size_t n = 1;
		CPPAD_TEST_VECTOR< AD<double> > x(n);
		x[0]     = 0.;

		// declare independent variables and start tape recording
		CppAD::Independent(x);

		// simulate an error during calculation of y and the execution
		// stream was aborted
		throw 1;
	}
	catch (int e)
	{	ok &= (e == 1);

		// do this in case the throw occurred after the call to Independent
		// (for the case above this is known, but in general it is unknown)
		AD<double>::abort_recording();
	}
	/*
 	Now make sure that we can start another recording
	*/

	// declare independent variables and start tape recording
	size_t n  = 1;
	double x0 = 0.5;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]      = x0; 
	CppAD::Independent(x);

	// range space vector 
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0] = 2 * x[0];

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// forward computation of partials w.r.t. x[0]
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(1, dx);
	ok   &= CppAD::NearEqual(dy[0], 2., eps, eps);

	return ok;
}

Input File: example/abort_recording.cpp
5.5: ADFun Sequence Properties

5.5.a: Syntax
n = f.Domain()
m = f.Range()
p = f.Parameter(i)
u = f.use_VecAD()
v = f.size_var()

5.5.b: Purpose
The operations above return properties of the AD of Base 9.4.g.b: operation sequence stored in the ADFun object f. (If there is no operation sequence stored in f, size_var returns zero.)

5.5.c: f
The object f has prototype
     const ADFun<Base> f
(see ADFun<Base> 5.2: constructor ).

5.5.d: Domain
The result n has prototype
     size_t 
n
and is the dimension of the domain space corresponding to f. This is equal to the size of the vector x in the call
     Independent(
x)
that started recording the operation sequence currently stored in f (see 5.2: FunConstruct and 5.3: Dependent ).

5.5.e: Range
The result m has prototype
     size_t 
m
and is the dimension of the range space corresponding to f. This is equal to the size of the vector y in the syntax
     ADFun<Base> f(x, y)
or
     f.Dependent(x, y)
depending on which stored the operation sequence currently in f (see 5.2: FunConstruct and 5.3: Dependent ).

5.5.f: Parameter
The argument i has prototype
     size_t 
i
and  0 \leq i < m . The result p has prototype
     bool 
p
It is true if the i-th component of range space for  F corresponds to a 9.4.h: parameter in the operation sequence. In this case, the i-th component of  F is constant and  \[
     \D{F_i}{x_j} (x) = 0
\] 
for  j = 0 , \ldots , n-1 and all  x \in B^n .

5.5.g: use_VecAD
The result u has prototype
     bool 
u
If it is true, the AD of Base 9.4.g.b: operation sequence stored in f contains 4.6.d: VecAD operands. Otherwise u is false.

5.5.h: size_var
The result v has prototype
     size_t 
v
and is the number of variables in the operation sequence plus the following: one for a phantom variable with tape address zero, one for each component of the domain that is a parameter. The amount of work and memory necessary for computing function values and derivatives using f is roughly proportional to v.

If there is no operation sequence stored in f, size_var returns zero (see 5.2.f: default constructor ).

5.5.i: Example
The file 5.5.1: SeqProperty.cpp contains an example and test of these operations. It returns true if it succeeds and false otherwise.
Input File: omh/seq_property.omh
5.5.1: ADFun Sequence Properties: Example and Test
 

# include <cppad/cppad.hpp>

bool SeqProperty(void)
{	bool ok = true;
	using CppAD::AD;

	// Use nvar to track the number of variables in the operation sequence.
	// Start with one for the phantom variable at tape address zero.
	size_t nvar = 1;

	// domain space vector
	size_t n = 2;
	CPPAD_TEST_VECTOR< AD<double> > x(n);
	x[0]     = 0.;
	x[1]     = 1.;

	// declare independent variables and start tape recording
	CppAD::Independent(x); 
	nvar    += n;

	AD<double> u = x[0];  // use same variable as x[0]
	AD<double> w = x[1];  // use same variable as x[1]
	w      = w * (u + w); // requires two new variables
	nvar  += 2;

	// range space vector
	size_t m = 3;
	CPPAD_TEST_VECTOR< AD<double> > y(m);
	y[0]   = 1.;          // a parameter value   
	y[1]   = u;           // use same variable as x[0]
	y[2]   = w;           // use same variable as w

	// create f: x -> y and stop tape recording
	CppAD::ADFun<double> f(x, y); 

	// check Domain, Range, Parameter, use_VecAD
	ok &= f.Domain()     == n;
	ok &= f.Range()      == m;
	ok &= f.Parameter(0) == true;
	ok &= f.Parameter(1) == false;
	ok &= f.Parameter(2) == false;
	ok &= f.use_VecAD()  == false;

	// add one for each range component that is a parameter
	size_t i;
	for(i = 0; i < m; i++)
		if( f.Parameter(i) ) nvar++;

	// number of variables corresponding to the sequence
	ok &= f.size_var()   == nvar;

	return ok;
}


Input File: example/seq_property.cpp
5.6: Evaluate ADFun Functions, Derivatives, and Sparsity Patterns

5.6.a: Contents
Forward: 5.6.1Forward Mode
Reverse: 5.6.2Reverse Mode
Sparse: 5.6.3Calculating Sparsity Patterns

Input File: cppad/local/fun_eval.hpp
5.6.1: Forward Mode

5.6.1.a: Contents
ForwardZero: 5.6.1.1Zero Order Forward Mode: Function Values
ForwardOne: 5.6.1.2First Order Forward Mode: Derivative Values
ForwardAny: 5.6.1.3Any Order Forward Mode
size_taylor: 5.6.1.4Number Taylor Coefficients, Per Variable, Currently Stored
CompareChange: 5.6.1.5Comparison Changes During Zero Order Forward Mode
capacity_taylor: 5.6.1.6Controlling Taylor Coefficients Memory Allocation
Forward.cpp: 5.6.1.7Forward Mode: Example and Test

Input File: cppad/local/forward.hpp
5.6.1.1: Zero Order Forward Mode: Function Values

5.6.1.1.a: Syntax
y = f.Forward(0, x)

5.6.1.1.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. The result of the syntax above is  \[
     y = F(x)
\] 
(See the 5.8.l: FunCheck discussion for possible differences between  F(x) and the algorithm that defined the operation sequence.)

5.6.1.1.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const. After this call to Forward, the value returned by
     
f.size_taylor()
will be equal to one (see 5.6.1.4: size_taylor ).

5.6.1.1.d: x
The argument x has prototype
     const 
Vector &x
(see 5.6.1.1.f: Vector below) and its size must be equal to n, the dimension of the 5.5.d: domain space for f.

5.6.1.1.e: y
The result y has prototype
     
Vector y
(see 5.6.1.1.f: Vector below) and its value is  F(x) . The size of y is equal to m, the dimension of the 5.5.e: range space for f.

5.6.1.1.f: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Base. The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.6.1.1.g: Example
The file 5.6.1.7: Forward.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: omh/forward.omh
5.6.1.2: First Order Forward Mode: Derivative Values

5.6.1.2.a: Syntax
dy = f.Forward(1, dx)

5.6.1.2.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. The result of the syntax above is  \[
     dy = F^{(1)} (x) * dx
\] 
where  F^{(1)} (x) is the Jacobian of  F evaluated at  x .

5.6.1.2.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const. Before this call to Forward, the value returned by
     
f.size_taylor()
must be greater than or equal to one. After this call it will be two (see 5.6.1.4: size_taylor ).

5.6.1.2.d: x
The vector x in expression for dy above corresponds to the previous call to 5.6.1.1: ForwardZero using this ADFun object f; i.e.,
     
f.Forward(0, x)
If there is no previous call with the first argument zero, the value of the 5.1: independent variables during the recording of the AD sequence of operations is used for x.

5.6.1.2.e: dx
The argument dx has prototype
     const Vector &dx
(see 5.6.1.2.g: Vector below) and its size must be equal to n, the dimension of the 5.5.d: domain space for f.

5.6.1.2.f: dy
The result dy has prototype
     
Vector dy
(see 5.6.1.2.g: Vector below) and its value is  F^{(1)} (x) * dx . The size of dy is equal to m, the dimension of the 5.5.e: range space for f.

5.6.1.2.g: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Base. The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.6.1.2.h: Example
The file 5.6.1.7: Forward.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: omh/forward.omh
5.6.1.3: Any Order Forward Mode

5.6.1.3.a: Syntax
y_p = f.Forward(p, x_p)

5.6.1.3.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. Given a function  X : B \rightarrow B^n , defined by its 9.4.k: Taylor coefficients , forward mode computes the Taylor coefficients for the function  \[
     Y (t) = F [ X(t) ]
\] 
.

5.6.1.3.b.a: Function Values
If you are using forward mode to compute values for  F(x) , 5.6.1.1: ForwardZero is simpler to understand than this explanation of the general case.

5.6.1.3.b.b: Derivative Values
If you are using forward mode to compute values for  F^{(1)} (x) * dx , 5.6.1.2: ForwardOne is simpler to understand than this explanation of the general case.

5.6.1.3.c: X(t)
The function  X : B \rightarrow B^n is defined using a sequence of Taylor coefficients  x^{(k)} \in B^n :  \[
     X(t) = x^{(0)} + x^{(1)} * t + \cdots + x^{(p)} * t^p 
\] 
For  k = 0, \ldots , p , the vector  x^{(k)} above is defined as the value of x_k in the previous call (counting this call) of the form
     
f.Forward(k, x_k)
If there is no previous call with  k = 0 ,  x^{(0)} is the value of the independent variables when the corresponding AD of Base 9.4.g.b: operation sequence was recorded. Note that  x^{(k)} is related to the k-th derivative of  X(t) by  \[
     x^{(k)} = \frac{1}{k !} X^{(k)} (0) 
\] 


5.6.1.3.d: Y(t)
The function  Y : B \rightarrow B^m is defined by  Y(t) = F[ X(t) ]  . We use  y^{(k)} \in B^m to denote the k-th order Taylor coefficient of  Y(t) ; i.e.,  \[
     Y(t) = y^{(0)} + y^{(1)} * t + \cdots, + y^{(p)} * t^p + o( t^p ) 
\] 
where  o( t^p ) * t^{-p} \rightarrow 0 as  t \rightarrow 0 . Note that  y^{(k)} is related to the k-th derivative of  Y(t) by  \[
     y^{(k)} = \frac{1}{k !} Y^{(k)} (0) 
\] 


5.6.1.3.e: f
The 5: ADFun object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const. Before this call to Forward, the value returned by
     
f.size_taylor()
must be greater than or equal to  p . After this call it will be  p+1 (see 5.6.1.4: size_taylor ).

5.6.1.3.f: p
The argument p has prototype
     size_t 
p
and specifies the order of the Taylor coefficients to be calculated.

5.6.1.3.g: x_p
The argument x_p has prototype
     const 
Vector &x_p
(see 5.6.1.3.i: Vector below) and its size must be equal to n, the dimension of the 5.5.d: domain space for f. The p-th order Taylor coefficient for  X(t) is defined by this value; i.e.,  x^{(p)} = x\_p . (The lower order Taylor coefficients for  X(t) are defined by previous calls to Forward.)

5.6.1.3.h: y_p
The return value y_p has prototype
     
Vector y_p
(see 5.6.1.3.i: Vector below) and its value is the p-th order Taylor coefficient for  Y(t) ; i.e.,  y^{(p)} = y\_p . The size of y_p is equal to m, the dimension of the 5.5.e: range space for f.

5.6.1.3.i: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Base. The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.6.1.3.j: Zero Order
In the case where  p = 0 , the result y_p is given by  \[
\begin{array}{rcl}
y^{(0)} & = & (F \circ X) ( 0 ) \\
     & = & F[ x^{(0)} ]
\end{array}
\] 
This agrees with the simplification where  p ,  x^{(0)} , and  y^{(0)} above are replaced by 0, x, and y in 5.6.1.1: ForwardZero .

5.6.1.3.k: First Order
In the case where  p = 1 , the result y_p is given by  \[
\begin{array}{rcl}
y^{(1)} & = & (F \circ X)^{(1)} ( 0 ) \\
     & = & F^{(1)} [ X(0) ] *  X^{(1)} (0) \\
     & = & F^{(1)} ( x^{(0)} ) *  x^{(1)}
\end{array}
\] 
This agrees with the simplification where  p ,  x^{(0)} ,  x^{(1)} , and  y^{(1)} above are replaced by 1, x, dx, and dy in 5.6.1.2: ForwardOne .

Note that if  x^{(1)} is the j-th 9.4.f: elementary vector  \[
y^{(1)} = \D{F}{x_j} ( x^{(0)} ) 
\] 


5.6.1.3.l: Second Order
In the case where  p = 2 , the i-th element of the result y_p is given by  \[
\begin{array}{rcl}
y_i^{(2)} 
& = & \frac{1}{2} (F_i \circ X)^{(2)} ( 0 ) 
\\
& = & \frac{1}{2} \left[ F_i^{(1)} [ X(0) ] * X^{(2)} (0) 
  + X^{(1)} (0)^T * F_i^{(2)} [ X(0) ] * X^{(1)} (0) \right]
\\
& = & \frac{1}{2}  \left[
     2 * F_i^{(1)} ( x^{(0)} ) * x^{(2)}
     +
     ( x^{(1)} )^T * F_i^{(2)} ( x^{(0)} ) * x^{(1)}
\right]
\end{array}
\] 
Note that if  x^{(1)} is the j-th 9.4.f: elementary vector and  x^{(2)} is zero,  \[
\begin{array}{rcl}
     \DD{F_i}{x_j}{x_j} ( x^{(0)} ) = 2 y_i^{(2)} 
\end{array}
\] 
If  x^{(1)} is the sum of the j-th and l-th 9.4.f: elementary vectors and  x^{(2)} is zero,  \[
\begin{array}{rcl}
     y_i^{(2)} 
     & = & \frac{1}{2} \left[
          \DD{F_i}{x_j}{x_j} ( x^{(0)} )
          +
          \DD{F_i}{x_j}{x_\ell} ( x^{(0)} )
          +
          \DD{F_i}{x_\ell}{x_j} ( x^{(0)} )
          +
          \DD{F_i}{x_\ell}{x_\ell} ( x^{(0)} )
     \right]
     \\
     \DD{F_i}{x_\ell}{x_j} ( x^{(0)} )
     & = & 
     y_i^{(2)} 
     -
     \frac{1}{2} \DD{F_i}{x_j}{x_j} ( x^{(0)} )
     -
     \frac{1}{2} \DD{F_i}{x_\ell}{x_\ell} ( x^{(0)} )
\end{array} 
\] 


5.6.1.3.m: Example
The file 5.6.1.7: Forward.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: omh/forward.omh
5.6.1.4: Number Taylor Coefficients, Per Variable, Currently Stored

5.6.1.4.a: Syntax
s = f.size_taylor()

5.6.1.4.b: Purpose
Determine the number of Taylor coefficients, per variable, currently calculated and stored in the ADFun object f. See the discussion under 5.6.1.4.e: Constructor , 5.6.1.4.f: Forward , and 5.6.1.4.g: capacity_taylor for a description of when this value can change.

5.6.1.4.c: f
The object f has prototype
     const ADFun<Base> f

5.6.1.4.d: s
The result s has prototype
     size_t 
s
and is the number of Taylor coefficients, per variable in the AD operation sequence, currently calculated and stored in the ADFun object f.

5.6.1.4.e: Constructor
Directly after the 5.2: FunConstruct syntax
     ADFun<Base> f(x, y)
the value of s returned by size_taylor is one. This is because there is an implicit call to Forward that computes the zero order Taylor coefficients during this constructor.

5.6.1.4.f: Forward
After a call to 5.6.1.3: Forward with the syntax
        
f.Forward(p, x_p)
the value of s returned by size_taylor would be  p + 1 . The call to Forward above uses the lower order Taylor coefficients to compute and store the p-th order Taylor coefficients for all the variables in the operation sequence corresponding to f. Thus there are  p + 1 (order zero through p) Taylor coefficients per variable. (You can determine the number of variables in the operation sequence using the 5.5.h: size_var function.)

5.6.1.4.g: capacity_taylor
If the number of Taylor coefficients currently stored in f is less than or equal to c, a call to 5.6.1.6: capacity_taylor with the syntax
     
f.capacity_taylor(c)
does not affect the value s returned by size_taylor. Otherwise, the value s returned by size_taylor is equal to c (only Taylor coefficients of order zero through  c-1 have been retained).

5.6.1.4.h: Example
The file 5.6.1.7: Forward.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: omh/forward.omh
5.6.1.5: Comparison Changes During Zero Order Forward Mode

5.6.1.5.a: Syntax
c = f.CompareChange()
See Also 5.8: FunCheck

5.6.1.5.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. This function may not agree with the algorithm that was used to create the corresponding AD of Base 9.4.g.b: operation sequence because of changes in AD 4.5.1: comparison results. The CompareChange function can be used to detect these changes.

5.6.1.5.c: f
The object f has prototype
     const ADFun<Base> f

5.6.1.5.d: c
The result c has prototype
     size_t 
c
It is the number of AD<Base> 4.5.1: comparison operations, corresponding to the previous call to 5.6.1: Forward
     
f.Forward(0, x)
that have a different result from when F was created by taping an algorithm.

5.6.1.5.e: Discussion
If c is not zero, the boolean values resulting from some of the 4.5.1: comparison operations corresponding to x are different from when the AD of Base 9.4.g.b: operation sequence was created. In this case, you may want to re-tape the algorithm with the 9.4.j.c: independent variables equal to the values in x (so AD operation sequence properly represents the algorithm for this value of independent variables). On the other hand, re-taping the AD operation sequence usually takes significantly more time than evaluation using 5.6.1.1: ForwardZero . If the functions values have not changed (see 5.8: FunCheck ) it may not be worth re-taping a new AD operation sequence.

5.6.1.5.f: Restrictions
Computation of this function requires extra operations in the tape. If NDEBUG is defined, these operations are not included in the tape and the CompareChange function is not defined.

5.6.1.5.g: Example
The file 5.6.1.5.1: CompareChange.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: omh/forward.omh
5.6.1.5.1: CompareChange and Re-Tape: Example and Test
 

# include <cppad/cppad.hpp>

namespace { // put this function in the empty namespace
	template <typename Type>
	Type Minimum(const Type &x, const Type &y)
	{	// Use a comparison to compute the min(x, y)
		// (note that CondExp would never require retaping). 
		if( x < y )  
			return x;
		return y;
	}
}

bool CompareChange(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::ADFun;
	using CppAD::Independent;

	// domain space vector
	size_t n = 2;
	CPPAD_TEST_VECTOR< AD<double> > X(n);
	X[0] = 3.;
	X[1] = 4.;

	// declare independent variables and start tape recording
	CppAD::Independent(X);

	// range space vector
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > Y(m);
	Y[0] = Minimum(X[0], X[1]);

	// create f: x -> y and stop tape recording
	ADFun<double> f(X, Y);

	// evaluate zero mode Forward where conditional has the same result
	// note that f.CompareChange is not defined when NDEBUG is true
	CPPAD_TEST_VECTOR<double> x(n);
	CPPAD_TEST_VECTOR<double> y(m);
	x[0] = 3.5;
	x[1] = 4.;  
	y    = f.Forward(0, x);
	ok  &= (y[0] == x[0]);
	ok  &= (y[0] == Minimum(x[0], x[1]));
	ok  &= (f.CompareChange() == 0);

	// evaluate zero mode Forward where conditional has different result
	x[0] = 4.;
	x[1] = 3.;
	y    = f.Forward(0, x);
	ok  &= (y[0] == x[0]);
	ok  &= (y[0] != Minimum(x[0], x[1]));
	ok  &= (f.CompareChange() == 1); 

	// re-tape to obtain the new AD operation sequence
	X[0] = 4.;
	X[1] = 3.;
	Independent(X);
	Y[0] = Minimum(X[0], X[1]);

	// stop tape and store result in f
	f.Dependent(Y);

	// evaluate the function at new argument values
	y    = f.Forward(0, x);
	ok  &= (y[0] == x[1]);
	ok  &= (y[0] == Minimum(x[0], x[1]));
	ok  &= (f.CompareChange() == 0); 

	return ok;
}



Input File: example/compare_change.cpp
5.6.1.6: Controlling Taylor Coefficients Memory Allocation

5.6.1.6.a: Syntax
f.capacity_taylor(c)

5.6.1.6.b: Purpose
The Taylor coefficients calculated by Forward mode calculations are retained in an 5: ADFun object for subsequent use during 5.6.2: Reverse mode or higher order Forward mode calculations. This operation allows you to control the amount of memory that is retained by an AD function object (for subsequent calculations).

5.6.1.6.c: f
The object f has prototype
     ADFun<Base> f

5.6.1.6.d: c
The argument c has prototype
     size_t 
c
It specifies the number of Taylor coefficients that are allocated for each variable in the AD operation sequence corresponding to f.

5.6.1.6.e: Discussion
A call to 5.6.1.3: Forward with the syntax
        
y_p = f.Forward(p, x_p)
uses the lower order Taylor coefficients and computes the p-th order Taylor coefficients for all the variables in the operation sequence corresponding to f. (You can determine the number of variables in the operation sequence using the 5.5.h: size_var function.)

5.6.1.6.e.a: Pre-Allocating Memory
If you plan to make calls to Forward with the maximum value of p equal to q, it should be faster to pre-allocate memory for these calls using
     
f.capacity_taylor(c)
with c equal to  q + 1 . If you do not do this, Forward will automatically allocate memory and will copy the results to a larger buffer when necessary.

Note that each call to 5.3: Dependent frees the old memory connected to the function object and sets the corresponding Taylor capacity to zero.

5.6.1.6.e.b: Freeing Memory
If you no longer need the Taylor coefficients of order q and higher (that are stored in f), you can reduce the memory allocated to f using
     
f.capacity_taylor(c)
with c equal to q.

5.6.1.6.e.c: Original State
If f is 5.2: constructed with the syntax
     ADFun<Base> f(x, y)
, there is an implicit call to Forward with p equal to zero and x_p equal to the value of the 9.4.j.c: independent variables when the AD operation sequence was recorded.

5.6.1.6.f: Example
The file 5.6.1.7: Forward.cpp contains an example and test of these operations. It returns true if it succeeds and false otherwise.
Input File: cppad/local/cap_taylor.hpp
5.6.1.7: Forward Mode: Example and Test
 
# include <cppad/cppad.hpp>
namespace { // --------------------------------------------------------
// define the template function ForwardCases<Vector> in empty namespace
template <class Vector> 
bool ForwardCases(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n = 2;
	CPPAD_TEST_VECTOR< AD<double> > X(n);
	X[0] = 0.; 
	X[1] = 1.;

	// declare independent variables and start tape recording
	CppAD::Independent(X);

	// range space vector
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > Y(m);
	Y[0] = X[0] * X[0] * X[1];

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// The highest order Forward mode calculation below is second order.
	// This corresponds to three Taylor coefficients per variable 
	// (zero, first, and second order).
	f.capacity_taylor(3);  // pre-allocate memory for speed of execution

	// initially, the variable values during taping are stored in f
	ok &= f.size_taylor() == 1;

	// zero order forward mode using notation in ForwardZero
	// use the template parameter Vector for the vector type
	Vector x(n);
	Vector y(m);
	x[0] = 3.;
	x[1] = 4.;
	y    = f.Forward(0, x);
	ok  &= NearEqual(y[0] , x[0]*x[0]*x[1], 1e-10, 1e-10);
	ok  &= f.size_taylor() == 1;

	// first order forward mode using notation in ForwardOne
	// X(t)           = x + dx * t
	// Y(t) = F[X(t)] = y + dy * t + o(t)
	Vector dx(n);
	Vector dy(m);
	dx[0] = 1.;
	dx[1] = 0.;
	dy    = f.Forward(1, dx); // partial F w.r.t. x[0]
	ok   &= NearEqual(dy[0] , 2.*x[0]*x[1], 1e-10, 1e-10);
	ok   &= f.size_taylor() == 2;

	// second order forward mode using notation in ForwardAny
	// X(t) =           x + dx * t + x_2 * t^2
	// Y(t) = F[X(t)] = y + dy * t + y_2 * t^2 + o(t^2)
	Vector x_2(n);
	Vector y_2(m);
	x_2[0]      = 0.;
	x_2[1]      = 0.;
	y_2         = f.Forward(2, x_2);
	double F_00 = 2. * y_2[0]; // second partial F w.r.t. x[0], x[0]
	ok         &= NearEqual(F_00, 2.*x[1], 1e-10, 1e-10);
	ok         &= f.size_taylor() == 3;

	// suppose we no longer need second order Taylor coefficients
	f.capacity_taylor(2);
	ok &= f.size_taylor() == 2;

	// actually we no longer need any Taylor coefficients
	f.capacity_taylor(0);
	ok &= f.size_taylor() == 0;

	return ok;
}
} // End empty namespace 
# include <vector>
# include <valarray>
bool Forward(void)
{	bool ok = true;
	// Run with Vector equal to three different cases
	// all of which are Simple Vectors with elements of type double.
	ok &= ForwardCases< CppAD::vector  <double> >();
	ok &= ForwardCases< std::vector    <double> >();
	ok &= ForwardCases< std::valarray  <double> >();
	return ok;
}

Input File: example/forward.cpp
5.6.2: Reverse Mode

5.6.2.a: Contents
reverse_one: 5.6.2.1First Order Reverse Mode
reverse_two: 5.6.2.2Second Order Reverse Mode
reverse_any: 5.6.2.3Any Order Reverse Mode

Input File: cppad/local/reverse.hpp
5.6.2.1: First Order Reverse Mode

5.6.2.1.a: Syntax
dw = f.Reverse(1, w)

5.6.2.1.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. The function  W : B^n \rightarrow B is defined by  \[
     W(x) = w_0 * F_0 ( x ) + \cdots + w_{m-1} * F_{m-1} (x)
\] 
The result of this operation is the derivative  dw = W^{(1)} (x) ; i.e.,  \[
     dw = w_0 * F_0^{(1)} ( x ) + \cdots + w_{m-1} * F_{m-1}^{(1)} (x)
\] 
Note that if  w is the i-th 9.4.f: elementary vector ,  dw = F_i^{(1)} (x) .

5.6.2.1.c: f
The object f has prototype
     const ADFun<Base> f
Before this call to Reverse, the value returned by
     
f.size_taylor()
must be greater than or equal to one (see 5.6.1.4: size_taylor ).

5.6.2.1.d: x
The vector x in expression for dw above corresponds to the previous call to 5.6.1.1: ForwardZero using this ADFun object f; i.e.,
     
f.Forward(0, x)
If there is no previous call with the first argument zero, the value of the 5.1: independent variables during the recording of the AD sequence of operations is used for x.

5.6.2.1.e: w
The argument w has prototype
     const 
Vector &w
(see 5.6.2.1.g: Vector below) and its size must be equal to m, the dimension of the 5.5.e: range space for f.

5.6.2.1.f: dw
The result dw has prototype
     
Vector dw
(see 5.6.2.1.g: Vector below) and its value is the derivative  W^{(1)} (x) . The size of dw is equal to n, the dimension of the 5.5.d: domain space for f.

5.6.2.1.g: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Base. The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.6.2.1.h: Example
The file 5.6.2.1.1: reverse_one.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: omh/reverse.omh
5.6.2.1.1: First Order Reverse Mode: Example and Test
 
# include <cppad/cppad.hpp>
namespace { // ----------------------------------------------------------
// define the template function reverse_one_cases<Vector> in empty namespace
template <typename Vector> 
bool reverse_one_cases(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n = 2;
	CPPAD_TEST_VECTOR< AD<double> > X(n);
	X[0] = 0.; 
	X[1] = 1.;

	// declare independent variables and start recording
	CppAD::Independent(X);

	// range space vector
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > Y(m);
	Y[0] = X[0] * X[0] * X[1];

	// create f : X -> Y and stop recording
	CppAD::ADFun<double> f(X, Y);

	// use first order reverse mode to evaluate derivative of y[0]
	// and use the values in X for the independent variables.
	CPPAD_TEST_VECTOR<double> w(m), dw(n);
	w[0] = 1.;
	dw   = f.Reverse(1, w);
	ok  &= NearEqual(dw[0] , 2.*X[0]*X[1], 1e-10, 1e-10);
	ok  &= NearEqual(dw[1] ,    X[0]*X[0], 1e-10, 1e-10);

	// use zero order forward mode to evaluate y at x = (3, 4)
	// and use the template parameter Vector for the vector type
	Vector x(n), y(m);
	x[0]    = 3.;
	x[1]    = 4.;
	y       = f.Forward(0, x);
	ok     &= NearEqual(y[0] , x[0]*x[0]*x[1], 1e-10, 1e-10);

	// use first order reverse mode to evaluate derivative of y[0]
	// and using the values in x for the independent variables.
	w[0] = 1.;
	dw   = f.Reverse(1, w);
	ok  &= NearEqual(dw[0] , 2.*x[0]*x[1], 1e-10, 1e-10);
	ok  &= NearEqual(dw[1] ,    x[0]*x[0], 1e-10, 1e-10);

	return ok;
}
} // End empty namespace 
# include <vector>
# include <valarray>
bool reverse_one(void)
{	bool ok = true;
	// Run with Vector equal to three different cases
	// all of which are Simple Vectors with elements of type double.
	ok &= reverse_one_cases< CppAD::vector  <double> >();
	ok &= reverse_one_cases< std::vector    <double> >();
	ok &= reverse_one_cases< std::valarray  <double> >();
	return ok;
}

Input File: example/reverse_one.cpp
5.6.2.2: Second Order Reverse Mode

5.6.2.2.a: Syntax
dw = f.Reverse(2, w)

5.6.2.2.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. Reverse mode computes the derivative of the 5.6.1: Forward mode 9.4.k: Taylor coefficients with respect to the domain variable  x .

5.6.2.2.c: x^(k)
For  k = 0, 1 , the vector  x^{(k)} \in B^n is defined as the value of x_k in the previous call (counting this call) of the form
     f.Forward(k, x_k)
If there is no previous call with  k = 0 ,  x^{(0)} is the value of the independent variables when the corresponding AD of Base 9.4.g.b: operation sequence was recorded.

5.6.2.2.d: W
The functions  W_0 : B^n \rightarrow B and  W_1 : B^n \rightarrow B are defined by  \[
\begin{array}{rcl}
W_0 ( u ) & = & w_0 * F_0 ( u ) + \cdots + w_{m-1} * F_{m-1} (u)
\\
W_1 ( u ) & = & 
w_0 * F_0^{(1)} ( u ) * x^{(1)} 
     + \cdots + w_{m-1} * F_{m-1}^{(1)} (u) * x^{(1)}
\end{array}
\] 
This operation computes the derivatives  \[
\begin{array}{rcl}
W_0^{(1)} (u) & = & 
     w_0 * F_0^{(1)} ( u ) + \cdots + w_{m-1} * F_{m-1}^{(1)} (u)
\\
W_1^{(1)} (u) & = & 
     w_0 * \left( x^{(1)} \right)^\T * F_0^{(2)} ( u ) 
     + \cdots + 
     w_{m-1} * \left( x^{(1)} \right)^\T * F_{m-1}^{(2)} (u)
\end{array}
\] 
at  u = x^{(0)} .

5.6.2.2.e: f
The object f has prototype
     const ADFun<Base> f
Before this call to Reverse, the value returned by
     f.size_taylor()
must be greater than or equal to two (see 5.6.1.4: size_taylor ).

5.6.2.2.f: w
The argument w has prototype
     const Vector &w
(see 5.6.2.2.h: Vector below) and its size must be equal to m, the dimension of the 5.5.e: range space for f.

5.6.2.2.g: dw
The result dw has prototype
     Vector dw
(see 5.6.2.2.h: Vector below). It contains both the derivative  W_0^{(1)} (x) and the derivative  W_1^{(1)} (x) . The size of dw is equal to  n \times 2 , where  n is the dimension of the 5.5.d: domain space for f.

5.6.2.2.g.a: First Order Partials
For  j = 0 , \ldots , n - 1 ,  \[
dw [ j * 2 + 0 ] 
=  
\D{ W_0 }{ u_j } \left( x^{(0)} \right) 
=
w_0 * \D{ F_0 }{ u_j } \left( x^{(0)} \right)
+ \cdots + 
w_{m-1} * \D{ F_{m-1} }{ u_j } \left( x^{(0)} \right)
\] 
This part of dw contains the same values as are returned by 5.6.2.1: reverse_one .

5.6.2.2.g.b: Second Order Partials
For  j = 0 , \ldots , n - 1 ,  \[
dw [ j * 2 + 1 ] 
=
\D{ W_1 }{ u_j } \left( x^{(0)} \right) 
=
\sum_{\ell=0}^{n-1} x_\ell^{(1)} \left[
w_0 * \DD{ F_0 }{ u_\ell }{ u_j } \left( x^{(0)} \right)
+ \cdots + 
w_{m-1} * \DD{ F_{m-1} }{ u_\ell }{ u_j } \left( x^{(0)} \right)
\right]
\] 


5.6.2.2.h: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Base. The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.6.2.2.i: Hessian Times Direction
Suppose that  w is the i-th elementary vector. It follows that for  j = 0, \ldots, n-1  \[
\begin{array}{rcl}
dw[ j * 2 + 1 ] 
& = & 
w_i \sum_{\ell=0}^{n-1} 
\DD{F_i}{ u_j }{ u_\ell } \left( x^{(0)} \right) x_\ell^{(1)} 
\\
& = &
\left[ F_i^{(2)} \left( x^{(0)} \right) * x^{(1)} \right]_j
\end{array}
\] 
Thus the vector  ( dw[1], dw[3], \ldots , dw[ 2 * n - 1 ] ) is equal to the Hessian of  F_i (x) times the direction  x^{(1)} . In the special case where  x^{(1)} is the \ell-th 9.4.f: elementary vector ,  \[
dw[ j * 2 + 1 ] = \DD{ F_i }{ x_j }{ x_\ell } \left( x^{(0)} \right)
\] 
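This identity can be checked numerically without any AD machinery. The sketch below (the function names are illustrative only, not part of the CppAD API) uses the same function  F(x) = x_0^2 x_1 as reverse_two.cpp: it forms the analytic Hessian, multiplies by a direction, and compares the result with a central difference of the gradient along that direction.

```cpp
# include <cassert>
# include <cmath>

// analytic gradient of F(x) = x[0] * x[0] * x[1]
static void gradient(const double x[2], double g[2])
{	g[0] = 2. * x[0] * x[1];   // partial w.r.t. x[0]
	g[1] = x[0] * x[0];        // partial w.r.t. x[1]
}

// hdx = F''(x) * dx using the analytic Hessian
//     [ 2*x[1]   2*x[0] ]
//     [ 2*x[0]   0      ]
void hess_times_dir(const double x[2], const double dx[2], double hdx[2])
{	hdx[0] = 2.*x[1]*dx[0] + 2.*x[0]*dx[1];
	hdx[1] = 2.*x[0]*dx[0];
}

// compare F''(x) * dx with a central difference of the gradient
bool check_hess_times_dir(void)
{	double x[2]  = {3., 4.};
	double dx[2] = {1., 1.};
	double hdx[2];
	hess_times_dir(x, dx, hdx);

	double h = 1e-6, xp[2], xm[2], gp[2], gm[2];
	for(int j = 0; j < 2; j++)
	{	xp[j] = x[j] + h * dx[j];
		xm[j] = x[j] - h * dx[j];
	}
	gradient(xp, gp);
	gradient(xm, gm);
	bool ok = true;
	for(int j = 0; j < 2; j++)
		ok &= std::fabs( (gp[j] - gm[j]) / (2.*h) - hdx[j] ) < 1e-4;
	return ok;
}
```

The values hdx[0] and hdx[1] here correspond to dw[0*2+1] and dw[1*2+1] in the reverse_two.cpp example below.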


5.6.2.2.j: Example
The files 5.6.2.2.1: reverse_two.cpp and 5.6.2.2.2: HesTimesDir.cpp contain examples and tests of reverse mode calculations. They return true if they succeed and false otherwise.
Input File: omh/reverse.omh
5.6.2.2.1: Second Order Reverse Mode: Example and Test
 
# include <cppad/cppad.hpp>
namespace { // ----------------------------------------------------------
// define the template function reverse_two_cases<Vector> in empty namespace
template <typename Vector> 
bool reverse_two_cases(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n = 2;
	CPPAD_TEST_VECTOR< AD<double> > X(n);
	X[0] = 0.; 
	X[1] = 1.;

	// declare independent variables and start recording
	CppAD::Independent(X);

	// range space vector
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > Y(m);
	Y[0] = X[0] * X[0] * X[1];

	// create f : X -> Y and stop recording
	CppAD::ADFun<double> f(X, Y);

	// use zero order forward mode to evaluate y at x = (3, 4)
	// use the template parameter Vector for the vector type
	Vector x(n), y(m);
	x[0]  = 3.;
	x[1]  = 4.;
	y     = f.Forward(0, x);
	ok    &= NearEqual(y[0] , x[0]*x[0]*x[1], 1e-10, 1e-10);

	// use first order forward mode in x[0] direction
	// (all second order partials below involve x[0])
	Vector dx(n), dy(m);
	dx[0] = 1.;
	dx[1] = 1.;
	dy    = f.Forward(1, dx);
	double check = 2.*x[0]*x[1]*dx[0] + x[0]*x[0]*dx[1];
	ok   &= NearEqual(dy[0], check, 1e-10, 1e-10);

	// use second order reverse mode to evaluate second partials of y[0]
	// with respect to (x[0], x[0]) and with respect to (x[0], x[1])
	Vector w(m), dw( n * 2 );
	w[0]  = 1.;
	dw    = f.Reverse(2, w);

	// check derivative of f
	ok   &= NearEqual(dw[0*2+0] , 2.*x[0]*x[1], 1e-10, 1e-10);
	ok   &= NearEqual(dw[1*2+0] ,    x[0]*x[0], 1e-10, 1e-10);

	// check derivative of f^{(1)} (x) * dx
	check = 2.*x[1]*dx[0] + 2.*x[0]*dx[1];
	ok   &= NearEqual(dw[0*2+1] , check, 1e-10, 1e-10); 
	check = 2.*x[0]*dx[0];
	ok   &= NearEqual(dw[1*2+1] , check, 1e-10, 1e-10); 

	return ok;
}
} // End empty namespace 
# include <vector>
# include <valarray>
bool reverse_two(void)
{	bool ok = true;
	ok &= reverse_two_cases< CppAD::vector  <double> >();
	ok &= reverse_two_cases< std::vector    <double> >();
	ok &= reverse_two_cases< std::valarray  <double> >();
	return ok;
}

Input File: example/reverse_two.cpp
5.6.2.2.2: Hessian Times Direction: Example and Test
 
// Example and test of computing the Hessian times a direction; i.e.,
// given F : R^n -> R and a direction dx in R^n, we compute F''(x) * dx

# include <cppad/cppad.hpp>

namespace { // put this function in the empty namespace
	// F(x) = |x|^2 = x[0]^2 + ... + x[n-1]^2
	template <class Type>
	Type F(CPPAD_TEST_VECTOR<Type> &x)
	{	Type sum = 0;
		size_t i = x.size();
		while(i--)
			sum += x[i] * x[i];
		return sum;
	} 
}

bool HesTimesDir() 
{	bool ok = true;                   // initialize test result
	size_t j;                         // a domain variable index

	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n = 5; 
	CPPAD_TEST_VECTOR< AD<double> >  X(n);
	for(j = 0; j < n; j++)
		X[j] = AD<double>(j); 

	// declare independent variables and start recording
	CppAD::Independent(X);

	// range space vector
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > Y(m);
	Y[0] = F(X);

	// create f : X -> Y and stop recording
	CppAD::ADFun<double> f(X, Y);

	// choose a direction dx and compute dy(x) = F'(x) * dx
	CPPAD_TEST_VECTOR<double> dx(n);
	CPPAD_TEST_VECTOR<double> dy(m);
	for(j = 0; j < n; j++)
		dx[j] = double(n - j);
	dy = f.Forward(1, dx);

	// compute ddw = F''(x) * dx
	CPPAD_TEST_VECTOR<double> w(m);
	CPPAD_TEST_VECTOR<double> ddw(2 * n);
	w[0] = 1.;
	ddw  = f.Reverse(2, w);
	
	// F(x)        = x[0]^2 + x[1]^2 + ... + x[n-1]^2
	// F''(x)      = 2 * Identity_Matrix
	// F''(x) * dx = 2 * dx
	for(j = 0; j < n; j++)
		ok &= NearEqual(ddw[j * 2 + 1], 2.*dx[j], 1e-10, 1e-10);

	return ok;
}

Input File: example/hes_times_dir.cpp
5.6.2.3: Any Order Reverse Mode

5.6.2.3.a: Syntax
dw = f.Reverse(p, w)

5.6.2.3.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. Reverse mode computes the derivative of the 5.6.1: Forward mode 9.4.k: Taylor coefficients with respect to the domain variable  x .

5.6.2.3.c: x^(k)
For  k = 0, \ldots , p-1 , the vector  x^{(k)} \in B^n is defined as the value of x_k in the previous call (counting this call) of the form
     f.Forward(k, x_k)
If there is no previous call with  k = 0 ,  x^{(0)} is the value of the independent variables when the corresponding AD of Base 9.4.g.b: operation sequence was recorded.

5.6.2.3.d: X(t, u)
The function  X : B \times B^n \rightarrow B^n is defined using a sequence of Taylor coefficients  x^{(k)} \in B^n :  \[
     X ( t , u ) = u + x^{(1)} * t + \cdots + x^{(p-1)} * t^{p-1} 
\] 
Note that for  k = 1 , \ldots , p-1 ,  x^{(k)} is related to the k-th partial of  X(t, u) with respect to  t by  \[
     x^{(k)} = \frac{1}{k !} \Dpow{k}{t} X(0, u) 
\] 
Hence, these partial derivatives are constant; i.e., do not depend on  u .

5.6.2.3.e: W(t, u)
The function  W : B \times B^n \rightarrow B is defined by  \[
W(t, u) = w_0 * F_0 [ X(t,u) ] + \cdots + w_{m-1} * F_{m-1} [ X(t, u) ]
\]
For  k = 0 , \ldots , p-1 , we use  W_k : B^n \rightarrow B^m to denote the k-th order Taylor coefficient of  W(t, u) with respect to  t ; i.e.,  \[
\begin{array}{rcl}
     W_k (u) & = & \frac{1}{k !} \Dpow{k}{t} W( 0 , u ) 
     \\
     W(t, u) & = & W_0 (u) + W_1 (u) * t 
             + \cdots + W_{p-1} (u) * t^{p-1}
             + o ( t^{p-1} )
\end{array}
\] 
where  o ( t^{p-1} ) / t^{p-1} \rightarrow 0 as  t \rightarrow 0 .
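The Taylor coefficients  W_k (u) can be checked numerically, independent of CppAD. The sketch below (names are illustrative, not CppAD calls) takes  F(x) = x_0 x_1 x_2 with  w = (1) , forms  W(t, u) directly, and verifies that a hand-computed first coefficient  W_1 (u) matches the derivative of  W with respect to  t at  t = 0 .

```cpp
# include <cassert>
# include <cmath>

// W(t, u) = F[ X(t, u) ] with F(x) = x[0]*x[1]*x[2], w = (1),
// and X(t, u) = u + dx * t
static double W(double t, const double u[3], const double dx[3])
{	return (u[0] + dx[0]*t) * (u[1] + dx[1]*t) * (u[2] + dx[2]*t);
}

// first order Taylor coefficient W_1(u) = partial_t W(0, u),
// computed by hand with the product rule
double W1(const double u[3], const double dx[3])
{	return u[0]*u[1]*dx[2] + u[0]*u[2]*dx[1] + u[1]*u[2]*dx[0];
}

// compare W_1(u) with a central difference of W(t, u) at t = 0
bool check_W1(void)
{	double u[3]  = {2., 3., 4.};
	double dx[3] = {.2, .3, .4};
	double h     = 1e-5;
	double fd    = ( W(h, u, dx) - W(-h, u, dx) ) / (2. * h);
	return std::fabs(fd - W1(u, dx)) < 1e-8;
}
```

The same values of u and dx are used by the reverse_any.cpp example below, where W1(u, dx) equals the first order forward mode result.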

5.6.2.3.f: f
The object f has prototype
     const ADFun<Base> f
Before this call to Reverse, the value returned by
     f.size_taylor()
must be greater than or equal to p (see 5.6.1.4: size_taylor ).

5.6.2.3.g: p
The argument p has prototype
     size_t p
and specifies the number of Taylor coefficients to be differentiated.

5.6.2.3.h: w
The argument w has prototype
     const Vector &w
(see 5.6.2.3.j: Vector below) and its size must be equal to m, the dimension of the 5.5.e: range space for f. It specifies the weighting vector w in the definition of  W(t, u) above.

5.6.2.3.i: dw
The return value dw has prototype
     Vector dw
(see 5.6.2.3.j: Vector below). It is a vector with size  n \times p . For  j = 0, \ldots, n-1 and  k = 0 , \ldots , p-1  \[
     dw[ j * p + k ] = \D{ W_k }{ u_j } \left( x^{(0)} \right) 
\] 


5.6.2.3.i.a: First Order
The first order derivatives computed by this call can be expressed as  \[
\begin{array}{rcl}
W_0^{(1)} \left( x^{(0)} \right) 
& = &
w_0 * \left[ \D{ F_0 \circ X }{ u }  ( 0, u ) \right]_{u = x^{(0)}} 
+ \cdots + 
w_{m-1} * \left[ \D{ F_{m-1} \circ X }{ u }  ( 0, u ) \right]_{u = x^{(0)}} 
\\
& = &
w_0 * F_0^{(1)} \left( x^{(0)} \right) 
+ \cdots + 
w_{m-1} * F_{m-1}^{(1)} \left( x^{(0)} \right) 
\end{array}
\] 
This is the same as the result documented in 5.6.2.1: reverse_one .

5.6.2.3.i.b: Second Order
The second order derivatives computed by this call can be expressed as  \[
\begin{array}{rcl}
W_1^{(1)} \left( x^{(0)} \right) 
& = &
w_0 * \left[ 
     \D{}{u} \D{ F_0 \circ X }{ t }  ( 0, u ) 
\right]_{u = x^{(0)}} 
+ \cdots + w_{m-1} * \left[ 
     \D{}{u} \D{ F_{m-1} \circ X }{ t }  ( 0, u ) 
\right]_{u = x^{(0)}} 
\\
& = &
w_0 * \left[ \D{ }{ u } F_0^{(1)} ( u ) * x^{(1)} \right]_{u = x^{(0)}} 
+ \cdots + 
w_{m-1} * \left[ \D{ }{ u } F_{m-1}^{(1)} ( u ) * x^{(1)} \right]_{u = x^{(0)}} 
\\
& = &
w_0 * \left( x^{(1)} \right)^\T * F_0^{(2)} \left( x^{(0)} \right) 
+ \cdots + 
w_{m-1} * \left( x^{(1)} \right)^\T * F_{m-1}^{(2)} \left( x^{(0)} \right) 
\end{array}
\] 
This is the same as the result documented in 5.6.2.2: reverse_two .

5.6.2.3.j: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Base. The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.6.2.3.k: Example
The file 5.6.2.3.1: reverse_any.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: omh/reverse.omh
5.6.2.3.1: Any Order Reverse Mode: Example and Test
 
# include <cppad/cppad.hpp>
namespace { // ----------------------------------------------------------
// define the template function reverse_any_cases<Vector> in empty namespace
template <typename Vector> 
bool reverse_any_cases(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;

	// domain space vector
	size_t n = 3;
	CPPAD_TEST_VECTOR< AD<double> > X(n);
	X[0] = 0.; 
	X[1] = 1.;
	X[2] = 2.;

	// declare independent variables and start recording
	CppAD::Independent(X);

	// range space vector
	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> > Y(m);
	Y[0] = X[0] * X[1] * X[2];

	// create f : X -> Y and stop recording
	CppAD::ADFun<double> f(X, Y);

	// define W(t, u) = (u_0 + dx_0*t)*(u_1 + dx_1*t)*(u_2 + dx_2*t)
	// use zero order forward to evaluate W0(u) = W(0, u)
	Vector u(n), W0(m);
	u[0]    = 2.;
	u[1]    = 3.;
	u[2]    = 4.;
	W0      = f.Forward(0, u);
	double check;
	check   =  u[0]*u[1]*u[2];
	ok     &= NearEqual(W0[0] , check, 1e-10, 1e-10);

	// define W_t(t, u) = partial W(t, u) w.r.t t
	// W_t(t, u)  = (u_0 + dx_0*t)*(u_1 + dx_1*t)*dx_2
	//            + (u_0 + dx_0*t)*(u_2 + dx_2*t)*dx_1
	//            + (u_1 + dx_1*t)*(u_2 + dx_2*t)*dx_0
	// use first order forward mode to evaluate W1(u) = W_t(0, u)
	Vector dx(n), W1(m);
	dx[0] = .2;
	dx[1] = .3;
	dx[2] = .4;
	W1    = f.Forward(1, dx);
	check =  u[0]*u[1]*dx[2] + u[0]*u[2]*dx[1] + u[1]*u[2]*dx[0];
	ok   &= NearEqual(W1[0], check, 1e-10, 1e-10);

	// define W_tt (t, u) = partial W_t(t, u) w.r.t t
	// W_tt(t, u) = 2*(u_0 + dx_0*t)*dx_1*dx_2
	//            + 2*(u_1 + dx_1*t)*dx_0*dx_2
	//            + 2*(u_2 + dx_2*t)*dx_0*dx_1
	// use second order forward to evaluate W2(u) = 1/2 * W_tt(0, u)
	Vector ddx(n), W2(m);
	ddx[0] = ddx[1] = ddx[2] = 0.;
	W2     = f.Forward(2, ddx);
	check  =  u[0]*dx[1]*dx[2] + u[1]*dx[0]*dx[2] + u[2]*dx[0]*dx[1];
	ok    &= NearEqual(W2[0], check, 1e-10, 1e-10);

	// use third order reverse mode to evaluate derivatives
	size_t p = 3;
	Vector w(m), dw(n * p);
	w[0]   = 1.;
	dw     = f.Reverse(p, w);

	// check derivative of W0(u) w.r.t. u
	ok    &= NearEqual(dw[0*p+0], u[1]*u[2], 1e-10, 1e-10);
	ok    &= NearEqual(dw[1*p+0], u[0]*u[2], 1e-10, 1e-10);
	ok    &= NearEqual(dw[2*p+0], u[0]*u[1], 1e-10, 1e-10);

	// check derivative of W1(u) w.r.t. u
	ok    &= NearEqual(dw[0*p+1], u[1]*dx[2] + u[2]*dx[1], 1e-10, 1e-10);
	ok    &= NearEqual(dw[1*p+1], u[0]*dx[2] + u[2]*dx[0], 1e-10, 1e-10);
	ok    &= NearEqual(dw[2*p+1], u[0]*dx[1] + u[1]*dx[0], 1e-10, 1e-10);

	// check derivative of W2(u) w.r.t u
	ok    &= NearEqual(dw[0*p+2], dx[1]*dx[2], 1e-10, 1e-10);
	ok    &= NearEqual(dw[1*p+2], dx[0]*dx[2], 1e-10, 1e-10);
	ok    &= NearEqual(dw[2*p+2], dx[0]*dx[1], 1e-10, 1e-10);

	return ok;
}
} // End empty namespace 
# include <vector>
# include <valarray>
bool reverse_any(void)
{	bool ok = true;
	ok &= reverse_any_cases< CppAD::vector  <double> >();
	ok &= reverse_any_cases< std::vector    <double> >();
	ok &= reverse_any_cases< std::valarray  <double> >();
	return ok;
}

Input File: example/reverse_any.cpp
5.6.3: Calculating Sparsity Patterns

5.6.3.a: Contents
ForSparseJac: 5.6.3.1Jacobian Sparsity Pattern: Forward Mode
RevSparseJac: 5.6.3.2Jacobian Sparsity Pattern: Reverse Mode
RevSparseHes: 5.6.3.3Hessian Sparsity Pattern: Reverse Mode

Input File: cppad/local/sparse.hpp
5.6.3.1: Jacobian Sparsity Pattern: Forward Mode

5.6.3.1.a: Syntax
s = f.ForSparseJac(q, r)

5.6.3.1.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. For a fixed  n \times q matrix  R , the Jacobian of  F[ x + R * u ] with respect to  u at  u = 0 is  \[
     J(x) = F^{(1)} ( x ) * R
\] 
Given a 9.4.i: sparsity pattern for  R , ForSparseJac returns a sparsity pattern for  J(x) .

5.6.3.1.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const. After this call, the sparsity pattern for each of the variables in the operation sequence is stored in the object f.

5.6.3.1.d: x
If no 4.6: VecAD objects are used by the AD of Base 9.4.g.b: operation sequence stored in f, the sparsity pattern is valid for all values  x \in B^n .

If 5.5.g: f.use_VecAD is true, the sparsity pattern is only valid for the value of x in the previous 5.6.1.1: zero order forward mode call
     f.Forward(0, x)
If there is no previous zero order forward mode call using f, the value of the 5.1: independent variables during the recording of the AD sequence of operations is used for x.

5.6.3.1.e: q
The argument q has prototype
     size_t 
q
It specifies the number of columns in the Jacobian  J(x) . Note that the memory required for the calculation is proportional to  q times the total number of variables in the AD operation sequence corresponding to f (5.5.h: f.size_var ). Smaller values for q can be used to break the sparsity calculation into groups that do not require too much memory.
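One way to exploit this is to sweep the identity matrix q columns at a time. The helper below is only a sketch of building such a column-group pattern (it is not part of CppAD); each group would then be passed in turn as the r argument of f.ForSparseJac(q, r).

```cpp
# include <cassert>
# include <cstddef>
# include <vector>

// sparsity pattern for the n x q matrix R whose columns are the
// identity columns g, g+1, ..., g+q-1; stored row-major so that
// r[ i * q + j ] corresponds to R_{i,j}
std::vector<bool> column_group(size_t n, size_t q, size_t g)
{	std::vector<bool> r(n * q, false);
	for(size_t j = 0; j < q; j++)
		if( g + j < n )
			r[ (g + j) * q + j ] = true;
	return r;
}
```

Calling column_group(n, q, g) for g = 0, q, 2q, ... covers all n columns while keeping each ForSparseJac call's memory proportional to q, not n.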

5.6.3.1.f: r
The argument r has prototype
     const Vector &r
(see 5.6.3.1.h: Vector below) and its size is  n * q . It specifies a 9.4.i: sparsity pattern for the matrix R as follows: for  i = 0 , \ldots , n-1 and  j = 0 , \ldots , q-1 .  \[
     R_{i,j} \neq 0 \; \Rightarrow \; r [ i * q + j ] = {\rm true}
\] 


5.6.3.1.g: s
The return value s has prototype
     Vector s
(see 5.6.3.1.h: Vector below) and its size is  m * q . It specifies a 9.4.i: sparsity pattern for the matrix  J(x) as follows: for  x \in B^n , for  i = 0 , \ldots , m-1 , and  j = 0 , \ldots , q-1  \[
     J(x)_{i,j} \neq 0 \; \Rightarrow \; s [ i * q + j ] = {\rm true}
\] 


5.6.3.1.h: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type bool . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case. In order to save memory, you may want to use a class that packs multiple elements into one storage location; for example, 6.23.j: vectorBool .

5.6.3.1.i: Entire Sparsity Pattern
Suppose that  q = n and  R is the  n \times n identity matrix. It follows that  \[
r [ i * q + j ] = \left\{ \begin{array}{ll}
     {\rm true}  & {\rm if} \; i = j \\
     {\rm false} & {\rm otherwise}
\end{array} \right. 
\] 
is an efficient sparsity pattern for  R ; i.e., the choice for r has as few true values as possible. In this case, the corresponding value for s is a sparsity pattern for the Jacobian  J(x) = F^{(1)} ( x ) .

5.6.3.1.j: Example
The file 5.6.3.1.1: ForSparseJac.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/local/for_sparse_jac.hpp
5.6.3.1.1: Forward Mode Jacobian Sparsity: Example and Test
 

# include <cppad/cppad.hpp>
namespace { // -------------------------------------------------------------
// define the template function ForSparseJacCases<Vector> in empty namespace
template <typename Vector> 
bool ForSparseJacCases(void)
{	bool ok = true;
	using CppAD::AD;

	// domain space vector
	size_t n = 2; 
	CPPAD_TEST_VECTOR< AD<double> > X(n);
	X[0] = 0.; 
	X[1] = 1.;

	// declare independent variables and start recording
	CppAD::Independent(X);

	// range space vector
	size_t m = 3;
	CPPAD_TEST_VECTOR< AD<double> > Y(m);
	Y[0] = X[0];
	Y[1] = X[0] * X[1];
	Y[2] = X[1];

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// sparsity pattern for the identity matrix
	Vector r(n * n);
	size_t i, j;
	for(i = 0; i < n; i++)
	{	for(j = 0; j < n; j++)
			r[ i * n + j ] = false;
		r[ i * n + i ] = true;
	}

	// sparsity pattern for F'(x)
	Vector s(m * n);
	s = f.ForSparseJac(n, r);

	// check values
	ok &= (s[ 0 * n + 0 ] == true);  // Y[0] does     depend on X[0]
	ok &= (s[ 0 * n + 1 ] == false); // Y[0] does not depend on X[1]
	ok &= (s[ 1 * n + 0 ] == true);  // Y[1] does     depend on X[0]
	ok &= (s[ 1 * n + 1 ] == true);  // Y[1] does     depend on X[1]
	ok &= (s[ 2 * n + 0 ] == false); // Y[2] does not depend on X[0]
	ok &= (s[ 2 * n + 1 ] == true);  // Y[2] does     depend on X[1]

	return ok;
}
} // End empty namespace
# include <vector>
# include <valarray>
bool ForSparseJac(void)
{	bool ok = true;
	// Run with Vector equal to four different cases
	// all of which are Simple Vectors with elements of type bool.
	ok &= ForSparseJacCases< CppAD::vectorBool     >();
	ok &= ForSparseJacCases< CppAD::vector  <bool> >();
	ok &= ForSparseJacCases< std::vector    <bool> >(); 
	ok &= ForSparseJacCases< std::valarray  <bool> >(); 

	return ok;
}


Input File: example/for_sparse_jac.cpp
5.6.3.2: Jacobian Sparsity Pattern: Reverse Mode

5.6.3.2.a: Syntax
r = f.RevSparseJac(p, s)

5.6.3.2.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. For a fixed  p \times m matrix  S , the Jacobian of  S * F( x ) with respect to  x is  \[
     J(x) = S * F^{(1)} ( x )
\] 
Given a 9.4.i: sparsity pattern for  S , RevSparseJac returns a sparsity pattern for  J(x) .

5.6.3.2.c: f
The object f has prototype
     const ADFun<Base> f

5.6.3.2.d: x
If no 4.6: VecAD objects are used by the AD of Base 9.4.g.b: operation sequence stored in f, the sparsity pattern is valid for all values  x \in B^n .

If 5.5.g: f.use_VecAD is true, the sparsity pattern is only valid for the value of x in the previous 5.6.1.1: zero order forward mode call
     f.Forward(0, x)
If there is no previous zero order forward mode call using f, the value of the 5.1: independent variables during the recording of the AD sequence of operations is used for x.

5.6.3.2.e: p
The argument p has prototype
     size_t 
p
It specifies the number of rows in the Jacobian  J(x) . Note that the memory required for the calculation is proportional to  p times the total number of variables in the AD operation sequence corresponding to f (5.5.h: f.size_var ). Smaller values for p can be used to break the sparsity calculation into groups that do not require too much memory.

5.6.3.2.f: s
The argument s has prototype
     const Vector &s
(see 5.6.3.2.h: Vector below) and its size is  p * m . It specifies a 9.4.i: sparsity pattern for the matrix S as follows: for  i = 0 , \ldots , p-1 and  j = 0 , \ldots , m-1 .  \[
     S_{i,j} \neq 0 \; \Rightarrow \; s [ i * m + j ] = {\rm true}
\] 


5.6.3.2.g: r
The return value r has prototype
     Vector r
(see 5.6.3.2.h: Vector below) and its size is  p * n . It specifies a 9.4.i: sparsity pattern for the matrix  J(x) as follows: for  x \in B^n , for  i = 0 , \ldots , p-1 , and  j = 0 , \ldots , n-1  \[
     J(x)_{i,j} \neq 0 \; \Rightarrow \; r [ i * n + j ] = {\rm true}
\] 


5.6.3.2.h: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type bool . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case. In order to save memory, you may want to use a class that packs multiple elements into one storage location; for example, 6.23.j: vectorBool .

5.6.3.2.i: Entire Sparsity Pattern
Suppose that  p = m and  S is the  m \times m identity matrix. It follows that  \[
s [ i * m + j ] = \left\{ \begin{array}{ll}
     {\rm true}  & {\rm if} \; i = j \\
     {\rm false} & {\rm otherwise}
\end{array} \right. 
\] 
is an efficient sparsity pattern for  S ; i.e., the choice for s has as few true values as possible. In this case, the corresponding value for r is a sparsity pattern for the Jacobian  J(x) = F^{(1)} ( x ) .

5.6.3.2.j: Example
The file 5.6.3.2.1: RevSparseJac.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/local/rev_sparse_jac.hpp
5.6.3.2.1: Reverse Mode Jacobian Sparsity: Example and Test
 

# include <cppad/cppad.hpp>
namespace { // -------------------------------------------------------------
// define the template function RevSparseJacCases<Vector> in empty namespace
template <typename Vector> 
bool RevSparseJacCases(void)
{	bool ok = true;
	using CppAD::AD;

	// domain space vector
	size_t n = 2; 
	CPPAD_TEST_VECTOR< AD<double> > X(n);
	X[0] = 0.; 
	X[1] = 1.;

	// declare independent variables and start recording
	CppAD::Independent(X);

	// range space vector
	size_t m = 3;
	CPPAD_TEST_VECTOR< AD<double> > Y(m);
	Y[0] = X[0];
	Y[1] = X[0] * X[1];
	Y[2] = X[1];

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// sparsity pattern for the identity matrix
	Vector s(m * m);
	size_t i, j;
	for(i = 0; i < m; i++)
	{	for(j = 0; j < m; j++)
			s[ i * m + j ] = false;
		s[ i * m + i ] = true;
	}

	// sparsity pattern for F'(x)
	Vector r(m * n);
	r = f.RevSparseJac(m, s);

	// check values
	ok &= (r[ 0 * n + 0 ] == true);  // Y[0] does     depend on X[0]
	ok &= (r[ 0 * n + 1 ] == false); // Y[0] does not depend on X[1]
	ok &= (r[ 1 * n + 0 ] == true);  // Y[1] does     depend on X[0]
	ok &= (r[ 1 * n + 1 ] == true);  // Y[1] does     depend on X[1]
	ok &= (r[ 2 * n + 0 ] == false); // Y[2] does not depend on X[0]
	ok &= (r[ 2 * n + 1 ] == true);  // Y[2] does     depend on X[1]

	return ok;
}
} // End empty namespace
# include <vector>
# include <valarray>
bool RevSparseJac(void)
{	bool ok = true;
	// Run with Vector equal to four different cases
	// all of which are Simple Vectors with elements of type bool.
	ok &= RevSparseJacCases< CppAD::vectorBool     >();
	ok &= RevSparseJacCases< CppAD::vector  <bool> >();
	ok &= RevSparseJacCases< std::vector    <bool> >(); 
	ok &= RevSparseJacCases< std::valarray  <bool> >(); 

	return ok;
}


Input File: example/rev_sparse_jac.cpp
5.6.3.3: Hessian Sparsity Pattern: Reverse Mode

5.6.3.3.a: Syntax
h = f.RevSparseHes(q, s)

5.6.3.3.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. For a fixed  n \times q matrix  R and a fixed  1 \times m matrix  S , the second-order cross partial of  S * F[ x + R * u ] , with respect to  u at  u = 0 and with respect to x, is  \[
     H(x)  =  R^T * (S * F)^{(2)} ( x )
\] 
where  (S * F)^{(2)} (x) is the Hessian of the scalar valued function  S * F (x) . Given a 9.4.i: sparsity pattern for  R and  S , RevSparseHes returns a sparsity pattern for  H(x) .

5.6.3.3.c: f
The object f has prototype
     const ADFun<Base> f

5.6.3.3.d: x
If no 4.6: VecAD objects are used by the AD of Base 9.4.g.b: operation sequence stored in f, the sparsity pattern is valid for all values  x \in B^n .

If 5.5.g: f.use_VecAD is true, the sparsity pattern is only valid for the value of x in the previous 5.6.1.1: zero order forward mode call
     f.Forward(0, x)
If there is no previous zero order forward mode call using f, the value of the 5.1: independent variables during the recording of the AD sequence of operations is used for x.

5.6.3.3.e: q
The argument q has prototype
     size_t q
It specifies the number of columns in the Jacobian  J(x) . It must be the same value as in the previous 5.6.3.1: ForSparseJac call
     f.ForSparseJac(q, r)
Note that the memory required for the calculation is proportional to  q times the total number of variables in the AD operation sequence corresponding to f (5.5.h: f.size_var ).

5.6.3.3.f: r
The argument r in the previous call
     f.ForSparseJac(q, r)
is a sparsity pattern for the matrix  R above; i.e., for  i = 0 , \ldots , n-1 and  j = 0 , \ldots , q-1 .  \[
     R_{i,j} \neq 0 \; \Rightarrow \; r [ i * q + j ] = {\rm true}
\] 


5.6.3.3.g: s
The argument s has prototype
     const Vector &s
(see 5.6.3.3.i: Vector below) and its size is  m . It specifies a 9.4.i: sparsity pattern for the matrix S as follows: for  j = 0 , \ldots , m-1 .  \[
     S_{0,j} \neq 0 \; \Rightarrow \; s [ j ] = {\rm true}
\] 


5.6.3.3.h: h
The result h has prototype
     Vector h
(see 5.6.3.3.i: Vector below) and its size is  q * n . It specifies a 9.4.i: sparsity pattern for the matrix  H(x) as follows: for  x \in B^n , for  i = 0 , \ldots , q-1 , and  j = 0 , \ldots , n-1  \[
     H(x)_{i,j} \neq 0 \; \Rightarrow \; h [ i * n + j ] = {\rm true}
\] 


5.6.3.3.i: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type bool . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case. In order to save memory, you may want to use a class that packs multiple elements into one storage location; for example, 6.23.j: vectorBool .

5.6.3.3.j: Entire Sparsity Pattern
Suppose that  q = n and  R is the  n \times n identity matrix. It follows that  \[
r [ i * q + j ] = \left\{ \begin{array}{ll}
     {\rm true}  & {\rm if} \; i = j \\
     {\rm false} & {\rm otherwise}
\end{array} \right. 
\] 
is an efficient sparsity pattern for  R ; i.e., the choice for r has as few true values as possible. Further suppose that  S is the k-th 9.4.f: elementary vector . It follows that  \[
s [ j ] = \left\{ \begin{array}{ll}
     {\rm true}  & {\rm if} \; j = k \\
     {\rm false} & {\rm otherwise}
\end{array} \right. 
\] 
is an efficient sparsity pattern for  S . In the case defined above, the result h corresponds to a sparsity pattern for the Hessian  F_k^{(2)} (x) .

5.6.3.3.k: Example
The file 5.6.3.3.1: RevSparseHes.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/local/rev_sparse_hes.hpp
5.6.3.3.1: Reverse Mode Hessian Sparsity: Example and Test
 

# include <cppad/cppad.hpp>
namespace { // -------------------------------------------------------------
// define the template function RevSparseHesCases<Vector> in empty namespace
template <typename Vector> // vector class, elements of type bool
bool RevSparseHesCases(void)
{	bool ok = true;
	using CppAD::AD;

	// domain space vector
	size_t n = 3; 
	CPPAD_TEST_VECTOR< AD<double> > X(n);
	X[0] = 0.; 
	X[1] = 1.;
	X[2] = 2.;

	// declare independent variables and start recording
	CppAD::Independent(X);

	// range space vector
	size_t m = 2;
	CPPAD_TEST_VECTOR< AD<double> > Y(m);
	Y[0] = sin( X[2] );
	Y[1] = X[0] * X[1];

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// sparsity pattern for the identity matrix
	Vector r(n * n);
	size_t i, j;
	for(i = 0; i < n; i++)
	{	for(j = 0; j < n; j++)
			r[ i * n + j ] = false;
		r[ i * n + i ] = true;
	}

	// compute sparsity pattern for J(x) = F^{(1)} (x)
	f.ForSparseJac(n, r);

	// compute sparsity pattern for H(x) = F_0^{(2)} (x)
	Vector s(m);
	for(i = 0; i < m; i++)
		s[i] = false;
	s[0] = true;
	Vector h(n * n);
	h    = f.RevSparseHes(n, s);

	// check values
	ok &= (h[ 0 * n + 0 ] == false);  // second partial w.r.t X[0], X[0]
	ok &= (h[ 0 * n + 1 ] == false);  // second partial w.r.t X[0], X[1]
	ok &= (h[ 0 * n + 2 ] == false);  // second partial w.r.t X[0], X[2]

	ok &= (h[ 1 * n + 0 ] == false);  // second partial w.r.t X[1], X[0]
	ok &= (h[ 1 * n + 1 ] == false);  // second partial w.r.t X[1], X[1]
	ok &= (h[ 1 * n + 2 ] == false);  // second partial w.r.t X[1], X[2]

	ok &= (h[ 2 * n + 0 ] == false);  // second partial w.r.t X[2], X[0]
	ok &= (h[ 2 * n + 1 ] == false);  // second partial w.r.t X[2], X[1]
	ok &= (h[ 2 * n + 2 ] == true);   // second partial w.r.t X[2], X[2]

	// compute sparsity pattern for H(x) = F_1^{(2)} (x)
	for(i = 0; i < m; i++)
		s[i] = false;
	s[1] = true;
	h    = f.RevSparseHes(n, s);

	// check values
	ok &= (h[ 0 * n + 0 ] == false);  // second partial w.r.t X[0], X[0]
	ok &= (h[ 0 * n + 1 ] == true);   // second partial w.r.t X[0], X[1]
	ok &= (h[ 0 * n + 2 ] == false);  // second partial w.r.t X[0], X[2]

	ok &= (h[ 1 * n + 0 ] == true);   // second partial w.r.t X[1], X[0]
	ok &= (h[ 1 * n + 1 ] == false);  // second partial w.r.t X[1], X[1]
	ok &= (h[ 1 * n + 2 ] == false);  // second partial w.r.t X[1], X[2]

	ok &= (h[ 2 * n + 0 ] == false);  // second partial w.r.t X[2], X[0]
	ok &= (h[ 2 * n + 1 ] == false);  // second partial w.r.t X[2], X[1]
	ok &= (h[ 2 * n + 2 ] == false);  // second partial w.r.t X[2], X[2]

	return ok;
}
} // End empty namespace

# include <vector>
# include <valarray>
bool RevSparseHes(void)
{	bool ok = true;
	// Run with Vector equal to four different cases
	// all of which are Simple Vectors with elements of type bool.
	ok &= RevSparseHesCases< CppAD::vector  <bool> >();
	ok &= RevSparseHesCases< CppAD::vectorBool     >();
	ok &= RevSparseHesCases< std::vector    <bool> >(); 
	ok &= RevSparseHesCases< std::valarray  <bool> >(); 

	return ok;
}



Input File: example/rev_sparse_hes.cpp
5.7: First and Second Derivatives: Easy Drivers

5.7.a: Contents
5.7.1: Jacobian: Driver Routine
5.7.2: First Order Partial Derivative: Driver Routine
5.7.3: First Order Derivative: Driver Routine
5.7.4: Hessian: Easy Driver
5.7.5: Forward Mode Second Partial Derivative Driver
5.7.6: Reverse Mode Second Partial Derivative Driver
5.7.7: Sparse Jacobian: Easy Driver
5.7.8: Sparse Hessian: Easy Driver

Input File: cppad/local/drivers.hpp
5.7.1: Jacobian: Driver Routine

5.7.1.a: Syntax
jac = f.Jacobian(x)

5.7.1.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. The syntax above sets jac to the Jacobian of F evaluated at x; i.e.,  \[
     jac = F^{(1)} (x)
\] 


5.7.1.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.7.1.g: Forward or Reverse below).

5.7.1.d: x
The argument x has prototype
     const Vector &x
(see 5.7.1.f: Vector below) and its size must be equal to n, the dimension of the 5.5.d: domain space for f. It specifies the point at which to evaluate the Jacobian.

5.7.1.e: jac
The result jac has prototype
     Vector jac
(see 5.7.1.f: Vector below) and its size is  m * n ; i.e., the product of the 5.5.d: domain and 5.5.e: range dimensions for f. For  i = 0 , \ldots , m - 1  and  j = 0 , \ldots , n - 1  \[
     jac[ i * n + j ] = \D{ F_i }{ x_j } ( x )
\] 


5.7.1.f: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Base. The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.7.1.g: Forward or Reverse
This will use order zero Forward mode and either order one Forward or order one Reverse to compute the Jacobian (depending on which it estimates will require less work). After each call to 5.6.1: Forward , the object f contains the corresponding 9.4.k: Taylor coefficients . After each call to Jacobian, the previous calls to 5.6.1: Forward are unspecified.

5.7.1.h: Example
The routine 5.7.1.1: Jacobian is both an example and test. It returns true if it succeeds and false otherwise.
Input File: cppad/local/jacobian.hpp
5.7.1.1: Jacobian: Example and Test
 

# include <cppad/cppad.hpp>
namespace { // ---------------------------------------------------------
// define the template function JacobianCases<Vector> in empty namespace
template <typename Vector> 
bool JacobianCases()
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;
	using CppAD::exp;
	using CppAD::sin;
	using CppAD::cos;

	// domain space vector
	size_t n = 2;
	CPPAD_TEST_VECTOR< AD<double> >  X(n);
	X[0] = 1.;
	X[1] = 2.;

	// declare independent variables and start recording
	CppAD::Independent(X);

	// a calculation between the domain and range values
	AD<double> Square = X[0] * X[0];

	// range space vector
	size_t m = 3;
	CPPAD_TEST_VECTOR< AD<double> >  Y(m);
	Y[0] = Square * exp( X[1] );
	Y[1] = Square * sin( X[1] );
	Y[2] = Square * cos( X[1] );

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// new value for the independent variable vector
	Vector x(n);
	x[0] = 2.;
	x[1] = 1.;

	// compute the derivative at this x
	Vector jac( m * n );
	jac = f.Jacobian(x);

	/*
	F'(x) = [ 2 * x[0] * exp(x[1]) ,  x[0] * x[0] * exp(x[1]) ]
	        [ 2 * x[0] * sin(x[1]) ,  x[0] * x[0] * cos(x[1]) ]
	        [ 2 * x[0] * cos(x[1]) , -x[0] * x[0] * sin(x[1]) ]
	*/
	ok &=  NearEqual( 2.*x[0]*exp(x[1]), jac[0*n+0], 1e-10, 1e-10 );
	ok &=  NearEqual( 2.*x[0]*sin(x[1]), jac[1*n+0], 1e-10, 1e-10 );
	ok &=  NearEqual( 2.*x[0]*cos(x[1]), jac[2*n+0], 1e-10, 1e-10 );

	ok &=  NearEqual( x[0] * x[0] *exp(x[1]), jac[0*n+1], 1e-10, 1e-10 );
	ok &=  NearEqual( x[0] * x[0] *cos(x[1]), jac[1*n+1], 1e-10, 1e-10 );
	ok &=  NearEqual(-x[0] * x[0] *sin(x[1]), jac[2*n+1], 1e-10, 1e-10 );

	return ok;
}
} // End empty namespace 
# include <vector>
# include <valarray>
bool Jacobian(void)
{	bool ok = true;
	// Run with Vector equal to three different cases
	// all of which are Simple Vectors with elements of type double.
	ok &= JacobianCases< CppAD::vector  <double> >();
	ok &= JacobianCases< std::vector    <double> >();
	ok &= JacobianCases< std::valarray  <double> >();
	return ok;
}

Input File: example/jacobian.cpp
5.7.2: First Order Partial Derivative: Driver Routine

5.7.2.a: Syntax
dy = f.ForOne(x, j)

5.7.2.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. The syntax above sets dy to the partial of  F with respect to  x_j ; i.e.,  \[
dy 
= \D{F}{ x_j } (x) 
= \left[ 
     \D{ F_0 }{ x_j } (x) , \cdots , \D{ F_{m-1} }{ x_j } (x) 
\right]
\] 


5.7.2.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.7.2.h: ForOne Uses Forward below).

5.7.2.d: x
The argument x has prototype
     const Vector &x
(see 5.7.2.g: Vector below) and its size must be equal to n, the dimension of the 5.5.d: domain space for f. It specifies the point at which to evaluate the partial derivative.

5.7.2.e: j
The argument j has prototype
     size_t j
and is less than n, the dimension of the 5.5.d: domain space for f. It specifies the component of x with respect to which we are computing the partial derivative.

5.7.2.f: dy
The result dy has prototype
     Vector dy
(see 5.7.2.g: Vector below) and its size is  m , the dimension of the 5.5.e: range space for f. The value of dy is the partial of  F with respect to  x_j evaluated at x; i.e., for  i = 0 , \ldots , m - 1  \[
     dy[i] = \D{ F_i }{ x_j } ( x )
\] 


5.7.2.g: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Base. The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.7.2.h: ForOne Uses Forward
After each call to 5.6.1: Forward , the object f contains the corresponding 9.4.k: Taylor coefficients . After ForOne, the previous calls to 5.6.1: Forward are undefined.

5.7.2.i: Example
The routine 5.7.2.1: ForOne is both an example and test. It returns true if it succeeds and false otherwise.
Input File: cppad/local/for_one.hpp
5.7.2.1: First Order Partial Driver: Example and Test
 
# include <cppad/cppad.hpp>
namespace { // -------------------------------------------------------
// define the template function ForOneCases<Vector> in empty namespace
template <typename Vector> 
bool ForOneCases()
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;
	using CppAD::exp;
	using CppAD::sin;
	using CppAD::cos;

	// domain space vector
	size_t n = 2;
	CPPAD_TEST_VECTOR< AD<double> >  X(n);
	X[0] = 1.;
	X[1] = 2.;

	// declare independent variables and start recording
	CppAD::Independent(X);

	// range space vector
	size_t m = 3;
	CPPAD_TEST_VECTOR< AD<double> >  Y(m);
	Y[0] = X[0] * exp( X[1] );
	Y[1] = X[0] * sin( X[1] );
	Y[2] = X[0] * cos( X[1] );

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// new value for the independent variable vector
	Vector x(n);
	x[0] = 2.;
	x[1] = 1.;

	// compute partial of y w.r.t x[0]
	Vector dy(m);
	dy  = f.ForOne(x, 0);
	ok &= NearEqual( dy[0], exp(x[1]), 1e-10, 1e-10 ); // for y[0]
	ok &= NearEqual( dy[1], sin(x[1]), 1e-10, 1e-10 ); // for y[1]
	ok &= NearEqual( dy[2], cos(x[1]), 1e-10, 1e-10 ); // for y[2]

	// compute partial of F w.r.t x[1]
	dy  = f.ForOne(x, 1);
	ok &= NearEqual( dy[0],  x[0]*exp(x[1]), 1e-10, 1e-10 );
	ok &= NearEqual( dy[1],  x[0]*cos(x[1]), 1e-10, 1e-10 );
	ok &= NearEqual( dy[2], -x[0]*sin(x[1]), 1e-10, 1e-10 );

	return ok;
}
} // End empty namespace 
# include <vector>
# include <valarray>
bool ForOne(void)
{	bool ok = true;
	// Run with Vector equal to three different cases
	// all of which are Simple Vectors with elements of type double.
	ok &= ForOneCases< CppAD::vector  <double> >();
	ok &= ForOneCases< std::vector    <double> >();
	ok &= ForOneCases< std::valarray  <double> >();
	return ok;
}

Input File: example/for_one.cpp
5.7.3: First Order Derivative: Driver Routine

5.7.3.a: Syntax
dw = f.RevOne(x, i)

5.7.3.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. The syntax above sets dw to the derivative of  F_i with respect to  x ; i.e.,  \[
dw =
F_i^{(1)} (x) 
= \left[ 
     \D{ F_i }{ x_0 } (x) , \cdots , \D{ F_i }{ x_{n-1} } (x) 
\right]
\] 


5.7.3.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.7.3.h: RevOne Uses Forward below).

5.7.3.d: x
The argument x has prototype
     const Vector &x
(see 5.7.3.g: Vector below) and its size must be equal to n, the dimension of the 5.5.d: domain space for f. It specifies the point at which to evaluate the derivative.

5.7.3.e: i
The index i has prototype
     size_t i
and is less than  m , the dimension of the 5.5.e: range space for f. It specifies the component of  F that we are computing the derivative of.

5.7.3.f: dw
The result dw has prototype
     Vector dw
(see 5.7.3.g: Vector below) and its size is n, the dimension of the 5.5.d: domain space for f. The value of dw is the derivative of  F_i evaluated at x; i.e., for  j = 0 , \ldots , n - 1   \[
     dw[ j ] = \D{ F_i }{ x_j } ( x )
\] 


5.7.3.g: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Base. The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.7.3.h: RevOne Uses Forward
After each call to 5.6.1: Forward , the object f contains the corresponding 9.4.k: Taylor coefficients . After RevOne, the previous calls to 5.6.1: Forward are undefined.

5.7.3.i: Example
The routine 5.7.3.1: RevOne is both an example and test. It returns true if it succeeds and false otherwise.
Input File: cppad/local/rev_one.hpp
5.7.3.1: First Order Derivative Driver: Example and Test
 
# include <cppad/cppad.hpp>
namespace { // -------------------------------------------------------
// define the template function RevOneCases<Vector> in empty namespace
template <typename Vector>
bool RevOneCases()
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;
	using CppAD::exp;
	using CppAD::sin;
	using CppAD::cos;

	// domain space vector
	size_t n = 2;
	CPPAD_TEST_VECTOR< AD<double> >  X(n);
	X[0] = 1.;
	X[1] = 2.;

	// declare independent variables and start recording
	CppAD::Independent(X);

	// range space vector
	size_t m = 3;
	CPPAD_TEST_VECTOR< AD<double> >  Y(m);
	Y[0] = X[0] * exp( X[1] );
	Y[1] = X[0] * sin( X[1] );
	Y[2] = X[0] * cos( X[1] );

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// new value for the independent variable vector
	Vector x(n);
	x[0] = 2.;
	x[1] = 1.;

	// compute and check derivative of y[0] 
	Vector dw(n);
	dw  = f.RevOne(x, 0);
	ok &= NearEqual(dw[0],      exp(x[1]), 1e-10, 1e-10 ); // w.r.t x[0]
	ok &= NearEqual(dw[1], x[0]*exp(x[1]), 1e-10, 1e-10 ); // w.r.t x[1]

	// compute and check derivative of y[1] 
	dw  = f.RevOne(x, 1);
	ok &= NearEqual(dw[0],      sin(x[1]), 1e-10, 1e-10 );
	ok &= NearEqual(dw[1], x[0]*cos(x[1]), 1e-10, 1e-10 );

	// compute and check derivative of y[2] 
	dw  = f.RevOne(x, 2);
	ok &= NearEqual(dw[0],        cos(x[1]), 1e-10, 1e-10 );
	ok &= NearEqual(dw[1], - x[0]*sin(x[1]), 1e-10, 1e-10 );

	return ok;
}
} // End empty namespace 
# include <vector>
# include <valarray>
bool RevOne(void)
{	bool ok = true;
	// Run with Vector equal to three different cases
	// all of which are Simple Vectors with elements of type double.
	ok &= RevOneCases< CppAD::vector  <double> >();
	ok &= RevOneCases< std::vector    <double> >();
	ok &= RevOneCases< std::valarray  <double> >();
	return ok;
}

Input File: example/rev_one.cpp
5.7.4: Hessian: Easy Driver

5.7.4.a: Syntax
hes = f.Hessian(x, w)
hes = f.Hessian(x, l)

5.7.4.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. The syntax above sets hes to the Hessian  \[
     hes = \dpow{2}{x} \sum_{i=1}^m w_i F_i (x) 
\] 
The routine 5.7.8: sparse_hessian may be faster in the case where the Hessian is sparse.

5.7.4.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.7.4.i: Hessian Uses Forward below).

5.7.4.d: x
The argument x has prototype
     const Vector &x
(see 5.7.4.h: Vector below) and its size must be equal to n, the dimension of the 5.5.d: domain space for f. It specifies the point at which to evaluate the Hessian.

5.7.4.e: l
If the argument l is present, it has prototype
     size_t l
and is less than m, the dimension of the 5.5.e: range space for f. It specifies the component of F for which we are evaluating the Hessian. To be specific, in the case where the argument l is present,  \[
     w_i = \left\{ \begin{array}{ll}
          1 & i = l \\
          0 & {\rm otherwise}
     \end{array} \right.
\] 


5.7.4.f: w
If the argument w is present, it has prototype
     const Vector &w
and size  m . It specifies the value of  w_i in the expression for hes.

5.7.4.g: hes
The result hes has prototype
     Vector hes
(see 5.7.4.h: Vector below) and its size is  n * n . For  j = 0 , \ldots , n - 1  and  \ell = 0 , \ldots , n - 1  \[
     hes [ j * n + \ell ] = \DD{ w^{\rm T} F }{ x_j }{ x_\ell } ( x )
\] 


5.7.4.h: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Base. The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.7.4.i: Hessian Uses Forward
After each call to 5.6.1: Forward , the object f contains the corresponding 9.4.k: Taylor coefficients . After Hessian, the previous calls to 5.6.1: Forward are undefined.

5.7.4.j: Example
The routines 5.7.4.1: Hessian.cpp and 5.7.4.2: HesLagrangian.cpp are examples and tests of Hessian. They return true if they succeed and false otherwise.
Input File: cppad/local/hessian.hpp
5.7.4.1: Hessian: Example and Test
 

# include <cppad/cppad.hpp>
namespace { // ---------------------------------------------------------
// define the template function HessianCases<Vector> in empty namespace
template <typename Vector> 
bool HessianCases()
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;
	using CppAD::exp;
	using CppAD::sin;
	using CppAD::cos;

	// domain space vector
	size_t n = 2;
	CPPAD_TEST_VECTOR< AD<double> >  X(n);
	X[0] = 1.;
	X[1] = 2.;

	// declare independent variables and start recording
	CppAD::Independent(X);

	// a calculation between the domain and range values
	AD<double> Square = X[0] * X[0];

	// range space vector
	size_t m = 3;
	CPPAD_TEST_VECTOR< AD<double> >  Y(m);
	Y[0] = Square * exp( X[1] );
	Y[1] = Square * sin( X[1] );
	Y[2] = Square * cos( X[1] );

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// new value for the independent variable vector
	Vector x(n);
	x[0] = 2.;
	x[1] = 1.;

	// second derivative of y[1] 
	Vector hes( n * n );
	hes = f.Hessian(x, 1);
	/*
	F_1       = x[0] * x[0] * sin(x[1])

	F_1^{(1)} = [ 2 * x[0] * sin(x[1]) , x[0] * x[0] * cos(x[1]) ]

	F_1^{(2)} = [        2 * sin(x[1]) ,      2 * x[0] * cos(x[1]) ]
	            [ 2 * x[0] * cos(x[1]) , - x[0] * x[0] * sin(x[1]) ]
	*/
	ok &=  NearEqual(          2.*sin(x[1]), hes[0*n+0], 1e-10, 1e-10 );
	ok &=  NearEqual(     2.*x[0]*cos(x[1]), hes[0*n+1], 1e-10, 1e-10 );
	ok &=  NearEqual(     2.*x[0]*cos(x[1]), hes[1*n+0], 1e-10, 1e-10 );
	ok &=  NearEqual( - x[0]*x[0]*sin(x[1]), hes[1*n+1], 1e-10, 1e-10 );

	return ok;
}
} // End empty namespace 
# include <vector>
# include <valarray>
bool Hessian(void)
{	bool ok = true;
	// Run with Vector equal to three different cases
	// all of which are Simple Vectors with elements of type double.
	ok &= HessianCases< CppAD::vector  <double> >();
	ok &= HessianCases< std::vector    <double> >();
	ok &= HessianCases< std::valarray  <double> >();
	return ok;
}

Input File: example/hessian.cpp
5.7.4.2: Hessian of Lagrangian and ADFun Default Constructor: Example and Test
 

# include <cppad/cppad.hpp>
# include <cassert>

namespace {
	CppAD::AD<double> Lagragian(
		const CppAD::vector< CppAD::AD<double> > &xyz )
	{	using CppAD::AD;
		assert( xyz.size() == 6 );

		AD<double> x0 = xyz[0];
		AD<double> x1 = xyz[1];
		AD<double> x2 = xyz[2];
		AD<double> y0 = xyz[3];
		AD<double> y1 = xyz[4];
		AD<double> z  = xyz[5];
	
		// compute objective function
		AD<double> f = x0 * x0;
		// compute constraint functions
		AD<double> g0 = 1. + 2.*x1 + 3.*x2;
		AD<double> g1 = log( x0 * x2 );
		// compute the Lagrangian
		AD<double> L = y0 * g0 + y1 * g1 + z * f;
	
		return L;
	
	}
	CppAD::vector< CppAD::AD<double> > fg(
		const CppAD::vector< CppAD::AD<double> > &x )
	{	using CppAD::AD;
		using CppAD::vector;
		assert( x.size() == 3 );

		vector< AD<double> > fg(3);
		fg[0] = x[0] * x[0];
		fg[1] = 1. + 2. * x[1] + 3. * x[2];
		fg[2] = log( x[0] * x[2] );

		return fg;
	}
	bool CheckHessian(
	CppAD::vector<double> H , 
	double x0, double x1, double x2, double y0, double y1, double z )
	{	using CppAD::NearEqual;
		bool ok  = true;
		size_t n = 3;
		assert( H.size() == n * n );
		/*
		L   =    z*x0*x0 + y0*(1 + 2*x1 + 3*x2) + y1*log(x0*x2)

		L_0 = 2 * z * x0 + y1 / x0
		L_1 = y0 * 2 
		L_2 = y0 * 3 + y1 / x2 
		*/
		// L_00 = 2 * z - y1 / ( x0 * x0 )
		double check = 2. * z - y1 / (x0 * x0);
		ok &= NearEqual(H[0 * n + 0], check, 1e-10, 1e-10); 
		// L_01 = L_10 = 0
		ok &= NearEqual(H[0 * n + 1], 0., 1e-10, 1e-10);
		ok &= NearEqual(H[1 * n + 0], 0., 1e-10, 1e-10);
		// L_02 = L_20 = 0
		ok &= NearEqual(H[0 * n + 2], 0., 1e-10, 1e-10);
		ok &= NearEqual(H[2 * n + 0], 0., 1e-10, 1e-10);
		// L_11 = 0
		ok &= NearEqual(H[1 * n + 1], 0., 1e-10, 1e-10);
		// L_12 = L_21 = 0
		ok &= NearEqual(H[1 * n + 2], 0., 1e-10, 1e-10);
		ok &= NearEqual(H[2 * n + 1], 0., 1e-10, 1e-10);
		// L_22 = - y1 / (x2 * x2)
		check = - y1 / (x2 * x2);
		ok &= NearEqual(H[2 * n + 2], check, 1e-10, 1e-10);

		return ok;
	}
	bool UseL()
	{	using CppAD::AD;
		using CppAD::vector;

		// double values corresponding to XYZ vector
		double x0(.5), x1(1e3), x2(1), y0(2.), y1(3.), z(4.);

		// domain space vector
		size_t n = 3;
		vector< AD<double> >  XYZ(n);
		XYZ[0] = x0;
		XYZ[1] = x1;
		XYZ[2] = x2;

		// declare X as independent variable vector and start recording
		CppAD::Independent(XYZ);

		// add the Lagrange multipliers to XYZ
		// (note that this modifies the vector XYZ)
		XYZ.push_back(y0);
		XYZ.push_back(y1);
		XYZ.push_back(z);

		// range space vector
		size_t m = 1;
		vector< AD<double> >  L(m);
		L[0] = Lagragian(XYZ);

		// create K: X -> L and stop tape recording
		// We cannot use the ADFun sequence constructor because XYZ has
		// changed between the call to Independent and here.
		CppAD::ADFun<double> K;
		K.Dependent(L);

		// Operation sequence corresponding to K depends on the
		// value of y0, y1, and z. Must redo calculations above when 
		// y0, y1, or z changes.

		// declare independent variable vector and Hessian
		vector<double> x(n);
		vector<double> H( n * n );

		// point at which we are computing the Hessian
		// (must redo calculations below each time x changes)
		x[0] = x0;
		x[1] = x1;
		x[2] = x2;
		H = K.Hessian(x, 0);

		// check this Hessian calculation
		return CheckHessian(H, x0, x1, x2, y0, y1, z); 
	}
	bool Usefg()
	{	using CppAD::AD;
		using CppAD::vector;

		// parameters defining problem
		double x0(.5), x1(1e3), x2(1), y0(2.), y1(3.), z(4.);

		// domain space vector
		size_t n = 3;
		vector< AD<double> >  X(n);
		X[0] = x0;
		X[1] = x1;
		X[2] = x2;

		// declare X as independent variable vector and start recording
		CppAD::Independent(X);

		// range space vector
		size_t m = 3;
		vector< AD<double> >  FG(m);
		FG = fg(X);

		// create K: X -> FG and stop tape recording
		CppAD::ADFun<double> K;
		K.Dependent(FG);

		// Operation sequence corresponding to K does not depend on 
		// value of x0, x1, x2, y0, y1, or z. 

		// forward and reverse mode arguments and results 
		vector<double> x(n);
		vector<double> H( n * n );
		vector<double>  dx(n);
		vector<double>   w(m);
		vector<double>  dw(2*n);

		// compute Hessian at this value of x
		// (must redo calculations below each time x changes)
		x[0] = x0;
		x[1] = x1;
		x[2] = x2;
		K.Forward(0, x);

		// set weights to Lagrange multiplier values
		// (must redo calculations below each time y0, y1, or z changes)
		w[0] = z;
		w[1] = y0;
		w[2] = y1;

		// initialize dx as zero
		size_t i, j;
		for(i = 0; i < n; i++)
			dx[i] = 0.;
		// loop over components of x
		for(i = 0; i < n; i++)
		{	dx[i] = 1.;             // dx is i-th elementary vector
			K.Forward(1, dx);       // partial w.r.t dx
			dw = K.Reverse(2, w);   // derivative of partial
			for(j = 0; j < n; j++)
				H[ i * n + j ] = dw[ j * 2 + 1 ];
			dx[i] = 0.;             // dx is zero vector
		}

		// check this Hessian calculation
		return CheckHessian(H, x0, x1, x2, y0, y1, z); 
	}
}

bool HesLagrangian(void)
{	bool ok = true;

	// UseL is simpler, but must retape every time that y or z changes
	ok     &= UseL();

	// Usefg does not need to retape unless operation sequence changes
	ok     &= Usefg();
	return ok;
}


Input File: example/hes_lagrangian.cpp
5.7.5: Forward Mode Second Partial Derivative Driver

5.7.5.a: Syntax
ddy = f.ForTwo(x, j, k)

5.7.5.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. The syntax above sets  \[
     ddy [ i * p + \ell ]
     =
     \DD{ F_i }{ x_{j[ \ell ]} }{ x_{k[ \ell ]} } (x) 
\] 
for  i = 0 , \ldots , m-1 and  \ell = 0 , \ldots , p-1 , where  p is the size of the vectors j and k.

5.7.5.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.7.5.j: ForTwo Uses Forward below).

5.7.5.d: x
The argument x has prototype
     const VectorBase &x
(see 5.7.5.h: VectorBase below) and its size must be equal to n, the dimension of the 5.5.d: domain space for f. It specifies the point at which to evaluate the partial derivatives listed above.

5.7.5.e: j
The argument j has prototype
     const VectorSize_t &j
(see 5.7.5.i: VectorSize_t below). We use p to denote the size of the vector j. All of the indices in j must be less than n; i.e., for  \ell = 0 , \ldots , p-1 ,  j[ \ell ]  < n .

5.7.5.f: k
The argument k has prototype
     const VectorSize_t &k
(see 5.7.5.i: VectorSize_t below) and its size must be equal to p, the size of the vector j. All of the indices in k must be less than n; i.e., for  \ell = 0 , \ldots , p-1 ,  k[ \ell ]  < n .

5.7.5.g: ddy
The result ddy has prototype
     VectorBase ddy
(see 5.7.5.h: VectorBase below) and its size is  m * p . It contains the requested partial derivatives; to be specific, for  i = 0 , \ldots , m - 1  and  \ell = 0 , \ldots , p - 1  \[
     ddy [ i * p + \ell ]
     =
     \DD{ F_i }{ x_{j[ \ell ]} }{ x_{k[ \ell ]} } (x) 
\] 


5.7.5.h: VectorBase
The type VectorBase must be a 6.7: SimpleVector class with 6.7.b: elements of type Base . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.7.5.i: VectorSize_t
The type VectorSize_t must be a 6.7: SimpleVector class with 6.7.b: elements of type size_t . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.7.5.j: ForTwo Uses Forward
After each call to 5.6.1: Forward , the object f contains the corresponding 9.4.k: Taylor coefficients . After ForTwo, the previous calls to 5.6.1: Forward are undefined.

5.7.5.k: Examples
The routine 5.7.5.1: ForTwo is both an example and test. It returns true if it succeeds and false otherwise.
Input File: cppad/local/for_two.hpp
5.7.5.1: Subset of Second Order Partials: Example and Test
 
# include <cppad/cppad.hpp>
namespace { // -----------------------------------------------------
// define the template function in empty namespace
// bool ForTwoCases<VectorBase, VectorSize_t>(void)
template <class VectorBase, class VectorSize_t> 
bool ForTwoCases()
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;
	using CppAD::exp;
	using CppAD::sin;
	using CppAD::cos;

	// domain space vector
	size_t n = 2;
	CPPAD_TEST_VECTOR< AD<double> >  X(n);
	X[0] = 1.;
	X[1] = 2.;

	// declare independent variables and start recording
	CppAD::Independent(X);

	// a calculation between the domain and range values
	AD<double> Square = X[0] * X[0];

	// range space vector
	size_t m = 3;
	CPPAD_TEST_VECTOR< AD<double> >  Y(m);
	Y[0] = Square * exp( X[1] );
	Y[1] = Square * sin( X[1] );
	Y[2] = Square * cos( X[1] );

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// new value for the independent variable vector
	VectorBase x(n);
	x[0] = 2.;
	x[1] = 1.;

	// set j and k to compute specific second partials of y 
	size_t p = 2;
	VectorSize_t j(p);
	VectorSize_t k(p);
	j[0] = 0; k[0] = 0; // for second partial w.r.t. x[0] and x[0]
	j[1] = 0; k[1] = 1; // for second partial w.r.t x[0] and x[1]

	// compute the second partials
	VectorBase ddy(m * p);
	ddy = f.ForTwo(x, j, k);
	/* 
	partial of y w.r.t x[0] is
	[ 2 * x[0] * exp(x[1]) ]
	[ 2 * x[0] * sin(x[1]) ]
	[ 2 * x[0] * cos(x[1]) ] 
	*/
	// second partial of y w.r.t x[0] and x[0]
	ok &=  NearEqual( 2.*exp(x[1]), ddy[0*p+0], 1e-10, 1e-10 );
	ok &=  NearEqual( 2.*sin(x[1]), ddy[1*p+0], 1e-10, 1e-10 );
	ok &=  NearEqual( 2.*cos(x[1]), ddy[2*p+0], 1e-10, 1e-10 );

	// second partial of F w.r.t x[0] and x[1]
	ok &=  NearEqual( 2.*x[0]*exp(x[1]), ddy[0*p+1], 1e-10, 1e-10 );
	ok &=  NearEqual( 2.*x[0]*cos(x[1]), ddy[1*p+1], 1e-10, 1e-10 );
	ok &=  NearEqual(-2.*x[0]*sin(x[1]), ddy[2*p+1], 1e-10, 1e-10 );

	return ok;
}
} // End empty namespace 
# include <vector>
# include <valarray>
bool ForTwo(void)
{	bool ok = true;
	// Run with VectorBase equal to three different cases
	// all of which are Simple Vectors with elements of type double.
	ok &= ForTwoCases< CppAD::vector <double>, std::vector<size_t> >();
	ok &= ForTwoCases< std::vector   <double>, std::vector<size_t> >();
	ok &= ForTwoCases< std::valarray <double>, std::vector<size_t> >();

	// Run with VectorSize_t equal to two other cases
	// which are Simple Vectors with elements of type size_t.
	ok &= ForTwoCases< std::vector <double>, CppAD::vector<size_t> >();
	ok &= ForTwoCases< std::vector <double>, std::valarray<size_t> >();

	return ok;
}

Input File: example/for_two.cpp
5.7.6: Reverse Mode Second Partial Derivative Driver

5.7.6.a: Syntax
ddw = f.RevTwo(x, i, j)

5.7.6.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. The syntax above sets  \[
     ddw [ k * p + \ell ]
     =
     \DD{ F_{i[ \ell ]} }{ x_{j[ \ell ]} }{ x_k } (x) 
\] 
for  k = 0 , \ldots , n-1 and  \ell = 0 , \ldots , p-1 , where  p is the size of the vectors i and j.

5.7.6.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.7.6.j: RevTwo Uses Forward below).

5.7.6.d: x
The argument x has prototype
     const VectorBase &x
(see 5.7.6.h: VectorBase below) and its size must be equal to n, the dimension of the 5.5.d: domain space for f. It specifies the point at which to evaluate the partial derivatives listed above.

5.7.6.e: i
The argument i has prototype
     const VectorSize_t &i
(see 5.7.6.i: VectorSize_t below). We use p to denote the size of the vector i. All of the indices in i must be less than m, the dimension of the 5.5.e: range space for f; i.e., for  \ell = 0 , \ldots , p-1 ,  i[ \ell ]  < m .

5.7.6.f: j
The argument j has prototype
     const VectorSize_t &j
(see 5.7.6.i: VectorSize_t below) and its size must be equal to p, the size of the vector i. All of the indices in j must be less than n; i.e., for  \ell = 0 , \ldots , p-1 ,  j[ \ell ]  < n .

5.7.6.g: ddw
The result ddw has prototype
     VectorBase ddw
(see 5.7.6.h: VectorBase below) and its size is  n * p . It contains the requested partial derivatives; to be specific, for  k = 0 , \ldots , n - 1  and  \ell = 0 , \ldots , p - 1  \[
     ddw [ k * p + \ell ]
     =
     \DD{ F_{i[ \ell ]} }{ x_{j[ \ell ]} }{ x_k } (x) 
\] 


5.7.6.h: VectorBase
The type VectorBase must be a 6.7: SimpleVector class with 6.7.b: elements of type Base . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.7.6.i: VectorSize_t
The type VectorSize_t must be a 6.7: SimpleVector class with 6.7.b: elements of type size_t . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.7.6.j: RevTwo Uses Forward
After each call to 5.6.1: Forward , the object f contains the corresponding 9.4.k: Taylor coefficients . After RevTwo, the previous calls to 5.6.1: Forward are undefined.

5.7.6.k: Examples
The routine 5.7.6.1: RevTwo is both an example and test. It returns true if it succeeds and false otherwise.
Input File: cppad/local/rev_two.hpp
5.7.6.1: Second Partials Reverse Driver: Example and Test
 
# include <cppad/cppad.hpp>
namespace { // -----------------------------------------------------
// define the template function in empty namespace
// bool RevTwoCases<VectorBase, VectorSize_t>(void)
template <class VectorBase, class VectorSize_t> 
bool RevTwoCases()
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;
	using CppAD::exp;
	using CppAD::sin;
	using CppAD::cos;

	// domain space vector
	size_t n = 2;
	CPPAD_TEST_VECTOR< AD<double> >  X(n);
	X[0] = 1.;
	X[1] = 2.;

	// declare independent variables and start recording
	CppAD::Independent(X);

	// a calculation between the domain and range values
	AD<double> Square = X[0] * X[0];

	// range space vector
	size_t m = 3;
	CPPAD_TEST_VECTOR< AD<double> >  Y(m);
	Y[0] = Square * exp( X[1] );
	Y[1] = Square * sin( X[1] );
	Y[2] = Square * cos( X[1] );

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// new value for the independent variable vector
	VectorBase x(n);
	x[0] = 2.;
	x[1] = 1.;

	// set i and j to compute specific second partials of y
	size_t p = 2;
	VectorSize_t i(p);
	VectorSize_t j(p);
	i[0] = 0; j[0] = 0; // for partials y[0] w.r.t x[0] and x[k]
	i[1] = 1; j[1] = 1; // for partials y[1] w.r.t x[1] and x[k]

	// compute the second partials
	VectorBase ddw(n * p);
	ddw = f.RevTwo(x, i, j);

	// partial of y[0] w.r.t x[0] is 2 * x[0] * exp(x[1])
	// check partials of y[0] w.r.t x[0] and x[k] for k = 0, 1 
	ok &=  NearEqual(      2.*exp(x[1]), ddw[0*p+0], 1e-10, 1e-10 );
	ok &=  NearEqual( 2.*x[0]*exp(x[1]), ddw[1*p+0], 1e-10, 1e-10 );

	// partial of y[1] w.r.t x[1] is x[0] * x[0] * cos(x[1])
	// check partials of F_1 w.r.t x[1] and x[k] for k = 0, 1 
	ok &=  NearEqual(    2.*x[0]*cos(x[1]), ddw[0*p+1], 1e-10, 1e-10 );
	ok &=  NearEqual( -x[0]*x[0]*sin(x[1]), ddw[1*p+1], 1e-10, 1e-10 );

	return ok;
}
} // End empty namespace 
# include <vector>
# include <valarray>
bool RevTwo(void)
{	bool ok = true;
	// Run with VectorBase equal to three different cases
	// all of which are Simple Vectors with elements of type double.
	ok &= RevTwoCases< CppAD::vector <double>, std::vector<size_t> >();
	ok &= RevTwoCases< std::vector   <double>, std::vector<size_t> >();
	ok &= RevTwoCases< std::valarray <double>, std::vector<size_t> >();

        // Run with VectorSize_t equal to two other cases
        // which are Simple Vectors with elements of type size_t.
	ok &= RevTwoCases< std::vector <double>, CppAD::vector<size_t> >();
	ok &= RevTwoCases< std::vector <double>, std::valarray<size_t> >();

	return ok;
}

Input File: example/rev_two.cpp
5.7.7: Sparse Jacobian: Easy Driver

5.7.7.a: Syntax
jac = f.SparseJacobian(x)
jac = f.SparseJacobian(x, p)

5.7.7.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f . The syntax above sets jac to the Jacobian  \[
     jac = F^{(1)} (x) 
\] 
This is a preliminary implementation of a method for using the fact that the matrix is sparse to reduce the amount of computation necessary. One should use speed tests to verify that results are computed faster than when using the routine 5.7.1: Jacobian .

5.7.7.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.7.7.i: Uses Forward below).

5.7.7.d: x
The argument x has prototype
     const 
BaseVector &x
(see 5.7.7.g: BaseVector below) and its size must be equal to n , the dimension of the 5.5.d: domain space for f . It specifies the point at which to evaluate the Jacobian.

5.7.7.e: p
The argument p is optional and has prototype
     const 
BoolVector &p
(see 5.7.7.h: BoolVector below) and its size is  m * n . It specifies a 9.4.i: sparsity pattern for the Jacobian; i.e., for  i = 0 , \ldots , m-1 and  j = 0 , \ldots , n-1 .  \[
     \D{ F_i }{ x_j } \neq 0 \; \Rightarrow \; p [ i * n + j ] = {\rm true}
\] 


If this sparsity pattern does not change between calls to SparseJacobian , it should be faster to calculate p once and pass this argument to SparseJacobian .

5.7.7.f: jac
The result jac has prototype
     
BaseVector jac
and its size is  m * n . For  i = 0 , \ldots , m - 1 , and  j = 0 , \ldots , n - 1   \[
     jac [ i * n + j ] = \D{ F_i }{ x_j }
\] 


5.7.7.g: BaseVector
The type BaseVector must be a 6.7: SimpleVector class with 6.7.b: elements of type Base . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.7.7.h: BoolVector
The type BoolVector must be a 6.7: SimpleVector class with 6.7.b: elements of type bool . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case. In order to save memory, you may want to use a class that packs multiple elements into one storage location; for example, 6.23.j: vectorBool .

5.7.7.i: Uses Forward
After each call to 5.6.1: Forward , the object f contains the corresponding 9.4.k: Taylor coefficients . After SparseJacobian, the previous calls to 5.6.1: Forward are undefined.

5.7.7.j: Example
The routine 5.7.7.1: sparse_jacobian.cpp contains an example and test of sparse_jacobian. It returns true if it succeeds and false otherwise.
Input File: cppad/local/sparse_jacobian.hpp
5.7.7.1: Sparse Jacobian: Example and Test
 

# include <cppad/cppad.hpp>
namespace { // ---------------------------------------------------------
// define the template function in empty namespace
template <class BaseVector, class BoolVector> 
bool reverse_case()
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;
	size_t i, j, k;

	// domain space vector
	size_t n = 4;
	CPPAD_TEST_VECTOR< AD<double> >  X(n);
	for(j = 0; j < n; j++)
		X[j] = AD<double> (0);

	// declare independent variables and starting recording
	CppAD::Independent(X);

	size_t m = 3;
	CPPAD_TEST_VECTOR< AD<double> >  Y(m);
	Y[0] = X[0] + X[1];
	Y[1] = X[2] + X[3];
	Y[2] = X[0] + X[1] + X[2] + X[3] * X[3] / 2.;

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// new value for the independent variable vector
	BaseVector x(n);
	for(j = 0; j < n; j++)
		x[j] = double(j);

	// Jacobian of y without sparsity pattern
	BaseVector jac(m * n);
	jac = f.SparseJacobian(x);
	/*
	      [ 1 1 0 0  ]
	jac = [ 0 0 1 1  ]
	      [ 1 1 1 x_3]
	*/
	BaseVector check(m * n);
	check[0] = 1.; check[1] = 1.; check[2]  = 0.; check[3]  = 0.;
	check[4] = 0.; check[5] = 0.; check[6]  = 1.; check[7]  = 1.;
	check[8] = 1.; check[9] = 1.; check[10] = 1.; check[11] = x[3];
	for(k = 0; k < 12; k++)
		ok &=  NearEqual(check[k], jac[k], 1e-10, 1e-10 );

	// test passing sparsity pattern
	BoolVector s(m * m);
	BoolVector p(m * n);
	for(i = 0; i < m; i++)
	{	for(k = 0; k < m; k++)
			s[i * m + k] = false;
		s[i * m + i] = true;
	}
	p   = f.RevSparseJac(m, s);
	jac = f.SparseJacobian(x);
	for(k = 0; k < 12; k++)
		ok &=  NearEqual(check[k], jac[k], 1e-10, 1e-10 );

	return ok;
}

template <class BaseVector, class BoolVector> 
bool forward_case()
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;
	size_t j, k;

	// domain space vector
	size_t n = 3;
	CPPAD_TEST_VECTOR< AD<double> >  X(n);
	for(j = 0; j < n; j++)
		X[j] = AD<double> (0);

	// declare independent variables and starting recording
	CppAD::Independent(X);

	size_t m = 4;
	CPPAD_TEST_VECTOR< AD<double> >  Y(m);
	Y[0] = X[0] + X[2];
	Y[1] = X[0] + X[2];
	Y[2] = X[1] + X[2];
	Y[3] = X[1] + X[2] * X[2] / 2.;

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// new value for the independent variable vector
	BaseVector x(n);
	for(j = 0; j < n; j++)
		x[j] = double(j);

	// Jacobian of y without sparsity pattern
	BaseVector jac(m * n);
	jac = f.SparseJacobian(x);
	/*
	      [ 1 0 1   ]
	jac = [ 1 0 1   ]
	      [ 0 1 1   ]
	      [ 0 1 x_2 ]
	*/
	BaseVector check(m * n);
	check[0] = 1.; check[1]  = 0.; check[2]  = 1.; 
	check[3] = 1.; check[4]  = 0.; check[5]  = 1.;
	check[6] = 0.; check[7]  = 1.; check[8]  = 1.; 
	check[9] = 0.; check[10] = 1.; check[11] = x[2];
	for(k = 0; k < 12; k++)
		ok &=  NearEqual(check[k], jac[k], 1e-10, 1e-10 );

	// test passing sparsity pattern
	BoolVector r(n * n);
	BoolVector p(m * n);
	for(j = 0; j < n; j++)
	{	for(k = 0; k < n; k++)
			r[j * n + k] = false;
		r[j * n + j] = true;
	}
	p   = f.ForSparseJac(n, r);
	jac = f.SparseJacobian(x);
	for(k = 0; k < 12; k++)
		ok &=  NearEqual(check[k], jac[k], 1e-10, 1e-10 );

	return ok;
}
} // End empty namespace 
# include <vector>
# include <valarray>
bool sparse_jacobian(void)
{	bool ok = true;
	// Run with BaseVector equal to three different cases
	// all of which are Simple Vectors with elements of type double.
	// Also vary the type of vector for BoolVector.
	ok &= forward_case< CppAD::vector<double>, CppAD::vectorBool   >();
	ok &= reverse_case< CppAD::vector<double>, CppAD::vector<bool> >();
	//
	ok &= forward_case< std::vector<double>,   std::vector<bool>   >();
	ok &= reverse_case< std::vector<double>,   std::valarray<bool> >();
	//
	ok &= forward_case< std::valarray<double>, CppAD::vectorBool   >();
	ok &= reverse_case< std::valarray<double>, CppAD::vector<bool> >();
	//
	return ok;
}

Input File: example/sparse_jacobian.cpp
5.7.8: Sparse Hessian: Easy Driver

5.7.8.a: Syntax
hes = f.SparseHessian(x, w)
hes = f.SparseHessian(x, w, p)

5.7.8.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f . The syntax above sets hes to the Hessian  \[
     hes = \dpow{2}{x} \sum_{i=1}^m w_i F_i (x) 
\] 
This is a preliminary implementation of a method for using the fact that the matrix is sparse to reduce the amount of computation necessary. One should use speed tests to verify that results are computed faster than when using the routine 5.7.4: Hessian .

5.7.8.c: f
The object f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.7.8.j: Uses Forward below).

5.7.8.d: x
The argument x has prototype
     const 
BaseVector &x
(see 5.7.8.h: BaseVector below) and its size must be equal to n , the dimension of the 5.5.d: domain space for f . It specifies the point at which to evaluate the Hessian.

5.7.8.e: w
The argument w has prototype
     const 
BaseVector &w
and size  m . It specifies the value of  w_i in the expression for hes . The more components of  w that are identically zero, the sparser the resulting Hessian may be (and hence the more efficient the calculation of hes may be).

5.7.8.f: p
The argument p is optional and has prototype
     const 
BoolVector &p
(see 5.7.8.i: BoolVector below) and its size is  n * n . It specifies a 9.4.i: sparsity pattern for the Hessian; i.e., for  j = 0 , \ldots , n-1 and  k = 0 , \ldots , n-1 .  \[
     \sum_i w_i \DD{ F_i }{ x_j }{ x_k } \neq 0
          \; \Rightarrow \; p [ j * n + k ] = {\rm true}
\] 


If this sparsity pattern does not change between calls to SparseHessian , it should be faster to calculate p once and pass this argument to SparseHessian .

5.7.8.g: hes
The result hes has prototype
     
BaseVector hes
and its size is  n * n . For  j = 0 , \ldots , n - 1  and  \ell = 0 , \ldots , n - 1  \[
     hes [ j * n + \ell ] = \DD{ w^{\rm T} F }{ x_j }{ x_\ell } ( x )
\] 


5.7.8.h: BaseVector
The type BaseVector must be a 6.7: SimpleVector class with 6.7.b: elements of type Base . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.7.8.i: BoolVector
The type BoolVector must be a 6.7: SimpleVector class with 6.7.b: elements of type bool . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case. In order to save memory, you may want to use a class that packs multiple elements into one storage location; for example, 6.23.j: vectorBool .

5.7.8.j: Uses Forward
After each call to 5.6.1: Forward , the object f contains the corresponding 9.4.k: Taylor coefficients . After SparseHessian, the previous calls to 5.6.1: Forward are undefined.

5.7.8.k: Example
The routine 5.7.8.1: sparse_hessian.cpp contains an example and test of sparse_hessian. It returns true if it succeeds and false otherwise.
Input File: cppad/local/sparse_hessian.hpp
5.7.8.1: Sparse Hessian: Example and Test
 

# include <cppad/cppad.hpp>
namespace { // ---------------------------------------------------------
// define the template function in empty namespace
template <class BaseVector, class BoolVector> 
bool Case()
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;
	size_t i, j, k;

	// domain space vector
	size_t n = 3;
	CPPAD_TEST_VECTOR< AD<double> >  X(n);
	for(i = 0; i < n; i++)
		X[i] = AD<double> (0);

	// declare independent variables and starting recording
	CppAD::Independent(X);

	size_t m = 1;
	CPPAD_TEST_VECTOR< AD<double> >  Y(m);
	Y[0] = X[0] * X[0] + X[0] * X[1] + X[1] * X[1] + X[2] * X[2];

	// create f: X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// new value for the independent variable vector
	BaseVector x(n);
	for(i = 0; i < n; i++)
		x[i] = double(i);

	// second derivative of y[1] 
	BaseVector w(m);
	w[0] = 1.;
	BaseVector h( n * n );
	h = f.SparseHessian(x, w);
	/*
	    [ 2 1 0 ]
	h = [ 1 2 0 ]
            [ 0 0 2 ]
	*/
	BaseVector check(n * n);
	check[0] = 2.; check[1] = 1.; check[2] = 0.;
	check[3] = 1.; check[4] = 2.; check[5] = 0.;
	check[6] = 0.; check[7] = 0.; check[8] = 2.;
	for(k = 0; k < n * n; k++)
		ok &=  NearEqual(check[k], h[k], 1e-10, 1e-10 );

	// determine the sparsity pattern p for Hessian of w^T F
        BoolVector r(n * n);
        for(j = 0; j < n; j++)
        {       for(k = 0; k < n; k++)
                        r[j * n + k] = false;
                r[j * n + j] = true;
        }
        f.ForSparseJac(n, r);
        //
        BoolVector s(m);
        for(i = 0; i < m; i++)
                s[i] = w[i] != 0;
        BoolVector p = f.RevSparseHes(n, s);

	// test passing sparsity pattern
	h = f.SparseHessian(x, w, p);
	for(k = 0; k < n * n; k++)
		ok &=  NearEqual(check[k], h[k], 1e-10, 1e-10 );

	return ok;
}
} // End empty namespace 
# include <vector>
# include <valarray>
bool sparse_hessian(void)
{	bool ok = true;
	// Run with BaseVector equal to three different cases
	// all of which are Simple Vectors with elements of type double.
	// Also vary the type of vector for BoolVector.
	ok &= Case< CppAD::vector  <double>, CppAD::vectorBool   >();
	ok &= Case< std::vector    <double>, CppAD::vector<bool> >();
	ok &= Case< std::valarray  <double>, std::vector<bool>   >();
	return ok;
}

Input File: example/sparse_hessian.cpp
5.8: Check an ADFun Sequence of Operations

5.8.a: Syntax
ok = FunCheck(f, g, x, r, a)
See Also 5.6.1.5: CompareChange

5.8.b: Purpose
We use  F : B^n \rightarrow B^m to denote the 9.4.a: AD function corresponding to f. We use  G : B^n \rightarrow B^m to denote the function corresponding to the C++ function object g. This routine checks whether  \[
     F(x) = G(x)
\]
If  F(x) \neq G(x) , the 9.4.g.b: operation sequence corresponding to f does not represent the algorithm used by g to calculate values for  G (see 5.8.l: Discussion below).

5.8.c: f
The FunCheck argument f has prototype
     ADFun<Base> f
Note that the 5: ADFun object f is not const (see 5.8.k: Forward below).

5.8.d: g
The FunCheck argument g has prototype
     
Fun &g
(the type Fun is defined by the properties of g). The C++ function object g supports the syntax
     
y = g(x)
which computes  y = G(x) .

5.8.d.a: x
The g argument x has prototype
     const 
Vector &x
(see 5.8.j: Vector below) and its size must be equal to n, the dimension of the 5.5.d: domain space for f.

5.8.e: y
The g result y has prototype
     
Vector y
and its value is  G(x) . The size of y is equal to m, the dimension of the 5.5.e: range space for f.

5.8.f: x
The FunCheck argument x has prototype
     const 
Vector &x
and its size must be equal to n, the dimension of the 5.5.d: domain space for f. This specifies the point at which to compare the values calculated by f and G.

5.8.g: r
The FunCheck argument r has prototype
     const 
Base &r
It specifies the relative error for the element-by-element comparison of the values of  F(x) and  G(x) .

5.8.h: a
The FunCheck argument a has prototype
     const 
Base &a
It specifies the absolute error for the element-by-element comparison of the values of  F(x) and  G(x) .

5.8.i: ok
The FunCheck result ok has prototype
     bool 
ok
It is true if, for  i = 0 , \ldots , m-1 , either the relative error bound is satisfied  \[
| F_i (x) - G_i (x) | 
\leq 
r ( | F_i (x) | + | G_i (x) | ) 
\] 
or the absolute error bound is satisfied  \[
     | F_i (x) - G_i (x) | \leq a
\] 
It is false if for some  i neither of these bounds is satisfied.

5.8.j: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Base. The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

5.8.k: FunCheck Uses Forward
After each call to 5.6.1: Forward , the object f contains the corresponding 9.4.k: Taylor coefficients . After FunCheck, the previous calls to 5.6.1: Forward are undefined.

5.8.l: Discussion
Suppose that the algorithm corresponding to g contains
     if( 
x >= 0 )
          
y = exp(x)
     else 
y = exp(-x)
where x and y are AD<double> objects. It follows that the AD of double 9.4.g.b: operation sequence depends on the value of x. If the sequence of operations stored in f corresponds to g with  x \geq 0 , the function values computed using f when  x < 0 will not agree with the function values computed by  g . This is because the operation sequence corresponding to g changed (and hence the object f does not represent the function  G for this value of x). In this case, you probably want to re-tape the calculations performed by g with the 9.4.j.c: independent variables equal to the values in x (so AD operation sequence properly represents the algorithm for this value of independent variables).

5.8.m: Example
The file 5.8.1: FunCheck.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
Input File: cppad/local/fun_check.hpp
5.8.1: ADFun Check and Re-Tape: Example and Test
 
# include <cppad/cppad.hpp>

namespace { // -----------------------------------------------------------
// define the template function object Fun<Type,Vector> in empty namespace
template <class Type, class Vector>
class Fun {
private:
	size_t n;
public:
	// function constructor
	Fun(size_t n_) : n(n_)
	{ }
	// function evaluator
	Vector operator() (const Vector &x)
	{	Vector y(n);
		size_t i;
		for(i = 0; i < n; i++)
		{	// This operation sequence depends on x
			if( x[i] >= 0 ) 
				y[i] = exp(x[i]);
			else	y[i] = exp(-x[i]);
		}
		return y;
	}	
};
// template function FunCheckCases<Vector, ADVector> in empty namespace
template <class Vector, class ADVector>
bool FunCheckCases(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::ADFun;
	using CppAD::Independent;

	// use the ADFun default constructor
	ADFun<double> f;

	// domain space vector
	size_t n = 2;
	ADVector X(n);
	X[0] = -1.;
	X[1] = 1.;

	// declare independent variables and starting recording
	Independent(X);

	// create function object to use with AD<double>
	Fun< AD<double>, ADVector > G(n);

	// range space vector
	size_t m = n;
	ADVector Y(m);
	Y = G(X);

	// stop tape and store operation sequence in f : X -> Y
	f.Dependent(X, Y);
	ok &= (f.size_taylor() == 0);  // no implicit forward operation

	// create function object to use with double
	Fun<double, Vector> g(n);

	// function values should agree when the independent variable 
	// values are the same as during recording
	Vector x(n);
	size_t j;
	for(j = 0; j < n; j++)
		x[j] = Value(X[j]);
	double r = 1e-10; 
	double a = 1e-10;
	ok      &= FunCheck(f, g, x, a, r);

	// function values should not agree when the independent variable
	// values are the negative of values during recording
	for(j = 0; j < n; j++)
		x[j] = - Value(X[j]);
	ok      &= ! FunCheck(f, g, x, a, r);

	// re-tape to obtain the new AD of double operation sequence
	for(j = 0; j < n; j++)
		X[j] = x[j];
	Independent(X);
	Y = G(X);

	// stop tape and store operation sequence in f : X -> Y
	f.Dependent(X, Y);
	ok &= (f.size_taylor() == 0);  // no implicit forward with this x

	// function values should agree now
	ok      &= FunCheck(f, g, x, a, r);

	return ok;
}
} // End empty namespace 
# include <vector>
# include <valarray>
bool FunCheck(void)
{	bool ok = true;
	typedef CppAD::vector<double>                Vector1;
	typedef CppAD::vector< CppAD::AD<double> > ADVector1;
	typedef   std::vector<double>                Vector2;
	typedef   std::vector< CppAD::AD<double> > ADVector2;
	typedef std::valarray<double>                Vector3;
	typedef std::valarray< CppAD::AD<double> > ADVector3;
	// Run with Vector and ADVector equal to three different cases
	// all of which are Simple Vectors with elements of type 
	// double and AD<double> respectively.
	ok &= FunCheckCases< Vector1, ADVector2 >();
	ok &= FunCheckCases< Vector2, ADVector3 >();
	ok &= FunCheckCases< Vector3, ADVector1 >();
	return ok;
}

Input File: example/fun_check.cpp
5.9: OpenMP Maximum Thread Number

5.9.a: Syntax
AD<Base>::omp_max_thread(number)

5.9.b: Purpose
By default, for each AD<Base> class there is only one tape that records 9.4.b: AD of Base operations. This tape is a global variable and hence it cannot be used by multiple OpenMP threads at the same time. The omp_max_thread function is used to set the maximum number of OpenMP threads that can be active. In this case, there is a different tape corresponding to each AD<Base> class and thread pair.

5.9.c: number
The argument number has prototype
     size_t 
number
It must be greater than zero and specifies the maximum number of OpenMP threads that will be active at one time.

5.9.d: Independent
Each call to 5.1: Independent(x) creates a new 9.4.j.a: active tape. All of the operations with the corresponding variables must be performed by the same OpenMP thread. This includes the corresponding call to 5.3: f.Dependent(x,y) or the 5.2.g: ADFun f(x, y) constructor, during which the tape stops recording and the variables become parameters.

5.9.e: Restriction
No tapes can be 9.4.j.a: active when this function is called.

5.9.f: Example and Tests
The shell script 5.9.1: openmp_run.sh can be used to compile and run the OpenMP examples and tests.
Input File: cppad/local/omp_max_thread.hpp
5.9.1: Compile and Run the OpenMP Test

5.9.1.a: Syntax
openmp/run.sh


5.9.1.b: Purpose
This script file, openmp/run.sh, compiles and runs the speed and correctness tests for using OpenMP. The following is a list of parameters in this file that can be changed by directly editing the file (there are no command line parameters to the script):

5.9.1.b.a: Compiler Command
The following sets the name of the C++ compiler command:
 
compiler="g++"


5.9.1.b.b: Version Flag
The following compiler flag requests its version information:
 
version_flag="--version"


5.9.1.b.c: OpenMP Flag
The following compiler flag requests OpenMP support. You can run these tests with a compiler that does not support OpenMP by setting this flag to "".
 
openmp_flag=""
For g++ version 4.1 and higher, you can use "-fopenmp" for this flag.

5.9.1.b.d: Other Flag
The following other flags will be used during compilation:
 
other_flags="-DNDEBUG -O2 -Wall"


5.9.1.b.e: Boost Directory
If the 2.1.r: BoostDir is specified on the 2.1.d: configure command line, you must add the corresponding include directory; e.g.,
 
if [ -d /usr/include/boost-1_33_1 ]
then
	other_flags="-DNDEBUG -O2 -Wall -I/usr/include/boost-1_33_1"
fi


5.9.1.b.f: Number of Repeats
The following specifies the number of times to repeat the calculation corresponding to one timing test. If this is equal to "automatic", the number of repeats is determined automatically. If it is not equal to "automatic", it must be a positive integer.
 
n_repeat="automatic"


5.9.1.b.g: Number of Threads
The following determines a set of number of threads to test. Each value in the set must be a positive integer or zero (the value zero is used for dynamic thread adjustment). If the 5.9.1.b.c: openmp_flag is equal to "", this setting is not used.
 
n_thread_set="0 1 2 3 4"


5.9.1.b.h: example_a11c
The following setting determines the corresponding command line arguments for the 5.9.1.1: example_a11c.cpp program:
 
example_a11c_size="10000"


5.9.1.b.i: multi_newton
The following settings determine the corresponding command line arguments for the 5.9.1.2.1: multi_newton program:
 
multi_newton_n_zero="10"
multi_newton_n_grid="40"
multi_newton_n_sum="10"


5.9.1.b.j: sum_i_inv
The following setting determines the corresponding command line arguments for the 5.9.1.3: sum_i_inv.cpp program:
 
sum_i_inv_mega_sum="1"


5.9.1.c: Restrictions
Currently, this script only runs under the bash shell; e.g., it will not run in an MSDOS box.

5.9.1.d: Contents
example_a11c.cpp: 5.9.1.1: A Simple Parallel Loop
multi_newton.cpp: 5.9.1.2: Multi-Threaded Newton's Method Main Program
sum_i_inv.cpp: 5.9.1.3: Sum of 1/i Main Program

Input File: openmp/run.sh
5.9.1.1: A Simple Parallel Loop

5.9.1.1.a: Syntax
example_a11c n_thread repeat size

5.9.1.1.b: Purpose
Runs a timing test of Example A.1.1.1c of the OpenMP 2.5 standard document.

5.9.1.1.c: n_thread
If the argument n_thread is equal to automatic, dynamic thread adjustment is used. Otherwise, n_thread must be a positive number specifying the number of OpenMP threads to use.

5.9.1.1.d: repeat
If the argument repeat is equal to automatic, the number of times to repeat the calculation is automatically determined. In this case, the rate of execution is reported.

If the argument repeat is not equal to automatic, it must be a positive integer. In this case, repeat determines the number of times the calculation is repeated. The rate of execution is not reported (it is assumed that the program execution time is being measured some other way).

5.9.1.1.e: size
The argument size is the length of the arrays in the example code.

5.9.1.1.f: Example Source
 

# ifdef _OPENMP
# include <omp.h>
# endif

# include <cmath>
# include <cstring>
# include <cstdlib>

// see http://www.coin-or.org/CppAD/Doc/cppad_vector.htm
# include <cppad/vector.hpp>

// see http://www.coin-or.org/CppAD/Doc/speed_test.htm
# include <cppad/speed_test.hpp>

// Beginning of Example A.1.1.1c of OpenMP 2.5 standard document ---------
void a1(int n, float *a, float *b)
{	int i;
# ifdef _OPENMP
# pragma omp parallel for
# endif
	for(i = 1; i < n; i++) /* i is private by default */
		b[i] = (a[i] + a[i-1]) / 2.0;
}
// End of Example A.1.1.1c of OpenMP 2.5 standard document ---------------
		
// routine that is called to repeat the example a number of times
void test(size_t size, size_t repeat)
{	// setup
	size_t i;
	float *a = new float[size];
	float *b = new float[size];
	for(i = 0; i < size; i++)
		a[i] = float(i);
	int n = int(size);
	// run test
	for(i = 0; i < repeat; i++)
		a1(n, a, b);
	// tear down
	delete [] a;
	delete [] b;
	return;
}

// main program
int main(int argc, char *argv[])
{
	using std::cout;
	using std::endl;

	// get command line arguments -----------------------------------
	const char *usage = "example_a11c n_thread repeat size";
	if( argc != 4 )
	{	std::cerr << usage << endl;
		exit(1);
	}
	argv++;
	// n_thread 
	int n_thread;
	if( std::strcmp(*argv, "automatic") == 0 )
		n_thread = 0;
	else	n_thread = std::atoi(*argv);
	argv++;
	// repeat 
	size_t repeat;
	if( std::strcmp(*argv, "automatic") == 0 )
		repeat = 0;
	else
	{	assert( std::atoi(*argv) > 0 );
		repeat = std::atoi(*argv);
	}
	argv++;
	// size 
	assert( std::atoi(*argv) > 1 );
	size_t size = std::atoi(*argv++);
	// ---------------------------------------------------------------

	// minimum time for test (repeat until this much time)
	double time_min = 1.;
# ifdef _OPENMP
	if( n_thread > 0 )
	{	omp_set_dynamic(0);            // off dynamic thread adjust
		omp_set_num_threads(n_thread); // set the number of threads 
	}
	// now determine the maximum number of threads
	n_thread = omp_get_max_threads();
	assert( n_thread > 0 );
	
	// inform the user of the maximum number of threads
	cout << "OpenMP: version = "         << _OPENMP;
	cout << ", max number of threads = " << n_thread << endl;
# else
	cout << "_OPENMP is not defined, ";
	cout << "running in single thread mode" << endl;
	n_thread = 1;
# endif
	// Correctness check (store result in ok)
	size_t i;
	float *a = new float[size];
	float *b = new float[size];
	for(i = 0; i < size; i++)
		a[i] = float(i);
	int n = int(size);
	a1(n, a, b);
	bool ok = true;
	for(i = 1; i < size ; i++)
		ok &= std::fabs( 2. * b[i] - a[i] - a[i-1] ) <= 1e-6; 
	delete [] a;
	delete [] b;

	if( repeat > 0 )
	{	// user specified the number of times to repeat the test
		test(size, repeat);
	}
	else
	{	// automatic determination of number of times to repeat test

	 	// speed test uses a SimpleVector with size_t elements
		CppAD::vector<size_t> size_vec(1);
		size_vec[0] = size;
		CppAD::vector<size_t> rate_vec =
			CppAD::speed_test(test, size_vec, time_min);

		// report results
		cout << "size             = " << size_vec[0] << endl;
		cout << "repeats per sec  = " << rate_vec[0] << endl;
	}
	if( ok )
		cout << "Correctness Test Passed" << endl;
	else	cout << "Correctness Test Failed" << endl;

	return static_cast<int>( ! ok );
}

Input File: openmp/example_a11c.cpp
5.9.1.2: Multi-Threaded Newton's Method Main Program

5.9.1.2.a: Syntax
multi_newton n_thread repeat n_zero n_grid n_sum

5.9.1.2.b: Purpose
Runs a timing test of the 5.9.1.2.1: multi_newton routine. This routine uses Newton's method to determine if there is a zero of a function on each of a set of sub-intervals. CppAD is used to calculate the derivatives required by Newton's method. OpenMP is used to parallelize the calculation on the different sub-intervals.

5.9.1.2.c: n_thread
If the argument n_thread is equal to automatic, dynamic thread adjustment is used. Otherwise, n_thread must be a positive number specifying the number of OpenMP threads to use.

5.9.1.2.d: repeat
If the argument repeat is equal to automatic, the number of times to repeat the calculation of the number of zeros in the total interval is automatically determined. In this case, the rate of execution of the total solution is reported.

If the argument repeat is not equal to automatic, it must be a positive integer. In this case, repeat determines the number of times the calculation of the zeros in the total interval is repeated. The rate of execution is not reported (it is assumed that the program execution time is being measured some other way).

5.9.1.2.e: n_zero
The argument n_zero is the actual number of zeros that there should be in the test function,  \sin(x) . It must be an integer greater than one. The total interval searched for zeros is  [ 0 , (n\_zero - 1) \pi ] .

5.9.1.2.f: n_grid
The argument n_grid specifies the number of sub-intervals to divide the total interval into. It must be an integer greater than zero (it should probably be greater than two times n_zero).

5.9.1.2.g: n_sum
The actual function that is used is  \[
     f(x) = \frac{1}{n\_sum} \sum_{i=1}^{n\_sum} \sin (x)
\] 
where n_sum is a positive integer. The larger the value of n_sum, the more computation is required to calculate the function.

5.9.1.2.h: Subroutines
5.9.1.2.1: multi_newton Multi-Threaded Newton's Method Routine
5.9.1.2.2: multi_newton.hpp OpenMP Multi-Threading Newton's Method Source Code

5.9.1.2.i: Example Source
 

# include <cppad/cppad.hpp>
# include <cmath>
# include <cstring>
# include "multi_newton.hpp"

# ifdef _OPENMP
# include <omp.h>
# endif


namespace { // empty namespace
	size_t n_sum;  // larger values make fun(x) take longer to calculate
        size_t n_zero; // number of zeros of fun(x) in the total interval
}

// A slow version of the sine function
CppAD::AD<double> fun(const CppAD::AD<double> &x)
{	CppAD::AD<double> sum = 0.;
	size_t i;
	for(i = 0; i < n_sum; i++)
		sum += sin(x);

	return sum / double(n_sum);
}

void test_once(CppAD::vector<double> &xout, size_t n_grid)
{	assert( n_zero > 1 );
	double pi      = 4. * std::atan(1.); 
	double xlow    = 0.;
	double xup     = (n_zero - 1) * pi;
	double epsilon = 1e-6;
	size_t max_itr = 20;

	multi_newton(
		xout    ,
		fun     ,
		n_grid  ,
		xlow    ,
		xup     ,
		epsilon ,
		max_itr
	);
	return;
}

void test_repeat(size_t size, size_t repeat)
{	size_t i;
	CppAD::vector<double> xout;
	for(i = 0; i < repeat; i++)
		test_once(xout, size);
	return;
}

int main(int argc, char *argv[])
{
	using std::cout;
	using std::endl;
	using CppAD::vector;

	const char *usage = "multi_newton n_thread repeat n_zero n_grid n_sum";
	if( argc != 6 )
	{	std::cerr << usage << endl;
		exit(1);
	}
	argv++;

	// n_thread command line argument
	int n_thread;
	if( std::strcmp(*argv, "automatic") == 0 )
		n_thread = 0;
	else	n_thread = std::atoi(*argv);
	argv++;

	// repeat command line argument
	size_t repeat;
	if( std::strcmp(*argv, "automatic") == 0 )
		repeat = 0;
	else
	{	assert( std::atoi(*argv) > 0 );
		repeat = std::atoi(*argv);
	}
	argv++;

	// n_zero command line argument 
	assert( std::atoi(*argv) > 1 );
	n_zero = std::atoi(*argv++);

	// n_grid command line argument
	assert( std::atoi(*argv) > 0 );
	size_t n_grid = std::atoi(*argv++);
       
	// n_sum command line argument 
	assert( std::atoi(*argv) > 0 );
	n_sum = std::atoi(*argv++);

	// minimum time for test (repeat until this much time)
	double time_min = 1.;

# ifdef _OPENMP
	if( n_thread > 0 )
	{	omp_set_dynamic(0);            // off dynamic thread adjust
		omp_set_num_threads(n_thread); // set the number of threads 
	}
	// now determine the maximum number of threads
	n_thread = omp_get_max_threads();
	assert( n_thread > 0 );
	
	// No tapes are currently active,
	// so we can inform CppAD of the maximum number of threads
	CppAD::AD<double>::omp_max_thread(size_t(n_thread));

	// inform the user of the maximum number of threads
	cout << "OpenMP: version = "         << _OPENMP;
	cout << ", max number of threads = " << n_thread << endl;
# else
	cout << "_OPENMP is not defined, ";
	cout << "running in single thread mode" << endl;
	n_thread = 1;
# endif
	// initialize flag
	bool ok = true;

	// sub-block so xout gets deallocated before call to CPPAD_TRACK_COUNT
	{	// Correctness check
		vector<double> xout;
		test_once(xout, n_grid);
		double epsilon = 1e-6;
		double pi      = 4. * std::atan(1.);
		ok            &= (xout.size() == n_zero);
		size_t i       = 0;
		while( ok & (i < n_zero) )
		{	ok &= std::fabs( xout[i] - pi * i) <= 2 * epsilon;
			++i;
		}
	}
	if( repeat > 0 )
	{	// run the calculation the requested number of times
		test_repeat(n_grid, repeat);
	}
	else
	{	// actually time the calculation	 

		// size of the one test case
		vector<size_t> size_vec(1);
		size_vec[0] = n_grid;

		// run the test case
		vector<size_t> rate_vec =
		CppAD::speed_test(test_repeat, size_vec, time_min);

		// report results
		cout << "n_grid           = " << size_vec[0] << endl;
		cout << "repeats per sec  = " << rate_vec[0] << endl;
	}
	// check all the threads for a CppAD memory leak
	if( CPPAD_TRACK_COUNT() != 0 )
	{	ok = false;
		cout << "Error: memory leak detected" << endl;
	}
	if( ok )
		cout << "Correctness Test Passed" << endl;
	else	cout << "Correctness Test Failed" << endl;

	return static_cast<int>( ! ok );
}


Input File: openmp/multi_newton.cpp
5.9.1.2.1: Multi-Threaded Newton's Method Routine

5.9.1.2.1.a: Syntax
multi_newton(xout, fun, n_grid, xlow, xup, epsilon, max_itr)

5.9.1.2.1.b: Purpose
Determine the argument values  x \in [a, b] (where  a < b ) such that  f(x) = 0 .

5.9.1.2.1.c: Method
For  i = 0 , \ldots , n , we define the i-th grid point  g_i and the i-th interval  I_i by  \[
\begin{array}{rcl}
     g_i & = & a \frac{n - i}{n} +  b \frac{i}{n}
     \\
     I_i & = & [ g_i , g_{i+1} ]
\end{array}
\] 
Newton's method is applied starting at the center of each of the intervals  I_i for  i = 0 , \ldots , n-1 and at most one zero is found for each interval.

5.9.1.2.1.d: xout
The argument xout has the prototype
     CppAD::vector<double> &xout
The input size and value of the elements of xout do not matter. Upon return from multi_newton, the size of xout is less than  n and  \[
     | f( xout[i] ) | \leq epsilon
\] 
for each valid index i. Two  x solutions are considered equal (and joined as one) if the absolute difference between the solutions is less than  (b - a) / n .

5.9.1.2.1.e: fun
The argument fun has prototype
     Fun &fun
This argument must evaluate the function  f(x) using the syntax
     f = fun(x)
where the argument x and the result f have the prototypes
     const AD<double> &x
     AD<double>        f

5.9.1.2.1.f: n_grid
The argument n_grid has prototype
     size_t n_grid
It specifies the number of grid points; i.e.,  n in the 5.9.1.2.1.c: method above.

5.9.1.2.1.g: xlow
The argument xlow has prototype
     double xlow
It specifies the lower limit for the entire search; i.e.,  a in the 5.9.1.2.1.c: method above.

5.9.1.2.1.h: xup
The argument xup has prototype
     double xup
It specifies the upper limit for the entire search; i.e.,  b in the 5.9.1.2.1.c: method above.

5.9.1.2.1.i: epsilon
The argument epsilon has prototype
     double epsilon
It specifies the convergence criteria for Newton's method in terms of how small the function value must be.

5.9.1.2.1.j: max_itr
The argument max_itr has prototype
     size_t max_itr
It specifies the maximum number of iterations of Newton's method to try before giving up on convergence.
Input File: openmp/multi_newton.hpp
5.9.1.2.2: OpenMP Multi-Threading Newton's Method Source Code
 

# include <cppad/cppad.hpp>
# include <cassert>

# ifdef _OPENMP
# include <omp.h>
# endif

namespace { // BEGIN CppAD namespace

template <class Fun>
void one_newton(double &fcur, double &xcur, Fun &fun, 
	double xlow, double xin, double xup, double epsilon, size_t max_itr)
{	using CppAD::AD;
	using CppAD::vector;
	using CppAD::abs;

	// domain space vector
	size_t n = 1;
	vector< AD<double> > X(n);
	// range space vector
	size_t m = 1;
	vector< AD<double> > Y(m);
	// domain and range differentials
	vector<double> dx(n), dy(m);

	size_t itr;
	xcur = xin;
	for(itr = 0; itr < max_itr; itr++)
	{	// domain space vector
		X[0] = xcur;
		CppAD::Independent(X);
		// range space vector
		Y[0] = fun(X[0]);
		// F : X -> Y
		CppAD::ADFun<double> F(X, Y);
		// fcur = F(xcur)
		fcur  = Value(Y[0]);
		// evaluate dfcur = F'(xcur)
		dx[0] = 1;
		dy = F.Forward(1, dx);
		double dfcur = dy[0];
		// check end of iterations
		if( abs(fcur) <= epsilon )
			return;
		if( (xcur == xlow) & (fcur * dfcur > 0.) )
			return; 
		if( (xcur == xup)  & (fcur * dfcur < 0.) )
			return; 
		if( dfcur == 0. )
			return;
		// next Newton iterate
		double delta_x = - fcur / dfcur;
		if( xlow - xcur >= delta_x )
			xcur = xlow;
		else if( xup - xcur <= delta_x )
			xcur = xup;
		else	xcur = xcur + delta_x;
	}
	return;
}

template <class Fun>
void multi_newton(
	CppAD::vector<double> &xout , 
	Fun &fun                    , 
	size_t n_grid               , 
	double xlow                 , 
	double xup                  , 
	double epsilon              , 
	size_t max_itr              )
{	using CppAD::AD;
	using CppAD::vector;
	using CppAD::abs;

	// check argument values
	assert( xlow < xup );
	assert( n_grid > 0 );

	// OpenMP uses integers in place of size_t
	int i, n = int(n_grid);

	// set up grid
	vector<double> grid(n_grid + 1);
	vector<double> fcur(n_grid), xcur(n_grid), xmid(n_grid);
	double dx = (xup - xlow) / double(n_grid);
	for(i = 0; size_t(i) < n_grid; i++)
	{	grid[i] = xlow + i * dx;
		xmid[i] = xlow + (i + .5) * dx;
	}
	grid[n_grid] = xup;

# ifdef _OPENMP
# pragma omp parallel for 
# endif
	for(i = 0; i < n; i++) 
	{	one_newton(
			fcur[i]   ,
			xcur[i]   ,
			fun       , 
			grid[i]   , 
			xmid[i]   , 
			grid[i+1] , 
			epsilon   , 
			max_itr
		);
	}
// end omp parallel for

	// remove duplicates and points that are not solutions
	double xlast  = xlow;
	size_t ilast  = 0;
	size_t n_zero = 0;
	for(i = 0; size_t(i) < n_grid; i++)
	{	if( abs( fcur[i] ) <= epsilon )
		{	if( n_zero == 0 )
			{	xcur[n_zero++] = xlast = xcur[i];
				ilast = i;
			}
			else if( fabs( xcur[i] - xlast ) > dx ) 
			{	xcur[n_zero++] = xlast = xcur[i];
				ilast = i;
			}
			else if( fabs( fcur[i] ) < fabs( fcur[ilast] ) )
			{	xcur[n_zero - 1] = xlast = xcur[i]; 
				ilast = i;
			}
		}
	}

	// resize output vector and set its values
	xout.resize(n_zero);
	for(i = 0; size_t(i) < n_zero; i++)
		xout[i] = xcur[i];

	return;
}

} // END CppAD namespace


Input File: openmp/multi_newton.hpp
5.9.1.3: Sum of 1/i Main Program

5.9.1.3.a: Syntax
sum_i_inv n_thread repeat mega_sum

5.9.1.3.b: Purpose
Runs a timing test of computing
     1 + 1/2 + 1/3 + ... + 1/n_sum
where n_sum = 1,000,000 * mega_sum.

5.9.1.3.c: n_thread
If the argument n_thread is equal to automatic, dynamic thread adjustment is used. Otherwise, n_thread must be a positive number specifying the number of OpenMP threads to use.

5.9.1.3.d: repeat
If the argument repeat is equal to automatic, the number of times to repeat the calculation of the summation above is automatically determined. In this case, the rate of execution of the summation is reported.

If the argument repeat is not equal to automatic, it must be a positive integer. In this case, repeat determines the number of times the summation above is calculated. The rate of execution is not reported (it is assumed that the program execution time is being calculated some other way).

5.9.1.3.e: mega_sum
The argument mega_sum is the value used in the summation above (it must be greater than or equal to the number of threads).

5.9.1.3.f: Example Source
 
# include <cppad/cppad.hpp>
# ifdef _OPENMP
# include <omp.h>
# endif

# include <cassert>

# include <cstring>

namespace { // empty namespace
	int n_thread;
}

double sum_using_one_thread(int start, int stop)
{	// compute 1./start + 1./(start+1) + ... + 1./(stop-1)
	double sum = 0.;
	int i = stop;
	while( i > start )
	{	i--;
		sum += 1. / double(i);	
	}
	return sum;
}
double sum_using_multiple_threads(int n_sum)
{	// compute 1. + 1./2 + ... + 1./n_sum
	assert( n_sum >= n_thread );   // assume n_sum / n_thread > 1

	// limit holds start and stop values for each thread
	int    limit[n_thread + 1];
	int i;
	for(i = 1; i < n_thread; i++)
		limit[i] = (n_sum * i ) / n_thread;
	limit[0]         = 1;
	limit[n_thread]  = n_sum + 1;

	// compute sum_one[i] = 1/limit[i] + ... + 1/(limit[i+1] - 1)
	double sum_one[n_thread];
//--------------------------------------------------------------------------
# ifdef _OPENMP
# pragma omp parallel for 
# endif
	for(i = 0; i < n_thread; i++)
		sum_one[i] = sum_using_one_thread(limit[i], limit[i+1]);
// -------------------------------------------------------------------------

	// compute sum_all = sum_one[0] + ... + sum_one[n_thread-1]
	double sum_all = 0.;
	for(i = 0; i < n_thread; i++)
		sum_all += sum_one[i];

	return sum_all;
}

void test_once(double &sum, size_t mega_sum)
{	assert( mega_sum >= 1 );
	int n_sum = int(mega_sum * 1000000);
	sum = sum_using_multiple_threads(n_sum); 
	return;
}

void test_repeat(size_t size, size_t repeat)
{	size_t i;
	double sum;
	for(i = 0; i < repeat; i++)
		test_once(sum, size);
	return;
}

int main(int argc, char *argv[])
{
	using std::cout;
	using std::endl;
	using CppAD::vector;

	char *usage = "sum_i_inv n_thread repeat mega_sum";
	if( argc != 4 )
	{	std::cerr << usage << endl;
		exit(1);
	}
	argv++;

	// n_thread command line argument
	if( std::strcmp(*argv, "automatic") == 0 )
		n_thread = 0;
	else	n_thread = std::atoi(*argv);
	argv++;

	// repeat command line argument
	size_t repeat;
	if( std::strcmp(*argv, "automatic") == 0 )
		repeat = 0;
	else
	{	assert( std::atoi(*argv) > 0 );
		repeat = std::atoi(*argv);
	}
	argv++;

	// mega_sum command line argument 
	size_t mega_sum;
	assert( std::atoi(*argv) > 0 );
	mega_sum = size_t( std::atoi(*argv++) );

	// minimum time for test (repeat until this much time)
	double time_min = 1.;

# ifdef _OPENMP
	if( n_thread > 0 )
	{	omp_set_dynamic(0);            // off dynamic thread adjust
		omp_set_num_threads(n_thread); // set the number of threads 
	}
	// now determine the maximum number of threads
	n_thread = omp_get_max_threads();
	assert( n_thread > 0 );
	
	// No tapes are currently active,
	// so we can inform CppAD of the maximum number of threads
	CppAD::AD<double>::omp_max_thread(size_t(n_thread));

	// inform the user of the maximum number of threads
	cout << "OpenMP: version = "         << _OPENMP;
	cout << ", max number of threads = " << n_thread << endl;
# else
	cout << "_OPENMP is not defined, ";
	cout << "running in single thread mode" << endl;
	n_thread = 1;
# endif
	// initialize flag
	bool ok = true;

	// Correctness check
	double sum;
	test_once(sum, mega_sum);
	double epsilon = 1e-6;
	size_t i = 0;
	size_t n_sum = mega_sum * 1000000;
	while(i < n_sum)
		sum -= 1. / double(++i); 
	ok &= std::fabs(sum) <= epsilon;

	if( repeat > 0 )
	{	// run the calculation the requested number of times
		test_repeat(mega_sum, repeat);
	}
	else
	{	// actually time the calculation	 

		// size of the one test case
		vector<size_t> size_vec(1);
		size_vec[0] = mega_sum;

		// run the test case
		vector<size_t> rate_vec =
		CppAD::speed_test(test_repeat, size_vec, time_min);

		// report results
		cout << "mega_sum         = " << size_vec[0] << endl;
		cout << "repeats per sec  = " << rate_vec[0] << endl;
	}
	// check all the threads for a CppAD memory leak
	if( CPPAD_TRACK_COUNT() != 0 )
	{	ok = false;
		cout << "Error: memory leak detected" << endl;
	}
	if( ok )
		cout << "Correctness Test Passed" << endl;
	else	cout << "Correctness Test Failed" << endl;

	return static_cast<int>( ! ok );
}


Input File: openmp/sum_i_inv.cpp
5.10: ADFun Object Deprecated Member Functions

5.10.a: Syntax
f.Dependent(y)
o = f.Order()
m = f.Memory()
s = f.Size()
t = f.taylor_size()

5.10.b: Purpose
The ADFun<Base> functions documented here have been deprecated; i.e., they are no longer approved of and may be removed from some future version of CppAD.

5.10.c: Dependent
A recording of an AD of Base 9.4.g.b: operation sequence is started by a call of the form
     Independent(x)
If there is only one such recording at the current time, you can use f.Dependent(y) in place of
     f.Dependent(x, y)
See 5.3: Dependent for a description of this operation.

5.10.c.a: Deprecated
This syntax was deprecated when CppAD was extended to allow for more than one AD<Base> recording to be active at one time. This was necessary to allow for multiple threading applications.

5.10.d: Order
The result o has prototype
     size_t o
and is the order of the previous forward operation using the function f. This is the highest order of the 9.4.k: Taylor coefficients that are currently stored in f.

5.10.d.a: Deprecated
Zero order corresponds to function values being stored in f. In the future, we would like to be able to erase the function values so that f uses less memory. In this case, the return value of Order would not make sense. Use 5.6.1.4: size_taylor to obtain the number of Taylor coefficients currently stored in the ADFun object f (which is equal to the order plus one).

5.10.e: Memory
The result m has prototype
     size_t m
and is the number of memory units (sizeof) required for the information currently stored in f. This memory is returned to the system when the destructor for f is called.

5.10.e.a: Deprecated
It used to be the case that an ADFun object just kept increasing its buffers to the maximum size necessary during its lifetime. It would then return the buffers to the system when its destructor was called. This is no longer the case; an ADFun object now returns memory when it no longer needs the values stored in that memory. Thus the Memory function is no longer well defined.

5.10.f: Size
The result s has prototype
     size_t s
and is the number of variables in the operation sequence plus the following: one for a phantom variable with tape address zero, one for each component of the domain that is a parameter. The amount of work and memory necessary for computing function values and derivatives using f is roughly proportional to s.

5.10.f.a: Deprecated
There are other sizes attached to an ADFun object, for example, the number of operations in the sequence. In order to avoid confusion with these other sizes, use 5.5.h: size_var to obtain the number of variables in the operation sequence.

5.10.g: taylor_size
The result t has prototype
     size_t t
and is the number of Taylor coefficients, per variable in the AD operation sequence, currently calculated and stored in the ADFun object f.

5.10.g.a: Deprecated
For the purpose of uniform naming, this function has been replaced by 5.6.1.4: size_taylor .

5.10.h: Example
The file 5.6.1.7: Forward.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: omh/fun_deprecated.omh
6: The CppAD General Purpose Library

6.a: Error Handler
All of the routines in the CppAD namespace use the following general purpose error handler:
6.1: ErrorHandler Replacing the CppAD Error Handler

6.b: Some Testing Utilities
The routines listed below are general purpose numerical testing routines:
6.2: NearEqual Determine if Two Values Are Nearly Equal
6.3: speed_test Run One Speed Test and Return Results
6.4: SpeedTest Run One Speed Test and Print Results

6.c: C++ Concepts
We refer to a set of classes that satisfies certain conditions as a C++ concept. The following concepts are used by the CppAD Template library:
6.5: NumericType Definition of a Numeric Type
6.6: CheckNumericType Check NumericType Class Concept
6.7: SimpleVector Definition of a Simple Vector
6.8: CheckSimpleVector Check Simple Vector Concept

6.d: CppAD Numerical Template Library
The routines listed below are general purpose numerical routines written with the floating point type as a C++ template parameter. This enables them to be used with algorithmic differentiation types, as well as for other purposes.
6.9: nan Obtain Nan and Determine if a Value is Nan
6.10: pow_int The Integer Power Function
6.11: Poly Evaluate a Polynomial or its Derivative
6.12: LuDetAndSolve Compute Determinants and Solve Equations by LU Factorization
6.13: RombergOne One Dimensional Romberg Integration
6.14: RombergMul Multi-dimensional Romberg Integration
6.15: Runge45 An Embedded 4th and 5th Order Runge-Kutta ODE Solver
6.16: Rosen34 A 3rd and 4th Order Rosenbrock ODE Solver
6.17: OdeErrControl An Error Controller for ODE Solvers
6.18: OdeGear An Arbitrary Order Gear Method
6.19: OdeGearControl An Error Controller for Gear's Ode Solvers

6.e: Numerical AD Library
The routines listed below are numerical routines that are specially designed to work with CppAD in particular.
6.20: BenderQuad Computing Jacobian and Hessian of Bender's Reduced Objective
6.21: LuRatio LU Factorization of A Square Matrix and Stability Calculation

6.f: CppAD Support Template Library
The classes listed are used to support CppAD calculations:
6.22: std_math_unary Float and Double Standard Math Unary Functions
6.23: CppAD_vector The CppAD::vector Template Class
6.24: TrackNewDel Routines That Track Use of New and Delete

Input File: omh/library.omh
6.1: Replacing the CppAD Error Handler

6.1.a: Syntax
ErrorHandler info(handler)
ErrorHandler::Call(known, line, file, exp, msg)

6.1.b: Constructor
When you construct an ErrorHandler object, the current CppAD error handler is replaced by handler. When the object is destructed, the previous CppAD error handler is restored.

6.1.c: Call
When ErrorHandler::Call is called, the current CppAD error handler is used to report an error. This starts out as a default error handler and can be replaced using the ErrorHandler constructor.

6.1.d: info
The object info is used to store information that is necessary to restore the previous CppAD error handler. This is done when the destructor for info is called.

6.1.e: handler
The argument handler has prototype
     void (*handler) (bool, int, const char *, const char *, const char *);
When an error is detected, it is called with the syntax
     
handler(known, line, file, exp, msg)
This routine should not return; i.e., upon detection of the error, the routine calling handler does not know how to proceed.

6.1.f: known
The handler argument known has prototype
     bool known
If it is true, the error being reported is from a known problem.

6.1.g: line
The handler argument line has prototype
     int line
It reports the source code line number where the error is detected.

6.1.h: file
The handler argument file has prototype
     const char *file
and is a '\0' terminated character vector. It reports the source code file where the error is detected.

6.1.i: exp
The handler argument exp has prototype
     const char *exp
and is a '\0' terminated character vector. It is a source code boolean expression that should have been true, but is false, and thereby causes this call to handler.

6.1.j: msg
The handler argument msg has prototype
     const char *msg
and is a '\0' terminated character vector. It reports the meaning of the error from the C++ programmers point of view.

6.1.k: Example
The file 6.1.1: ErrorHandler.cpp contains an example and test of using this routine. It returns true if it succeeds and false otherwise.
Input File: cppad/error_handler.hpp
6.1.1: Replacing The CppAD Error Handler: Example and Test
 

# include <cppad/error_handler.hpp>
# include <cstring>

namespace {
	void myhandler(
		bool known       ,
		int  line        ,
		const char *file ,
		const char *exp  ,
		const char *msg  )
	{	// error handler must not return, so throw an exception
		throw line;
	}
}


bool ErrorHandler(void)
{	using CppAD::ErrorHandler;

	int lineMinusFive = 0;

	// replace the default CppAD error handler
	ErrorHandler info(myhandler);

	// set ok to false unless catch block is executed
	bool ok = false;

	// use try / catch because handler throws an exception
	try {
		// set the static variable Line to next source code line
		lineMinusFive = __LINE__;
 
		// can call myhandler anywhere that ErrorHandler is defined
		ErrorHandler::Call(
			true     , // reason for the error is known
			__LINE__ , // current source code line number
			__FILE__ , // current source code file name
			"1 > 0"  , // an intentional error condition
			"Testing ErrorHandler"     // reason for error
		); 
	}
	catch ( int line )
	{	// check value of the line number that was passed to handler
		ok = (line == lineMinusFive + 5);
	}

	// info drops out of scope and the default CppAD error handler
	// is restored when this routine returns.
	return ok;
}


Input File: example/error_handler.cpp
6.1.2: CppAD Assertions During Execution

6.1.2.a: Syntax
CPPAD_ASSERT_KNOWN(expmsg)
CPPAD_ASSERT_UNKNOWN(exp)

6.1.2.b: Purpose
If the preprocessor symbol NDEBUG is not defined, these CppAD macros are used to detect and report errors. They are documented here because they correspond to the C++ source code where the error is reported.

6.1.2.c: Restriction
The CppAD user should not use these macros. You can, however, write your own macros that do not begin with CPPAD and that call the 6.1: CppAD error handler .

6.1.2.c.a: Known
The CPPAD_ASSERT_KNOWN macro is used to check for an error with a known cause. For example, many CppAD routines use these macros to make sure their arguments conform to their specifications.

6.1.2.c.b: Unknown
The CPPAD_ASSERT_UNKNOWN macro is used to check that the CppAD internal data structures conform as expected. If this is not the case, CppAD does not know why the error has occurred; for example, the user may have written past the end of an allocated array.

6.1.2.d: Exp
The argument exp is a C++ source code expression that results in a bool value that should be true. If it is false, an error has occurred. This expression may be executed any number of times (including zero) so it must not have side effects.

6.1.2.e: Msg
The argument msg has prototype
     const char *msg
and contains a '\0' terminated character string. This string is a description of the error corresponding to exp being false.

6.1.2.f: Error Handler
These macros use the 6.1: CppAD error handler to report errors. This error handler can be replaced by the user.
Input File: cppad/local/cppad_assert.hpp
6.2: Determine if Two Values Are Nearly Equal

6.2.a: Syntax
# include <cppad/near_equal.hpp>

b = NearEqual(x, y, r, a)

6.2.b: Purpose
Returns true, if x and y are nearly equal, and false otherwise.

6.2.c: x
The argument x has one of the following possible prototypes
     const Type               &x,
     const std::complex<Type> &x

6.2.d: y
The argument y has one of the following possible prototypes
     const Type               &y,
     const std::complex<Type> &y

6.2.e: r
The relative error criteria r has prototype
     const Type &r
It must be greater than or equal to zero. The relative error condition is defined as:  \[
     | x - y | \leq r ( |x| + |y| ) 
\] 


6.2.f: a
The absolute error criteria a has prototype
     const Type &a
It must be greater than or equal to zero. The absolute error condition is defined as:  \[
     | x - y | \leq a
\] 


6.2.g: b
The return value b has prototype
     bool b
If either x or y is infinite or not a number, the return value is false. Otherwise, if either the relative or absolute error condition (defined above) is satisfied, the return value is true. Otherwise, the return value is false.

6.2.h: Type
The type Type must be a 6.5: NumericType . The routine 6.6: CheckNumericType will generate an error message if this is not the case. In addition, the following operation must be defined for objects a and b of type Type:
Operation Description
a <= b less than or equal operator (returns a bool object)

6.2.i: Include Files
The file cppad/near_equal.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

6.2.j: Example
The file 6.2.1: Near_Equal.cpp contains an example and test of NearEqual. It returns true if it succeeds and false otherwise.

6.2.k: Exercise
Create and run a program that contains the following code:
 
	using std::complex;
	using std::cout;
	using std::endl;

	complex<double> one(1., 0), i(0., 1);
	complex<double> x = one / i;
	complex<double> y = - i;
	double          r = 1e-12;
	double          a = 0;
	bool           ok = CppAD::NearEqual(x, y, r, a);
	if( ok )
		cout << "Ok"    << endl;
	else	cout << "Error" << endl;

Input File: cppad/near_equal.hpp
6.2.1: NearEqual Function: Example and Test

6.2.1.a: File Name
This file is called near_equal.cpp instead of NearEqual.cpp to avoid a name conflict with ../lib/NearEqual.cpp in the corresponding Microsoft project file.
 

# include <cppad/near_equal.hpp>

# include <complex>

bool Near_Equal(void)
{	bool ok = true;
	typedef std::complex<double> Complex;
	using CppAD::NearEqual;

	// double 
	double x    = 1.00000;
	double y    = 1.00001;
	double a    =  .00003;
	double r    =  .00003;
	double zero = 0.; 
	double inf  = 1. / zero;
	double nan  = 0. / zero;

	ok &= NearEqual(x, y, zero, a);
	ok &= NearEqual(x, y, r, zero);
	ok &= NearEqual(x, y, r, a);

	ok &= ! NearEqual(x, y, r / 10., a / 10.);
	ok &= ! NearEqual(inf, inf, r, a);
	ok &= ! NearEqual(-inf, -inf, r, a);
	ok &= ! NearEqual(nan, nan, r, a);

	// complex 
	Complex X(x, x / 2.);
	Complex Y(y, y / 2.);
	Complex Inf(inf, zero);
	Complex Nan(zero, nan);

	ok &= NearEqual(X, Y, zero, a);
	ok &= NearEqual(X, Y, r, zero);
	ok &= NearEqual(X, Y, r, a);

	ok &= ! NearEqual(X, Y, r / 10., a / 10.);
	ok &= ! NearEqual(Inf, Inf, r, a);
	ok &= ! NearEqual(-Inf, -Inf, r, a);
	ok &= ! NearEqual(Nan, Nan, r, a);

	return ok;
}


Input File: example/near_equal.cpp
6.3: Run One Speed Test and Return Results

6.3.a: Syntax
# include <cppad/speed_test.hpp>

rate_vec = speed_test(test, size_vec, time_min)

6.3.b: Purpose
The speed_test function executes a speed test for various sized problems and reports the rate of execution.

6.3.c: Motivation
It is important to separate small calculation units and test them individually. This way individual changes can be tested in the context of the routine that they are in. On many machines, accurate timing of very short execution sequences is not possible. In addition, there may be set up and tear down time for a test that we do not really want included in the timing. For this reason speed_test automatically determines how many times to repeat the section of the test that we wish to time.

6.3.d: Include
The file cppad/speed_test.hpp defines the speed_test function. This file is included by cppad/cppad.hpp and it can also be included separately without the rest of the CppAD routines.

6.3.e: Vector
We use Vector to denote a 6.7: simple vector class with elements of type size_t.

6.3.f: test
The speed_test argument test is a function with the syntax
     
test(size, repeat)
and its return value is void.

6.3.f.a: size
The test argument size has prototype
     size_t size
It specifies the size for this test.

6.3.f.b: repeat
The test argument repeat has prototype
     size_t repeat
It specifies the number of times to repeat the test.

6.3.g: size_vec
The speed_test argument size_vec has prototype
     const Vector &size_vec
This vector determines the size for each of the test problems.

6.3.h: time_min
The argument time_min has prototype
     double time_min
It specifies the minimum amount of time in seconds that the test routine should take. The repeat argument to test is increased until this amount of execution time is reached.

6.3.i: rate_vec
The return value rate_vec has prototype
     
Vector &rate_vec
We use  n to denote its size which is the same as the vector size_vec. For  i = 0 , \ldots , n-1 ,
     
rate_vec[i]
is repeat divided by the corresponding execution time in seconds for the problem with size size_vec[i].

6.3.j: Timing
If your system supports the unix gettimeofday function, it will be used to measure time. Otherwise, time is measured by the difference in
 
	(double) clock() / (double) CLOCKS_PER_SEC
in the context of the standard <ctime> definitions.

6.3.k: Example
The section 6.3.1: speed_test.cpp contains an example and test of speed_test.
Input File: cppad/speed_test.hpp
6.3.1: speed_test: Example and test
 
# include <cppad/speed_test.hpp>
# include <vector>

namespace { // empty namespace
	void test(size_t size, size_t repeat)
	{	// setup
		double *a = new double[size];
		double *b = new double[size];
		double *c = new double[size];
		size_t i  = size;
		while(i)
		{	--i;
			a[i] = i;
			b[i] = 2 * i;
		}
		// operations we are timing
		while(repeat--)
		{	i = size;
			while(i)
			{	--i;
				c[i] = a[i] + b[i];
			}
		}
		// teardown
		delete [] a;
		delete [] b;
		delete [] c;
		return;
	}
}
bool speed_test(void)
{	bool ok = true;

	// size of the test cases
	std::vector<size_t> size_vec(2);
	size_vec[0] = 10;
	size_vec[1] = 20;

	// use a small amount of time (we do not need accurate results)
	double time_min = .2; 

	// run the test cases
	std::vector<size_t> rate_vec(2);
	rate_vec = CppAD::speed_test(test, size_vec, time_min);

	// time per repeat loop (not counting setup or teardown)
	double time_0 = 1. / double(rate_vec[0]);
	double time_1 = 1. / double(rate_vec[1]);

	// for this case, time should be linear w.r.t size
	double check    = double(size_vec[1]) * time_0 / double(size_vec[0]);
	double rel_diff = std::abs(check - time_1) / time_1;
	ok             &= (rel_diff <= .1);
 
	return ok;
}

Input File: speed/example/speed_test.cpp
6.4: Run One Speed Test and Print Results

6.4.a: Syntax
# include <cppad/speed_test.hpp>

SpeedTest(Test, first, inc, last)

6.4.b: Purpose
The SpeedTest function executes a speed test for various sized problems and reports the results on standard output; i.e. std::cout. The size of each test problem is included in its report (unless first is equal to last).

6.4.c: Motivation
It is important to separate small calculation units and test them individually. This way individual changes can be tested in the context of the routine that they are in. On many machines, accurate timing of very short execution sequences is not possible. In addition, there may be set up time for a test that we do not really want included in the timing. For this reason SpeedTest automatically determines how many times to repeat the section of the test that we wish to time.

6.4.d: Include
The file speed_test.hpp contains the SpeedTest function. This file is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

6.4.e: Test
The SpeedTest argument Test is a function with the syntax
     
name = Test(size, repeat)

6.4.e.a: size
The Test argument size has prototype
     size_t size
It specifies the size for this test.

6.4.e.b: repeat
The Test argument repeat has prototype
     size_t repeat
It specifies the number of times to repeat the test.

6.4.e.c: name
The Test result name has prototype
     std::string name
The results for this test are reported on std::cout with name as an identifier for the test. It is assumed that, for the duration of this call to SpeedTest, Test will always return the same value for name. If name is the empty string, no test name is reported by SpeedTest.

6.4.f: first
The SpeedTest argument first has prototype
     size_t first
It specifies the size of the first test problem reported by this call to SpeedTest.

6.4.g: last
The SpeedTest argument last has prototype
     size_t last
It specifies the size of the last test problem reported by this call to SpeedTest.

6.4.h: inc
The SpeedTest argument inc has prototype
     int inc
It specifies the increment between problem sizes; i.e., all values of size in calls to Test are given by
     
size = first + j * inc
where j is a non-negative integer. The increment can be positive or negative but it cannot be zero. The values first, last and inc must satisfy the relation  \[
     inc * ( last - first ) \geq 0
\] 


6.4.i: rate
The value displayed in the rate column on std::cout is defined as the value of repeat divided by the corresponding elapsed execution time in seconds. The elapsed execution time is measured by the difference in
 
	(double) clock() / (double) CLOCKS_PER_SEC
in the context of the standard <ctime> definitions.
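The rate definition above can be sketched as follows. This is a hypothetical helper (the name rate_of and its interface are not part of CppAD); it only illustrates repeat divided by the elapsed time measured with clock() as described above:

```cpp
# include <ctime>
# include <cstddef>

// Hypothetical sketch (not the CppAD source): compute the rate
// reported by SpeedTest, i.e. repeat divided by the elapsed
// execution time in seconds, where the elapsed time is measured
// by the difference in (double) clock() / (double) CLOCKS_PER_SEC.
template <class Fun>
double rate_of(Fun f, std::size_t repeat)
{	double start = (double) clock() / (double) CLOCKS_PER_SEC;
	for(std::size_t i = 0; i < repeat; i++)
		f();
	double end   = (double) clock() / (double) CLOCKS_PER_SEC;
	return double(repeat) / (end - start);
}
```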

6.4.j: Errors
If one of the restrictions above is violated, the CppAD error handler is used to report the error. You can redefine this action using the instructions in 6.1: ErrorHandler.

6.4.k: Example
The section 6.4.1: speed_program.cpp contains an example usage of SpeedTest.
Input File: cppad/speed_test.hpp
6.4.1: Example Use of SpeedTest

6.4.1.a: Running This Program
On a Unix system that includes the g++ compiler, you can compile and run this program by changing into the speed/example directory and executing the following commands
 
	g++ -I../.. speed_program.cpp -o speed_program.exe
	./speed_program.exe


6.4.1.b: Program
 
# include <cppad/speed_test.hpp>

std::string Test(size_t size, size_t repeat)
{	// setup
	double *a = new double[size];
	double *b = new double[size];
	double *c = new double[size];
	size_t i  = size;
	while(i)
	{	--i;
		a[i] = i;
		b[i] = 2 * i;
	}
	// operations we are timing
	while(repeat--)
	{	i = size;
		while(i)
		{	--i;
			c[i] = a[i] + b[i];
		}
	}
	// teardown
	delete [] a;
	delete [] b;
	delete [] c;

	// return a test name that is valid for all sizes and repeats
	return "double: c[*] = a[*] + b[*]";
}
int main(void)
{
	CppAD::SpeedTest(Test, 10, 10, 100);
	return 0;
}



6.4.1.c: Output
Executing the program above generated the following output (the rates will be different for each particular system):
 
	double: c[*] = a[*] + b[*]
	size = 10  rate = 14,122,236
	size = 20  rate = 7,157,515
	size = 30  rate = 4,972,500
	size = 40  rate = 3,887,214
	size = 50  rate = 3,123,086
	size = 60  rate = 2,685,214
	size = 70  rate = 2,314,737
	size = 80  rate = 2,032,124
	size = 90  rate = 1,814,145
	size = 100 rate = 1,657,828

Input File: speed/example/speed_program.cpp
6.5: Definition of a Numeric Type

6.5.a: Type Requirements
A NumericType is any type that satisfies the requirements below. The following is a list of some numeric types: int, float, double, AD<double>, AD< AD<double> >. The routine 6.6: CheckNumericType can be used to check that a type satisfies these conditions.

6.5.b: Default Constructor
The syntax
     
NumericType x;
creates a NumericType object with an unspecified value.

6.5.c: Constructor From Integer
If i is an int, the syntax
     
NumericType x(i);
creates a NumericType object with a value equal to i where i can be const.

6.5.d: Copy Constructor
If x is a NumericType object the syntax
     
NumericType y(x);
creates a NumericType object y with the same value as x where x can be const.

6.5.e: Assignment
If x and y are NumericType objects, the syntax
     
x = y
sets the value of x equal to the value of y where y can be const. The type returned by this operation is unspecified; i.e., it could be void and hence
     
x = y = z
may not be legal.

6.5.f: Operators
Suppose x, y and z are NumericType objects where x and y may be const. In the result type column, NumericType can be replaced by any type that can be used just like a NumericType object.
Operation Description Result Type
+x unary plus NumericType
-x unary minus NumericType
x +  y binary addition NumericType
x -  y binary subtraction NumericType
x *  y binary multiplication NumericType
x /  y binary division NumericType
z += y computed assignment addition unspecified
z -= y computed assignment subtraction unspecified
z *= y computed assignment multiplication unspecified
z /= y computed assignment division unspecified

6.5.g: Example
The file 6.5.1: NumericType.cpp contains an example and test of using numeric types. It returns true if it succeeds and false otherwise. (It is easy to modify to test additional numeric types.)

6.5.h: Exercise
  1. List three operators that are not supported by every numeric type but that are supported by the numeric types int, float, double.
  2. Which of the following are numeric types: std::complex<double>, std::valarray<double>, std::vector<double> ?

Input File: omh/numeric_type.omh
6.5.1: The NumericType: Example and Test
 

# include <cppad/cppad.hpp>

namespace { // Empty namespace

	// -------------------------------------------------------------------
	class MyType {
	private:
		double d;
	public:
		// constructor from void 
		MyType(void) : d(0.)
		{ }
		// constructor from an int 
		MyType(int d_) : d(d_)
		{ }
		// copy constructor
		MyType(const MyType &x) 
		{	d = x.d; }
		// assignment operator
		void operator = (const MyType &x)
		{	d = x.d; }
		// member function that converts to double
		double Double(void) const
		{	return d; }
		// unary plus
		MyType operator + (void) const
		{	MyType x;
			x.d =  d;
			return x; 
		}
		// unary minus
		MyType operator - (void) const
		{	MyType x;
			x.d = - d;
			return x; 
		}
		// binary addition
		MyType operator + (const MyType &x) const
		{	MyType y;
			y.d = d + x.d ;
			return y; 
		}
		// binary subtraction
		MyType operator - (const MyType &x) const
		{	MyType y;
			y.d = d - x.d ;
			return y; 
		}
		// binary multiplication
		MyType operator * (const MyType &x) const
		{	MyType y;
			y.d = d * x.d ;
			return y; 
		}
		// binary division
		MyType operator / (const MyType &x) const
		{	MyType y;
			y.d = d / x.d ;
			return y; 
		}
		// computed assignment addition
		void operator += (const MyType &x)
		{	d += x.d; }
		// computed assignment subtraction
		void operator -= (const MyType &x)
		{	d -= x.d; }
		// computed assignment multiplication
		void operator *= (const MyType &x)
		{	d *= x.d; }
		// computed assignment division
		void operator /= (const MyType &x)
		{	d /= x.d; }
	};
}
bool NumericType(void)
{	bool ok  = true;
	using CppAD::AD;
	using CppAD::CheckNumericType;

	CheckNumericType<MyType>            ();

	CheckNumericType<int>               ();
	CheckNumericType<double>            ();
	CheckNumericType< AD<double> >      ();
	CheckNumericType< AD< AD<double> > >();

	return ok;
}


Input File: example/numeric_type.cpp
6.6: Check NumericType Class Concept

6.6.a: Syntax
# include <cppad/check_numeric_type.hpp>

CheckNumericType<NumericType>()

6.6.b: Purpose
The syntax
     CheckNumericType<NumericType>()
performs compile and run time checks that the type specified by NumericType satisfies all the requirements for a 6.5: NumericType class. If a requirement is not satisfied, an error message makes it clear which condition is not satisfied.

6.6.c: Include
The file cppad/check_numeric_type.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD include files.

6.6.d: Example
The file 6.6.1: CheckNumericType.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise. The comments in this example suggest a way to change the example so an error message occurs.
Input File: cppad/check_numeric_type.hpp
6.6.1: The CheckNumericType Function: Example and Test
 

# include <cppad/check_numeric_type.hpp>
# include <cppad/near_equal.hpp>


// Choosing a value between 1 and 10 selects a numeric class property to be
// omitted, resulting in an error message being generated
# define CppADMyTypeOmit 0

namespace { // Empty namespace

	// -------------------------------------------------------------------
	class MyType {
	private:
		double d;
	public:
		// constructor from void 
		MyType(void) : d(0.)
		{ }
		// constructor from an int 
		MyType(int d_) : d(d_)
		{ }
		// copy constructor
		MyType(const MyType &x)
		{	d = x.d; }
		// assignment operator
		void operator = (const MyType &x)
		{	d = x.d; }
		// member function that converts to double
		double Double(void) const
		{	return d; }
# if CppADMyTypeOmit != 1
		// unary plus
		MyType operator + (void) const
		{	MyType x;
			x.d =  d;
			return x; 
		}
# endif
# if CppADMyTypeOmit != 2
		// unary minus
		MyType operator - (void) const
		{	MyType x;
			x.d = - d;
			return x; 
		}
# endif
# if CppADMyTypeOmit != 3
		// binary addition
		MyType operator + (const MyType &x) const
		{	MyType y;
			y.d = d + x.d ;
			return y; 
		}
# endif
# if CppADMyTypeOmit != 4
		// binary subtraction
		MyType operator - (const MyType &x) const
		{	MyType y;
			y.d = d - x.d ;
			return y; 
		}
# endif
# if CppADMyTypeOmit != 5
		// binary multiplication
		MyType operator * (const MyType &x) const
		{	MyType y;
			y.d = d * x.d ;
			return y; 
		}
# endif
# if CppADMyTypeOmit != 6
		// binary division
		MyType operator / (const MyType &x) const
		{	MyType y;
			y.d = d / x.d ;
			return y; 
		}
# endif
# if CppADMyTypeOmit != 7
		// computed assignment addition
		void operator += (const MyType &x)
		{	d += x.d; }
# endif
# if CppADMyTypeOmit != 8
		// computed assignment subtraction
		void operator -= (const MyType &x)
		{	d -= x.d; }
# endif
# if CppADMyTypeOmit != 9
		// computed assignment multiplication
		void operator *= (const MyType &x)
		{	d *= x.d; }
# endif
# if CppADMyTypeOmit != 10
		// computed assignment division
		void operator /= (const MyType &x)
		{	d /= x.d; }
# endif
	};
	// -------------------------------------------------------------------
	/*
	Solve: A[0] * x[0] + A[1] * x[1] = b[0] 
	       A[2] * x[0] + A[3] * x[1] = b[1] 
	*/ 
	template <class NumericType>
	void Solve(NumericType *A, NumericType *x, NumericType *b)
	{
		// make sure NumericType satisfies its conditions
		CppAD::CheckNumericType<NumericType>();

		// copy b to x
		x[0] = b[0];
		x[1] = b[1];

		// copy A to work space
		NumericType W[4];
		W[0] = A[0];
		W[1] = A[1];
		W[2] = A[2];
		W[3] = A[3];

		// divide first row by W(1,1)
		W[1] /= W[0];
		x[0] /= W[0];
		W[0] = NumericType(1);

		// subtract W(2,1) times first row from second row
		W[3] -= W[2] * W[1];
		x[1] -= W[2] * x[0];
		W[2] = NumericType(0);

		// divide second row by W(2, 2)
		x[1] /= W[3];
		W[3]  = NumericType(1);

		// use first row to solve for x[0]
		x[0] -= W[1] * x[1];
	}
} // End Empty namespace

bool CheckNumericType(void)
{	bool ok  = true;

	MyType A[4];
	A[0] = MyType(1); A[1] = MyType(2);
	A[2] = MyType(3); A[3] = MyType(4);

	MyType b[2]; 
	b[0] = MyType(1);
	b[1] = MyType(2);

	MyType x[2];
	Solve(A, x, b);

	MyType sum;
	sum = A[0] * x[0] + A[1] * x[1];
	ok &= CppAD::NearEqual(sum.Double(), b[0].Double(), 1e-10, 1e-10);

	sum = A[2] * x[0] + A[3] * x[1];
	ok &= CppAD::NearEqual(sum.Double(), b[1].Double(), 1e-10, 1e-10);

	return ok;
}


Input File: example/check_numeric_type.cpp
6.7: Definition of a Simple Vector

6.7.a: Template Class Requirements
A simple vector template class SimpleVector is any template class that satisfies the requirements below. The following is a list of some simple vector template classes:
Name Documentation
std::vector Section 16.3 of 9.5.b: The C++ Programming Language
std::valarray Section 22.4 of 9.5.b: The C++ Programming Language
CppAD::vector 6.23: The CppAD::vector Template Class

6.7.b: Elements of Specified Type
A simple vector class with elements of type Scalar, is any class that satisfies the requirements for a class of the form
     
SimpleVector<Scalar>
The routine 6.8: CheckSimpleVector can be used to check that a class is a simple vector class with a specified element type.

6.7.c: Default Constructor
The syntax
     
SimpleVector<Scalar> x;
creates an empty vector x (x.size() is zero) that can later contain elements of the specified type (see 6.7.i: resize below).

6.7.d: Sizing Constructor
If n has type size_t,
     
SimpleVector<Scalar> x(n)
creates a vector x with n elements each of the specified type.

6.7.e: Copy Constructor
If x is a SimpleVector<Scalar> object,
     
SimpleVector<Scalar> y(x)
creates a vector with the same type and number of elements as x. The Scalar assignment operator ( = ) is used to set each element of y equal to the corresponding element of x. This is a `deep copy' in that the values of the elements of x and y can be set independently after the copy. The argument x is passed by reference and may be const.

6.7.f: Element Constructor and Destructor
The constructor for every element in a vector is called when the vector element is created and the corresponding destructor is called when it is removed from the vector (this includes when the vector is destroyed).

6.7.g: Assignment
If x and y are SimpleVector<Scalar> objects,
     
y = x
uses the Scalar assignment operator ( = ) to set each element of y equal to the corresponding element of x. This is a `deep assignment' in that the values of the elements of x and y can be set independently after the assignment. The vectors x and y must have the same number of elements. The argument x is passed by reference and may be const.

The type returned by this assignment is unspecified; for example, it might be void in which case the syntax
     
z = y = x
would not be valid.

6.7.h: Size
If x is a SimpleVector<Scalar> object and n has type size_t,
     
n = x.size()
sets n to the number of elements in the vector x. The object x may be const.

6.7.i: Resize
If x is a SimpleVector<Scalar> object and n has type size_t,
     
x.resize(n)
changes the number of elements contained in the vector x to be n. The values of the elements of x are not specified after this operation; i.e., any values previously stored in x are lost. (The object x can not be const.)

6.7.j: Value Type
If Vector is any simple vector class, the syntax
     
Vector::value_type
is the type of the elements corresponding to the vector class; i.e.,
     
SimpleVector<Scalar>::value_type
is equal to Scalar.
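As an illustration of how value_type is used, the following hypothetical helper (the name sum_elements is not part of CppAD) sums the elements of any simple vector class without knowing the element type in advance:

```cpp
# include <cstddef>

// Hypothetical sketch (not part of CppAD): sum the elements of a
// simple vector class, obtaining the element type from value_type.
// The cast to Scalar relies on the element access conversion
// described in the Element Access requirements.
template <class Vector>
typename Vector::value_type sum_elements(const Vector &v)
{	typedef typename Vector::value_type Scalar;
	Scalar sum = Scalar(0);
	for(std::size_t i = 0; i < v.size(); i++)
		sum += Scalar( v[i] );
	return sum;
}
```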

6.7.k: Element Access
If x is a SimpleVector<Scalar> object and i has type size_t,
     
x[i]
returns an object of an unspecified type, referred to here as elementType.

6.7.k.a: Using Value
If elementType is not the same as Scalar, the conversion operator
     static_cast<
Scalar>(x[i]) is used implicitly when x[i] is used in an expression with values of type Scalar. For this type of usage, the object x may be const.

6.7.k.b: Assignment
If y is an object of type Scalar,
     
x[i] = y
assigns the i-th element of x to have value y. For this type of usage, the object x can not be const. The type returned by this assignment is unspecified; for example, it might be void in which case the syntax
     
z = x[i] = y
would not be valid.

6.7.l: Example
The file 6.7.1: SimpleVector.cpp contains an example and test of a simple vector template class. It returns true if it succeeds and false otherwise. (It is easy to modify to test additional simple vector template classes.)

6.7.m: Exercise
  1. If Vector is a simple vector template class, the following code may not be valid:
         
    Vector<double> x(2);
    x[2] = 1.;
    Create and run a program that executes the code segment above where Vector is each of the following cases: std::vector, CppAD::vector. Do this both where the compiler option -DNDEBUG is and is not present on the compilation command line.
  2. If Vector is a simple vector template class, the following code may not be valid:
         
    Vector<int> x(2);
    Vector<int> y(1);
    x[0] = 0;
    x[1] = 1;
    y    = x;
    Create and run a program that executes the code segment above where Vector is each of the following cases: std::valarray, CppAD::vector. Do this both where the compiler option -DNDEBUG is and is not present on the compilation command line.

Input File: omh/simple_vector.omh
6.7.1: Simple Vector Template Class: Example and Test
 
# include <iostream>                   // std::cout and std::endl

# include <vector>                     // std::vector
# include <valarray>                   // std::valarray
# include <cppad/vector.hpp>       // CppAD::vector
# include <cppad/check_simple_vector.hpp>  // CppAD::CheckSimpleVector
namespace {
	template <typename Vector>
	bool Ok(void)
	{	// type corresponding to elements of Vector
		typedef typename Vector::value_type Scalar;

		bool ok = true;             // initialize testing flag

		Vector x;                   // use the default constructor
		ok &= (x.size() == 0);      // test size for an empty vector
		Vector y(2);                // use the sizing constructor
		ok &= (y.size() == 2);      // size for a vector with elements

		// non-const access to the elements of y
		size_t i;                   
		for(i = 0; i < 2; i++)
			y[i] = Scalar(i); 

		const Vector z(y);          // copy constructor
		x.resize(2);                // resize 
		x = z;                      // vector assignment

		// use the const access to the elements of x
		// and test the values of elements of x, y, z
		for(i = 0; i < 2; i++)
		{	ok &= (x[i] == Scalar(i));
			ok &= (y[i] == Scalar(i));
			ok &= (z[i] == Scalar(i));
		}
		return ok;
	}
}
bool SimpleVector (void)
{	bool ok = true;

	// use routine above to check these cases
	ok &= Ok< std::vector<double> >();
	ok &= Ok< std::valarray<float> >();
	ok &= Ok< CppAD::vector<int> >();
# ifndef _MSC_VER
	// Avoid the following Microsoft compiler warning:  'size_t' :
	// forcing value to bool 'true' or 'false' (performance warning)
	ok &= Ok< std::vector<bool> >();
	ok &= Ok< CppAD::vector<bool> >();
# endif
	// use CheckSimpleVector for more extensive testing
	CppAD::CheckSimpleVector<double, std::vector<double>  >();
	CppAD::CheckSimpleVector<float,  std::valarray<float> >();
	CppAD::CheckSimpleVector<int,    CppAD::vector<int>   >();
	CppAD::CheckSimpleVector<bool,   std::vector<bool>    >();
	CppAD::CheckSimpleVector<bool,   CppAD::vector<bool>  >();

	return ok;
}

Input File: example/simple_vector.cpp
6.8: Check Simple Vector Concept

6.8.a: Syntax
# include <cppad/check_simple_vector.hpp>

CheckSimpleVector<Scalar, Vector>()

6.8.b: Purpose
The syntax
     CheckSimpleVector<Scalar, Vector>()
performs compile and run time checks that the type specified by Vector satisfies all the requirements for a 6.7: SimpleVector class with 6.7.b: elements of type Scalar. If a requirement is not satisfied, an error message makes it clear which condition is not satisfied.

6.8.c: Restrictions
The following extra assumption is made by CheckSimpleVector: If x is a Scalar object and i is an int,
     
x = i
assigns the object x the value of i. If y is another Scalar object,
     
x = y
assigns the object x the value of y.

6.8.d: Include
The file cppad/check_simple_vector.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD include files.

6.8.e: Example
The file 6.8.1: CheckSimpleVector.cpp contains an example and test of this function where the vector element type is the same as Scalar. It returns true if it succeeds and false otherwise. The comments in this example suggest a way to change the example so an error message occurs.
Input File: cppad/check_simple_vector.hpp
6.8.1: The CheckSimpleVector Function: Example and Test
 

# include <cppad/vector.hpp>
# include <cppad/check_simple_vector.hpp>
# include <iostream>


// Choosing a value between 1 and 9 selects a simple vector property to be
// omitted, resulting in an error message being generated
# define CppADMyVectorOmit 0

// -------------------------------------------------------------------------

// example class used for non-constant elements (different from Scalar)
template <class Scalar>
class MyElement {
private:
	Scalar *element;
public:
	// element constructor
	MyElement(Scalar *e)
	{	element = e; }
	// an example element assignment that returns void
	void operator = (const Scalar &s)
	{	*element = s; }
	// conversion to Scalar
	operator Scalar() const
	{	return *element; }
}; 
	 

// example simple vector class 
template <class Scalar>
class MyVector {
private:
	size_t length;
	Scalar * data;
public:

# if CppADMyVectorOmit != 1
	// type of the elements in the vector
	typedef Scalar value_type;
# endif
# if CppADMyVectorOmit != 2
	// default constructor
	inline MyVector(void) : length(0) , data(0)
	{ }
# endif
# if CppADMyVectorOmit != 3
	// constructor with a specified size
	inline MyVector(size_t n) : length(n)
	{	if( length == 0 )
			data = 0;
		else	data = new Scalar[length]; 
	}
# endif
# if CppADMyVectorOmit != 4
	// copy constructor
	inline MyVector(const MyVector &x) : length(x.length)
	{	size_t i;
		if( length == 0 )
			data = 0;
		else	data = new Scalar[length]; 

		for(i = 0; i < length; i++)
			data[i] = x.data[i];
	}
# endif
# if CppADMyVectorOmit != 4 
# if CppADMyVectorOmit != 7
	// destructor (it is not safe to delete the pointer in cases 4 and 7)
	~MyVector(void)
	{	delete [] data; }
# endif
# endif
# if CppADMyVectorOmit != 5
	// size function
	inline size_t size(void) const
	{	return length; }
# endif
# if CppADMyVectorOmit != 6
	// resize function
	inline void resize(size_t n)
	{	if( length > 0 )
			delete [] data;
		length = n;
		if( length > 0 )
			data = new Scalar[length];
		else	data = 0;
	}
# endif
# if CppADMyVectorOmit != 7
	// assignment operator
	inline MyVector & operator=(const MyVector &x)
	{	size_t i;
		for(i = 0; i < length; i++)
			data[i] = x.data[i];
		return *this;
	}
# endif
# if CppADMyVectorOmit != 8
	// non-constant element access
	MyElement<Scalar> operator[](size_t i)
	{	return data + i; }
# endif
# if CppADMyVectorOmit != 9
	// constant element access
	const Scalar & operator[](size_t i) const
	{	return data[i]; }
# endif
};
// -------------------------------------------------------------------------

/*
Compute r = a * v, where a is a scalar with the same type as the elements of
the Simple Vector v. This routine uses the CheckSimpleVector function to ensure that 
the types agree.
*/ 
namespace { // Empty namespace
	template <class Scalar, class Vector>
	Vector Sscal(const Scalar &a, const Vector &v)
	{
		// invoke CheckSimpleVector function 
		CppAD::CheckSimpleVector<Scalar, Vector>();
	
		size_t n = v.size();
		Vector r(n);
	
		size_t i;
		for(i = 0; i < n; i++)
			r[i] = a * v[i];
	
		return r;
	}
}

bool CheckSimpleVector(void)
{	bool ok  = true;
	using CppAD::vector;

	// --------------------------------------------------------
	// If you change double to float in the next statement,
	// CheckSimpleVector will generate an error message at compile time.
	double a = 3.;
	// --------------------------------------------------------

	size_t n = 2;
	MyVector<double> v(n);
	v[0]     = 1.;
	v[1]     = 2.;
	MyVector<double> r = Sscal(a, v);
	ok      &= (r[0] == 3.);
	ok      &= (r[1] == 6.);

	return ok;
}


Input File: example/check_simple_vector.cpp
6.9: Obtain Nan and Determine if a Value is Nan

6.9.a: Syntax
# include <cppad/nan.hpp>
s = nan(z)
b = isnan(s)
b = hasnan(v)

6.9.b: Purpose
These routines obtain, and check for, the value not a number (nan). The IEEE standard specifies that a floating point value a is nan if and only if the following returns true
     
a != a
Some systems do not get this correct, so we also use the fact that zero divided by zero should result in a nan. To be specific, if a value is not equal to itself or if it is equal to zero divided by zero, it is considered to be a nan.
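The two checks described above can be sketched as follows. This is a hypothetical helper (the name is_nan_sketch is not the CppAD implementation); it only illustrates the definition in the previous paragraph:

```cpp
// Hypothetical sketch (not the CppAD source): a value is treated as
// nan if it is not equal to itself, or if it is equal to the value
// obtained by dividing zero by zero.
template <class Scalar>
bool is_nan_sketch(const Scalar &s)
{	Scalar zero(0);
	return (s != s) || (s == zero / zero);
}
```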

6.9.c: Include
The file cppad/nan.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

6.9.c.a: Macros
Some C++ compilers use preprocessor symbols called nan and isnan. These preprocessor symbols will no longer be defined after this file is included.

6.9.d: nan
This routine returns a nan with the same type as z.

6.9.d.a: z
The argument z has prototype
     const 
Scalar &z 
and its value is zero (see 6.9.g: Scalar for the definition of Scalar).

6.9.d.b: s
The return value s has prototype
     
Scalar s
It is the value nan for this floating point type.

6.9.e: isnan
This routine determines if a scalar value is nan.

6.9.e.a: s
The argument s has prototype
     const 
Scalar s

6.9.e.b: b
The return value b has prototype
     bool 
b
It is true if the value s is nan.

6.9.f: hasnan
This routine determines if a 6.7: SimpleVector has an element that is nan.

6.9.f.a: v
The argument v has prototype
     const 
Vector &v
(see 6.9.h: Vector for the definition of Vector).

6.9.f.b: b
The return value b has prototype
     bool 
b
It is true if the vector v has a nan.

6.9.g: Scalar
The type Scalar must support the following operations;
Operation Description
a / b division operator (returns a Scalar object)
a == b equality operator (returns a bool object)
a != b inequality operator (returns a bool object)
Note that the division operator will be used with a and b equal to zero. For some types (e.g. int) this may generate an exception. No attempt is made to catch any such exception.

6.9.h: Vector
The type Vector must be a 6.7: SimpleVector class with elements of type Scalar.

6.9.i: Example
The file 6.9.1: nan.cpp contains an example and test of this routine. It returns true if it succeeds and false otherwise.
Input File: cppad/nan.hpp
6.9.1: nan: Example and Test
 
# include <cppad/nan.hpp>
# include <vector>

bool nan(void)
{	bool ok = true;

	// get a nan
	double double_zero = 0.;
	double double_nan = CppAD::nan(double_zero);

	// create a simple vector with no nans
	std::vector<double> v(2);
	v[0] = double_zero;
	v[1] = double_zero;

	// check that zero is not nan
	ok &= ! CppAD::isnan(double_zero);
	ok &= ! CppAD::hasnan(v);

	// check that nan is a nan
	v[1] = double_nan;
	ok &= CppAD::isnan(double_nan);
	ok &= CppAD::hasnan(v);

	return ok;
}


Input File: example/nan.cpp
6.10: The Integer Power Function

6.10.a: Syntax
# include <cppad/pow_int.h>

z = pow(x, y)

6.10.b: Purpose
Determines the value of the power function  \[
     {\rm pow} (x, y) = x^y
\] 
for integer exponents y using multiplication and possibly division to compute the value. The other CppAD 4.4.3.4: pow function may use logarithms and exponentiation to compute derivatives of the same value (which will not work if x is less than or equal to zero).
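One way such a value could be computed is sketched below. This is a hypothetical helper (the name pow_sketch is not the CppAD source); it uses only construction from an int, multiplication, and one division, i.e., the operations required of Type in this section:

```cpp
// Hypothetical sketch (not the CppAD source) of x raised to an
// integer power y, using repeated multiplication and, for a
// negative exponent, one final division.
template <class Type>
Type pow_sketch(const Type &x, int y)
{	Type z(1);
	int n = (y >= 0) ? y : -y;
	while( n-- )
		z = z * x;
	if( y < 0 )
		z = Type(1) / z;
	return z;
}
```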

6.10.c: Include
The file cppad/pow_int.h is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines. Including this file defines this version of pow within the CppAD namespace.

6.10.d: x
The argument x has prototype
     const 
Type &x

6.10.e: y
The argument y has prototype
     const int &
y

6.10.f: z
The result z has prototype
     
Type z

6.10.g: Type
The type Type must support the following operations where a and b are Type objects and i is an int:
Operation    Description Result Type
Type a(i) construction of a Type object from an int Type
a * b binary multiplication of Type objects Type
a / b binary division of Type objects Type

6.10.h: Operation Sequence
The Type operation sequence used to calculate z is 9.4.g.d: independent of x.
Input File: cppad/pow_int.hpp
6.11: Evaluate a Polynomial or its Derivative

6.11.a: Syntax
# include <cppad/poly.hpp>

p = Poly(k, a, z)

6.11.b: Description
Computes the k-th derivative of the polynomial  \[
     P(z) = a_0 + a_1 z^1 + \cdots + a_d z^d
\] 
If k is equal to zero, the return value is  P(z) .

6.11.c: Include
The file cppad/poly.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines. Including this file defines Poly within the CppAD namespace.

6.11.d: k
The argument k has prototype
     size_t 
k
It specifies the order of the derivative to calculate.

6.11.e: a
The argument a has prototype
     const 
Vector &a
(see 6.11.i: Vector below). It specifies the vector corresponding to the polynomial  P(z) .

6.11.f: z
The argument z has prototype
     const 
Type &z
(see 6.11.h: Type below). It specifies the point at which to evaluate the polynomial.

6.11.g: p
The result p has prototype
     
Type p
(see 6.11.h: Type below) and it is equal to the k-th derivative of  P(z) ; i.e.,  \[
p = \frac{k !}{0 !} a_k 
  + \frac{(k+1) !}{1 !} a_{k+1} z^1 
  + \ldots
  + \frac{d !}{(d - k) !} a_d z^{d - k}
\]
If  k > d , p = Type(0).

6.11.h: Type
The type Type is determined by the argument z. It is assumed that multiplication and addition of Type objects are commutative.

6.11.h.a: Operations
The following operations must be supported where x and y are objects of type Type and i is an int:
x  = i assignment
x  = y assignment
x *= y multiplication computed assignment
x += y addition computed assignment

6.11.i: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Type. The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

6.11.j: Operation Sequence
The Type operation sequence used to calculate p is 9.4.g.d: independent of z and the elements of a (it does depend on the size of the vector a).

6.11.k: Example
The file 6.11.1: Poly.cpp contains an example and test of this routine. It returns true if it succeeds and false otherwise.

6.11.l: Source
The file 6.11.2: poly.hpp contains the current source code that implements these specifications.
Input File: cppad/poly.hpp
6.11.1: Polynomial Evaluation: Example and Test
 

# include <cppad/cppad.hpp>
# include <cmath>

bool Poly(void)
{	bool ok = true;

	// degree of the polynomial
	size_t deg = 3;

	// set the polynomial coefficients 
	CPPAD_TEST_VECTOR<double>   a(deg + 1);
	size_t i;
	for(i = 0; i <= deg; i++)
		a[i] = 1.;

	// evaluate this polynomial
	size_t k = 0;
	double z = 2.;
	double p = CppAD::Poly(k, a, z);
	ok      &= (p == 1. + z + z*z + z*z*z);

	// evaluate derivative
	k = 1;
	p = CppAD::Poly(k, a, z);
	ok &= (p == 1 + 2.*z + 3.*z*z); 
	
	return ok;
}


Input File: example/poly.cpp
6.11.2: Source: Poly
# ifndef CPPAD_POLY_INCLUDED
# define CPPAD_POLY_INCLUDED
 
# include <cstddef>  // used to define size_t
# include <cppad/check_simple_vector.hpp>

namespace CppAD {    // BEGIN CppAD namespace

template <class Type, class Vector>
Type Poly(size_t k, const Vector &a, const Type &z)
{	size_t i;
	size_t d = a.size() - 1;

	Type tmp;

	// check Vector is Simple Vector class with Type elements
	CheckSimpleVector<Type, Vector>();

	// case where derivative order greater than degree of polynomial
	if( k > d )
	{	tmp = 0;
		return tmp;
	}
	// case where we are evaluating a derivative
	if( k > 0 )
	{	// initialize factor as (k-1) !
		size_t factor = 1;
		for(i = 2; i < k; i++)
			factor *= i;

		// set b to coefficient vector corresponding to derivative
		Vector b(d - k + 1);
		for(i = k; i <= d; i++)
		{	factor   *= i;
			tmp       = factor;
			b[i - k]  = a[i] * tmp; 
			factor   /= (i - k + 1);
		}
		// value of derivative polynomial
		return Poly(0, b, z);
	}
	// case where we are evaluating the original polynomial
	Type sum = a[d];
	i        = d;
	while(i > 0)
	{	sum *= z;
		sum += a[--i];
	}
	return sum;
}
} // END CppAD namespace
# endif

Input File: omh/poly_hpp.omh
6.12: Compute Determinants and Solve Equations by LU Factorization

6.12.a: Contents
LuSolve: 6.12.1Compute Determinant and Solve Linear Equations
LuFactor: 6.12.2LU Factorization of A Square Matrix
LuInvert: 6.12.3Invert an LU Factored Equation

Input File: omh/lu_det_and_solve.omh
6.12.1: Compute Determinant and Solve Linear Equations

6.12.1.a: Syntax
# include <cppad/lu_solve.hpp>

signdet = LuSolve(n, m, A, B, X, logdet)

6.12.1.b: Description
Use an LU factorization of the matrix A to compute its determinant and solve for X in the linear equation  \[
     A * X = B
\] 
where A is an n by n matrix, X is an n by m matrix, and B is an n by m matrix.

6.12.1.c: Include
The file cppad/lu_solve.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

6.12.1.d: Factor and Invert
This routine is an easy-to-use interface to 6.12.2: LuFactor and 6.12.3: LuInvert for computing determinants and solutions of linear equations. These separate routines should be used if one right hand side B depends on the solution corresponding to another right hand side (with the same value of A). In this case only one call to LuFactor is required but there will be multiple calls to LuInvert.

6.12.1.e: Matrix Storage
All matrices are stored in row major order. To be specific, if  Y is a vector that contains a  p by  q matrix, the size of  Y must be equal to   p * q  and for  i = 0 , \ldots , p-1 ,  j = 0 , \ldots , q-1 ,  \[
     Y_{i,j} = Y[ i * q + j ]
\] 


6.12.1.f: signdet
The return value signdet is an int value that specifies the sign factor for the determinant of A. The determinant of A is zero if and only if signdet is zero.

6.12.1.g: n
The argument n has type size_t and specifies the number of rows in the matrices A, X, and B. The number of columns in A is also equal to n.

6.12.1.h: m
The argument m has type size_t and specifies the number of columns in the matrices X and B. If m is zero, only the determinant of A is computed and the matrices X and B are not used.

6.12.1.i: A
The argument A has the prototype
     const FloatVector &A
and the size of A must equal  n * n (see description of 6.12.1.n: FloatVector below). This is the  n by n matrix that we are computing the determinant of and that defines the linear equation.

6.12.1.j: B
The argument B has the prototype
     const FloatVector &B
and the size of B must equal  n * m (see description of 6.12.1.n: FloatVector below). This is the  n by m matrix that defines the right hand side of the linear equations. If m is zero, B is not used.

6.12.1.k: X
The argument X has the prototype
     FloatVector &X
and the size of X must equal  n * m (see description of 6.12.1.n: FloatVector below). The input value of X does not matter. On output, the elements of X contain the solution of the equation we wish to solve (unless signdet is equal to zero). If m is zero, X is not used.

6.12.1.l: logdet
The argument logdet has prototype
     Float &logdet
On input, the value of logdet does not matter. On output, it has been set so that the determinant of A is given by the formula
     det = signdet * exp( logdet )
This enables LuSolve to use logs of absolute values in the case where Float corresponds to a real number.

6.12.1.m: Float
The type Float must satisfy the conditions for a 6.5: NumericType type. The routine 6.6: CheckNumericType will generate an error message if this is not the case. In addition, the following operations must be defined for any pair of Float objects x and y:
Operation Description
log(x) returns the logarithm of x as a Float object

6.12.1.n: FloatVector
The type FloatVector must be a 6.7: SimpleVector class with 6.7.b: elements of type Float . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

6.12.1.o: LeqZero
Including the file lu_solve.hpp defines the template function
     template <typename Float>
     bool LeqZero<Float>(const Float &x)
in the CppAD namespace. This function returns true if x is less than or equal to zero and false otherwise. It is used by LuSolve to avoid taking the log of zero (or a negative number if Float corresponds to real numbers). This template function definition assumes that the operator <= is defined for Float objects. If this operator is not defined for your use of Float, you will need to specialize this template so that it works for your use of LuSolve.

Complex numbers do not have the operation <= defined. In addition, in the complex case, one can take the log of a negative number. The specializations
     bool LeqZero< std::complex<float> > (const std::complex<float> &x)
     bool LeqZero< std::complex<double> >(const std::complex<double> &x)
are defined by including lu_solve.hpp. These return true if x is zero and false otherwise.

6.12.1.p: AbsGeq
Including the file lu_solve.hpp defines the template function
     template <typename Float>
     bool AbsGeq<Float>(const Float &x, const Float &y)
If the type Float does not support the <= operation and it is not std::complex<float> or std::complex<double>, see the documentation for AbsGeq in 6.12.2.l: LuFactor .

6.12.1.q: Example
The file 6.12.1.1: LuSolve.cpp contains an example and test of using this routine. It returns true if it succeeds and false otherwise.

6.12.1.r: Source
The file 6.12.1.2: lu_solve.hpp contains the current source code that implements these specifications.
Input File: cppad/lu_solve.hpp
6.12.1.1: LuSolve With Complex Arguments: Example and Test
 

# include <cppad/lu_solve.hpp>       // for CppAD::LuSolve
# include <cppad/near_equal.hpp>     // for CppAD::NearEqual
# include <cppad/vector.hpp>  // for CppAD::vector
# include <complex>               // for std::complex

typedef std::complex<double> Complex;    // define the Complex type
bool LuSolve(void)
{	bool  ok = true;
	using namespace CppAD;

	size_t   n = 3;           // number rows in A and B
	size_t   m = 2;           // number columns in B, X and S

	// A is an n by n matrix, B, X, and S are n by m matrices
	CppAD::vector<Complex> A(n * n), B(n * m), X(n * m) , S(n * m);

	Complex  logdet;          // log of determinant of A
	int      signdet;         // zero if A is singular
	Complex  det;             // determinant of A
	size_t   i, j, k;         // some temporary indices

	// set A equal to the n by n Hilbert Matrix
	for(i = 0; i < n; i++)
		for(j = 0; j < n; j++)
			A[i * n + j] = 1. / (double) (i + j + 1);

	// set S to the solution of the equation we will solve
	for(j = 0; j < n; j++)
		for(k = 0; k < m; k++)
			S[ j * m + k ] = Complex(j, j + k);
		
	// set B = A * S 
	size_t ik;
	Complex sum;
	for(k = 0; k < m; k++)
	{	for(i = 0; i < n; i++)
		{	sum = 0.;
			for(j = 0; j < n; j++)
				sum += A[i * n + j] * S[j * m + k];
			B[i * m + k] = sum;
		}
	}

	// solve the equation A * X = B and compute determinant of A
	signdet = CppAD::LuSolve(n, m, A, B, X, logdet);
	det     = Complex( signdet ) * exp( logdet );

	double cond  = 4.62963e-4;       // condition number of A when n = 3
	double determinant = 1. / 2160.; // determinant of A when n = 3
	double delta = 1e-14 / cond;     // accuracy expected in X

	// check determinant
	ok &= CppAD::NearEqual(det, determinant, delta, delta);

	// check solution
	for(ik = 0; ik < n * m; ik++)
		ok &= CppAD::NearEqual(X[ik], S[ik], delta, delta);

	return ok;
}

Input File: example/lu_solve.cpp
6.12.1.2: Source: LuSolve
# ifndef CPPAD_LU_SOLVE_INCLUDED
# define CPPAD_LU_SOLVE_INCLUDED
 
# include <complex>
# include <vector>

// link exp for float and double cases
# include <cppad/std_math_unary.hpp>

# include <cppad/local/cppad_assert.hpp>
# include <cppad/check_simple_vector.hpp>
# include <cppad/check_numeric_type.hpp>
# include <cppad/lu_factor.hpp>
# include <cppad/lu_invert.hpp>

namespace CppAD { // BEGIN CppAD namespace

// LeqZero
template <typename Float>
inline bool LeqZero(const Float &x)
{	return x <= Float(0); }
inline bool LeqZero( const std::complex<double> &x )
{	return x == std::complex<double>(0); }
inline bool LeqZero( const std::complex<float> &x )
{	return x == std::complex<float>(0); }

// LuSolve
template <typename Float, typename FloatVector>
int LuSolve(
	size_t             n      ,
	size_t             m      , 
	const FloatVector &A      , 
	const FloatVector &B      , 
	FloatVector       &X      , 
	Float        &logdet      )
{	
	// check numeric type specifications
	CheckNumericType<Float>();

	// check simple vector class specifications
	CheckSimpleVector<Float, FloatVector>();

	size_t        p;       // index of pivot element (diagonal of L)
	int     signdet;       // sign of the determinant
	Float     pivot;       // pivot element

	// the value zero
	const Float zero(0);

	// pivot row and column order in the matrix
	std::vector<size_t> ip(n);
	std::vector<size_t> jp(n);

	// -------------------------------------------------------
	CPPAD_ASSERT_KNOWN(
		A.size() == n * n,
		"Error in LuSolve: A must have size equal to n * n"
	);
	CPPAD_ASSERT_KNOWN(
		B.size() == n * m,
		"Error in LuSolve: B must have size equal to n * m"
	);
	CPPAD_ASSERT_KNOWN(
		X.size() == n * m,
		"Error in LuSolve: X must have size equal to n * m"
	);
	// -------------------------------------------------------

	// copy A so that it does not change
	FloatVector Lu(A);

	// copy B so that it does not change
	X = B;

	// Lu factor the matrix A
	signdet = LuFactor(ip, jp, Lu);

	// compute the log of the determinant
	logdet  = Float(0);
	for(p = 0; p < n; p++)
	{	// pivot using the max absolute element
		pivot   = Lu[ ip[p] * n + jp[p] ];

		// check for determinant equal to zero
		if( pivot == zero )
		{	// abort the mission
			logdet = Float(0);
			return   0;
		}

		// update the determinant
		if( LeqZero ( pivot ) )
		{	logdet += log( - pivot );
			signdet = - signdet;
		}
		else	logdet += log( pivot );

	}

	// solve the linear equations
	LuInvert(ip, jp, Lu, X);

	// return the sign factor for the determinant
	return signdet;
}
} // END CppAD namespace 
# endif

Input File: omh/lu_solve_hpp.omh
6.12.2: LU Factorization of A Square Matrix

6.12.2.a: Syntax
# include <cppad/lu_factor.hpp>

sign = LuFactor(ip, jp, LU)

6.12.2.b: Description
Computes an LU factorization of the matrix A where A is a square matrix.

6.12.2.c: Include
The file cppad/lu_factor.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

6.12.2.d: Matrix Storage
All matrices are stored in row major order. To be specific, if  Y is a vector that contains a  p by  q matrix, the size of  Y must be equal to   p * q  and for  i = 0 , \ldots , p-1 ,  j = 0 , \ldots , q-1 ,  \[
     Y_{i,j} = Y[ i * q + j ]
\] 


6.12.2.e: sign
The return value sign has prototype
     int sign
If A is invertible, sign is plus or minus one and is the sign of the permutation corresponding to the row ordering ip and column ordering jp. If A is not invertible, sign is zero.

6.12.2.f: ip
The argument ip has prototype
     SizeVector &ip
(see description of 6.12.2.i: SizeVector below). The size of ip is referred to as n in the specifications below. The input value of the elements of ip does not matter. The output value of the elements of ip determine the order of the rows in the permuted matrix.

6.12.2.g: jp
The argument jp has prototype
     SizeVector &jp
(see description of 6.12.2.i: SizeVector below). The size of jp must be equal to n. The input value of the elements of jp does not matter. The output value of the elements of jp determine the order of the columns in the permuted matrix.

6.12.2.h: LU
The argument LU has the prototype
     FloatVector &LU
and the size of LU must equal  n * n (see description of 6.12.2.j: FloatVector below).

6.12.2.h.a: A
We define A as the matrix corresponding to the input value of LU.

6.12.2.h.b: P
We define the permuted matrix P in terms of A by
     P(i, j) = A[ ip[i] * n + jp[j] ]

6.12.2.h.c: L
We define the lower triangular matrix L in terms of the output value of LU. The matrix L is zero above the diagonal and the rest of the elements are defined by
     L(i, j) = LU[ ip[i] * n + jp[j] ]
for  i = 0 , \ldots , n-1 and  j = 0 , \ldots , i .

6.12.2.h.d: U
We define the upper triangular matrix U in terms of the output value of LU. The matrix U is zero below the diagonal, one on the diagonal, and the rest of the elements are defined by
     U(i, j) = LU[ ip[i] * n + jp[j] ]
for  i = 0 , \ldots , n-2 and  j = i+1 , \ldots , n-1 .

6.12.2.h.e: Factor
If the return value sign is non-zero,
     L * U = P
If the return value of sign is zero, the contents of L and U are not defined.

6.12.2.h.f: Determinant
If the return value sign is zero, the determinant of A is zero. If sign is non-zero, using the output value of LU the determinant of the matrix A is equal to
     sign * LU[ ip[0] * n + jp[0] ] * ... * LU[ ip[n-1] * n + jp[n-1] ]

6.12.2.i: SizeVector
The type SizeVector must be a 6.7: SimpleVector class with 6.7.b: elements of type size_t . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

6.12.2.j: FloatVector
The type FloatVector must be a 6.7: simple vector class . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

6.12.2.k: Float
This notation is used to denote the type corresponding to the elements of a FloatVector. The type Float must satisfy the conditions for a 6.5: NumericType type. The routine 6.6: CheckNumericType will generate an error message if this is not the case. In addition, the following operations must be defined for any pair of Float objects x and y:
Operation Description
log(x) returns the logarithm of x as a Float object

6.12.2.l: AbsGeq
Including the file lu_factor.hpp defines the template function
     template <typename Float>
     bool AbsGeq<Float>(const Float &x, const Float &y)
in the CppAD namespace. This function returns true if the absolute value of x is greater than or equal the absolute value of y. It is used by LuFactor to choose the pivot elements. This template function definition uses the operator <= to obtain the absolute value for Float objects. If this operator is not defined for your use of Float, you will need to specialize this template so that it works for your use of LuFactor.

Complex numbers do not have the operation <= defined. The specializations
bool AbsGeq< std::complex<float> >
     (const std::complex<float> &x, const std::complex<float> &y)
bool AbsGeq< std::complex<double> >
     (const std::complex<double> &x, const std::complex<double> &y)
are defined by including lu_factor.hpp. These return true if the sum of the squares of the real and imaginary parts of x is greater than or equal to the sum of the squares of the real and imaginary parts of y.

6.12.2.m: Example
The file 6.12.2.1: LuFactor.cpp contains an example and test of using LuFactor by itself. It returns true if it succeeds and false otherwise.

The file 6.12.1.2: lu_solve.hpp provides a useful example usage of LuFactor with LuInvert.

6.12.2.n: Source
The file 6.12.2.2: lu_factor.hpp contains the current source code that implements these specifications.
Input File: cppad/lu_factor.hpp
6.12.2.1: LuFactor: Example and Test
 
# include <cstdlib>               // for rand function
# include <cppad/lu_factor.hpp>      // for CppAD::LuFactor
# include <cppad/near_equal.hpp>     // for CppAD::NearEqual
# include <cppad/vector.hpp>  // for CppAD::vector

bool LuFactor(void)
{	bool  ok = true;

# ifndef _MSC_VER
	using std::rand;
	using std::srand;
# endif

	size_t  n = 5;                        // number rows in A 
	double  rand_max = double(RAND_MAX);  // maximum rand value
	double  sum;                          // element of L * U
	double  pij;                          // element of permuted A
	size_t  i, j, k;                      // temporary indices

	// A is an n by n matrix
	CppAD::vector<double> A(n*n), LU(n*n), L(n*n), U(n*n);

	// set A equal to an n by n random matrix
	for(i = 0; i < n; i++)
		for(j = 0; j < n; j++)
			A[i * n + j] = rand() / rand_max;

	// pivot vectors
	CppAD::vector<size_t> ip(n);
	CppAD::vector<size_t> jp(n);

	// factor the matrix A
	LU       = A;
	CppAD::LuFactor(ip, jp, LU);

	// check that ip and jp are permutations of the indices 0, ... , n-1
	for(i = 0; i < n; i++)
	{	ok &= (ip[i] < n);
		ok &= (jp[i] < n);
		for(j = 0; j < n; j++)
		{	if( i != j )
			{	ok &= (ip[i] != ip[j]);
				ok &= (jp[i] != jp[j]);
			}
		}
	}
	
	// Extract L from LU
	for(i = 0; i < n; i++)
	{	// elements along and below the diagonal
		for(j = 0; j <= i; j++)
			L[i * n + j] = LU[ ip[i] * n + jp[j] ];
		// elements above the diagonal
		for(j = i+1; j < n; j++)
			L[i * n + j] = 0.;
	}
	
	// Extract U from LU
	for(i = 0; i < n; i++)
	{	// elements below the diagonal
		for(j = 0; j < i; j++)
			U[i * n + j] = 0.;
		// elements along the diagonal
		U[i * n + i] = 1.;
		// elements above the diagonal
		for(j = i+1; j < n; j++)
			U[i * n + j] = LU[ ip[i] * n + jp[j] ];
	}

	// Compute L * U 
	for(i = 0; i < n; i++)
	{	for(j = 0; j < n; j++)
		{	// compute element (i,j) entry in L * U
			sum = 0.;
			for(k = 0; k < n; k++)
				sum += L[i * n + k] * U[k * n + j];
			// element (i,j) in permuted version of A
			pij  = A[ ip[i] * n + jp[j] ];
			// compare
			ok  &= CppAD::NearEqual(pij, sum, 1e-10, 1e-10);
		}
	}

	return ok;
}


Input File: example/lu_factor.cpp
6.12.2.2: Source: LuFactor
# ifndef CPPAD_LU_FACTOR_INCLUDED
# define CPPAD_LU_FACTOR_INCLUDED
 

# include <complex>
# include <vector>

# include <cppad/local/cppad_assert.hpp>
# include <cppad/check_simple_vector.hpp>
# include <cppad/check_numeric_type.hpp>

namespace CppAD { // BEGIN CppAD namespace

// AbsGeq
template <typename Float>
inline bool AbsGeq(const Float &x, const Float &y)
{	Float xabs = x;
	if( xabs <= Float(0) )
		xabs = - xabs;
	Float yabs = y;
	if( yabs <= Float(0) )
		yabs = - yabs;
	return xabs >= yabs;
}
inline bool AbsGeq(
	const std::complex<double> &x, 
	const std::complex<double> &y)
{	double xsq = x.real() * x.real() + x.imag() * x.imag();
	double ysq = y.real() * y.real() + y.imag() * y.imag();

	return xsq >= ysq;
}
inline bool AbsGeq(
	const std::complex<float> &x, 
	const std::complex<float> &y)
{	float xsq = x.real() * x.real() + x.imag() * x.imag();
	float ysq = y.real() * y.real() + y.imag() * y.imag();

	return xsq >= ysq;
}

// Lines that are different from code in cppad/local/lu_ratio.hpp end with //
template <class SizeVector, class FloatVector>                          //
int LuFactor(SizeVector &ip, SizeVector &jp, FloatVector &LU)           //
{	
	// type of the elements of LU                                   //
	typedef typename FloatVector::value_type Float;                 //

	// check numeric type specifications
	CheckNumericType<Float>();

	// check simple vector class specifications
	CheckSimpleVector<Float, FloatVector>();
	CheckSimpleVector<size_t, SizeVector>();

	size_t  i, j;          // some temporary indices
	const Float zero( 0 ); // the value zero as a Float object
	size_t  imax;          // row index of maximum element
	size_t  jmax;          // column index of maximum element
	Float    emax;         // maximum absolute value
	size_t  p;             // count pivots
	int     sign;          // sign of the permutation
	Float   etmp;          // temporary element
	Float   pivot;         // pivot element

	// -------------------------------------------------------
	size_t n = ip.size();
	CPPAD_ASSERT_KNOWN(
		jp.size() == n,
		"Error in LuFactor: jp must have size equal to n"
	);
	CPPAD_ASSERT_KNOWN(
		LU.size() == n * n,
		"Error in LuFactor: LU must have size equal to n * m"
	);
	// -------------------------------------------------------

	// initialize row and column order in matrix not yet pivoted
	for(i = 0; i < n; i++)
	{	ip[i] = i;
		jp[i] = i;
	}
	// initialize the sign of the permutation
	sign = 1;
	// ---------------------------------------------------------

	// Reduce the matrix P to L * U using n pivots
	for(p = 0; p < n; p++)
	{	// determine row and column corresponding to element of 
		// maximum absolute value in remaining part of P
		imax = jmax = n;
		emax = zero;
		for(i = p; i < n; i++)
		{	for(j = p; j < n; j++)
			{	CPPAD_ASSERT_UNKNOWN(
					(ip[i] < n) & (jp[j] < n)
				);
				etmp = LU[ ip[i] * n + jp[j] ];

				// check if maximum absolute value so far
				if( AbsGeq (etmp, emax) )
				{	imax = i;
					jmax = j;
					emax = etmp;
				}
			}
		}
		CPPAD_ASSERT_KNOWN( 
		(imax < n) & (jmax < n) ,
		"LuFactor can't determine an element with "
		"maximum absolute value.\n"
		"Perhaps original matrix contains not a number or infinity.\n" 
		"Perhaps your specialization of AbsGeq is not correct."
		);
		if( imax != p )
		{	// switch rows so max absolute element is in row p
			i        = ip[p];
			ip[p]    = ip[imax];
			ip[imax] = i;
			sign     = -sign;
		}
		if( jmax != p )
		{	// switch columns so max absolute element is in column p
			j        = jp[p];
			jp[p]    = jp[jmax];
			jp[jmax] = j;
			sign     = -sign;
		}
		// pivot using the max absolute element
		pivot   = LU[ ip[p] * n + jp[p] ];

		// check for determinant equal to zero
		if( pivot == zero )
		{	// abort the mission
			return   0;
		}

		// Reduce U by the elementary transformations that maps 
		// LU( ip[p], jp[p] ) to one.  Only need transform elements
		// above the diagonal in U and LU( ip[p] , jp[p] ) is
		// corresponding value below diagonal in L.
		for(j = p+1; j < n; j++)
			LU[ ip[p] * n + jp[j] ] /= pivot;

		// Reduce U by the elementary transformations that maps 
		// LU( ip[i], jp[p] ) to zero. Only need transform elements 
		// above the diagonal in U and LU( ip[i], jp[p] ) is 
		// corresponding value below diagonal in L.
		for(i = p+1; i < n; i++ )
		{	etmp = LU[ ip[i] * n + jp[p] ];
			for(j = p+1; j < n; j++)
			{	LU[ ip[i] * n + jp[j] ] -= 
					etmp * LU[ ip[p] * n + jp[j] ];
			} 
		}
	}
	return sign;
}
} // END CppAD namespace 
# endif

Input File: omh/lu_factor_hpp.omh
6.12.3: Invert an LU Factored Equation

6.12.3.a: Syntax
# include <cppad/lu_invert.hpp>

LuInvert(ip, jp, LU, X)

6.12.3.b: Description
Solves the matrix equation A * X = B using an LU factorization computed by 6.12.2: LuFactor .

6.12.3.c: Include
The file cppad/lu_invert.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

6.12.3.d: Matrix Storage
All matrices are stored in row major order. To be specific, if  Y is a vector that contains a  p by  q matrix, the size of  Y must be equal to   p * q  and for  i = 0 , \ldots , p-1 ,  j = 0 , \ldots , q-1 ,  \[
     Y_{i,j} = Y[ i * q + j ]
\] 


6.12.3.e: ip
The argument ip has prototype
     const SizeVector &ip
(see description for SizeVector in 6.12.2.i: LuFactor specifications). The size of ip is referred to as n in the specifications below. The elements of ip determine the order of the rows in the permuted matrix.

6.12.3.f: jp
The argument jp has prototype
     const SizeVector &jp
(see description for SizeVector in 6.12.2.i: LuFactor specifications). The size of jp must be equal to n. The elements of jp determine the order of the columns in the permuted matrix.

6.12.3.g: LU
The argument LU has the prototype
     const FloatVector &LU
and the size of LU must equal  n * n (see description for FloatVector in 6.12.2.j: LuFactor specifications).

6.12.3.g.a: L
We define the lower triangular matrix L in terms of LU. The matrix L is zero above the diagonal and the rest of the elements are defined by
     L(i, j) = LU[ ip[i] * n + jp[j] ]
for  i = 0 , \ldots , n-1 and  j = 0 , \ldots , i .

6.12.3.g.b: U
We define the upper triangular matrix U in terms of LU. The matrix U is zero below the diagonal, one on the diagonal, and the rest of the elements are defined by
     U(i, j) = LU[ ip[i] * n + jp[j] ]
for  i = 0 , \ldots , n-2 and  j = i+1 , \ldots , n-1 .

6.12.3.g.c: P
We define the permuted matrix P in terms of the matrix L and the matrix U by P = L * U.

6.12.3.g.d: A
The matrix A, which defines the linear equations that we are solving, is given by
     P(i, j) = A[ ip[i] * n + jp[j] ]
(Hence LU contains a permuted factorization of the matrix A.)

6.12.3.h: X
The argument X has prototype
     FloatVector &X
(see description for FloatVector in 6.12.2.j: LuFactor specifications). The matrix X must have the same number of rows as the matrix A. The input value of X is the matrix B and the output value solves the matrix equation A * X = B.

6.12.3.i: Example
The file 6.12.1.2: lu_solve.hpp is a good example usage of LuFactor with LuInvert. The file 6.12.3.1: LuInvert.cpp contains an example and test of using LuInvert by itself. It returns true if it succeeds and false otherwise.

6.12.3.j: Source
The file 6.12.3.2: lu_invert.hpp contains the current source code that implements these specifications.
Input File: cppad/lu_invert.hpp
6.12.3.1: LuInvert: Example and Test
 
# include <cstdlib>               // for rand function
# include <cppad/lu_invert.hpp>      // for CppAD::LuInvert
# include <cppad/near_equal.hpp>     // for CppAD::NearEqual
# include <cppad/vector.hpp>  // for CppAD::vector

bool LuInvert(void)
{	bool  ok = true;

# ifndef _MSC_VER
	using std::rand;
	using std::srand;
# endif

	size_t  n = 7;                        // number rows in A 
	size_t  m = 3;                        // number columns in B
	double  rand_max = double(RAND_MAX);  // maximum rand value
	double  sum;                          // element of L * U
	size_t  i, j, k;                      // temporary indices

	// dimension matrices
	CppAD::vector<double> 
		A(n*n), X(n*m), B(n*m), LU(n*n), L(n*n), U(n*n);

	// seed the random number generator
	srand(123); 

	// pivot vectors
	CppAD::vector<size_t> ip(n);
	CppAD::vector<size_t> jp(n);

	// set pivot vectors
	for(i = 0; i < n; i++)
	{	ip[i] = (i + 2) % n;      // ip = 2 , 3, ... , n-1, 0, 1
		jp[i] = (n + 2 - i) % n;  // jp = 2 , 1, n-1, n-2, ... , 3
	}

	// chose L, a random lower triangular matrix
	for(i = 0; i < n; i++)
	{	for(j = 0; j <= i; j++)
			L [i * n + j]  = rand() / rand_max;
		for(j = i+1; j < n; j++)
			L [i * n + j]  = 0.;
	}
	// chose U, a random upper triangular matrix with ones on diagonal
	for(i = 0; i < n; i++)
	{	for(j = 0; j < i; j++)
			U [i * n + j]  = 0.; 
		U[ i * n + i ] = 1.;
		for(j = i+1; j < n; j++)
			U [i * n + j]  = rand() / rand_max;
	}
	// chose X, a random matrix
	for(i = 0; i < n; i++)
	{	for(k = 0; k < m; k++)
			X[i * m + k] = rand() / rand_max;
	}
	// set LU to a permuted combination of both L and U
	for(i = 0; i < n; i++)
	{	for(j = 0; j <= i; j++)
			LU [ ip[i] * n + jp[j] ]  = L[i * n + j];
		for(j = i+1; j < n; j++)
			LU [ ip[i] * n + jp[j] ]  = U[i * n + j];
	}
	// set A to a permuted version of L * U 
	for(i = 0; i < n; i++)
	{	for(j = 0; j < n; j++)
		{	// compute (i,j) entry in permuted matrix
			sum = 0.;
			for(k = 0; k < n; k++)
				sum += L[i * n + k] * U[k * n + j];
			A[ ip[i] * n + jp[j] ] = sum;
		}
	}
	// set B to A * X
	for(i = 0; i < n; i++)
	{	for(k = 0; k < m; k++)
		{	// compute (i,k) entry of B
			sum = 0.;
			for(j = 0; j < n; j++)
				sum += A[i * n + j] * X[j * m + k];
			B[i * m + k] = sum;
		}
	}
	// solve for X
	CppAD::LuInvert(ip, jp, LU, B);

	// check result
	for(i = 0; i < n; i++)
	{	for(k = 0; k < m; k++)
		{	ok &= CppAD::NearEqual(
				X[i * m + k], B[i * m + k], 1e-10, 1e-10
			);
		}
	}
	return ok;
}


Input File: example/lu_invert.cpp
6.12.3.2: Source: LuInvert
# ifndef CPPAD_LU_INVERT_INCLUDED
# define CPPAD_LU_INVERT_INCLUDED
 
# include <cppad/local/cppad_assert.hpp>
# include <cppad/check_simple_vector.hpp>
# include <cppad/check_numeric_type.hpp>

namespace CppAD { // BEGIN CppAD namespace

// LuInvert
template <typename SizeVector, typename FloatVector>
void LuInvert(
	const SizeVector  &ip, 
	const SizeVector  &jp, 
	const FloatVector &LU,
	FloatVector       &B )
{	size_t k; // column index in X
	size_t p; // index along diagonal in LU
	size_t i; // row index in LU and X

	typedef typename FloatVector::value_type Float;

	// check numeric type specifications
	CheckNumericType<Float>();

	// check simple vector class specifications
	CheckSimpleVector<Float, FloatVector>();
	CheckSimpleVector<size_t, SizeVector>();

	Float etmp;
	
	size_t n = ip.size();
	CPPAD_ASSERT_KNOWN(
		jp.size() == n,
		"Error in LuInvert: jp must have size equal to n * n"
	);
	CPPAD_ASSERT_KNOWN(
		LU.size() == n * n,
		"Error in LuInvert: Lu must have size equal to n * m"
	);
	size_t m = B.size() / n;
	CPPAD_ASSERT_KNOWN(
		B.size() == n * m,
		"Error in LuSolve: B must have size equal to a multiple of n"
	);

	// temporary storage for reordered solution
	FloatVector x(n);

	// loop over equations
	for(k = 0; k < m; k++)
	{	// invert the equation c = L * b
		for(p = 0; p < n; p++)
		{	// solve for c[p]
			etmp = B[ ip[p] * m + k ] / LU[ ip[p] * n + jp[p] ];
			B[ ip[p] * m + k ] = etmp;
			// subtract off effect on other variables
			for(i = p+1; i < n; i++)
				B[ ip[i] * m + k ] -=
					etmp * LU[ ip[i] * n + jp[p] ];
		}

		// invert the equation x = U * c
		p = n;
		while( p > 0 )
		{	--p;
			etmp       = B[ ip[p] * m + k ];
			x[ jp[p] ] = etmp;
			for(i = 0; i < p; i++ )
				B[ ip[i] * m + k ] -= 
					etmp * LU[ ip[i] * n + jp[p] ];
		}

		// copy reordered solution into B
		for(i = 0; i < n; i++)
			B[i * m + k] = x[i];
	}
	return;
}
} // END CppAD namespace 
# endif

Input File: omh/lu_invert_hpp.omh
6.13: One Dimensional Romberg Integration

6.13.a: Syntax
# include <cppad/romberg_one.hpp>

r = RombergOne(F, a, b, n, p, e)

6.13.b: Description
Returns the Romberg integration estimate  r for a one dimensional integral  \[
r = \int_a^b F(x) {\bf d} x + O \left[ (b - a) / 2^{n-1} \right]^{2(p+1)}
\] 


6.13.c: Include
The file cppad/romberg_one.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

6.13.d: r
The return value r has prototype
     Float r
It is the estimate computed by RombergOne for the integral above.

6.13.e: F
The object F can be of any type, but it must support the operation
     F(x)
The argument x to F has prototype
     const Float &x
The return value of F is a Float object (see description of 6.13.k: Float below).

6.13.f: a
The argument a has prototype
     const Float &a
It specifies the lower limit for the integration.

6.13.g: b
The argument b has prototype
     const Float &b
It specifies the upper limit for the integration.

6.13.h: n
The argument n has prototype
     size_t n
A total number of  2^{n-1} + 1 evaluations of F(x) are used to estimate the integral.

6.13.i: p
The argument p has prototype
     size_t p
It must be less than or equal to n and determines the accuracy order in the approximation for the integral that is returned by RombergOne. To be specific  \[
r = \int_a^b F(x) {\bf d} x + O \left[ (b - a) / 2^{n-1} \right]^{2(p+1)}
\] 


6.13.j: e
The argument e has prototype
     Float &e
The input value of e does not matter and its output value is an approximation for the error in the integral estimates; i.e.,  \[
     e \approx \left| r - \int_a^b F(x) {\bf d} x \right|
\] 


6.13.k: Float
The type Float must satisfy the conditions for a 6.5: NumericType type. The routine 6.6: CheckNumericType will generate an error message if this is not the case. In addition, if x and y are Float objects,
     x < y
returns the bool value true if x is less than y and false otherwise.

6.13.l: Example
The file 6.13.1: RombergOne.cpp contains an example and test of using this routine. It returns true if it succeeds and false otherwise.

6.13.m: Source Code
The source code for this routine is in the file cppad/romberg_one.hpp.
Input File: cppad/romberg_one.hpp
6.13.1: One Dimensional Romberg Integration: Example and Test
 

# include <cppad/romberg_one.hpp>
# include <cppad/vector.hpp>
# include <cppad/near_equal.hpp>

namespace {
	class Fun {
	private:
		const size_t degree;
	public:
		// constructor
		Fun(size_t degree_) : degree(degree_) 
		{ }

		// function F(x) = x^degree
		template <class Type>
		Type operator () (const Type &x)
		{	size_t i;
			Type   f = 1;
			for(i = 0; i < degree; i++)
				f *= x;
			return f;
		}
	};
}

bool RombergOne(void)
{	bool ok = true;
	size_t i;

	size_t degree = 4;
	Fun F(degree);

	// arguments to RombergOne
	double a = 0.;
	double b = 1.;
	size_t n = 4;
	size_t p;
	double r, e;

	// int_a^b F(x) dx = [ b^(degree+1) - a^(degree+1) ] / (degree+1) 
	double bpow = 1.;
	double apow = 1.;
	for(i = 0; i <= degree; i++)
	{	bpow *= b;
		apow *= a;
	}  
	double check = (bpow - apow) / (degree+1);

	// step size corresponding to r
	double step = (b - a) / exp(log(2.)*(n-1));
	// step size corresponding to error estimate
	step *= 2.;
	// step size raised to a power
	double spow = 1;

	for(p = 0; p < n; p++)
	{	spow = spow * step * step;

		r = CppAD::RombergOne(F, a, b, n, p, e);

		ok  &= e < (degree+1) * spow;
		ok  &= CppAD::NearEqual(check, r, 0., e);	
	}

	return ok;
}


Input File: example/romberg_one.cpp
6.14: Multi-dimensional Romberg Integration

6.14.a: Syntax
# include <cppad/romberg_mul.hpp>

RombergMul<Fun, SizeVector, FloatVector, m> R
r = R(F, a, b, n, p, e)

6.14.b: Description
Returns the Romberg integration estimate  r for the multi-dimensional integral  \[
r = 
\int_{a[0]}^{b[0]} \cdots \int_{a[m-1]}^{b[m-1]}
\; F(x) \;
{\bf d} x_0 \cdots {\bf d} x_{m-1} 
\; + \; 
\sum_{i=0}^{m-1} 
O \left[ ( b[i] - a[i] ) / 2^{n[i]-1} \right]^{2(p[i]+1)}
\] 


6.14.c: Include
The file cppad/romberg_mul.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

6.14.d: m
The template parameter m must be convertible to a size_t object with a value that can be determined at compile time; for example 2. It determines the dimension of the domain space for the integration.

6.14.e: r
The return value r has prototype
     
Float r
It is the estimate computed by RombergMul for the integral above (see description of 6.14.l: Float below).

6.14.f: F
The object F has the prototype
     
Fun &F
It must support the operation
     
F(x)
The argument x to F has prototype
     const 
FloatVector &x
The return value of F is a Float object.

6.14.g: a
The argument a has prototype
     const 
FloatVector &a
It specifies the lower limit for the integration (see description of 6.14.m: FloatVector below).

6.14.h: b
The argument b has prototype
     const 
FloatVector &b
It specifies the upper limit for the integration.

6.14.i: n
The argument n has prototype
     const 
SizeVector &n
A total number of  2^{n[i]-1} + 1 evaluations of F(x) are used to estimate the integral with respect to  {\bf d} x_i .

6.14.j: p
The argument p has prototype
     const 
SizeVector &p
For  i = 0 , \ldots , m-1 ,  p[i] determines the accuracy order in the approximation for the integral that is returned by RombergMul. The values in p must be less than or equal to the corresponding values in n; i.e., p[i] <= n[i].

6.14.k: e
The argument e has prototype
     
Float &e
The input value of e does not matter and its output value is an approximation for the absolute error in the integral estimate.

6.14.l: Float
The type Float is defined as the type of the elements of 6.14.m: FloatVector . The type Float must satisfy the conditions for a 6.5: NumericType type. The routine 6.6: CheckNumericType will generate an error message if this is not the case. In addition, if x and y are Float objects,
     
x < y
returns the bool value true if x is less than y and false otherwise.

6.14.m: FloatVector
The type FloatVector must be a 6.7: SimpleVector class. The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

6.14.n: Example
The file 6.14.1: RombergMul.cpp contains an example and test of using this routine. It returns true if it succeeds and false otherwise.

6.14.o: Source Code
The source code for this routine is in the file cppad/romberg_mul.hpp.
Input File: cppad/romberg_mul.hpp
6.14.1: Multi-dimensional Romberg Integration: Example and Test
 

# include <cmath>                 // for exp and log
# include <cppad/romberg_mul.hpp>
# include <cppad/vector.hpp>
# include <cppad/near_equal.hpp>


namespace {

	class TestFun {
	private:
		const CppAD::vector<size_t> deg;
	public:
		// constructor
		TestFun(const CppAD::vector<size_t> deg_) 
		: deg(deg_)
		{ }

		// function F(x) = x[0]^deg[0] * x[1]^deg[1]
		double operator () (const CppAD::vector<double> &x)
		{	size_t i;
			double   f = 1;
			for(i = 0; i < deg[0]; i++)
				f *= x[0];
			for(i = 0; i < deg[1]; i++)
				f *= x[1];
			return f;
		}
	};

}

bool RombergMul(void)
{	bool ok = true;
	size_t i;
	size_t k;

	CppAD::vector<size_t> deg(2);
	deg[0] = 5;
	deg[1] = 3;
	TestFun F(deg);

	CppAD::RombergMul<
		TestFun              , 
		CppAD::vector<size_t>, 
		CppAD::vector<double>, 
		2                    > RombergMulTest;

	// arguments to RombergMul
	CppAD::vector<double> a(2);
	CppAD::vector<double> b(2);
	CppAD::vector<size_t> n(2);
	CppAD::vector<size_t> p(2);
	for(i = 0; i < 2; i++)
	{	a[i] = 0.;
		b[i] = 1.;
	}
	n[0] = 4;
	n[1] = 3;
	double r, e;

	// int_a1^b1 dx1 int_a0^b0 F(x0,x1) dx0
	//	= [ b0^(deg[0]+1) - a0^(deg[0]+1) ] / (deg[0]+1) 
	//	* [ b1^(deg[1]+1) - a1^(deg[1]+1) ] / (deg[1]+1) 
	double bpow = 1.;
	double apow = 1.;
	for(i = 0; i <= deg[0]; i++)
	{	bpow *= b[0];
		apow *= a[0];
	}  
	double check = (bpow - apow) / (deg[0]+1);
	bpow = 1.;
	apow = 1.;
	for(i = 0; i <= deg[1]; i++)
	{	bpow *= b[1];
		apow *= a[1];
	}  
	check *= (bpow - apow) / (deg[1]+1);

	double step = (b[1] - a[1]) / exp(log(2.)*(n[1]-1));
	double spow = 1;
	for(k = 0; k <= n[1]; k++)
	{	spow = spow * step * step;
		double bnd = 3 * (deg[1] + 1) * spow;

		for(i = 0; i < 2; i++)
			p[i] = k;
		r    = RombergMulTest(F, a, b, n, p, e);

		ok  &= e < bnd;
		ok  &= CppAD::NearEqual(check, r, 0., e);	

	}

	return ok;
}


Input File: example/romberg_mul.cpp
6.15: An Embedded 4th and 5th Order Runge-Kutta ODE Solver

6.15.a: Syntax
# include <cppad/runge_45.hpp>
xf = Runge45(F, M, ti, tf, xi)
xf = Runge45(F, M, ti, tf, xi, e)

6.15.b: Purpose
This is an implementation of the Cash-Karp embedded 4th and 5th order Runge-Kutta ODE solver described in Section 16.2 of 9.5.d: Numerical Recipes . We use  n for the size of the vector xi. Let  \R denote the real numbers and let  F : \R \times \R^n \rightarrow \R^n be a smooth function. The return value xf contains a 5th order approximation for the value  X(tf) where  X : [ti , tf] \rightarrow \R^n is defined by the following initial value problem:  \[
\begin{array}{rcl}
     X(ti)  & = & xi    \\
     X'(t)  & = & F[t , X(t)] 
\end{array}
\] 
If your set of ordinary differential equations is stiff, an implicit method may be better (perhaps 6.16: Rosen34 .)

6.15.c: Include
The file cppad/runge_45.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

6.15.d: xf
The return value xf has the prototype
     
Vector xf
and the size of xf is equal to n (see description of 6.15.l: Vector below).  \[
     X(tf) = xf + O( h^6 )
\] 
where  h = (tf - ti) / M is the step size. If xf contains not a number 6.9: nan , see the discussion for 6.15.e.c: f .

6.15.e: Fun
The class Fun and the object F satisfy the prototype
     
Fun &F
The object F (and the class Fun) must have a member function named Ode that supports the syntax
     
F.Ode(t, x, f)

6.15.e.a: t
The argument t to F.Ode has prototype
     const 
Scalar &t
(see description of 6.15.k: Scalar below).

6.15.e.b: x
The argument x to F.Ode has prototype
     const 
Vector &x
and has size n (see description of 6.15.l: Vector below).

6.15.e.c: f
The argument f to F.Ode has prototype
     
Vector &f
On input and output, f is a vector of size n and the input values of the elements of f do not matter. On output, f is set equal to  F(t, x) in the differential equation. If any of the elements of f have the value not a number nan the routine Runge45 returns with all the elements of xf and e equal to nan.

6.15.e.d: Warning
The argument f to F.Ode must have a call by reference in its prototype; i.e., do not forget the & in the prototype for f.

6.15.f: M
The argument M has prototype
     size_t 
M
It specifies the number of steps to use when solving the differential equation. This must be greater than or equal to one. The step size is given by  h = (tf - ti) / M ; thus the larger M, the more accurate the return value xf is as an approximation for  X(tf) .

6.15.g: ti
The argument ti has prototype
     const 
Scalar &ti
(see description of 6.15.k: Scalar below). It specifies the initial time for t in the differential equation; i.e., the time corresponding to the value xi.

6.15.h: tf
The argument tf has prototype
     const 
Scalar &tf
It specifies the final time for t in the differential equation; i.e., the time corresponding to the value xf.

6.15.i: xi
The argument xi has the prototype
     const 
Vector &xi
and the size of xi is equal to n. It specifies the value of  X(ti) .

6.15.j: e
The argument e is optional and has the prototype
     
Vector &e
If e is present, the size of e must be equal to n. The input value of the elements of e does not matter. On output it contains an element by element estimated bound for the absolute value of the error in xf  \[
     e = O( h^5 )
\] 
where  h = (tf - ti) / M is the step size. If on output, e contains not a number nan, see the discussion for 6.15.e.c: f .

6.15.k: Scalar
The type Scalar must satisfy the conditions for a 6.5: NumericType type. The routine 6.6: CheckNumericType will generate an error message if this is not the case. In addition, the following operations must be defined for Scalar objects a and b:
Operation Description
a < b less than operator (returns a bool object)

6.15.l: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Scalar . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

6.15.m: Example
The file 6.15.1: Runge45.cpp contains an example and test of using this routine. It returns true if it succeeds and false otherwise.

6.15.n: Source Code
The source code for this routine is in the file cppad/runge_45.hpp.
Input File: cppad/runge_45.hpp
6.15.1: Runge45: Example and Test
Define  X : \R \rightarrow \R^n by  \[
     X_i (t) =  t^{i+1}
\] 
for  i = 1 , \ldots , n-1 . It follows that  \[
\begin{array}{rclr}
X_i(0)       & = & 0                           & {\rm for \; all \;} i \\
X_i ' (t)  & = & 1                             & {\rm if \;} i = 0      \\
X_i '(t)   & = & (i+1) t^i = (i+1) X_{i-1} (t) & {\rm if \;} i > 0
\end{array}
\] 
The example tests Runge45 using the relations above:
 

# include <cstddef>                 // for size_t
# include <cppad/runge_45.hpp>      // for CppAD::Runge45
# include <cppad/near_equal.hpp>    // for CppAD::NearEqual
# include <cppad/vector.hpp>        // for CppAD::vector

namespace {
	class Fun {
	public:
		// constructor
		Fun(bool use_x_) : use_x(use_x_) 
		{ }

		// set f = x'(t)
		void Ode(
			const double                &t, 
			const CppAD::vector<double> &x, 
			CppAD::vector<double>       &f)
		{	size_t n  = x.size();	
			double ti = 1.;
			f[0]      = 1.;
			size_t i;
			for(i = 1; i < n; i++)
			{	ti *= t;
				if( use_x )
					f[i] = (i+1) * x[i-1];
				else	f[i] = (i+1) * ti;
			}
		}
	private:
		const bool use_x;

	};
}

bool Runge45(void)
{	bool ok = true;     // initial return value
	size_t i;           // temporary indices

	size_t  n = 5;      // number components in X(t) and order of method
	size_t  M = 2;      // number of Runge45 steps in [ti, tf]
	double ti = 0.;     // initial time
	double tf = 2.;     // final time 

	// xi = X(0)
	CppAD::vector<double> xi(n); 
	for(i = 0; i <n; i++)
		xi[i] = 0.;

	size_t use_x;
	for( use_x = 0; use_x < 2; use_x++)
	{	// function object depends on value of use_x
		Fun F(use_x > 0); 

		// compute Runge45 approximation for X(tf)
		CppAD::vector<double> xf(n), e(n); 
		xf = CppAD::Runge45(F, M, ti, tf, xi, e);

		double check = tf;
		for(i = 0; i < n; i++)
		{	// check that error is always positive
			ok    &= (e[i] >= 0.);
			// 5th order method is exact for i < 5
			if( i < 5 ) ok &=
				CppAD::NearEqual(xf[i], check, 1e-10, 1e-10);
			// 4th order method is exact for i < 4
			if( i < 4 )
				ok &= (e[i] <= 1e-10);

			// check value for next i
			check *= tf;
		}
	}
	return ok;
}


Input File: example/runge_45.cpp
6.16: A 3rd and 4th Order Rosenbrock ODE Solver

6.16.a: Syntax
# include <cppad/rosen_34.hpp>
xf = Rosen34(F, M, ti, tf, xi)
xf = Rosen34(F, M, ti, tf, xi, e)

6.16.b: Description
This is an embedded 3rd and 4th order Rosenbrock ODE solver (see Section 16.6 of 9.5.d: Numerical Recipes for a description of Rosenbrock ODE solvers). In particular, we use the formulas taken from page 100 of 9.5.e: Shampine, L.F. (except that the fraction 98/108 has been corrected to 97/108).

We use  n for the size of the vector xi. Let  \R denote the real numbers and let  F : \R \times \R^n \rightarrow \R^n be a smooth function. The return value xf contains a 4th order approximation for the value  X(tf) where  X : [ti , tf] \rightarrow \R^n is defined by the following initial value problem:  \[
\begin{array}{rcl}
     X(ti)  & = & xi    \\
     X'(t)  & = & F[t , X(t)] 
\end{array}
\] 
If your set of ordinary differential equations is not stiff, an explicit method may be better (perhaps 6.15: Runge45 .)

6.16.c: Include
The file cppad/rosen_34.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

6.16.d: xf
The return value xf has the prototype
     
Vector xf
and the size of xf is equal to n (see description of 6.16.l: Vector below).  \[
     X(tf) = xf + O( h^5 )
\] 
where  h = (tf - ti) / M is the step size. If xf contains not a number 6.9: nan , see the discussion of 6.16.e.f: f .

6.16.e: Fun
The class Fun and the object F satisfy the prototype
     
Fun &F
This must support the following set of calls
     
F.Ode(t, x, f)
     
F.Ode_ind(t, x, f_t)
     
F.Ode_dep(t, x, f_x)

6.16.e.a: t
In all three cases, the argument t has prototype
     const 
Scalar &t
(see description of 6.16.k: Scalar below).

6.16.e.b: x
In all three cases, the argument x has prototype
     const 
Vector &x
and has size n (see description of 6.16.l: Vector below).

6.16.e.c: f
The argument f to F.Ode has prototype
     
Vector &f
On input and output, f is a vector of size n and the input values of the elements of f do not matter. On output, f is set equal to  F(t, x) (see F(t, x) in 6.16.b: Description ).

6.16.e.d: f_t
The argument f_t to F.Ode_ind has prototype
     
Vector &f_t
On input and output, f_t is a vector of size n and the input values of the elements of f_t do not matter. On output, the i-th element of f_t is set equal to  \partial_t F_i (t, x) (see F(t, x) in 6.16.b: Description ).

6.16.e.e: f_x
The argument f_x to F.Ode_dep has prototype
     
Vector &f_x
On input and output, f_x is a vector of size n*n and the input values of the elements of f_x do not matter. On output, the [i*n+j] element of f_x is set equal to  \partial_{x(j)} F_i (t, x) (see F(t, x) in 6.16.b: Description ).

6.16.e.f: Nan
If any of the elements of f, f_t, or f_x have the value not a number nan, the routine Rosen34 returns with all the elements of xf and e equal to nan.

6.16.e.g: Warning
The arguments f, f_t, and f_x must have a call by reference in their prototypes; i.e., do not forget the & in the prototype for f, f_t and f_x.

6.16.e.h: Optimization
Every call of the form
     
F.Ode_ind(t, x, f_t)
is directly followed by a call of the form
     
F.Ode_dep(t, x, f_x)
where the arguments t and x have not changed between calls. In many cases it is faster to compute the values of f_t and f_x together and then pass them back one at a time.

6.16.f: M
The argument M has prototype
     size_t 
M
It specifies the number of steps to use when solving the differential equation. This must be greater than or equal to one. The step size is given by  h = (tf - ti) / M ; thus the larger M, the more accurate the return value xf is as an approximation for  X(tf) .

6.16.g: ti
The argument ti has prototype
     const 
Scalar &ti
(see description of 6.16.k: Scalar below). It specifies the initial time for t in the differential equation; i.e., the time corresponding to the value xi.

6.16.h: tf
The argument tf has prototype
     const 
Scalar &tf
It specifies the final time for t in the differential equation; i.e., the time corresponding to the value xf.

6.16.i: xi
The argument xi has the prototype
     const 
Vector &xi
and the size of xi is equal to n. It specifies the value of  X(ti) .

6.16.j: e
The argument e is optional and has the prototype
     
Vector &e
If e is present, the size of e must be equal to n. The input value of the elements of e does not matter. On output it contains an element by element estimated bound for the absolute value of the error in xf  \[
     e = O( h^4 )
\] 
where  h = (tf - ti) / M is the step size.

6.16.k: Scalar
The type Scalar must satisfy the conditions for a 6.5: NumericType type. The routine 6.6: CheckNumericType will generate an error message if this is not the case. In addition, the following operations must be defined for Scalar objects a and b:
Operation Description
a < b less than operator (returns a bool object)

6.16.l: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Scalar . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

6.16.m: Example
The file 6.16.1: Rosen34.cpp contains an example and test of using this routine. It returns true if it succeeds and false otherwise.

6.16.n: Source Code
The source code for this routine is in the file cppad/rosen_34.hpp.
Input File: cppad/rosen_34.hpp
6.16.1: Rosen34: Example and Test
Define  X : \R \rightarrow \R^n by  \[
     X_i (t) =  t^{i+1}
\] 
for  i = 1 , \ldots , n-1 . It follows that  \[
\begin{array}{rclr}
X_i(0)     & = & 0                             & {\rm for \; all \;} i \\
X_i ' (t)  & = & 1                             & {\rm if \;} i = 0      \\
X_i '(t)   & = & (i+1) t^i = (i+1) X_{i-1} (t) & {\rm if \;} i > 0
\end{array}
\] 
The example tests Rosen34 using the relations above:
 

# include <cppad/cppad.hpp>        // For automatic differentiation

namespace {
	class Fun {
	public:
		// constructor
		Fun(bool use_x_) : use_x(use_x_) 
		{ }

		// compute f(t, x) both for double and AD<double>
		template <typename Scalar>
		void Ode(
			const Scalar                    &t, 
			const CPPAD_TEST_VECTOR<Scalar> &x, 
			CPPAD_TEST_VECTOR<Scalar>       &f)
		{	size_t n  = x.size();	
			Scalar ti(1);
			f[0]   = Scalar(1);
			size_t i;
			for(i = 1; i < n; i++)
			{	ti *= t;
				// convert int(size_t) to avoid warning
				// on _MSC_VER systems
				if( use_x )
					f[i] = int(i+1) * x[i-1];
				else	f[i] = int(i+1) * ti;
			}
		}

		// compute partial of f(t, x) w.r.t. t using AD
		void Ode_ind(
			const double                    &t, 
			const CPPAD_TEST_VECTOR<double> &x, 
			CPPAD_TEST_VECTOR<double>       &f_t)
		{	using namespace CppAD;

			size_t n  = x.size();	
			CPPAD_TEST_VECTOR< AD<double> > T(1);
			CPPAD_TEST_VECTOR< AD<double> > X(n);
			CPPAD_TEST_VECTOR< AD<double> > F(n);

			// set argument values
			T[0] = t;
			size_t i;
			for(i = 0; i < n; i++)
				X[i] = x[i];

			// declare independent variables
			Independent(T);

			// compute f(t, x)
			this->Ode(T[0], X, F);

			// define AD function object
			ADFun<double> Fun(T, F);

			// compute partial of f w.r.t t
			CPPAD_TEST_VECTOR<double> dt(1);
			dt[0] = 1.;
			f_t = Fun.Forward(1, dt);
		}

		// compute partial of f(t, x) w.r.t. x using AD
		void Ode_dep(
			const double                    &t, 
			const CPPAD_TEST_VECTOR<double> &x, 
			CPPAD_TEST_VECTOR<double>       &f_x)
		{	using namespace CppAD;

			size_t n  = x.size();	
			CPPAD_TEST_VECTOR< AD<double> > T(1);
			CPPAD_TEST_VECTOR< AD<double> > X(n);
			CPPAD_TEST_VECTOR< AD<double> > F(n);

			// set argument values
			T[0] = t;
			size_t i, j;
			for(i = 0; i < n; i++)
				X[i] = x[i];

			// declare independent variables
			Independent(X);

			// compute f(t, x)
			this->Ode(T[0], X, F);

			// define AD function object
			ADFun<double> Fun(X, F);

			// compute partial of f w.r.t x
			CPPAD_TEST_VECTOR<double> dx(n);
			CPPAD_TEST_VECTOR<double> df(n);
			for(j = 0; j < n; j++)
				dx[j] = 0.;
			for(j = 0; j < n; j++)
			{	dx[j] = 1.;
				df = Fun.Forward(1, dx);
				for(i = 0; i < n; i++)
					f_x [i * n + j] = df[i];
				dx[j] = 0.;
			}
		}

	private:
		const bool use_x;

	};
}

bool Rosen34(void)
{	bool ok = true;     // initial return value
	size_t i;           // temporary indices

	size_t  n = 4;      // number components in X(t) and order of method
	size_t  M = 2;      // number of Rosen34 steps in [ti, tf]
	double ti = 0.;     // initial time
	double tf = 2.;     // final time 

	// xi = X(0)
	CPPAD_TEST_VECTOR<double> xi(n); 
	for(i = 0; i <n; i++)
		xi[i] = 0.;

	size_t use_x;
	for( use_x = 0; use_x < 2; use_x++)
	{	// function object depends on value of use_x
		Fun F(use_x > 0); 

		// compute Rosen34 approximation for X(tf)
		CPPAD_TEST_VECTOR<double> xf(n), e(n); 
		xf = CppAD::Rosen34(F, M, ti, tf, xi, e);

		double check = tf;
		for(i = 0; i < n; i++)
		{	// check that error is always positive
			ok    &= (e[i] >= 0.);
			// 4th order method is exact for i < 4
			if( i < 4 ) ok &=
				CppAD::NearEqual(xf[i], check, 1e-10, 1e-10);
			// 3rd order method is exact for i < 3
			if( i < 3 )
				ok &= (e[i] <= 1e-10);

			// check value for next i
			check *= tf;
		}
	}
	return ok;
}


Input File: example/rosen_34.cpp
6.17: An Error Controller for ODE Solvers

6.17.a: Syntax
# include <cppad/ode_err_control.hpp>

xf = OdeErrControl(method, ti, tf, xi,
     smin, smax, scur, eabs, erel, ef, maxabs, nstep)

6.17.b: Description
Let  \R denote the real numbers and let  F : \R \times \R^n \rightarrow \R^n be a smooth function. We define  X : [ti , tf] \rightarrow \R^n by the following initial value problem:  \[
\begin{array}{rcl}
     X(ti)  & = & xi    \\
     X'(t)  & = & F[t , X(t)] 
\end{array}
\] 
The routine OdeErrControl can be used to adjust the step size used by an arbitrary integration method so that the integration is as fast as possible while staying within a requested error bound.

6.17.c: Include
The file cppad/ode_err_control.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

6.17.d: Notation
The template parameter types 6.17.s: Scalar and 6.17.t: Vector are documented below.

6.17.e: xf
The return value xf has the prototype
     
Vector xf
(see description of 6.17.t: Vector below). and the size of xf is equal to n. If xf contains not a number 6.9: nan , see the discussion of 6.17.f.b: step .

6.17.f: Method
The class Method and the object method satisfy the following syntax
     
Method &method
The object method must support step and order member functions defined below:

6.17.f.a: step
The syntax
     
method.step(ta, tb, xa, xb, eb)
executes one step of the integration method.

ta
The argument ta has prototype
     const 
Scalar &ta
It specifies the initial time for this step in the ODE integration. (see description of 6.17.s: Scalar below).

tb
The argument tb has prototype
     const 
Scalar &tb
It specifies the final time for this step in the ODE integration.

xa
The argument xa has prototype
     const 
Vector &xa
and size n. It specifies the value of  X(ta) . (see description of 6.17.t: Vector below).

xb
The argument value xb has prototype
     
Vector &xb
and size n. The input value of its elements does not matter. On output, it contains the approximation for  X(tb) that the method obtains.

eb
The argument value eb has prototype
     
Vector &eb
and size n. The input value of its elements does not matter. On output, it contains an estimate for the error in the approximation xb. It is assumed (locally) that the error bound for this approximation is nearly equal to  K (tb - ta)^m where K is a fixed constant and m is the order returned by method.order.

6.17.f.b: Nan
If any element of the vector eb or xb is not a number nan, the current step is considered too large. If this happens with the current step size equal to smin, OdeErrControl returns with xf and ef as vectors of nan.

6.17.f.c: order
If m is size_t, the object method must also support the following syntax
     
m = method.order()
The return value m is the order of the error estimate; i.e., there is a constant K such that if  ti \leq ta \leq tb \leq tf ,  \[
     | eb(tb) | \leq K | tb - ta |^m
\] 
where ta, tb, and eb are as in method.step(ta, tb, xa, xb, eb)

6.17.g: ti
The argument ti has prototype
     const 
Scalar &ti
It specifies the initial time for the integration of the differential equation.

6.17.h: tf
The argument tf has prototype
     const 
Scalar &tf
It specifies the final time for the integration of the differential equation.

6.17.i: xi
The argument xi has prototype
     const 
Vector &xi
and size n. It specifies value of  X(ti) .

6.17.j: smin
The argument smin has prototype
     const 
Scalar &smin
The step size during a call to method is defined as the corresponding value of  tb - ta . If  tf - ti \leq smin , the integration will be done in one step of size tf - ti. Otherwise, the minimum value of tb - ta will be  smin except for the last two calls to method where it may be as small as  smin / 2 .

6.17.k: smax
The argument smax has prototype
     const 
Scalar &smax
It specifies the maximum step size to use during the integration; i.e., the maximum value for  tb - ta in a call to method. The value of smax must be greater than or equal smin.

6.17.l: scur
The argument scur has prototype
     
Scalar &scur
The value of scur is the suggested next step size, based on error criteria, to try in the next call to method. On input it corresponds to the first call to method, in this call to OdeErrControl (where  ta = ti ). On output it corresponds to the next call to method, in a subsequent call to OdeErrControl (where ta = tf).

6.17.m: eabs
The argument eabs has prototype
     const 
Vector &eabs
and size n. Each of the elements of eabs must be greater than or equal zero. It specifies a bound for the absolute error in the return value xf as an approximation for  X(tf) . (see the 6.17.r: error criteria discussion below).

6.17.n: erel
The argument erel has prototype
     const 
Scalar &erel
and is greater than or equal zero. It specifies a bound for the relative error in the return value xf as an approximation for  X(tf) (see the 6.17.r: error criteria discussion below).

6.17.o: ef
The argument value ef has prototype
     
Vector &ef
and size n. The input value of its elements does not matter. On output, it contains an estimated bound for the absolute error in the approximation xf; i.e.,  \[
     ef_i > | X( tf )_i - xf_i |
\] 
If on output ef contains not a number nan, see the discussion of 6.17.f.b: step .

6.17.p: maxabs
The argument maxabs is optional in the call to OdeErrControl. If it is present, it has the prototype
     
Vector &maxabs
and size n. The input value of its elements does not matter. On output, it contains an estimate for the maximum absolute value of  X(t) ; i.e.,  \[
     maxabs[i] \approx \max \left\{ 
          | X( t )_i | \; : \;  t \in [ti, tf] 
     \right\}
\] 


6.17.q: nstep
The argument nstep is optional in the call to OdeErrControl. If it is present, it has the prototype
     
size_t &nstep
Its input value does not matter and its output value is the number of calls to method.step used by OdeErrControl.

6.17.r: Error Criteria Discussion
The relative error criteria erel and absolute error criteria eabs are enforced during each step of the integration of the ordinary differential equations. In addition, they are inversely scaled by the step size so that the total error bound is less than the sum of the error bounds. To be specific, if  \tilde{X} (t) is the approximate solution at time  t , ta is the initial step time, and tb is the final step time,  \[
\left| \tilde{X} (tb)_j  - X (tb)_j \right| 
\leq 
\frac{tf - ti}{tb - ta}
\left[ eabs[j] + erel \;  | \tilde{X} (tb)_j | \right] 
\] 
If  X(tb)_j is near zero for some  tb \in [ti , tf] , and one uses an absolute error criteria  eabs[j] of zero, the error criteria above will force OdeErrControl to use step sizes equal to 6.17.j: smin for steps ending near  tb . In this case, the error relative to maxabs can be judged after OdeErrControl returns. If ef is too large relative to maxabs, OdeErrControl can be called again with a smaller value of smin.

6.17.s: Scalar
The type Scalar must satisfy the conditions for a 6.5: NumericType type. The routine 6.6: CheckNumericType will generate an error message if this is not the case. In addition, the following operations must be defined for Scalar objects a and b:
Operation Description
a <= b returns true (false) if a is less than or equal (greater than) b.
a == b returns true (false) if a is equal to b.
log(a) returns a Scalar equal to the logarithm of a
exp(a) returns a Scalar equal to the exponential of a

6.17.t: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Scalar . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

6.17.u: Example
The files 6.17.1: OdeErrControl.cpp and 6.17.2: OdeErrMaxabs.cpp contain examples and tests of using this routine. They return true if they succeed and false otherwise.

6.17.v: Theory
Let  e(s) be the error as a function of the step size  s and suppose that there is a constant  K such that  e(s) = K s^m . Let  a be our error bound. Given the value of  e(s) , a step of size  \lambda s would be ok provided that  \[
\begin{array}{rcl}
     a  & \geq & e( \lambda s ) (tf - ti) / ( \lambda s ) \\
     a  & \geq & K \lambda^m s^m (tf - ti) / ( \lambda s ) \\
     a  & \geq & \lambda^{m-1} s^{m-1} (tf - ti) e(s) / s^m \\
     a  & \geq & \lambda^{m-1} (tf - ti) e(s) / s           \\
     \lambda^{m-1} & \leq & \frac{a}{e(s)} \frac{s}{tf - ti}
\end{array}
\] 
Thus if the right hand side of the last inequality is greater than or equal to one, the step of size  s is ok.

6.17.w: Source Code
The source code for this routine is in the file cppad/ode_err_control.hpp.
Input File: cppad/ode_err_control.hpp
6.17.1: OdeErrControl: Example and Test
Define  X : \R \rightarrow \R^2 by  \[
\begin{array}{rcl}
     X_0 (0)       & = & 1  \\
     X_1 (0)       & = & 0  \\
     X_0^{(1)} (t) & = & - \alpha X_0 (t)  \\
     X_1^{(1)} (t) & = &  1 / X_0 (t)
\end{array}
\] 
It follows that  \[
\begin{array}{rcl}
X_0 (t) & = &  \exp ( - \alpha t )  \\
X_1 (t) & = & [ \exp( \alpha t ) - 1 ] / \alpha
\end{array}
\] 
This example tests OdeErrControl using the relations above.

6.17.1.a: Nan
Note that  X_0 (t) > 0 for all  t and that the ODE goes through a singularity between  X_0 (t) > 0 and  X_0 (t) < 0 . If  X_0 (t) < 0 , we return nan in order to inform OdeErrControl that it is taking too large a step.
 

# include <cstddef>                     // for size_t
# include <cmath>                       // for exp
# include <cppad/ode_err_control.hpp>   // CppAD::OdeErrControl
# include <cppad/near_equal.hpp>        // CppAD::NearEqual
# include <cppad/vector.hpp>            // CppAD::vector
# include <cppad/runge_45.hpp>          // CppAD::Runge45
# include <cppad/nan.hpp>               // for nan

namespace {
	// --------------------------------------------------------------
	class Fun {
	private:
		const double alpha_;
	public:
		// constructor
		Fun(double alpha) : alpha_(alpha)
		{ } 

		// set f = x'(t)
		void Ode(
			const double                &t, 
			const CppAD::vector<double> &x, 
			CppAD::vector<double>       &f)
		{	f[0] = - alpha_ * x[0];
			f[1] = 1. / x[0];	
			// case where ODE does not make sense
			if( x[0] < 0. )
				f[1] = CppAD::nan(0.);
		}

	};

	// --------------------------------------------------------------
	class Method {
	private:
		Fun F;
	public:
		// constructor
		Method(double alpha) : F(alpha)
		{ }
		void step(
			double ta, 
			double tb, 
			CppAD::vector<double> &xa ,
			CppAD::vector<double> &xb ,
			CppAD::vector<double> &eb )
		{	xb = CppAD::Runge45(F, 1, ta, tb, xa, eb);
		}
		size_t order(void)
		{	return 4; }
	};
}

bool OdeErrControl(void)
{	bool ok = true;     // initial return value

	double alpha = 10.;
	Method method(alpha);

	CppAD::vector<double> xi(2);
	xi[0] = 1.;
	xi[1] = 0.;

	CppAD::vector<double> eabs(2);
	eabs[0] = 1e-4;
	eabs[1] = 1e-4;

	// inputs
	double ti   = 0.;
	double tf   = 1.;
	double smin = 1e-4;
	double smax = 1.;
	double scur = 1.;
	double erel = 0.;

	// outputs
	CppAD::vector<double> ef(2);
	CppAD::vector<double> xf(2);
	CppAD::vector<double> maxabs(2);
	size_t nstep;

	
	xf = OdeErrControl(method,
		ti, tf, xi, smin, smax, scur, eabs, erel, ef, maxabs, nstep);

	double x0 = exp(-alpha*tf);
	ok &= CppAD::NearEqual(x0, xf[0], 1e-4, 1e-4);
	ok &= CppAD::NearEqual(0., ef[0], 1e-4, 1e-4);

	double x1 = (exp(alpha*tf) - 1) / alpha;
	ok &= CppAD::NearEqual(x1, xf[1], 1e-4, 1e-4);
	ok &= CppAD::NearEqual(0., ef[1], 1e-4, 1e-4);

	return ok;
}


Input File: example/ode_err_control.cpp
6.17.2: OdeErrControl: Example and Test Using Maxabs Argument
Define  X : \R \rightarrow \R^2 by  \[
\begin{array}{rcl}
X_0 (t) & = &  \exp ( - w_0 t )  \\
X_1 (t) & = & \frac{w_0}{w_1 - w_0} [ \exp ( - w_0 t ) - \exp( - w_1 t )]
\end{array}
\] 
It follows that  X_0 (0) = 1 ,  X_1 (0) = 0 and  \[
\begin{array}{rcl}
     X_0^{(1)} (t) & = & - w_0 X_0 (t)  \\
     X_1^{(1)} (t) & = & + w_0 X_0 (t) - w_1 X_1 (t) 
\end{array}
\] 
Note that  X_1 (0) is zero and, if  w_0 t is large,  X_0 (t) is near zero. This example tests OdeErrControl using the maxabs argument.
 

# include <cstddef>              // for size_t
# include <cmath>                // for exp
# include <cppad/ode_err_control.hpp>   // CppAD::OdeErrControl
# include <cppad/near_equal.hpp>    // CppAD::NearEqual
# include <cppad/vector.hpp> // CppAD::vector
# include <cppad/runge_45.hpp>      // CppAD::Runge45

namespace {
	// --------------------------------------------------------------
	class Fun {
	private:
		 CppAD::vector<double> w;
	public:
		// constructor
		Fun(const CppAD::vector<double> &w_) : w(w_)
		{ } 

		// set f = x'(t)
		void Ode(
			const double                &t, 
			const CppAD::vector<double> &x, 
			CppAD::vector<double>       &f)
		{	f[0] = - w[0] * x[0];
			f[1] = + w[0] * x[0] - w[1] * x[1];	
		}
	};

	// --------------------------------------------------------------
	class Method {
	private:
		Fun F;
	public:
		// constructor
		Method(const CppAD::vector<double> &w_) : F(w_)
		{ }
		void step(
			double ta, 
			double tb, 
			CppAD::vector<double> &xa ,
			CppAD::vector<double> &xb ,
			CppAD::vector<double> &eb )
		{	xb = CppAD::Runge45(F, 1, ta, tb, xa, eb);
		}
		size_t order(void)
		{	return 4; }
	};
}

bool OdeErrMaxabs(void)
{	bool ok = true;     // initial return value

	CppAD::vector<double> w(2);
	w[0] = 10.;
	w[1] = 1.;
	Method method(w);

	CppAD::vector<double> xi(2);
	xi[0] = 1.;
	xi[1] = 0.;

	CppAD::vector<double> eabs(2);
	eabs[0] = 0.;
	eabs[1] = 0.;

	CppAD::vector<double> ef(2);
	CppAD::vector<double> xf(2);
	CppAD::vector<double> maxabs(2);

	double ti   = 0.;
	double tf   = 1.;
	double smin = .5;
	double smax = 1.;
	double scur = .5;
	double erel = 1e-4;
	
	bool accurate = false;
	while( ! accurate )
	{	xf = OdeErrControl(method,
			ti, tf, xi, smin, smax, scur, eabs, erel, ef, maxabs);
		accurate = true;
		size_t i;
		for(i = 0; i < 2; i++)
			accurate &= ef[i] <= erel * maxabs[i];
		if( ! accurate )
			smin = smin / 2;
	} 

	double x0 = exp(-w[0]*tf);
	ok &= CppAD::NearEqual(x0, xf[0], erel, 0.);
	ok &= CppAD::NearEqual(0., ef[0], erel, erel);

	double x1 = w[0] * (exp(-w[0]*tf) - exp(-w[1]*tf))/(w[1] - w[0]);
	ok &= CppAD::NearEqual(x1, xf[1], erel, 0.);
	ok &= CppAD::NearEqual(0., ef[1], erel, erel);

	return ok;
}


Input File: example/ode_err_maxabs.cpp
6.18: An Arbitrary Order Gear Method

6.18.a: Syntax
# include <cppad/ode_gear.hpp>
OdeGear(F, m, n, T, X, e)

6.18.b: Purpose
This routine applies 6.18.o: Gear's Method to solve an explicit set of ordinary differential equations. We are given a smooth function  f : \R \times \R^n \rightarrow \R^n . This routine solves the following initial value problem  \[
\begin{array}{rcl}
     x( t_{m-1} )  & = & x^0    \\
     x^\prime (t)  & = & f[t , x(t)] 
\end{array}
\] 
for the value of  x( t_m ) . If your set of ordinary differential equations is not stiff, an explicit method may be better (perhaps 6.15: Runge45 .)

6.18.c: Include
The file cppad/ode_gear.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

6.18.d: Fun
The class Fun and the object F satisfy the prototype
     
Fun &F
This must support the following set of calls
     
F.Ode(t, x, f)
     
F.Ode_dep(t, x, f_x)

6.18.d.a: t
The argument t has prototype
     const 
Scalar &t
(see description of 6.18.j: Scalar below).

6.18.d.b: x
The argument x has prototype
     const 
Vector &x
and has size n (see description of 6.18.k: Vector below).

6.18.d.c: f
The argument f to F.Ode has prototype
     
Vector &f
On input and output, f is a vector of size n and the input values of the elements of f do not matter. On output, f is set equal to  f(t, x) (see f(t, x) in 6.18.b: Purpose ).

6.18.d.d: f_x
The argument f_x has prototype
     
Vector &f_x
On input and output, f_x is a vector of size  n * n and the input values of the elements of f_x do not matter. On output,  \[
     f\_x [i * n + j] = \partial_{x(j)} f_i ( t , x )
\] 


6.18.d.e: Warning
The arguments f and f_x must be passed by reference in their prototypes; i.e., do not forget the & in the prototypes for f and f_x.

6.18.e: m
The argument m has prototype
     size_t 
m
It specifies the order (highest power of  t ) used to represent the function  x(t) in the multi-step method. Upon return from OdeGear, the i-th component of the polynomial is defined by  \[
     p_i ( t_j ) = X[ j * n + i ]
\] 
for  j = 0 , \ldots , m (where  0 \leq i < n ). The value of  m must be greater than or equal to one.

6.18.f: n
The argument n has prototype
     size_t 
n
It specifies the range space dimension of the vector valued function  x(t) .

6.18.g: T
The argument T has prototype
     const 
Vector &T
and size greater than or equal to  m+1 . For  j = 0 , \ldots , m-1 ,  T[j] is the time corresponding to a previous point in the multi-step method. The value  T[m] is the time of the next point in the multi-step method. The array  T must be monotone increasing; i.e.,  T[j] < T[j+1] . Above and below we often use the shorthand  t_j for  T[j] .

6.18.h: X
The argument X has the prototype
     
Vector &X
and size greater than or equal to  (m+1) * n . On input to OdeGear, for  j = 0 , \ldots , m-1 , and  i = 0 , \ldots , n-1  \[
     X[ j * n + i ] = x_i ( t_j )
\] 
Upon return from OdeGear, for  i = 0 , \ldots , n-1  \[
     X[ m * n + i ] \approx x_i ( t_m ) 
\] 


6.18.i: e
The vector e is an approximate error bound for the result; i.e.,  \[
     e[i] \geq | X[ m * n + i ] - x_i ( t_m ) |
\] 
The order of this approximation is one less than the order of the solution; i.e.,  \[
     e = O ( h^m )
\] 
where  h is the maximum of  t_{j+1} - t_j .

6.18.j: Scalar
The type Scalar must satisfy the conditions for a 6.5: NumericType type. The routine 6.6: CheckNumericType will generate an error message if this is not the case. In addition, the following operations must be defined for Scalar objects a and b:
Operation Description
a < b less than operator (returns a bool object)

6.18.k: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Scalar . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

6.18.l: Example
The file 6.18.1: OdeGear.cpp contains an example and test of using this routine. It returns true if it succeeds and false otherwise.

6.18.m: Source Code
The source code for this routine is in the file cppad/ode_gear.hpp.

6.18.n: Theory
For this discussion we use the shorthand  x_j for the value  x ( t_j ) \in \R^n which is not to be confused with  x_i (t) \in \R in the notation above. The interpolating polynomial  p(t) is given by  \[
p(t) = 
\sum_{j=0}^m 
x_j
\frac{ 
     \prod_{i \neq j} ( t - t_i )
}{
     \prod_{i \neq j} ( t_j - t_i ) 
}
\] 
The derivative  p^\prime (t) is given by  \[
p^\prime (t) = 
\sum_{j=0}^m 
x_j
\frac{ 
     \sum_{i \neq j} \prod_{k \neq i,j} ( t - t_k )
}{ 
     \prod_{k \neq j} ( t_j - t_k ) 
}
\] 
Evaluating the derivative at the point  t_\ell we have  \[
\begin{array}{rcl}
p^\prime ( t_\ell ) & = & 
x_\ell
\frac{ 
     \sum_{i \neq \ell} \prod_{k \neq i,\ell} ( t_\ell - t_k )
}{ 
     \prod_{k \neq \ell} ( t_\ell - t_k ) 
}
+
\sum_{j \neq \ell}
x_j
\frac{ 
     \sum_{i \neq j} \prod_{k \neq i,j} ( t_\ell - t_k )
}{ 
     \prod_{k \neq j} ( t_j - t_k ) 
}
\\
& = &
x_\ell
\sum_{i \neq \ell} 
\frac{ 1 }{ t_\ell - t_i }
+
\sum_{j \neq \ell}
x_j
\frac{ 
     \prod_{k \neq \ell,j} ( t_\ell - t_k )
}{ 
     \prod_{k \neq j} ( t_j - t_k ) 
}
\\
& = &
x_\ell
\sum_{k \neq \ell} ( t_\ell - t_k )^{-1}
+
\sum_{j \neq \ell}
x_j
( t_j - t_\ell )^{-1}
\prod_{k \neq \ell ,j} ( t_\ell - t_k ) / ( t_j - t_k )
\end{array}
\] 
We define the vector  \alpha \in \R^{m+1} by  \[
\alpha_j = \left\{ \begin{array}{ll}
\sum_{k \neq m} ( t_m - t_k )^{-1}
     & {\rm if} \; j = m
\\
( t_j - t_m )^{-1}
\prod_{k \neq m,j} ( t_m - t_k ) / ( t_j - t_k )
     & {\rm otherwise}
\end{array} \right.
\] 
It follows that  \[
     p^\prime ( t_m ) = \alpha_0 x_0 + \cdots + \alpha_m x_m
\] 
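The coefficient formula for  \alpha is easy to check in standalone code. The following sketch (gear_alpha is an illustrative name, not a CppAD routine) computes  \alpha from a vector of times T with T.size() equal to m+1:

```cpp
# include <cassert>
# include <cmath>
# include <cstddef>
# include <vector>

// Compute alpha in R^{m+1} as defined above, so that for any polynomial
// p of degree <= m interpolating the data x_j at the times t_j,
//     p'(t_m) = alpha[0] * x_0 + ... + alpha[m] * x_m
std::vector<double> gear_alpha(const std::vector<double> &T)
{	size_t m = T.size() - 1;
	std::vector<double> alpha(m + 1);

	// alpha[m] = sum over k != m of 1 / (t_m - t_k)
	alpha[m] = 0.;
	for(size_t k = 0; k < m; k++)
		alpha[m] += 1. / (T[m] - T[k]);

	// alpha[j] = (t_j - t_m)^{-1} * prod over k != m,j of
	//            (t_m - t_k) / (t_j - t_k)
	for(size_t j = 0; j < m; j++)
	{	alpha[j] = 1. / (T[j] - T[m]);
		for(size_t k = 0; k <= m; k++)
		{	if( k != j && k != m )
				alpha[j] *= (T[m] - T[k]) / (T[j] - T[k]);
		}
	}
	return alpha;
}
```

For example, with  m = 2 , times  (0, .5, 1.2) , and data  x_j = t_j^2 , the combination  \alpha_0 x_0 + \alpha_1 x_1 + \alpha_2 x_2 reproduces the exact derivative  2 t_2 = 2.4 .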
Gear's method determines  x_m by solving the following nonlinear equation  \[
     f( t_m , x_m ) = \alpha_0 x_0 + \cdots + \alpha_m x_m
\] 
Newton's method for solving this equation determines iterates, which we denote by  x_m^k , by solving the following affine approximation of the equation above  \[
\begin{array}{rcl}
f( t_m , x_m^{k-1} ) + \partial_x f( t_m , x_m^{k-1} ) ( x_m^k - x_m^{k-1} )
& = &
\alpha_0 x_0 + \alpha_1 x_1 + \cdots + \alpha_m x_m^k
\\
\left[ \alpha_m I - \partial_x f( t_m , x_m^{k-1} ) \right]  x_m^k
& = &
\left[ 
f( t_m , x_m^{k-1} ) - \partial_x f( t_m , x_m^{k-1} ) x_m^{k-1} 
- \alpha_0 x_0 - \cdots - \alpha_{m-1} x_{m-1}
\right]
\end{array}
\] 
In order to initialize Newton's method; i.e. choose  x_m^0 we define the vector  \beta \in \R^{m+1} by  \[
\beta_j = \left\{ \begin{array}{ll}
\sum_{k \neq m-1} ( t_{m-1} - t_k )^{-1}
     & {\rm if} \; j = m-1
\\
( t_j - t_{m-1} )^{-1}
\prod_{k \neq m-1,j} ( t_{m-1} - t_k ) / ( t_j - t_k )
     & {\rm otherwise}
\end{array} \right.
\] 
It follows that  \[
     p^\prime ( t_{m-1} ) = \beta_0 x_0 + \cdots + \beta_m x_m
\] 
We solve the following approximation of the equation above to determine  x_m^0 :  \[
     f( t_{m-1} , x_{m-1} ) = 
     \beta_0 x_0 + \cdots + \beta_{m-1} x_{m-1} + \beta_m x_m^0
\] 


6.18.o: Gear's Method
C. W. Gear, ``Simultaneous Numerical Solution of Differential-Algebraic Equations,'' IEEE Transactions on Circuit Theory, vol. 18, no. 1, pp. 89-95, Jan. 1971.
Input File: cppad/ode_gear.hpp
6.18.1: OdeGear: Example and Test
Define  x : \R \rightarrow \R^n by  \[
     x_i (t) =  t^{i+1}
\] 
for  i = 0 , \ldots , n-1 . It follows that  \[
\begin{array}{rclr}
x_i(0)     & = & 0                             & {\rm for \; all \;} i \\
x_i ' (t)  & = & 1                             & {\rm if \;} i = 0      \\
x_i '(t)   & = & (i+1) t^i = (i+1) x_{i-1} (t) & {\rm if \;} i > 0
\end{array}
\] 
The example tests OdeGear using the relations above:
 

# include <cppad/ode_gear.hpp>
# include <cppad/cppad.hpp>        // For automatic differentiation

namespace {
	class Fun {
	public:
		// constructor
		Fun(bool use_x_) : use_x(use_x_) 
		{ }

		// compute f(t, x) both for double and AD<double>
		template <typename Scalar>
		void Ode(
			const Scalar                    &t, 
			const CPPAD_TEST_VECTOR<Scalar> &x, 
			CPPAD_TEST_VECTOR<Scalar>       &f)
		{	size_t n  = x.size();	
			Scalar ti(1);
			f[0]   = Scalar(1);
			size_t i;
			for(i = 1; i < n; i++)
			{	ti *= t;
				// convert int(size_t) to avoid warning
				// on _MSC_VER systems
				if( use_x )
					f[i] = int(i+1) * x[i-1];
				else	f[i] = int(i+1) * ti;
			}
		}

		void Ode_dep(
			const double                    &t, 
			const CPPAD_TEST_VECTOR<double> &x, 
			CPPAD_TEST_VECTOR<double>       &f_x)
		{	using namespace CppAD;

			size_t n  = x.size();	
			CPPAD_TEST_VECTOR< AD<double> > T(1);
			CPPAD_TEST_VECTOR< AD<double> > X(n);
			CPPAD_TEST_VECTOR< AD<double> > F(n);

			// set argument values
			T[0] = t;
			size_t i, j;
			for(i = 0; i < n; i++)
				X[i] = x[i];

			// declare independent variables
			Independent(X);

			// compute f(t, x)
			this->Ode(T[0], X, F);

			// define AD function object
			ADFun<double> Fun(X, F);

			// compute partial of f w.r.t x
			CPPAD_TEST_VECTOR<double> dx(n);
			CPPAD_TEST_VECTOR<double> df(n);
			for(j = 0; j < n; j++)
				dx[j] = 0.;
			for(j = 0; j < n; j++)
			{	dx[j] = 1.;
				df = Fun.Forward(1, dx);
				for(i = 0; i < n; i++)
					f_x [i * n + j] = df[i];
				dx[j] = 0.;
			}
		}

	private:
		const bool use_x;

	};
}

bool OdeGear(void)
{	bool ok = true; // initial return value
	size_t i, j;    // temporary indices

	size_t  m = 4;  // index of next value in X
	size_t  n = m;  // number of components in x(t)

	// vector of times
	CPPAD_TEST_VECTOR<double> T(m+1); 
	double step = .1;
	T[0]        = 0.;
	for(j = 1; j <= m; j++)
	{	T[j] = T[j-1] + step;
		step = 2. * step;
	}

	// initial values for x( T[m-j] ) 
	CPPAD_TEST_VECTOR<double> X((m+1) * n);
	for(j = 0; j < m; j++)
	{	double ti = T[j];
		for(i = 0; i < n; i++)
		{	X[ j * n + i ] = ti;
			ti *= T[j];
		}
	}

	// error bound
	CPPAD_TEST_VECTOR<double> e(n);

	size_t use_x;
	for( use_x = 0; use_x < 2; use_x++)
	{	// function object depends on value of use_x
		Fun F(use_x > 0); 

		// compute OdeGear approximation for x( T[m] )
		CppAD::OdeGear(F, m, n, T, X, e);

		double check = T[m];
		for(i = 0; i < n; i++)
		{	// method is exact up to order m and x[i] = t^{i+1}
			if( i + 1 <= m ) ok &= CppAD::NearEqual(
				X[m * n + i], check, 1e-10, 1e-10
			);
			// error bound should be zero up to order m-1
			if( i + 1 < m ) ok &= CppAD::NearEqual(
				e[i], 0., 1e-10, 1e-10
			);
			// check value for next i
			check *= T[m];
		}
	}
	return ok;
}


Input File: example/ode_gear.cpp
6.19: An Error Controller for Gear's Ode Solvers

6.19.a: Syntax
# include <cppad/ode_gear_control.hpp>
xf = OdeGearControl(F, M, ti, tf, xi,
     smin, smax, sini, eabs, erel, ef, maxabs, nstep)

6.19.b: Purpose
Let  \R denote the real numbers and let  f : \R \times \R^n \rightarrow \R^n be a smooth function. We define  X : [ti , tf] \rightarrow \R^n by the following initial value problem:  \[
\begin{array}{rcl}
     X(ti)  & = & xi    \\
     X'(t)  & = & f[t , X(t)] 
\end{array}
\] 
The routine 6.18: OdeGear is a stiff multi-step method that can be used to approximate the solution to this equation. The routine OdeGearControl sets up this multi-step method and controls the error during such an approximation.

6.19.c: Include
The file cppad/ode_gear_control.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD routines.

6.19.d: Notation
The template parameter types 6.19.t: Scalar and 6.19.u: Vector are documented below.

6.19.e: xf
The return value xf has the prototype
     
Vector xf
and the size of xf is equal to n (see description of 6.19.u: Vector below). It is the approximation for  X(tf) .

6.19.f: Fun
The class Fun and the object F satisfy the prototype
     
Fun &F
This must support the following set of calls
     
F.Ode(t, x, f)
     
F.Ode_dep(t, x, f_x)

6.19.f.a: t
The argument t has prototype
     const 
Scalar &t
(see description of 6.19.t: Scalar below).

6.19.f.b: x
The argument x has prototype
     const 
Vector &x
and has size N (see description of 6.19.u: Vector below).

6.19.f.c: f
The argument f to F.Ode has prototype
     
Vector &f
On input and output, f is a vector of size N and the input values of the elements of f do not matter. On output, f is set equal to  f(t, x) (see f(t, x) in 6.18.b: Purpose ).

6.19.f.d: f_x
The argument f_x has prototype
     
Vector &f_x
On input and output, f_x is a vector of size  N * N and the input values of the elements of f_x do not matter. On output,  \[
     f\_x [i * n + j] = \partial_{x(j)} f_i ( t , x )
\] 


6.19.f.e: Warning
The arguments f and f_x must be passed by reference in their prototypes; i.e., do not forget the & in the prototypes for f and f_x.

6.19.g: M
The argument M has prototype
     size_t 
M
It specifies the order of the multi-step method; i.e., the order of the approximating polynomial (after the initialization process). The argument M must be greater than or equal to one.

6.19.h: ti
The argument ti has prototype
     const 
Scalar &ti
It specifies the initial time for the integration of the differential equation.

6.19.i: tf
The argument tf has prototype
     const 
Scalar &tf
It specifies the final time for the integration of the differential equation.

6.19.j: xi
The argument xi has prototype
     const 
Vector &xi
and size n. It specifies value of  X(ti) .

6.19.k: smin
The argument smin has prototype
     const 
Scalar &smin
The minimum value of  T[M] - T[M-1] in a call to OdeGear will be  smin except for the last two calls where it may be as small as  smin / 2 . The value of smin must be less than or equal to smax.

6.19.l: smax
The argument smax has prototype
     const 
Scalar &smax
It specifies the maximum step size to use during the integration; i.e., the maximum value for  T[M] - T[M-1] in a call to OdeGear.

6.19.m: sini
The argument sini has prototype
     
Scalar &sini
The value of sini is the minimum step size to use during initialization of the multi-step method; i.e., for calls to OdeGear where  m < M . The value of sini must be less than or equal to smax (and can also be less than smin).

6.19.n: eabs
The argument eabs has prototype
     const 
Vector &eabs
and size n. Each of the elements of eabs must be greater than or equal to zero. It specifies a bound for the absolute error in the return value xf as an approximation for  X(tf) (see the 6.19.s: error criteria discussion below).

6.19.o: erel
The argument erel has prototype
     const 
Scalar &erel
and is greater than or equal to zero. It specifies a bound for the relative error in the return value xf as an approximation for  X(tf) (see the 6.19.s: error criteria discussion below).

6.19.p: ef
The argument value ef has prototype
     
Vector &ef
and size n. The input value of its elements does not matter. On output, it contains an estimated bound for the absolute error in the approximation xf; i.e.,  \[
     ef_i > | X( tf )_i - xf_i |
\] 


6.19.q: maxabs
The argument maxabs is optional in the call to OdeGearControl. If it is present, it has the prototype
     
Vector &maxabs
and size n. The input value of its elements does not matter. On output, it contains an estimate for the maximum absolute value of  X(t) ; i.e.,  \[
     maxabs[i] \approx \max \left\{ 
          | X( t )_i | \; : \;  t \in [ti, tf] 
     \right\}
\] 


6.19.r: nstep
The argument nstep has the prototype
     
size_t &nstep
Its input value does not matter and its output value is the number of calls to 6.18: OdeGear used by OdeGearControl.

6.19.s: Error Criteria Discussion
The relative error criteria erel and absolute error criteria eabs are enforced during each step of the integration of the ordinary differential equations. In addition, each step's error bound is scaled by the fraction of the total interval that the step covers, so that the total error is bounded by the sum of the per-step error bounds. To be specific, if  \tilde{X} (t) is the approximate solution at time  t , ta is the initial step time, and tb is the final step time,  \[
\left| \tilde{X} (tb)_j  - X (tb)_j \right| 
\leq 
\frac{tb - ta}{tf - ti}
\left[ eabs[j] + erel \;  | \tilde{X} (tb)_j | \right] 
\] 
If  X(tb)_j is near zero for some  tb \in [ti , tf] , and one uses an absolute error criteria  eabs[j] of zero, the error criteria above will force OdeGearControl to use step sizes equal to 6.19.k: smin for steps ending near  tb . In this case, the error relative to maxabs can be judged after OdeGearControl returns. If ef is too large relative to maxabs, OdeGearControl can be called again with a smaller value of smin.

6.19.t: Scalar
The type Scalar must satisfy the conditions for a 6.5: NumericType type. The routine 6.6: CheckNumericType will generate an error message if this is not the case. In addition, the following operations must be defined for Scalar objects a and b:
Operation Description
a <= b returns true (false) if a is less than or equal (greater than) b.
a == b returns true (false) if a is equal to b.
log(a) returns a Scalar equal to the logarithm of a
exp(a) returns a Scalar equal to the exponential of a

6.19.u: Vector
The type Vector must be a 6.7: SimpleVector class with 6.7.b: elements of type Scalar . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

6.19.v: Example
The file 6.19.1: OdeGearControl.cpp contains an example and test of using this routine. It returns true if it succeeds and false otherwise.

6.19.w: Theory
Let  e(s) be the error as a function of the step size  s and suppose that there is a constant  K such that  e(s) = K s^m . Let  a be our error bound. Given the value of  e(s) , a step of size  \lambda s would be ok provided that  \[
\begin{array}{rcl}
     a  & \geq & e( \lambda s ) (tf - ti) / ( \lambda s ) \\
     a  & \geq & K \lambda^m s^m (tf - ti) / ( \lambda s ) \\
     a  & \geq & \lambda^{m-1} s^{m-1} (tf - ti) e(s) / s^m \\
     a  & \geq & \lambda^{m-1} (tf - ti) e(s) / s           \\
     \lambda^{m-1} & \leq & \frac{a}{e(s)} \frac{s}{tf - ti}
\end{array}
\] 
Thus if the right hand side of the last inequality is greater than or equal to one, the step of size  s is ok.
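Solving the last inequality for  \lambda gives a concrete step-size rule. The following standalone sketch (step_multiplier is an illustrative name, not part of CppAD) computes the largest acceptable multiplier:

```cpp
# include <cassert>
# include <cmath>
# include <cstddef>

// Largest lambda satisfying lambda^{m-1} <= (a / e) * (s / (tf - ti)),
// where a is the error bound for the whole interval [ti, tf], e is the
// observed error for the current step of size s, and m > 1 is the order
// of the method. The step of size s is acceptable exactly when the
// returned multiplier is greater than or equal to one.
double step_multiplier(
	double a, double e, double s, double ti, double tf, size_t m)
{	double ratio = (a / e) * ( s / (tf - ti) );
	return std::pow( ratio, 1. / double(m - 1) );
}
```

For example, with  m = 3 on the interval  [0, 1] , a step of size one half that produces four times its share of the error budget ( e = 4 \, a \, s / (tf - ti) ) must be cut in half; the multiplier comes out to  \lambda = 1/2 .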

6.19.x: Source Code
The source code for this routine is in the file cppad/ode_gear_control.hpp.
Input File: cppad/ode_gear_control.hpp
6.19.1: OdeGearControl: Example and Test
Define  X : \R \rightarrow \R^2 by  \[
\begin{array}{rcl}
X_0 (t) & = & \exp ( - w_0 t )  \\
X_1 (t) & = & \frac{w_0}{w_1 - w_0} [ \exp ( - w_0 t ) - \exp( - w_1 t )]
\end{array}
\] 
It follows that  X_0 (0) = 1 ,  X_1 (0) = 0 and  \[
\begin{array}{rcl}
     X_0^{(1)} (t) & = & - w_0 X_0 (t)  \\
     X_1^{(1)} (t) & = & + w_0 X_0 (t) - w_1 X_1 (t) 
\end{array}
\] 
The example tests OdeGearControl using the relations above:
 

# include <cppad/cppad.hpp>
# include <cppad/ode_gear_control.hpp>   // CppAD::OdeGearControl

namespace {
	// --------------------------------------------------------------
	class Fun {
	private:
		 CPPAD_TEST_VECTOR<double> w;
	public:
		// constructor
		Fun(const CPPAD_TEST_VECTOR<double> &w_) : w(w_)
		{ } 

		// set f = x'(t)
		template <typename Scalar>
		void Ode(
			const Scalar                    &t, 
			const CPPAD_TEST_VECTOR<Scalar> &x, 
			CPPAD_TEST_VECTOR<Scalar>       &f)
		{	f[0] = - w[0] * x[0];
			f[1] = + w[0] * x[0] - w[1] * x[1];	
		}

		void Ode_dep(
			const double                    &t, 
			const CPPAD_TEST_VECTOR<double> &x, 
			CPPAD_TEST_VECTOR<double>       &f_x)
		{	using namespace CppAD;

			size_t n  = x.size();	
			CPPAD_TEST_VECTOR< AD<double> > T(1);
			CPPAD_TEST_VECTOR< AD<double> > X(n);
			CPPAD_TEST_VECTOR< AD<double> > F(n);

			// set argument values
			T[0] = t;
			size_t i, j;
			for(i = 0; i < n; i++)
				X[i] = x[i];

			// declare independent variables
			Independent(X);

			// compute f(t, x)
			this->Ode(T[0], X, F);

			// define AD function object
			ADFun<double> Fun(X, F);

			// compute partial of f w.r.t x
			CPPAD_TEST_VECTOR<double> dx(n);
			CPPAD_TEST_VECTOR<double> df(n);
			for(j = 0; j < n; j++)
				dx[j] = 0.;
			for(j = 0; j < n; j++)
			{	dx[j] = 1.;
				df = Fun.Forward(1, dx);
				for(i = 0; i < n; i++)
					f_x [i * n + j] = df[i];
				dx[j] = 0.;
			}
		}
	};
}

bool OdeGearControl(void)
{	bool ok = true;     // initial return value

	CPPAD_TEST_VECTOR<double> w(2);
	w[0] = 10.;
	w[1] = 1.;
	Fun F(w);

	CPPAD_TEST_VECTOR<double> xi(2);
	xi[0] = 1.;
	xi[1] = 0.;

	CPPAD_TEST_VECTOR<double> eabs(2);
	eabs[0] = 1e-4;
	eabs[1] = 1e-4;

	// return values
	CPPAD_TEST_VECTOR<double> ef(2);
	CPPAD_TEST_VECTOR<double> maxabs(2);
	CPPAD_TEST_VECTOR<double> xf(2);
	size_t                nstep;

	// input values
	size_t  M   = 5;
	double ti   = 0.;
	double tf   = 1.;
	double smin = 1e-8;
	double smax = 1.;
	double sini = 1e-10;
	double erel = 0.;
	
	xf = CppAD::OdeGearControl(F, M,
		ti, tf, xi, smin, smax, sini, eabs, erel, ef, maxabs, nstep);

	double x0 = exp(-w[0]*tf);
	ok &= CppAD::NearEqual(x0, xf[0], 1e-4, 1e-4);
	ok &= CppAD::NearEqual(0., ef[0], 1e-4, 1e-4);

	double x1 = w[0] * (exp(-w[0]*tf) - exp(-w[1]*tf))/(w[1] - w[0]);
	ok &= CppAD::NearEqual(x1, xf[1], 1e-4, 1e-4);
	ok &= CppAD::NearEqual(0., ef[1], 1e-4, 1e-4);

	return ok;
}


Input File: example/ode_gear_control.cpp
6.20: Computing Jacobian and Hessian of Bender's Reduced Objective

6.20.a: Syntax
# include <cppad/cppad.hpp>
BenderQuad(x, y, fun, g, gx, gxx)

6.20.b: Problem
The type 6.20.k: ADvector cannot be determined from the arguments above (currently the type ADvector must be CPPAD_TEST_VECTOR<Base>.) This will be corrected in the future by requiring Fun to define Fun::vector_type which will specify the type ADvector.

6.20.c: Purpose
We are given the optimization problem  \[
\begin{array}{rcl}
     {\rm minimize} & F(x, y) & {\rm w.r.t.} \; (x, y) \in \R^n \times \R^m
\end{array}
\] 
that is convex with respect to  y . In addition, we are given a set of equations  H(x, y) such that  \[
     H[ x , Y(x) ] = 0 \;\; \Rightarrow \;\; F_y [ x , Y(x) ] = 0
\] 
(In fact, it is often the case that  H(x, y) = F_y (x, y) .) Furthermore, it is easy to calculate a Newton step for these equations; i.e.,  \[
     dy = - [ \partial_y H(x, y)]^{-1} H(x, y)
\] 
The purpose of this routine is to compute the value, Jacobian, and Hessian of the reduced objective function  \[
     G(x) = F[ x , Y(x) ]
\] 
Note that if only the value and Jacobian are needed, they can be computed more quickly using the relations  \[
     G^{(1)} (x) = \partial_x F [x, Y(x) ]
\] 
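A scalar illustration of this identity (plain C++, not CppAD code; the function names below are made up for the example): take  F(x, y) = y^2 / 2 - x y , so that  H(x, y) = F_y (x, y) = y - x ,  Y(x) = x , and the reduced objective is  G(x) = F[x, Y(x)] = - x^2 / 2 .

```cpp
# include <cassert>
# include <cmath>

double F (double x, double y) { return y * y / 2. - x * y; }
double Fx(double x, double y) { return - y; }             // partial_x F(x, y)
double Y (double x)           { return x; }               // minimizer of F in y
double G (double x)           { return F( x, Y(x) ); }    // reduced objective
```

Because  F_y [x, Y(x)] = 0 , the chain rule term through  Y(x) vanishes and  G^{(1)} (x) = \partial_x F [x, Y(x)] ; here both sides equal  -x , which a central difference of G confirms.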


6.20.d: x
The BenderQuad argument x has prototype
     const 
BAvector &x
(see 6.20.j: BAvector below) and its size must be equal to n. It specifies the point at which we are evaluating the reduced objective function and its derivatives.

6.20.e: y
The BenderQuad argument y has prototype
     const 
BAvector &y
and its size must be equal to m. It must be equal to  Y(x) ; i.e., it must solve the problem in  y for this given value of  x  \[
\begin{array}{rcl}
     {\rm minimize} & F(x, y) & {\rm w.r.t.} \; y \in \R^m
\end{array}
\] 


6.20.f: fun
The BenderQuad object fun must support the member functions listed below. The AD<Base> arguments will be variables for a tape created by a call to 5.1: Independent from BenderQuad (hence they cannot be combined with variables corresponding to a different tape).

6.20.f.a: fun.f
The BenderQuad argument fun supports the syntax
     
f = fun.f(x, y)
The fun.f argument x has prototype
     const 
ADvector &x
(see 6.20.k: ADvector below) and its size must be equal to n. The fun.f argument y has prototype
     const 
ADvector &y
and its size must be equal to m. The fun.f result f has prototype
     
ADvector f
and its size must be equal to one. The value of f is  \[
     f = F(x, y)
\] 
.

6.20.f.b: fun.h
The BenderQuad argument fun supports the syntax
     
h = fun.h(x, y)
The fun.h argument x has prototype
     const 
ADvector &x
and its size must be equal to n. The fun.h argument y has prototype
     const 
BAvector &y
and its size must be equal to m. The fun.h result h has prototype
     
ADvector h
and its size must be equal to m. The value of h is  \[
     h = H(x, y)
\] 
.

6.20.f.c: fun.dy
The BenderQuad argument fun supports the syntax
     
dy = fun.dy(x, y, h)

x
The fun.dy argument x has prototype
     const 
BAvector &x
and its size must be equal to n. Its value will be exactly equal to the BenderQuad argument x and values depending on it can be stored as private objects in f and need not be recalculated.

y
The fun.dy argument y has prototype
     const 
BAvector &y
and its size must be equal to m. Its value will be exactly equal to the BenderQuad argument y and values depending on it can be stored as private objects in f and need not be recalculated.

h
The fun.dy argument h has prototype
     const 
ADvector &h
and its size must be equal to m.

dy
The fun.dy result dy has prototype
     
ADvector dy
and its size must be equal to m. The return value dy is given by  \[
     dy = - [ \partial_y H (x , y) ]^{-1} h
\] 
Note that if h is equal to  H(x, y) ,  dy is the Newton step for finding a zero of  H(x, y) with respect to  y ; i.e.,  y + dy is an approximate solution for the equation  H (x, y + dy) = 0 .

6.20.g: g
The argument g has prototype
     
BAvector &g
and has size one. The input value of its element does not matter. On output, it contains the value of  G (x) ; i.e.,  \[
     g[0] = G (x)
\] 


6.20.h: gx
The argument gx has prototype
     
BAvector &gx
and has size  n  . The input values of its elements do not matter. On output, it contains the Jacobian of  G (x) ; i.e., for  j = 0 , \ldots , n-1 ,  \[
     gx[ j ] = G^{(1)} (x)_j
\] 


6.20.i: gxx
The argument gxx has prototype
     
BAvector &gxx
and has size  n \times n . The input values of its elements do not matter. On output, it contains the Hessian of  G (x) ; i.e., for  i = 0 , \ldots , n-1 , and  j = 0 , \ldots , n-1 ,  \[
     gxx[ i * n + j ] = G^{(2)} (x)_{i,j}
\] 


6.20.j: BAvector
The type BAvector must be a 6.7: SimpleVector class. We use Base to refer to the type of the elements of BAvector; i.e.,
     
BAvector::value_type

6.20.k: ADvector
The type ADvector must be a 6.7: SimpleVector class with elements of type AD<Base>; i.e.,
     
ADvector::value_type
must be the same type as
     AD< 
BAvector::value_type >
.

6.20.l: Example
The file 6.20.1: BenderQuad.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
Input File: cppad/local/bender_quad.hpp
6.20.1: BenderQuad: Example and Test
Define  F : \R \times \R \rightarrow \R by  \[
F(x, y) = \frac{1}{2} \sum_{i=1}^N [ y * \sin ( x * t_i ) - z_i ]^2
\] 
where  z \in \R^N is a fixed vector. It follows that  \[
\begin{array}{rcl}
\partial_y F(x, y) 
& = & 
\sum_{i=1}^N [ y * \sin ( x * t_i ) - z_i ] \sin( x * t_i )
\\
\partial_y \partial_y F(x, y)
& = & 
\sum_{i=1}^N \sin ( x t_i )^2
\end{array}
\] 
Furthermore if we define  Y(x) as the argmin of  F(x, y) with respect to  y ,  \[
\begin{array}{rcl}
Y(x) 
& = &
y - [ \partial_y \partial_y F(x, y) ]^{-1} \partial_y F[x,  y] 
\\
& = &
\left. 
     \sum_{i=1}^N z_i \sin ( x * t_i ) 
          \right/ 
               \sum_{i=1}^N \sin ( x * t_i )^2 
\end{array}
\] 
 

# include <cppad/cppad.hpp>

namespace {
	template <class Type>   // Type can be either double or AD<double>
	class Fun {
	typedef CPPAD_TEST_VECTOR<double> BAvector;
	typedef CPPAD_TEST_VECTOR<Type>   ADvector;
	private:
		BAvector t; // measurement times
		BAvector z; // measurement values
	public:
		// constructor
		Fun(const BAvector &t_, const BAvector &z_)
		: t(t_), z(z_)
		{ }
		// Fun.f(x, y) = F(x, y)
		ADvector f(const ADvector &x, const ADvector &y)
		{	size_t i;
			size_t N = z.size();

			ADvector f(1);
			f[0] = Type(0);

			Type residual;
			for(i = 0; i < N; i++)
			{	residual = y[0] * sin( x[0] * t[i] ) - z[i];
				f[0]    += residual * residual;
			}
			return f;
		}
		// Fun.h(x, y) = H(x, y) = F_y (x, y)
		ADvector h(const ADvector &x, const BAvector &y)
		{	size_t i;
			size_t N = z.size();

			ADvector fy(1);
			fy[0] = Type(0);

			Type residual;
			for(i = 0; i < N; i++)
			{	residual = y[0] * sin( x[0] * t[i] ) - z[i];
				fy[0]   += residual * sin( x[0] * t[i] );
			}
			return fy;
		}
		// Fun.dy(x, y, h) = - H_y (x,y)^{-1} * h 
		//                 = - F_yy (x, y)^{-1} * h
		ADvector dy(
			const BAvector &x , 
			const BAvector &y , 
			const ADvector &h )
		{	size_t i;
			size_t N = z.size();

			ADvector dy(1);
			Type fyy = Type(0);

			for(i = 0; i < N; i++)
			{	fyy += sin( x[0] * t[i] ) * sin( x[0] * t[i] );
			}
			dy[0] = - h[0] / fyy;

			return dy;
		}
		// Fun.Y(x) = Y(x)  (only used for testing results)
		BAvector Y(const BAvector &x )
		{	size_t i;
			size_t N = z.size();

			BAvector y(1);
			double num = 0.;
			double den = 0.;

			for(i = 0; i < N; i++)
			{	num += z[i] * sin( x[0] * t[i] );
				den += sin( x[0] * t[i] ) * sin( x[0] * t[i] );
			}
			y[0] = num / den;

			return y;
		}
	};
}

bool BenderQuad(void)
{	bool ok = true;
	using CppAD::AD;
	using CppAD::NearEqual;

	// temporary indices
	size_t i;

	// x space vector
	size_t n = 1;
	CPPAD_TEST_VECTOR<double> x(n);
	x[0] = 2. * 3.141592653;

	// y space vector
	size_t m = 1;
	CPPAD_TEST_VECTOR<double> y(m);
	y[0] = 1.;

	// t and z vectors
	size_t N = 10;
	CPPAD_TEST_VECTOR<double> t(N);
	CPPAD_TEST_VECTOR<double> z(N);
	for(i = 0; i < N; i++)
	{	t[i] = double(i) / double(N);       // time or measurement
		z[i] = y[0] * sin( x[0] * t[i] );   // data without noise
	}

	// construct the function object with Type = AD<double>
	Fun< AD<double> > fun(t, z);

	// construct the function object with Type = double
	Fun<double>       fun_test(t, z);

	// evaluate the G(x), G'(x) and G''(x)
	CPPAD_TEST_VECTOR<double> g(1), gx(n), gxx(n * n);
	BenderQuad(x, y, fun, g, gx, gxx);

	// Evaluate G(x) at nearby points
	double              step(1e-5);
	CPPAD_TEST_VECTOR<double> g0 = fun_test.f(x, fun_test.Y(x) );
	x[0] = x[0] + 1. * step;
	CPPAD_TEST_VECTOR<double> gp = fun_test.f(x, fun_test.Y(x) );
	x[0] = x[0] - 2. * step;
	CPPAD_TEST_VECTOR<double> gm = fun_test.f(x, fun_test.Y(x) );

	// check function value
	double check = g0[0];
	ok          &= NearEqual(check, g[0], 1e-10, 1e-10);

	// check derivative value
	check        = (gp[0] - gm[0]) / (2. * step);
	ok          &= NearEqual(check, gx[0], 1e-10, 1e-10);

	check        = (gp[0] - 2. * g0[0] + gm[0]) / (step * step);
	ok          &= NearEqual(check, gxx[0], 1e-10, 1e-10);

	return ok;
}


Input File: example/bender_quad.cpp
6.21: LU Factorization of A Square Matrix and Stability Calculation

6.21.a: Syntax
# include <cppad/cppad.hpp>

sign = LuRatio(ip, jp, LU, ratio)

6.21.b: Description
Computes an LU factorization of the matrix A where A is a square matrix. A measure of the numerical stability called ratio is calculated. This ratio is useful when the results of LuRatio are used as part of an 5: ADFun object.

6.21.c: Include
This routine is designed to be used with AD objects and requires the cppad/cppad.hpp file to be included.

6.21.d: Matrix Storage
All matrices are stored in row major order. To be specific, if  Y is a vector that contains a  p by  q matrix, the size of  Y must be equal to   p * q  and for  i = 0 , \ldots , p-1 ,  j = 0 , \ldots , q-1 ,  \[
     Y_{i,j} = Y[ i * q + j ]
\] 


6.21.e: sign
The return value sign has prototype
     int sign
If A is invertible, sign is plus or minus one and is the sign of the permutation corresponding to the row ordering ip and column ordering jp. If A is not invertible, sign is zero.

6.21.f: ip
The argument ip has prototype
     
SizeVector &ip
(see description of 6.12.2.i: SizeVector below). The size of ip is referred to as n in the specifications below. The input value of the elements of ip does not matter. The output value of the elements of ip determine the order of the rows in the permuted matrix.

6.21.g: jp
The argument jp has prototype
     
SizeVector &jp
(see description of 6.12.2.i: SizeVector below). The size of jp must be equal to n. The input value of the elements of jp does not matter. The output value of the elements of jp determine the order of the columns in the permuted matrix.

6.21.h: LU
The argument LU has the prototype
     
ADvector &LU
and the size of LU must equal  n * n (see description of 6.21.k: ADvector below).

6.21.h.a: A
We define A as the matrix corresponding to the input value of LU.

6.21.h.b: P
We define the permuted matrix P in terms of A by
     P(i, j) = A[ ip[i] * n + jp[j] ]

6.21.h.c: L
We define the lower triangular matrix L in terms of the output value of LU. The matrix L is zero above the diagonal and the rest of the elements are defined by
     L(i, j) = LU[ ip[i] * n + jp[j] ]
for  i = 0 , \ldots , n-1 and  j = 0 , \ldots , i .

6.21.h.d: U
We define the upper triangular matrix U in terms of the output value of LU. The matrix U is zero below the diagonal, one on the diagonal, and the rest of the elements are defined by
     U(i, j) = LU[ ip[i] * n + jp[j] ]
for  i = 0 , \ldots , n-2 and  j = i+1 , \ldots , n-1 .

6.21.h.e: Factor
If the return value sign is non-zero,
     L * U = P
If the return value of sign is zero, the contents of L and U are not defined.

6.21.h.f: Determinant
If the return value sign is zero, the determinant of A is zero. If sign is non-zero, using the output value of LU, the determinant of the matrix A is equal to
     sign * LU[ ip[0] * n + jp[0] ] * ... * LU[ ip[n-1] * n + jp[n-1] ]

6.21.i: ratio
The argument ratio has prototype
     AD<Base> &ratio
On input, the value of ratio does not matter. On output it is a measure of how good the choice of pivots is. For  p = 0 , \ldots , n-1 , the p-th pivot element is the element of maximum absolute value of an  (n-p) \times (n-p) sub-matrix. The ratio of each element of the sub-matrix divided by the pivot element is computed. The return value of ratio is the maximum absolute value of such ratios over all elements and all pivots.

6.21.i.a: Purpose
Suppose that the execution of a call to LuRatio is recorded in the ADFun<Base> object F. Then a call to 5.6.1: Forward of the form
     F.Forward(k, xk)
with k equal to zero will re-evaluate this LU factorization with the same pivots and a new value for A. In this case, the resulting ratio may not be one. If ratio is too large (the meaning of too large is up to you), the current pivots do not yield a stable LU factorization of A. A better choice for the pivots (for this value of A) will be made if you recreate the ADFun object starting with the 5.1: Independent variable values that correspond to the vector xk.

6.21.j: SizeVector
The type SizeVector must be a 6.7: SimpleVector class with 6.7.b: elements of type size_t . The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

6.21.k: ADvector
The type ADvector must be a 6.7: simple vector class with elements of type AD<Base>. The routine 6.8: CheckSimpleVector will generate an error message if this is not the case.

6.21.l: Example
The file 6.21.1: LuRatio.cpp contains an example and test of using LuRatio. It returns true if it succeeds and false otherwise.
Input File: cppad/local/lu_ratio.hpp
6.21.1: LuRatio: Example and Test
 
# include <cstdlib>               // for rand function
# include <cassert>
# include <cppad/cppad.hpp>

namespace { // Begin empty namespace

CppAD::ADFun<double> *NewFactor(
	size_t                           n ,
	const CPPAD_TEST_VECTOR<double> &x , 
	bool                           &ok ,
	CPPAD_TEST_VECTOR<size_t>      &ip , 
	CPPAD_TEST_VECTOR<size_t>      &jp )
{	using CppAD::AD;
	using CppAD::ADFun;
	size_t i, j, k;

	// values for independent and dependent variables
	CPPAD_TEST_VECTOR< AD<double> > Y(n*n+1), X(n*n);

	// values for the LU factor
	CPPAD_TEST_VECTOR< AD<double> > LU(n*n);

	// record the LU factorization corresponding to this value of x
	AD<double> Ratio;
	for(k = 0; k < n*n; k++)
		X[k] = x[k];
	Independent(X);
	for(k = 0; k < n*n; k++)
		LU[k] = X[k];
	CppAD::LuRatio(ip, jp, LU, Ratio);
	for(k = 0; k < n*n; k++)
		Y[k] = LU[k];
	Y[n*n] = Ratio;

	// use a function pointer so can return ADFun object
	ADFun<double> *FunPtr = new ADFun<double>(X, Y);

	// check value of ratio during recording
	ok &= (Ratio == 1.);

	// check that ip and jp are permutations of the indices 0, ... , n-1
	for(i = 0; i < n; i++)
	{	ok &= (ip[i] < n);
		ok &= (jp[i] < n);
		for(j = 0; j < n; j++)
		{	if( i != j )
			{	ok &= (ip[i] != ip[j]);
				ok &= (jp[i] != jp[j]);
			}
		}
	}
	return FunPtr; 
}
bool CheckLuFactor(
	size_t                           n  ,
	const CPPAD_TEST_VECTOR<double> &x  ,
	const CPPAD_TEST_VECTOR<double> &y  ,
	const CPPAD_TEST_VECTOR<size_t> &ip ,
	const CPPAD_TEST_VECTOR<size_t> &jp )
{	bool     ok = true;	

	double  sum;                          // element of L * U
	double  pij;                          // element of permuted x
	size_t  i, j, k;                      // temporary indices

	// L and U factors
	CPPAD_TEST_VECTOR<double>  L(n*n), U(n*n);

	// Extract L from LU factorization
	for(i = 0; i < n; i++)
	{	// elements along and below the diagonal
		for(j = 0; j <= i; j++)
			L[i * n + j] = y[ ip[i] * n + jp[j] ];
		// elements above the diagonal
		for(j = i+1; j < n; j++)
			L[i * n + j] = 0.;
	}

	// Extract U from LU factorization
	for(i = 0; i < n; i++)
	{	// elements below the diagonal
		for(j = 0; j < i; j++)
			U[i * n + j] = 0.;
		// elements along the diagonal
		U[i * n + i] = 1.;
		// elements above the diagonal
		for(j = i+1; j < n; j++)
			U[i * n + j] = y[ ip[i] * n + jp[j] ];
	}

	// Compute L * U 
	for(i = 0; i < n; i++)
	{	for(j = 0; j < n; j++)
		{	// compute element (i,j) entry in L * U
			sum = 0.;
			for(k = 0; k < n; k++)
				sum += L[i * n + k] * U[k * n + j];
			// element (i,j) in permuted version of A
			pij  = x[ ip[i] * n + jp[j] ];
			// compare
			ok  &= CppAD::NearEqual(pij, sum, 1e-10, 1e-10);
		}
	}
	return ok;
}

} // end Empty namespace

bool LuRatio(void)
{	bool  ok = true;

	size_t  n = 2; // number rows in A 
	double  ratio;

	// values for independent and dependent variables
	CPPAD_TEST_VECTOR<double>  x(n*n), y(n*n+1);

	// pivot vectors
	CPPAD_TEST_VECTOR<size_t> ip(n), jp(n);

	// set x equal to the identity matrix
	x[0] = 1.; x[1] = 0;
	x[2] = 0.; x[3] = 1.;

	// create a function object corresponding to this value of x
	CppAD::ADFun<double> *FunPtr = NewFactor(n, x, ok, ip, jp);

	// use function object to factor matrix
	y     = FunPtr->Forward(0, x);
	ratio = y[n*n];
	ok   &= (ratio == 1.);
	ok   &= CheckLuFactor(n, x, y, ip, jp);

	// set x so that the pivot ratio will be infinite
	x[0] = 0. ; x[1] = 1.;
	x[2] = 1. ; x[3] = 0.;

	// try to use old function pointer to factor matrix
	y     = FunPtr->Forward(0, x);
	ratio = y[n*n];

	// check to see if we need to refactor matrix
	ok &= (ratio > 10.);
	if( ratio > 10. )
	{	delete FunPtr; // to avoid a memory leak	
		FunPtr = NewFactor(n, x, ok, ip, jp);
	}

	//  now we can use the function object to factor matrix
	y     = FunPtr->Forward(0, x);
	ratio = y[n*n];
	ok    &= (ratio == 1.);
	ok    &= CheckLuFactor(n, x, y, ip, jp);

	delete FunPtr;  // avoid memory leak
	return ok;
}

Input File: example/lu_ratio.cpp
6.22: Float and Double Standard Math Unary Functions

6.22.a: Syntax
# include <cppad/std_math_unary.hpp>
y = fun(x)

6.22.b: Purpose
Places a copy of the standard math unary functions in the CppAD namespace. This is included with <cppad/cppad.hpp> and can be included separately.

6.22.c: Type
The Type is determined by the argument x and is either float or double.

6.22.d: x
The argument x has the following prototype
     const Type &x

6.22.e: y
The result y has prototype
     
Type y

6.22.f: fun
A definition of fun is included for each of the following functions: abs, acos, asin, atan, cos, cosh, exp, log, log10, sin, sinh, sqrt, tan, tanh.
Input File: cppad/std_math_unary.hpp
6.23: The CppAD::vector Template Class

6.23.a: Syntax
# include <cppad/vector.hpp>


6.23.b: Description
The include file cppad/vector.hpp defines the vector template class CppAD::vector. This is a 6.7: SimpleVector template class and in addition it has the features listed below:

6.23.c: Include
The file cppad/vector.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD include files.

6.23.d: Assignment
If x and y are CppAD::vector<Scalar> objects,
     
y = x
has all the properties listed for a 6.7.g: simple vector assignment plus the following:

The CppAD::vector template class will check that the size of x is equal to the size of y before doing the assignment. If the sizes are not equal, CppAD::vector will use 6.1: ErrorHandler to generate an appropriate error report.

A reference to the vector y is returned. An example use of this reference is in multiple assignments of the form
     
z = y = x

6.23.e: Element Access
If x is a CppAD::vector<Scalar> object and i has type size_t,
     
x[i]
has all the properties listed for a 6.7.k: simple vector element access plus the following:

The object x[i] has type Scalar (is not possibly a different type that can be converted to Scalar ).

If i is not less than the size of x, CppAD::vector will use 6.1: ErrorHandler to generate an appropriate error report.

6.23.f: push_back
If x is a CppAD::vector<Scalar> object with size equal to n and s has type Scalar ,
     
x.push_back(s)
extends the vector x so that its new size is n plus one and x[n] is equal to s (equal in the sense of the Scalar assignment operator).

6.23.g: push_vector
If x is a CppAD::vector<Scalar> object with size equal to n and v is a 6.7: simple vector with elements of type Scalar and size m ,
     
x.push_vector(v)
extends the vector x so that its new size is n+m and x[n + i] is equal to v[i] for i = 0 , ... , m-1 (equal in the sense of the Scalar assignment operator).

6.23.h: Output
If x is a CppAD::vector<Scalar> object and os is an std::ostream, the operation
     
os << x
will output the vector x to the output stream os . The elements of x are enclosed at the beginning by a { character, they are separated by , characters, and they are enclosed at the end by a } character. It is assumed by this operation that if e is an object with type Scalar ,
     
os << e
will output the value e to the standard output stream os .

6.23.i: resize
If the resize member function is called with argument value zero, all memory allocated for the vector will be freed. This can be useful when using very large vectors and when checking for memory leaks (and there are global vectors).

6.23.j: vectorBool
The file <cppad/vector.hpp> also defines the class CppAD::vectorBool. This has the same specifications as CppAD::vector<bool> with the following exceptions
  1. The class vectorBool conserves on memory (on the other hand, CppAD::vector<bool> is expected to be faster than vectorBool).
  2. The CppAD::vectorBool output operator prints each boolean value as a 0 for false, a 1 for true, and does not print any other output; i.e., the vector is written as a long sequence of zeros and ones with no surrounding {, } and with no separating commas or spaces.
  3. If x has type vectorBool and i has type size_t, the element access value x[i] has an unspecified type (referred to here as elementType ) that can be implicitly converted to bool. The return value of the assignment operator
         
    x[i] = y
    also has type elementType . Thus, if z has type bool, the syntax
         
    z = x[i] = y
    is valid.


6.23.k: Example
The files 6.23.1: CppAD_vector.cpp and 6.23.2: vectorBool.cpp each contain an example and test of this template class. They return true if they succeed and false otherwise.

6.23.l: Exercise
Create and run a program that contains the following code:
 
	CppAD::vector<double> x(3);
	size_t i;
	for(i = 0; i < 3; i++)
		x[i] = 4. - i;
	std::cout << "x = " << x << std::endl;

Input File: cppad/vector.hpp
6.23.1: CppAD::vector Template Class: Example and Test
 

# include <cppad/vector.hpp>
# include <cppad/check_simple_vector.hpp>
# include <sstream> // sstream and string are used to test output operation
# include <string>

bool CppAD_vector(void)
{	bool ok = true;
	using CppAD::vector;     // so can use vector instead of CppAD::vector 
	typedef double Type;     // change double to test other types

	// check Simple Vector specifications
	CppAD::CheckSimpleVector< Type, vector<Type> >();


	vector<Type> x;          // default constructor 
	ok &= (x.size() == 0);

	x.resize(2);             // resize and set element assignment
	ok &= (x.size() == 2);
	x[0] = Type(0);
	x[1] = Type(1);

	vector<Type> y(2);       // sizing constructor
	ok &= (y.size() == 2);

	const vector<Type> z(x); // copy constructor and const element access
	ok &= (z.size() == 2);
	ok &= ( (z[0] == Type(0)) && (z[1] == Type(1)) );

	x[0] = Type(2);          // modify, assignment changes x
	ok &= (x[0] == Type(2));

	x = y = z;               // vector assignment
	ok &= ( (x[0] == Type(0)) && (x[1] == Type(1)) );
	ok &= ( (y[0] == Type(0)) && (y[1] == Type(1)) );
	ok &= ( (z[0] == Type(0)) && (z[1] == Type(1)) );

	// test of output
	std::string        correct= "{ 0, 1 }";
	std::string        str;
	std::ostringstream buf;
	buf << z;
	str = buf.str();
	ok &= (str == correct);

	// test of push_back scalar
	size_t i;
	size_t N = 100;
	x.resize(0);
	for(i = 0; i < N; i++)
		x.push_back( Type(i) );
	ok &= (x.size() == N);
	for(i = 0; i < N; i++)
		ok &= ( x[i] == Type(i) );

	// test of push_vector 
	x.push_vector(x);
	ok &= (x.size() == 2 * N);
	for(i = 0; i < N; i++)
	{	ok &= ( x[i] == Type(i) );
		ok &= ( x[i+N] == Type(i) );
	}

	return ok;
}


Input File: example/cppad_vector.cpp
6.23.2: CppAD::vectorBool Class: Example and Test
 

# include <cppad/vector.hpp>
# include <cppad/check_simple_vector.hpp>
# include <sstream> // sstream and string are used to test output operation
# include <string>

bool vectorBool(void)
{	bool ok = true;
	using CppAD::vectorBool;

	vectorBool x;          // default constructor 
	ok &= (x.size() == 0);

	x.resize(2);             // resize and set element assignment to bool
	ok &= (x.size() == 2);
	x[0] = false;
	x[1] = true;

	vectorBool y(2);       // sizing constructor
	ok &= (y.size() == 2);

	const vectorBool z(x); // copy constructor and const element access
	ok &= (z.size() == 2);
	ok &= ( (z[0] == false) && (z[1] == true) );

	x[0] = true;           // modify, assignment changes x
	ok &= (x[0] == true);

	x = y = z;              // vector assignment
	ok &= ( (x[0] == false) && (x[1] == true) );
	ok &= ( (y[0] == false) && (y[1] == true) );
	ok &= ( (z[0] == false) && (z[1] == true) );

	// test of push_vector
	y.push_vector(z);
	ok &= y.size() == 4;
	ok &= ( (y[0] == false) && (y[1] == true) );
	ok &= ( (y[2] == false) && (y[3] == true) );

	y[1] = false;           // element assignment to another element
	x[0] = y[1];
	ok &= (x[0] == false);

	// test of output
	std::string        correct= "01";
	std::string        str;
	std::ostringstream buf;
	buf << z;
	str = buf.str();
	ok &= (str == correct);

	// test of push_back element
	size_t i;
	x.resize(0);
	for(i = 0; i < 100; i++)
		x.push_back( (i % 3) != 0 );
	ok &= (x.size() == 100);
	for(i = 0; i < 100; i++)
		ok &= ( x[i] == ((i % 3) != 0) );

	// check that vectorBool is
	// a simple vector class with elements of type bool
	CppAD::CheckSimpleVector< bool, vectorBool >();

	return ok;
}


Input File: example/vector_bool.cpp
6.24: Routines That Track Use of New and Delete

6.24.a: Syntax
# include <cppad/track_new_del.hpp>
newptr = TrackNewVec(file, line, newlen, oldptr)
TrackDelVec(file, line, oldptr)
newptr = TrackExtend(file, line, newlen, ncopy, oldptr)
count = TrackCount(file, line)

6.24.b: Purpose
These routines aid in the use of new[] and delete[] during the execution of a C++ program.

6.24.c: Include
The file cppad/track_new_del.hpp is included by cppad/cppad.hpp but it can also be included separately without the rest of the CppAD include files.

6.24.d: file
The argument file has prototype
     const char *file
It should be the source code file name where the call to TrackNew is located. The best way to accomplish this is to use the preprocessor symbol __FILE__ for this argument.

6.24.e: line
The argument line has prototype
     int line
It should be the source code file line number where the call to TrackNew is located. The best way to accomplish this is to use the preprocessor symbol __LINE__ for this argument.

6.24.f: oldptr
The argument oldptr has prototype
     
Type *oldptr
This argument is used to identify the type Type.

6.24.f.a: OpenMP
In the case of multi-threading with OpenMP, calls with the argument oldptr must be made with the same thread as when oldptr was created (except for TrackNewVec where the value of oldptr does not matter).

6.24.g: newlen
The argument newlen has prototype
     size_t newlen

6.24.h: newptr
The return value newptr has prototype
     
Type *newptr
It points to the newly allocated vector of objects that were allocated using
     new Type[newlen]

6.24.i: ncopy
The argument ncopy has prototype
     size_t ncopy
This specifies the number of elements that are copied from the old array to the new array. The value of ncopy must be less than or equal to newlen.

6.24.j: TrackNewVec
If NDEBUG is defined, this routine only sets
     newptr = new Type[newlen]
The value of oldptr does not matter (except that it is used to identify Type). If NDEBUG is not defined, TrackNewVec also tracks this memory allocation. In this case, if memory cannot be allocated, 6.1: ErrorHandler is used to generate a message stating that there was not sufficient memory.

6.24.j.a: Macro
The preprocessor macro call
     CPPAD_TRACK_NEW_VEC(newlen, oldptr)
expands to
     CppAD::TrackNewVec(__FILE__, __LINE__, newlen, oldptr)

6.24.j.b: Deprecated
The preprocessor macro CppADTrackNewVec is the same as CPPAD_TRACK_NEW_VEC. It has been deprecated; i.e., it is still defined in the CppAD distribution, but it should not be used.

6.24.k: TrackDelVec
This routine is used to delete a vector of objects that have been allocated using TrackNew or TrackExtend. If NDEBUG is defined, this routine only frees memory with
     delete [] oldptr
If NDEBUG is not defined, TrackDelVec also checks that oldptr was allocated by TrackNew or TrackExtend and has not yet been freed. If this is not the case, 6.1: ErrorHandler is used to generate an error message.

6.24.k.a: Macro
The preprocessor macro call
     CPPAD_TRACK_DEL_VEC(oldptr)
expands to
     CppAD::TrackDelVec(__FILE__, __LINE__, oldptr)

6.24.k.b: Deprecated
The preprocessor macro CppADTrackDelVec is the same as CPPAD_TRACK_DEL_VEC. It has been deprecated; i.e., it is still defined in the CppAD distribution, but it should not be used.

6.24.l: TrackExtend
This routine is used to allocate a new vector (using TrackNewVec) and copy ncopy elements from the old vector to the new vector. If ncopy is greater than zero, oldptr must have been allocated using TrackNewVec or TrackExtend. In this case, the vector pointed to by oldptr must have at least ncopy elements and it will be deleted (using TrackDelVec). Note that the dependence of TrackExtend on NDEBUG is indirect, through the routines TrackNewVec and TrackDelVec.

6.24.l.a: Macro
The preprocessor macro call
     CPPAD_TRACK_EXTEND(newlen, ncopy, oldptr)
expands to
     CppAD::TrackExtend(__FILE__, __LINE__, newlen, ncopy, oldptr)

6.24.l.b: Deprecated
The preprocessor macro CppADTrackExtend is the same as CPPAD_TRACK_EXTEND. It has been deprecated; i.e., it is still defined in the CppAD distribution, but it should not be used.

6.24.m: TrackCount
The return value count has prototype
     size_t count
If NDEBUG is defined, count will be zero. Otherwise, it will be the number of vectors that have been allocated (by TrackNewVec or TrackExtend) and not yet freed (by TrackDelVec).

6.24.m.a: Macro
The preprocessor macro call
     CPPAD_TRACK_COUNT()
expands to
     CppAD::TrackCount(__FILE__, __LINE__)

6.24.m.b: Deprecated
The preprocessor macro CppADTrackCount is the same as CPPAD_TRACK_COUNT. It has been deprecated; i.e., it is still defined in the CppAD distribution, but it should not be used.

6.24.m.c: OpenMP
In the case of multi-threading with OpenMP, the information for all of the threads is checked so only one thread can be running when this routine is called.

6.24.n: Example
The file 6.24.1: TrackNewDel.cpp contains an example and test of these functions. It returns true, if it succeeds, and false otherwise.
Input File: cppad/track_new_del.hpp
6.24.1: Tracking Use of New and Delete: Example and Test
 

# include <cppad/track_new_del.hpp>

bool TrackNewDel(void)
{	bool ok = true;

	// initial count
	size_t count = CPPAD_TRACK_COUNT();

	// allocate an array of length 5
	double *ptr = 0;
	size_t  newlen = 5;
	ptr = CPPAD_TRACK_NEW_VEC(newlen, ptr);

	// copy data into the array
	size_t ncopy = newlen;
	size_t i;
	for(i = 0; i < ncopy; i++)
		ptr[i] = double(i);

	// extend the buffer to be length 10
	newlen = 10;
	ptr    = CPPAD_TRACK_EXTEND(newlen, ncopy, ptr);
		
	// copy data into the new part of the array
	for(i = ncopy; i < newlen; i++)
		ptr[i] = double(i);

	// check the values in the array
 	for(i = 0; i < newlen; i++)
		ok &= (ptr[i] == double(i));

	// free the memory allocated since previous call to TrackCount
	CPPAD_TRACK_DEL_VEC(ptr);

	// check for memory leak
	ok &= (count == CPPAD_TRACK_COUNT());

	return ok;
}


Input File: example/track_new_del.cpp
7: Preprocessor Definitions Used by CppAD

7.a: Rule
All of the preprocessor symbols used by CppAD begin either with CppAD or with CPPAD_.

7.b: Example
For example, the preprocessor symbol 8.4: CPPAD_TEST_VECTOR determines which 6.7: SimpleVector template class is extensively used by the tests in the Example and TestMore directories.

7.c: Exceptions
The following is a list of exceptions to the rule above. These preprocessor symbols may be undefined after you include any CppAD include file.
 

# undef PACKAGE
# undef PACKAGE_BUGREPORT
# undef PACKAGE_NAME
# undef PACKAGE_STRING
# undef PACKAGE_TARNAME
# undef PACKAGE_VERSION
# undef VERSION

Input File: cppad/local/preprocessor.hpp
8: Examples

8.a: Introduction
This section organizes the information related to the CppAD examples. Each CppAD operation has its own specific example, for example 4.4.1.3.1: Add.cpp is an example for 4.4.1.3: addition . Some of the examples are of a more general nature (not connected to a specific feature of CppAD). In addition, there are some utility functions that are used by the examples.

8.b: Running Examples
The 2: installation instructions show how the examples can be run on your system.

8.c: The CppAD Test Vector Template Class
Many of the examples use the 8.4: CPPAD_TEST_VECTOR preprocessor symbol to determine which 6.7: SimpleVector template class is used with the examples.

8.d: Contents
8.1: General Examples
8.2: Utility Routines used by CppAD Examples
8.3: List of All the CppAD Examples
8.4: Choosing The Vector Testing Template Class

Input File: omh/example.omh
8.1: General Examples

8.1.a: Description
Most of the examples in CppAD are part of the documentation for a specific feature; for example, 4.4.1.3.1: Add.cpp is an example using the 4.4.1.3: addition operator . The examples listed in this section are of a more general nature.

8.1.b: Contents
ipopt_cppad_nlp: 8.1.1: Nonlinear Programming Using the CppAD Interface to Ipopt
Interface2C.cpp: 8.1.2: Interfacing to C: Example and Test
JacMinorDet.cpp: 8.1.3: Gradient of Determinant Using Expansion by Minors: Example and Test
JacLuDet.cpp: 8.1.4: Gradient of Determinant Using Lu Factorization: Example and Test
HesMinorDet.cpp: 8.1.5: Gradient of Determinant Using Expansion by Minors: Example and Test
HesLuDet.cpp: 8.1.6: Gradient of Determinant Using LU Factorization: Example and Test
OdeStiff.cpp: 8.1.7: A Stiff Ode: Example and Test
ode_taylor.cpp: 8.1.8: Taylor's Ode Solver: An Example and Test
ode_taylor_adolc.cpp: 8.1.9: Using Adolc with Taylor's Ode Solver: An Example and Test
StackMachine.cpp: 8.1.10: Example Differentiating a Stack Machine Interpreter
mul_level: 8.1.11: Using Multiple Levels of AD

Input File: omh/example_list.omh
8.1.1: Nonlinear Programming Using the CppAD Interface to Ipopt

8.1.1.a: Syntax
# include "ipopt_cppad_nlp.hpp"
ipopt_cppad_solution solution;
ipopt_cppad_nlp cppad_nlp(
     
n, m, x_i, x_l, x_u, g_l, g_u, &fg_info, &solution
)


8.1.1.b: Purpose
The class ipopt_cppad_nlp is used to solve nonlinear programming problems of the form  \[
\begin{array}{rll}
{\rm minimize}      & f(x) 
\\
{\rm subject \; to} & g^l \leq g(x) \leq g^u
\\
                    & x^l  \leq x   \leq x^u
\end{array}
\] 
This is done using the Ipopt (https://www.coin-or.org/projects/Ipopt) optimizer and the CppAD (http://www.coin-or.org/CppAD/) Algorithmic Differentiation package.

8.1.1.c: Warning
This is only an example use of CppAD. It is expected that this class will be improved and that its user interface may change in ways that are not backward compatible.

8.1.1.d: fg(x)
The function  fg : \R^n \rightarrow \R^{m+1} is defined by  \[
\begin{array}{rcl}
     fg_0 (x)     & = & f(x)         \\
     fg_1 (x)     & = & g_0 (x)      \\
                  & \vdots &         \\
     fg_m (x)     & = & g_{m-1} (x)
     \end{array}
\] 


8.1.1.d.a: Index Vector
We define an index vector as a vector of non-negative integers for which none of the values are equal; i.e., it is both a vector and a set. If  I is an index vector  |I| is used to denote the number of elements in  I and  \| I \| is used to denote the value of the maximum element in  I .

8.1.1.d.b: Projection
Given an index vector  J and a positive integer  n where  n > \| J \| , we use  J \otimes n  for the mapping  ( J \otimes n ) : \R^n \rightarrow \R^{|J|} defined by  \[
     [ J \otimes n ] (x)_j = x_{J(j)}
\] 
for  j = 0 , \ldots |J| - 1 .
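As a concrete instance of this definition, if  n = 3 and  J = (2, 0) (so  |J| = 2 and  \| J \| = 2 < 3 ), then  \[
     [ J \otimes 3 ] (x) = ( x_{J(0)} , x_{J(1)} ) = ( x_2 , x_0 )
\] 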

8.1.1.d.c: Injection
Given an index vector  I and a positive integer  m where  m > \| I \| , we use  m \otimes I for the mapping  ( m \otimes I ): \R^{|I|} \rightarrow \R^m defined by  \[
[ m \otimes I ] (y)_i = \left\{ \begin{array}{ll}
y_k & {\rm if} \; i = I(k) \; {\rm for \; some} \; 
     k \in \{ 0 , \cdots, |I|-1 \} 
\\
0   & {\rm otherwise}
\end{array} \right.
\] 
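As a concrete instance of this definition, if  m = 4 and  I = (1, 3) (so  |I| = 2 and  \| I \| = 3 < 4 ), then for  y \in \R^2  \[
     [ 4 \otimes I ] (y) = ( 0 , y_0 , 0 , y_1 )
\] 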


8.1.1.d.d: Representation
In many applications, each of the component functions of  fg(x) only depend on a few of the components of  x . In this case, expressing  fg(x) in terms of simpler functions with fewer arguments can greatly reduce the amount of work required to compute its derivatives.

We use the functions  r_k : \R^{q(k)} \rightarrow \R^{p(k)} for  k = 0 , \ldots , K - 1 to express our representation of  fg(x) in terms of simpler functions as follows  \[
fg(x) = \sum_{k=0}^{K-1} \; \sum_{\ell=0}^{L(k) - 1} 
[ (m+1) \otimes I_{k,\ell} ] \; \circ
      \; r_k \; \circ \; [ J_{k,\ell} \otimes n ] \; (x)
\] 
where  \circ represents function composition, for  k = 0 , \ldots , K - 1 , and  \ell = 0 , \ldots , L(k) - 1 ,  I_{k,\ell} and  J_{k,\ell} are index vectors with  | J_{k,\ell} | = q(k) ,  \| J_{k,\ell} \| < n ,  | I_{k,\ell} | = p(k) , and  \| I_{k,\ell} \| \leq m .

8.1.1.e: Simple Representation
In the simple representation,  r_0 (x) = fg(x) ,  K = 1 ,  q(0) = n ,  p(0) = m+1 ,  L(0) = 1 ,  I_{0,0} = (0 , \ldots , m) , and  J_{0,0} = (0 , \ldots , n-1) .

8.1.1.f: SizeVector
The type SizeVector is defined by the ipopt_cppad_nlp.hpp include file to be a 6.7: SimpleVector class with elements of type size_t.

8.1.1.g: NumberVector
The type NumberVector is defined by the ipopt_cppad_nlp.hpp include file to be a 6.7: SimpleVector class with elements of type Ipopt::Number.

8.1.1.h: ADNumber
The type ADNumber is defined by the ipopt_cppad_nlp.hpp include file to be an AD type that can be used to compute derivatives.

8.1.1.i: ADVector
The type ADVector is defined by the ipopt_cppad_nlp.hpp include file to be a 6.7: SimpleVector class with elements of type ADNumber.

8.1.1.j: n
The argument n has prototype
     size_t n
It specifies the dimension of the argument space; i.e.,  x \in \R^n .

8.1.1.k: m
The argument m has prototype
     size_t m
It specifies the dimension of the range space for  g ; i.e.,  g : \R^n \rightarrow \R^m .

8.1.1.l: x_i
The argument x_i has prototype
     const NumberVector& x_i
and its size is equal to  n . It specifies the initial point where Ipopt starts the optimization process.

8.1.1.m: x_l
The argument x_l has prototype
     const NumberVector& x_l
and its size is equal to  n . It specifies the lower limits for the argument in the optimization problem; i.e.,  x^l .

8.1.1.n: x_u
The argument x_u has prototype
     const NumberVector& x_u
and its size is equal to  n . It specifies the upper limits for the argument in the optimization problem; i.e.,  x^u .

8.1.1.o: g_l
The argument g_l has prototype
     const NumberVector& g_l
and its size is equal to  m . It specifies the lower limits for the constraints in the optimization problem; i.e.,  g^l .

8.1.1.p: g_u
The argument g_u has prototype
     const NumberVector& g_u
and its size is equal to  m . It specifies the upper limits for the constraints in the optimization problem; i.e.,  g^u .

8.1.1.q: fg_info
The argument fg_info has prototype
     
FG_info fg_info
where the class FG_info is derived from the base class ipopt_cppad_fg_info. Certain virtual member functions of fg_info are used to compute the value of  fg(x) . The specifications for these member functions are given below:

8.1.1.q.a: fg_info.number_functions
This member function has prototype
     virtual size_t ipopt_cppad_fg_info::number_functions(void)
If K has type size_t, the syntax
     K = fg_info.number_functions()
sets K to the number of functions used in the representation of  fg(x) ; i.e.,  K in the 8.1.1.d.d: representation above.

The ipopt_cppad_fg_info implementation of this function corresponds to the simple representation mentioned above; i.e.,  K = 1 .

8.1.1.q.b: fg_info.eval_r
This member function has the prototype
     virtual ADVector ipopt_cppad_fg_info::eval_r(size_t k, const ADVector& u) = 0;
Thus it is a pure virtual function and must be defined in the derived class FG_info .

This function computes the value of  r_k (u) used in the 8.1.1.d.d: representation for  fg(x) . If k in  \{0 , \ldots , K-1 \} has type size_t, u is an ADVector of size q(k), and r is an ADVector of size p(k), the syntax
     r = fg_info.eval_r(k, u)
sets r to the vector  r_k (u) .

8.1.1.q.c: fg_info.retape
This member function has the prototype
     virtual bool ipopt_cppad_fg_info::retape(size_t k)
If k in  \{0 , \ldots , K-1 \} has type size_t, and retape has type bool, the syntax
     retape = fg_info.retape(k)
sets retape to true or false. If retape is true, ipopt_cppad_nlp will retape the operation sequence corresponding to  r_k (u) for every value of u . An ipopt_cppad_nlp object should use much less memory and run faster if retape is false. You can test both the true and false cases to make sure the operation sequence does not depend on u .

The ipopt_cppad_fg_info implementation of this function sets retape to true (retaping is slower, but it is always safe).

8.1.1.q.d: fg_info.domain_size
This member function has prototype
     virtual size_t ipopt_cppad_fg_info::domain_size(size_t k)
If k in  \{0 , \ldots , K-1 \} has type size_t, and q has type size_t, the syntax
     q = fg_info.domain_size(k)
sets q to the dimension of the domain space for  r_k (u) ; i.e.,  q(k) in the 8.1.1.d.d: representation above.

The ipopt_cppad_fg_info implementation of this function corresponds to the simple representation mentioned above; i.e.,  q = n .

8.1.1.q.e: fg_info.range_size
This member function has prototype
     virtual size_t ipopt_cppad_fg_info::range_size(size_t k)
If k in  \{0 , \ldots , K-1 \} has type size_t, and p has type size_t, the syntax
     p = fg_info.range_size(k)
sets p to the dimension of the range space for  r_k (u) ; i.e.,  p(k) in the 8.1.1.d.d: representation above.

The ipopt_cppad_fg_info implementation of this function corresponds to the simple representation mentioned above; i.e.,  p = m+1 .

8.1.1.q.f: fg_info.number_terms
This member function has prototype
     virtual size_t ipopt_cppad_fg_info::number_terms(size_t k)
If k in  \{0 , \ldots , K-1 \} has type size_t, and L has type size_t, the syntax
     L = fg_info.number_terms(k)
sets L to the number of terms in the representation for this value of k ; i.e.,  L(k) in the 8.1.1.d.d: representation above.

The ipopt_cppad_fg_info implementation of this function corresponds to the simple representation mentioned above; i.e.,  L = 1 .

8.1.1.q.g: fg_info.index
This member function has prototype
     virtual void ipopt_cppad_fg_info::index(
          size_t k, size_t ell, SizeVector& I, SizeVector& J
     )
The argument k has type size_t and is a value between zero and  K-1 inclusive. The argument ell has type size_t and is a value between zero and  L(k)-1 inclusive. The argument I is a 6.7: SimpleVector with elements of type size_t and size greater than or equal to  p(k) . The input value of the elements of I does not matter. The output value of the first  p(k) elements of I must be the corresponding elements of  I_{k,ell} in the 8.1.1.d.d: representation above. The argument J is a 6.7: SimpleVector with elements of type size_t and size greater than or equal to  q(k) . The input value of the elements of J does not matter. The output value of the first  q(k) elements of J must be the corresponding elements of  J_{k,ell} in the 8.1.1.d.d: representation above.

The ipopt_cppad_fg_info implementation of this function corresponds to the simple representation mentioned above; i.e., for  i = 0 , \ldots , m , I[i] = i , and for  j = 0 , \ldots , n-1 , J[j] = j .

8.1.1.r: solution
After the optimization process is completed, solution contains the following information:

8.1.1.r.a: status
The status field of solution has prototype
     ipopt_cppad_solution::solution_status solution.status
It is the final Ipopt status for the optimizer. Here is a list of the possible values for the status:
status Meaning
not_defined The optimizer did not return a final status to this ipopt_cppad_nlp object.
unknown The status returned by the optimizer is not defined in the Ipopt documentation for finalize_solution.
success Algorithm terminated successfully at a point satisfying the convergence tolerances (see Ipopt options).
maxiter_exceeded The maximum number of iterations was exceeded (see Ipopt options).
stop_at_tiny_step Algorithm terminated because progress was very slow.
stop_at_acceptable_point Algorithm stopped at a point that was converged, not to the 'desired' tolerances, but to 'acceptable' tolerances (see Ipopt options).
local_infeasibility Algorithm converged to a non-feasible point (problem may have no solution).
user_requested_stop This return value should not occur because ipopt_cppad_nlp does not request a stop.
diverging_iterates The iterates appear to be diverging.
restoration_failure Restoration phase failed, algorithm doesn't know how to proceed.
error_in_step_computation An unrecoverable error occurred while Ipopt tried to compute the search direction.
invalid_number_detected Algorithm received an invalid number (such as nan or inf) from the user's function fg_info.eval_r or from the CppAD evaluations of its derivatives (see the Ipopt option check_derivatives_for_naninf).
internal_error An unknown Ipopt internal error occurred. Contact the Ipopt authors through the mailing list.

8.1.1.r.b: x
The x field of solution has prototype
     NumberVector solution.x
and its size is equal to  n . It is the final  x value for the optimizer.

8.1.1.r.c: z_l
The z_l field of solution has prototype
     NumberVector solution.z_l
and its size is equal to  n . It contains the final Lagrange multipliers for the lower bounds on  x .

8.1.1.r.d: z_u
The z_u field of solution has prototype
     NumberVector solution.z_u
and its size is equal to  n . It contains the final Lagrange multipliers for the upper bounds on  x .

8.1.1.r.e: g
The g field of solution has prototype
     NumberVector solution.g
and its size is equal to  m . It is the final value for the constraint function  g(x) .

8.1.1.r.f: lambda
The lambda field of solution has prototype
     NumberVector solution.lambda
and its size is equal to  m . It is the final value for the Lagrange multipliers corresponding to the constraint function.

8.1.1.r.g: obj_value
The obj_value field of solution has prototype
     Number solution.obj_value
It is the final value of the objective function  f(x) .

8.1.1.s: Visual Studio
If you are using Visual Studio, see the special 8.1.1.1: ipopt_cppad_windows instructions.

8.1.1.t: Example
The file 8.1.1.2: ipopt_cppad_simple.cpp is an example and test of ipopt_cppad_nlp that uses the 8.1.1.e: simple representation . It returns true if it succeeds and false otherwise. The section 8.1.1.3: ipopt_cppad_ode discusses an example that uses a more complex representation.
Input File: ipopt_cppad/ipopt_cppad_nlp.hpp
8.1.1.1: Linking the CppAD Interface to Ipopt in Visual Studio 9.0

8.1.1.1.a: Purpose
In the special case where you are using Visual Studio 9.0, you do not need to build Ipopt. You can instead follow these instructions to install the Coin binary distribution for the Ipopt libraries (where Visual Studio can find them).
  1. Download the binary file below which contains a build of most of the Coin-OR projects as of 2008-09-28 CoinAll-1.2-VisualStudio.zip (http://www.coin-or.org/download/binary/CoinAll/CoinAll-1.2-VisualStudio.zip)
  2. Choose a directory and unzip the file CoinAll-1.2-VisualStudio.zip in that directory. We refer to this as the from_directory below.
  3. Open a Dos shell window and change into your CppAD distribution directory; e.g., if you install from a tarball, this will be cppad-yyyymmdd where yyyymmdd is the year, month, and date corresponding to your version of CppAD.
  4. Execute the Dos command
          ipopt_cppad\ipopt_cppad_windows.bat from_directory
  5. In Visual Studio open the project file
     
    	ipopt_cppad\ipopt_cppad.sln
    
    and build the Release version of the project (the Debug version is not supported by the CoinAll binary).
  6. In the Dos shell window, execute the command
     
    	ipopt_cppad\Release\ipopt_cppad
    
    It should generate the following output:
     
    	Ok:    ipopt_cppad_ode
    	Ok:    ipopt_cppad_simple
    	Ok:    No memory leak detected
    	All 3 tests passed.
    

Input File: ipopt_cppad/ipopt_cppad_windows.bat
8.1.1.2: Nonlinear Programming Using CppAD and Ipopt: Example and Test

8.1.1.2.a: Purpose
This example program demonstrates how to use the class ipopt_cppad_nlp to solve the example problem in the Ipopt documentation; i.e., the problem  \[
\begin{array}{lc}
{\rm minimize \; }      &  x_1 * x_4 * (x_1 + x_2 + x_3) + x_3
\\
{\rm subject \; to \; } &  x_1 * x_2 * x_3 * x_4  \geq 25
\\
                        &  x_1^2 + x_2^2 + x_3^2 + x_4^2 = 40
\\
                        &  1 \leq x_1, x_2, x_3, x_4 \leq 5  
\end{array}
\] 


8.1.1.2.b: Configuration Requirement
This example will be compiled and tested provided that the value 2.1.s: IpoptDir is specified on the 2.1.d: configure command line.
 

# include "ipopt_cppad_nlp.hpp"

namespace {

	class FG_info : public ipopt_cppad_fg_info
	{
	private:
		bool retape_;
	public:
		// derived class part of constructor
		FG_info(bool retape)
		: retape_ (retape)
		{ }
		// Evaluation of the objective f(x), and constraints g(x)
		// using an Algorithmic Differentiation (AD) class.
		ADVector eval_r(size_t k, const ADVector&  x)
		{	ADVector fg(3);

			// Fortran style indexing 
			ADNumber x1 = x[0];
			ADNumber x2 = x[1];
			ADNumber x3 = x[2];
			ADNumber x4 = x[3];
			// f(x)
			fg[0] = x1 * x4 * (x1 + x2 + x3) + x3;
			// g_1 (x)
			fg[1] = x1 * x2 * x3 * x4;
			// g_2 (x)
			fg[2] = x1 * x1 + x2 * x2 + x3 * x3 + x4 * x4;
			return fg;
		}
		bool retape(size_t k)
		{	return retape_; }
	};
}
	
bool ipopt_cppad_simple(void)
{	bool ok = true;
	size_t j;


	// number of independent variables (domain dimension for f and g)
	size_t n = 4;  
	// number of constraints (range dimension for g)
	size_t m = 2;
	// initial value of the independent variables
	NumberVector x_i(n);
	x_i[0] = 1.0;
	x_i[1] = 5.0;
	x_i[2] = 5.0;
	x_i[3] = 1.0;
	// lower and upper limits for x
	NumberVector x_l(n);
	NumberVector x_u(n);
	for(j = 0; j < n; j++)
	{	x_l[j] = 1.0;
		x_u[j] = 5.0;
	}
	// lower and upper limits for g
	NumberVector g_l(m);
	NumberVector g_u(m);
	g_l[0] = 25.0;     g_u[0] = 1.0e19;
  	g_l[1] = 40.0;     g_u[1] = 40.0;

	size_t icase;
	for(icase = 0; icase <= 1; icase++)
	{	// Should ipopt_cppad_nlp retape the operation sequence for
		// every new x. Can test both true and false cases because 
		// the operation sequence does not depend on x (for this case).
		bool retape = icase != 0;

		// object in derived class
		FG_info fg_info(retape);

		// create the Ipopt interface
		ipopt_cppad_solution solution;
		Ipopt::SmartPtr<Ipopt::TNLP> cppad_nlp = new ipopt_cppad_nlp(
		n, m, x_i, x_l, x_u, g_l, g_u, &fg_info, &solution
		);

		// Create an instance of the IpoptApplication
		using Ipopt::IpoptApplication;
		Ipopt::SmartPtr<IpoptApplication> app = new IpoptApplication();

		// turn off any printing
		app->Options()->SetIntegerValue("print_level", -2);

		// maximum number of iterations
		app->Options()->SetIntegerValue("max_iter", 10);

		// approximate accuracy in first order necessary conditions;
		// see Mathematical Programming, Volume 106, Number 1, 
		// Pages 25-57, Equation (6)
		app->Options()->SetNumericValue("tol", 1e-9);

		// derivative testing
		app->Options()->
		SetStringValue("derivative_test", "second-order");

		// Initialize the IpoptApplication and process the options
		Ipopt::ApplicationReturnStatus status = app->Initialize();
		ok    &= status == Ipopt::Solve_Succeeded;

		// Run the IpoptApplication
		status = app->OptimizeTNLP(cppad_nlp);
		ok    &= status == Ipopt::Solve_Succeeded;

		/*
 		Check some of the solution values
 		*/
		ok &= solution.status == ipopt_cppad_solution::success;
		//
		double check_x[]   = { 1.000000, 4.743000, 3.82115, 1.379408 };
		double check_z_l[] = { 1.087871, 0.,       0.,      0.       };
		double check_z_u[] = { 0.,       0.,       0.,      0.       };
		double rel_tol     = 1e-6;  // relative tolerance
		double abs_tol     = 1e-6;  // absolute tolerance
		for(j = 0; j < n; j++)
		{	ok &= CppAD::NearEqual(
			check_x[j],   solution.x[j],   rel_tol, abs_tol
			);
			ok &= CppAD::NearEqual(
			check_z_l[j], solution.z_l[j], rel_tol, abs_tol
			);
			ok &= CppAD::NearEqual(
			check_z_u[j], solution.z_u[j], rel_tol, abs_tol
			);
		}
	}

	return ok;
}


Input File: ipopt_cppad/ipopt_cppad_simple.cpp
8.1.1.3: Example Simultaneous Solution of Forward and Inverse Problem

8.1.1.3.a: Contents
8.1.1.3.1: An ODE Forward Problem Example
8.1.1.3.2: An ODE Inverse Problem Example
8.1.1.3.3: Simulating ODE Measurement Values
8.1.1.3.4: ipopt_cppad_nlp ODE Problem Representation
8.1.1.3.5: ipopt_cppad_nlp ODE Example Source Code

Input File: omh/ipopt_cppad_ode1.omh
8.1.1.3.1: An ODE Forward Problem Example

8.1.1.3.1.a: Problem
We consider the following ordinary differential equation:  \[
\begin{array}{rcl}
     \partial_t y_0 ( t , a ) & = & - a_1 * y_0 (t, a )  
     \\
     \partial_t y_1 (t , a ) & = & + a_1 * y_0 (t, a ) - a_2 * y_1 (t, a )
\end{array}
\] 
with the initial conditions  \[
y_0 (0 , a) = F(a) = \left( \begin{array}{c} a_0 \\ 0 \end{array} \right) 
\] 
where  a \in \R^3  is an unknown parameter vector and  F : \R^3 \rightarrow \R^2  is defined by the equation above. Our forward problem is stated as follows: Given  a \in \R^3 determine the value of  y ( t , a )  for various values of  t  .

8.1.1.3.1.b: Numerical Procedure
Our numerical procedure for solving the forward problem starts with  \[
     y^0 = y(0, a) = \left( \begin{array}{c} a_0 \\ 0 \end{array} \right)
\] 
Given an approximation value  y^M  for  y ( s_M , a ) , the trapezoidal method approximates  y ( s_{M+1} , a ) (denoted by  y^{M+1} ) by solving the equation  \[
y^{M+1}  =  y^M + 
\left[ G( y^M , a ) + G( y^{M+1} , a ) \right] * \frac{s_{M+1} - s_M }{ 2 }
\] 
where  G : \R^2 \times \R^3 \rightarrow \R^2 is the function representing this ODE; i.e.  \[
     G(y, a) = \left(  \begin{array}{c}
          - a_1 * y_0
          \\
          + a_1 * y_0 - a_2 * y_1
     \end{array} \right)
\] 
This  G(y, a) is linear with respect to  y , hence the implicit equation defining  y^{M+1}  can be solved by inverting a set of linear equations. In the general case, where  G(y, a) is non-linear with respect to  y , an iterative procedure is used to calculate  y^{M+1} from  y^M .
Input File: omh/ipopt_cppad_ode2.omh
8.1.1.3.2: An ODE Inverse Problem Example

8.1.1.3.2.a: Problem
We define  y : \R \times \R^3 \rightarrow \R^2 as the solution of our 8.1.1.3.1.a: ode forward problem . Suppose we are also given measurement values  z_k \in \R for  k = 1, 2, 3, 4 that are modeled by  \[
     z_k = y_1 ( s_k , a) + e_k
\] 
where  s_k \in \R ,  e_k \sim {\bf N} (0 , \sigma^2 )  , and  \sigma \in \R_+ . The maximum likelihood estimate for  a given  ( z_1 , z_2 , z_3 , z_4 ) solves the following inverse problem  \[
\begin{array}{rcl}
{\rm minimize} \; 
     & \sum_{k=1}^4 H_k [ y( s_k , a ) , a ] 
     & \;{\rm w.r.t} \; a \in \R^3
\end{array}
\] 
where the function  H_k : \R^2 \times \R^3 \rightarrow \R is defined by  \[
     H_k (y, a) = ( z_k - y_1 )^2 
\] 


8.1.1.3.2.b: Black Box Method
A common approach to an inverse problem is to treat the forward problem as a black box (that we do not look inside of or try to understand). In this approach, for each value of the parameter vector  a one uses a 8.1.1.3.1.b: numerical procedure to solve for  y_1 ( s_k , a ) for  k = 1 , 2 , 3, 4 .

8.1.1.3.2.b.a: Two levels of Iteration
As noted above, this numerical procedure often involves iterative procedures for solving a set of equations. Thus, in this approach, there are two levels of iterations, one with respect to the parameter values and the other for solving the forward problem.

8.1.1.3.2.b.b: Derivatives
In addition, in the black box approach, differentiating the forward problem often involves differentiating an iterative procedure. Since the iterative procedure only returns an approximate solution, finite differences often lead to very inaccurate approximations for the derivatives (which in turn create problems for the optimization process). On the other hand, direct application of AD to compute the derivatives requires a huge amount of memory and calculations to differentiate the forward iterative procedure. (There are special techniques for applying AD to the solutions of iterative procedures, but that is outside the scope of this presentation).

8.1.1.3.2.c: Simultaneous Method
The simultaneous forward and inverse method uses constraints to include the solution of the forward problem in the inverse problem. To be specific for our example,  \[
\begin{array}{rcl}
{\rm minimize} 
     & \sum_{k=1}^4 H_k( y^{k * ns} , a )
     & \; {\rm w.r.t} \; y^0 \in \R^2 , \ldots , y^{4 * ns} \in \R^2 ,
     \; a \in \R^3 
\\
{\rm subject \; to}
     & y^{M+1}  =  y^M + 
\left[ G( y^M , a ) + G( y^{M+1} , a ) \right] * \frac{ s_{M+1} - s_M }{ 2 }
     & \; {\rm for} \; M = 0 , \ldots , 4 * ns - 1
\\
     & y^0 = F(a)
\end{array}
\] 
where  ns is the number of time intervals (used by the trapezoidal approximation) between each of the measurement times. Note that, in this form, the iterations of the optimization procedure also solve the forward problem equations. In addition, the functions that need to be differentiated do not involve an iterative procedure.
Input File: omh/ipopt_cppad_ode2.omh
8.1.1.3.3: Simulating ODE Measurement Values

8.1.1.3.3.a: Forward Analytic Solution
The forward problem has the following closed form analytic solution  \[
\begin{array}{rcl}
     y_0 (t , a) & = & a_0 * \exp( - a_1 * t )
     \\
     y_1 (t , a) & = & 
     a_0 * a_1 * \frac{\exp( - a_2 * t ) - \exp( -a_1 * t )}{ a_1 - a_2 }
\end{array}
\] 


8.1.1.3.3.b: Simulation Parameter Values
 \bar{a}_0 = 1   initial value of  y_0 (t, a)
 \bar{a}_1 = 2   transfer rate from compartment zero to compartment one
 \bar{a}_2 = 1   transfer rate from compartment one to outside world
 \sigma = 0   standard deviation of measurement noise
 e_k = 0   simulated measurement noise,  k = 1 , 2 , 3, 4
 s_k = k * .5   time corresponding to the k-th measurement,  k = 1 , 2 , 3, 4

8.1.1.3.3.c: Measurement Values
The simulated measurement values are given by the equation  \[
\begin{array}{rcl}
z_k 
& = &  y_1 ( s_k , \bar{a} ) + e_k
\\
& = & 
\bar{a}_0 * \bar{a}_1 * 
     \frac{\exp( - \bar{a}_2 * s_k ) - \exp( -\bar{a}_1 * s_k )}
          { \bar{a}_1 - \bar{a}_2 }
\end{array}
\] 
for  k = 1, 2, 3, 4 .
Input File: omh/ipopt_cppad_ode2.omh
8.1.1.3.4: ipopt_cppad_nlp ODE Problem Representation

8.1.1.3.4.a: Purpose
In this section we represent the objective and constraint functions, (in the simultaneous forward and reverse optimization problem) in terms of much simpler functions that are faster to differentiate (either by hand coding or by using AD).

8.1.1.3.4.b: Trapezoidal Time Grid
The discrete time grid, used for the trapezoidal approximation, is denoted by  \{ t_M \}  . For  k = 1 , 2 , 3, 4 , and  \ell = 0 , \ldots , ns , we define  \[
\begin{array}{rcl}
     \Delta_k & = & ( s_k - s_{k-1} ) / ns
     \\
     t_{ (k-1) * ns + \ell } & = &  s_{k-1} + \Delta_k * \ell
\end{array}
\] 
Note that for  M = 1 , \ldots , 4 * ns  ,  y^M denotes our approximation for  y( t_M , a ) ,  t_0 is equal to 0, and  t_{k*ns} is equal to  s_k .

8.1.1.3.4.c: Argument Vector
The argument vector that we are optimizing with respect to (  x in 8.1.1: ipopt_cppad_nlp ) has the following structure  \[
     x = \left( \begin{array}{c}
          y^0 \\ \vdots \\ y^{4 * ns} \\ a
     \end{array} \right)
\] 
Note that  x \in \R^{2 *(4 * ns + 1) + 3} and  \[
\begin{array}{rcl}
     y^M & = & ( x_{2 * M} , x_{2 * M + 1} )^\T
     \\
     a   & = & ( x_{8 * ns + 2} , x_{8 * ns + 3} , x_{8 * ns + 4} )^\T
\end{array}
\] 


8.1.1.3.4.d: Objective
The objective function (  fg_0 (x) in 8.1.1: ipopt_cppad_nlp ) has the following representation,  \[
     fg_0 (x) 
     = \sum_{k=1}^4 H_k ( y^{k * ns} , a ) 
     = \sum_{k=0}^3 r^k ( u^{k,0} )
\] 
where  r^k = H_{k+1} and  u^{k,0} = ( y^{(k+1) * ns} , a ) for  k = 0, 1, 2, 3 . The range index (in the vector  fg (x) ) corresponding to each term in the objective is 0. The domain components (in the vector  x ) corresponding to the k-th term are  \[
(    x_{2 * (k+1) * ns} , 
     x_{2 * (k+1) * ns + 1} , 
     x_{8 * ns + 2} , 
     x_{8 * ns + 3} , 
     x_{8 * ns + 4} 
)
= u^{k,0} 
= ( y_0^{(k+1) * ns} , y_1^{(k+1) * ns} , a_0, a_1, a_2 ) 
\] 


8.1.1.3.4.e: Initial Condition
We define the function  r^k : \R^2 \times \R^3 \rightarrow \R^2 for  k = 4 by  \[
     r^k ( u ) = r^k ( y , a ) = y - F(a)
\] 
where  u  = ( y , a) . Using this notation, and the function  fg (x) in 8.1.1: ipopt_cppad_nlp , the constraint becomes  \[
\begin{array}{rcl}
     fg_1 (x) & = & r_0^4 ( u^{4,0} ) = r_0^4 ( y^0 , a)
     \\
     fg_2 (x) & = & r_1^4 ( u^{4,0} ) = r_1^4 ( y^0 , a)
\end{array}
\] 
The range indices (in the vector  fg (x) ) corresponding to this constraint are  ( 1, 2 ) . The domain components (in the vector  x ) corresponding to this constraint are  \[
(    x_0 , 
     x_1 , 
     x_{8 * ns + 2} , 
     x_{8 * ns + 3} , 
     x_{8 * ns + 4} 
)
=
u^{4,0}
=
( y_0^0 , y_1^0 , a_0 , a_1 , a_2 ) 
\] 


8.1.1.3.4.f: Trapezoidal Approximation
For  k = 5 , 6, 7 , 8 , and  \ell = 0 , \ldots , ns - 1 , define  M = (k - 5) * ns + \ell . The corresponding trapezoidal approximation is represented by the constraint  \[
0 = y^{M+1}  -  y^{M} - 
\left[ G( y^{M} , a ) + G( y^{M+1} , a ) \right] * \frac{\Delta_k }{ 2 }
\] 
For  k = 5, 6, 7, 8 , we define the function  r^k : \R^2 \times \R^2 \times \R^3 \rightarrow \R^2 by  \[
r^k ( y , w , a ) = w - y - [ G( y , a ) + G( w , a ) ] * \frac{ \Delta_k }{ 2 }
\] 
Using this notation, (and the function  fg (x)  in 8.1.1: ipopt_cppad_nlp ) the constraint becomes  \[
\begin{array}{rcl}
fg_{2 * M + 3} (x)  & = & r_0^k ( u^{k,\ell} ) = r_0^k ( y^M , y^{M+1} , a )
\\
fg_{2 * M + 4} (x)  & = & r_1^k ( u^{k,\ell} ) = r_1^k ( y^M , y^{M+1} , a )
\end{array} 
\] 
where  M = (k - 5) * ns + \ell . The range indices (in the vector  fg (x) ) corresponding to this constraint are  ( 2 * M + 3 , 2 * M + 4 ) . The domain components (in the vector  x ) corresponding to this constraint are  \[
(    x_{2 * M} , 
     x_{2 * M + 1} , 
     x_{2 * M + 2} ,
     x_{2 * M + 3} ,
     x_{8 * ns + 2} , 
     x_{8 * ns + 3} , 
     x_{8 * ns + 4} 
)
= u^{k, \ell}
= ( y_0^M, y_1^M , y_0^{M+1} , y_1^{M+1} , a_0 , a_1 , a_2 ) 
\] 

Input File: omh/ipopt_cppad_ode2.omh
8.1.1.3.5: ipopt_cppad_nlp ODE Example Source Code

8.1.1.3.5.a: Source Code
Almost all the code below is for the general problem (where nd, ny, na, and ns are arbitrary) but some of it is for the specific case defined by the function y_one(t) and discussed in the previous sections.
/* --------------------------------------------------------------------------
CppAD: C++ Algorithmic Differentiation: Copyright (C) 2003-08 Bradley M. Bell

CppAD is distributed under multiple licenses. This distribution is under
the terms of the 
                    Common Public License Version 1.0.

A copy of this license is included in the COPYING file of this distribution.
Please visit http://www.coin-or.org/CppAD/ for information on other licenses.
-------------------------------------------------------------------------- */

# include "ipopt_cppad_nlp.hpp"

// include a definition for Number.
typedef Ipopt::Number Number;

namespace {
	//------------------------------------------------------------------
	// simulated data
	Number a0 = 1.;  // simulation value for a[0]
	Number a1 = 2.;  // simulation value for a[1]
	Number a2 = 1.;  // simulation value for a[2]

	// function used to simulate data
	Number y_one(Number t)
	{	Number y_1 =  a0*a1 * (exp(-a2*t) - exp(-a1*t)) / (a1 - a2);
		return y_1;
	}

	// time points where we have data (no data at first point)
	double s[] = { 0.0,        0.5,        1.0,        1.5,        2.0 }; 
	// Simulated data for case with no noise (first point is not used)
	double z[] = { 0.0,  y_one(0.5), y_one(1.0), y_one(1.5), y_one(2.0) };
}
// ---------------------------------------------------------------------------
namespace { // Begin empty namespace 

size_t nd = sizeof(s)/sizeof(s[0]) - 1; // number of actual data values
size_t ny = 2;   // dimension of y(t, a) 
size_t na = 3;   // dimension of a 
size_t ns = 5;   // number of grid intervals between each data value 

// F(a) = y(0, a); i.e., initial condition
template <class Vector>
Vector eval_F(const Vector &a)
{	// This particular F is a case where ny == 2 and na == 3	
	Vector F(ny);
	// y_0 (t) = a[0]*exp(-a[1] * t)
	F[0] = a[0];
	// y_1 (t) = a[0]*a[1]*(exp(-a[2] * t) - exp(-a[1] * t))/(a[1] - a[2])
	F[1] = 0.; 
	return F;
}
// G(y, a) =  y'(t, a); i.e. ODE
template <class Vector>
Vector eval_G(const Vector &y , const Vector &a)
{	// This particular G is for a case where ny == 2 and na == 3
	Vector G(ny);
	// y_0 (t) = a[0]*exp(-a[1] * t)
	G[0] = -a[1] * y[0];  
	// y_1 (t) = a[0]*a[1]*(exp(-a[2] * t) - exp(-a[1] * t))/(a[1] - a[2])
	G[1] = +a[1] * y[0] - a[2] * y[1]; 
	return G;
} 
// H(k, y, a) = contribution to objective at k-th data point
template <class Scalar, class Vector>
Scalar eval_H(size_t k, const Vector &y, const Vector &a)
{	// This particular H is for a case where y_1 (t) is measured
	Scalar diff = z[k] - y[1];
 	return diff * diff;
}

// -----------------------------------------------------------------------------
class FG_info : public ipopt_cppad_fg_info
{
private:
	bool retape_;
public:
	// derived class part of constructor
	FG_info(bool retape)
	: retape_ (retape)
	{ }
	// r^k for k = 0, 1, ..., nd-1 corresponds to the data value
	// r^k for k = nd corresponds to initial condition
	// r^k for k = nd+1 , ... , 2*nd is used for trapezoidal approximation
	size_t number_functions(void)
	{	return nd + 1 + nd; }
	ADVector eval_r(size_t k, const ADVector &u)
	{	size_t j;
		ADVector y_M(ny), a(na);
		if( k < nd )
		{	// r^k for k = 0, ... , nd-1
			// We use a different k for each data point
			ADVector r(1); // return value is a scalar
			size_t j;
			// u is [y( s[k+1] ) , a] 
			for(j = 0; j < ny; j++)
				y_M[j] = u[j];
			for(j = 0; j < na; j++)
				a[j] = u[ny + j];
			r[0] = eval_H<ADNumber>(k+1, y_M, a);
			return r;
		}
		if( k == nd )
		{	// r^k for k = nd corresponds to initial condition
			ADVector r(ny), F(ny);
			// u is [y(0), a] 
			for(j = 0; j < na; j++)
				a[j] = u[ny + j];
			F    = eval_F(a);
			for(j = 0; j < ny; j++)
			{	y_M[j] = u[j];
				// y(0) - F(a)
				r[j]   = y_M[j] - F[j]; 
			}
			return  r;
		}
		// r^k for k = nd+1, ... , 2*nd
		// corresponds to trapezoidal approximations in the 
		// data interval [ s[k-nd] , s[k-nd-1] ]
		ADVector y_M1(ny);
		// u = [y_M, y_M1, a] where y_M is y(t) at 
		// t = t[ (k-nd-1) * ns + ell ] 
		for(j = 0; j < ny; j++)
		{	y_M[j]  = u[j];
			y_M1[j] = u[ny + j];
		}
		for(j = 0; j < na; j++)
			a[j] = u[2 * ny + j];
		Number dt      = (s[k-nd] - s[k-nd-1]) / Number(ns);
		ADVector G_M   = eval_G(y_M,  a);
		ADVector G_M1  = eval_G(y_M1, a);
		ADVector r(ny);
		for(j = 0; j < ny; j++)
			r[j] = y_M1[j] - y_M[j] - (G_M1[j] + G_M[j]) * dt/2.;
		return r;
	}
	// Operation sequence does not depend on u so retape = false should
	// work and be faster. Allow for both for testing.
	bool retape(size_t k)
	{	return retape_; }
	// size of the vector u in eval_r
	size_t domain_size(size_t k)
	{	if( k < nd )
			return ny + na;   // objective function
		if( k == nd )
			return ny + na;  // initial value constraint
		return 2 * ny + na;      // trapezoidal constraints
	}
	// size of the vector r in eval_r
	size_t range_size(size_t k)
	{	if( k < nd )
			return 1;
		return ny; 
	}
	size_t number_terms(size_t k)
	{	if( k <= nd )
			return 1;     // r^k used once for k <= nd
		return ns;            // r^k used ns times for k > nd
	}
	void index(size_t k, size_t ell, SizeVector& I, SizeVector& J)
	{	size_t i, j;
		// number of components of x corresponding to value of y
		size_t ny_inx = (nd * ns + 1) * ny;
		if( k < nd )
		{	// r^k for k = 0 , ... , nd-1 corresponds to objective
			I[0] = 0;
			// The first ny components of u is y(t) at 
			// 	t = s[k+1] = t[(k+1)*ns]
			// components of x corresponding to this value of y
			for(j = 0; j < ny; j++)
				J[j] = (k + 1) * ns * ny + j;
			// components of x corresponding to a
			for(j = 0; j < na; j++)
				J[ny + j] = ny_inx + j; 
			return;
		}
		if( k == nd )
		{	// r^k corresponds to initial condition
			for(i = 0; i < ny; i++)
				I[i] = 1 + i;
			// u starts with the first j components of x
			// (which correspond to y(t) at t[0])
			for(j = 0; j < ny; j++)
				J[j] = j;
			// following that, u contains the vector a 
			for(j = 0; j < na; j++)
				J[ny + j] = ny_inx + j;
			return;
		}
		// index of first grid point in ts for difference equation
		size_t M = (k - nd - 1) * ns + ell;
		for(j = 0; j < ny; j++)
		{	J[j]          = M * ny  + j; // index of y_M in x
			J[ny + j]     = J[j] + ny;   // index of y_M1
		}
		for(j = 0; j < na; j++)
			J[2 * ny + j] = ny_inx + j;                      // a
		// There are ny difference equations for each grid point.
		// Add one for the objective function index, and ny for the
		// initial value constraint.
		for(i = 0; i < ny; i++)
			I[i] = 1 + ny + M * ny + i ;
	} 
};

} // End empty namespace
// ---------------------------------------------------------------------------

bool ipopt_cppad_ode(void)
{	bool ok = true;
	size_t j, I;

	// number of components of x corresponding to value of y
	size_t ny_inx = (nd * ns + 1) * ny;
	// number of constraints (range dimension of g)
	size_t m = ny + nd * ns * ny;
	// number of components in x (domain dimension for f and g)
	size_t n = ny_inx + na;
	// the argument vector for the optimization is 
	// y(t) at t[0] , ... , t[nd*ns] , followed by a
	NumberVector x_i(n), x_l(n), x_u(n);
	for(j = 0; j < ny_inx; j++)
	{	x_i[j] = 0.;       // initial y(t) for optimization
		x_l[j] = -1.0e19;  // no lower limit
		x_u[j] = +1.0e19;  // no upper limit
	}
	for(j = 0; j < na; j++)
	{	x_i[ny_inx + j ] = .5;       // initial a for optimization
		x_l[ny_inx + j ] =  -1.e19;  // no lower limit
		x_u[ny_inx + j ] =  +1.e19;  // no upper
	}
	// all of the difference equations are constrained to the value zero
	NumberVector g_l(m), g_u(m);
	for(I = 0; I < m; I++)
	{	g_l[I] = 0.;
		g_u[I] = 0.;
	}
	// derived class object
	
	for(size_t icase = 0; icase <= 1; icase++)
	{	// Retaping is slow, so only do icase = 0 for large values 
		// of ns.
		bool retape = icase != 0;

		// object defining the objective f(x) and constraints g(x)
		FG_info fg_info(retape);

		// create the CppAD Ipopt interface
		ipopt_cppad_solution solution;
		Ipopt::SmartPtr<Ipopt::TNLP> cppad_nlp = new ipopt_cppad_nlp(
			n, m, x_i, x_l, x_u, g_l, g_u, &fg_info, &solution
		);

		// Create an Ipopt application
		using Ipopt::IpoptApplication;
		Ipopt::SmartPtr<IpoptApplication> app = new IpoptApplication();

		// turn off any printing
		app->Options()->SetIntegerValue("print_level", -2);

		// maximum number of iterations
		app->Options()->SetIntegerValue("max_iter", 30);

		// approximate accuracy in first order necessary conditions;
		// see Mathematical Programming, Volume 106, Number 1, 
		// Pages 25-57, Equation (6)
		app->Options()->SetNumericValue("tol", 1e-9);

		// Derivative testing is very slow for large problems
		// so comment this out if you use a large value for ns.
		app->Options()-> SetStringValue(
			"derivative_test", "second-order"
		);

		// Initialize the application and process the options
		Ipopt::ApplicationReturnStatus status = app->Initialize();
		ok    &= status == Ipopt::Solve_Succeeded;

		// Run the application
		status = app->OptimizeTNLP(cppad_nlp);
		ok    &= status == Ipopt::Solve_Succeeded;

		// split out return values
		NumberVector a(na), y_0(ny), y_1(ny), y_2(ny);
		for(j = 0; j < na; j++)
			a[j] = solution.x[ny_inx+j];
		for(j = 0; j < ny; j++)
		{	y_0[j] = solution.x[j];
			y_1[j] = solution.x[ny + j];
			y_2[j] = solution.x[2 * ny + j];
		} 

		// Check some of the return values
		Number rel_tol = 1e-2; // use a larger value of ns
		Number abs_tol = 1e-2; // to get better accuracy here.
		Number check_a[] = {a0, a1, a2}; // see the y_one function
		for(j = 0; j < na; j++)
		{
			ok &= CppAD::NearEqual( 
				check_a[j], a[j], rel_tol, abs_tol
			);
		}
		rel_tol = 1e-9;
		abs_tol = 1e-9;

		// check the initial value constraint
		NumberVector F = eval_F(a);
		for(j = 0; j < ny; j++)
			ok &= CppAD::NearEqual(F[j], y_0[j], rel_tol, abs_tol);

		// check the first trapezoidal equation
		NumberVector G_0 = eval_G(y_0, a);
		NumberVector G_1 = eval_G(y_1, a);
		Number dt = (s[1] - s[0]) / Number(ns);
		Number check;
		for(j = 0; j < ny; j++)
		{	check = y_1[j] - y_0[j] - (G_1[j]+G_0[j])*dt/2;
			ok &= CppAD::NearEqual( check, 0., rel_tol, abs_tol);
		}
		//
		// check the second trapezoidal equation
		NumberVector G_2 = eval_G(y_2, a);
		if( ns == 1 )
			dt = (s[2] - s[1]) / Number(ns);
		for(j = 0; j < ny; j++)
		{	check = y_2[j] - y_1[j] - (G_2[j]+G_1[j])*dt/2;
			ok &= CppAD::NearEqual( check, 0., rel_tol, abs_tol);
		}
		//
		// check the objective function (specialized to this case)
		check = 0.;
		NumberVector y_M(ny);
		for(size_t k = 0; k < nd; k++)
		{	for(j = 0; j < ny; j++)
			{	size_t M = (k + 1) * ns;
				y_M[j] =  solution.x[M * ny + j];
			}
			check += eval_H<Number>(k + 1, y_M, a);
		}
		Number obj_value = solution.obj_value;
		ok &= CppAD::NearEqual(check, obj_value, rel_tol, abs_tol);
	}
	return ok;
}


Input File: omh/ipopt_cppad_ode2.omh
8.1.2: Interfacing to C: Example and Test
 
# include <cppad/cppad.hpp>  // CppAD utilities
# include <cassert>        // assert macro

namespace { // Begin empty namespace
/*
Compute the value of a sum of Gaussians defined by a and evaluated at x
	y = sum_{i=0}^{n-1} a[3*i] * exp( - ((x - a[3*i+1]) / a[3*i+2])^2 )
where the floating point type is a template parameter
*/
template <class Float>
Float sumGauss(const Float &x, const CppAD::vector<Float> &a)   
{ 
	// number of components in a
	size_t na = a.size();

	// number of Gaussians
	size_t n = na / 3;

	// check the restrictions on na 
	assert( na == n * 3 );

	// declare temporaries used inside of loop
	Float ex, arg;

	// initialize sum
  	Float y = 0.; 

	// loop with respect to Gaussians
	size_t i;
	for(i = 0; i < n; i++)
	{
		arg =   (x - a[3*i+1]) / a[3*i+2]; 
		ex  =   exp(-arg * arg); 
		y  +=   a[3*i] * ex; 
	} 
	return y;
}
/*
Create a C function interface that computes both
	y = sum_{i=0}^{n-1} a[3*i] * exp( - ((x - a[3*i+1]) / a[3*i+2])^2 )
and its derivative with respect to the parameter vector a.
*/
extern "C"
void sumGauss(float x, float a[], float *y, float dyda[], size_t na)   
{	// Note that any simple vector could replace CppAD::vector; 
	// for example, std::vector, std::valarray

	// check the restrictions on na
	assert( na % 3 == 0 );  // mod(na, 3) = 0

	// use the shorthand ADfloat for the type CppAD::AD<float>
	typedef CppAD::AD<float> ADfloat;

	// vector for independent variables
	CppAD::vector<ADfloat> A(na);      // used with template function above
	CppAD::vector<float>   acopy(na);  // used for derivative calculations

	// vector for the dependent variables (there is only one)
	CppAD::vector<ADfloat> Y(1); 

	// copy the independent variables from C vector to CppAD vectors
	size_t i;
	for(i = 0; i < na; i++)
 		A[i] = acopy[i] = a[i];

	// declare that A is the independent variable vector
	CppAD::Independent(A);

	// value of x as an ADfloat object
	ADfloat X = x;

	// Evaluate template version of sumGauss with ADfloat as the template 
	// parameter. Set the dependent variable to the resulting value.
	Y[0] = sumGauss(X, A); 

	// create the AD function object F : A -> Y
	CppAD::ADFun<float> F(A, Y);

	// use Value to convert Y[0] to float and return y = F(a)  
	*y = CppAD::Value(Y[0]);

	// evaluate the derivative F'(a)
	CppAD::vector<float> J(na);
	J = F.Jacobian(acopy);

	// return the value of dyda = F'(a) as a C vector
	for(i = 0; i < na; i++)
		dyda[i] = J[i];

	return;
}
/*
Wrap CppAD::NearEqual so that namespace notation is not needed in Interface2C
*/
bool NearEqual(float x, float y, float r, float a)
{	return CppAD::NearEqual(x, y, r, a);
}

} // End empty namespace

bool Interface2C(void)
{	// This routine is intentionally coded as if it were a C routine
	// except for the fact that it uses the predefined type bool.
	bool ok = true; 

	// declare variables
	float x, a[6], y, dyda[6], tmp[6];
	size_t na, n, i;

	// number of parameters (3 for each Gaussian)
	na = 6;

	// number of Gaussians
	n  = na / 3;

	// value of x
	x = 1.;

	// value of the parameter vector a
	for(i = 0; i < na; i++)
		a[i] = (float) (i+1);

	// evaluate the function and its derivative
	sumGauss(x, a, &y, dyda, na);

	// compare dyda to a central difference approximation for the derivative
	for(i = 0; i < na; i++)
	{	// local variables
		float small, ai, yp, ym, dy_da;

		// We assume that the type float has at least 7 digits of 
		// precision, so we choose small to be about pow(10., -7./2.).
		small  = (float) 3e-4;

		// value of this component of a
		ai    = a[i];

		// evaluate F( a + small * ei )
		a[i]  = ai + small;
		sumGauss(x, a, &yp, tmp, na);

		// evaluate F( a - small * ei )
		a[i]  = ai - small;
		sumGauss(x, a, &ym, tmp, na);

		// evaluate the central difference approximation for this partial
		dy_da = (yp - ym) / (2 * small);

		// restore this component of a
		a[i]  = ai;

		ok   &= NearEqual(dyda[i], dy_da, small, small); 
	}
	return ok;
}

Input File: example/interface_2c.cpp
8.1.3: Gradient of Determinant Using Expansion by Minors: Example and Test
 

# include <cppad/cppad.hpp>
# include <cppad/speed/det_by_minor.hpp>
# include <complex>


typedef std::complex<double>     Complex;
typedef CppAD::AD<Complex>       ADComplex;
typedef CPPAD_TEST_VECTOR<ADComplex>   ADVector;

// ----------------------------------------------------------------------------

bool JacMinorDet()
{	bool ok = true;

	using namespace CppAD;

	size_t n = 2;

	// object for computing determinant
	det_by_minor<ADComplex> Det(n);

	// independent and dependent variable vectors
	CPPAD_TEST_VECTOR<ADComplex>  X(n * n);
	CPPAD_TEST_VECTOR<ADComplex>  D(1);

	// value of the independent variable
	size_t i;
	for(i = 0; i < n * n; i++)
		X[i] = Complex(int(i), -int(i));

	// set the independent variables
	Independent(X);

	// compute the determinant
	D[0] = Det(X); 

	// create the function object
	ADFun<Complex> f(X, D);

	// argument value
	CPPAD_TEST_VECTOR<Complex>     x( n * n );
	for(i = 0; i < n * n; i++)
		x[i] = Complex(2 * i, i);

	// first derivative of the determinant
	CPPAD_TEST_VECTOR<Complex> J( n * n );
	J = f.Jacobian(x);

	/*
	f(x)     = x[0] * x[3] - x[1] * x[2]
	f'(x)    = ( x[3], -x[2], -x[1], x[0] )
	*/
	Complex Jtrue[] = { x[3], -x[2], -x[1], x[0] };
	for(i = 0; i < n * n; i++)
		ok &= Jtrue[i] == J[i];

	return ok;

}


Input File: example/jac_minor_det.cpp
8.1.4: Gradient of Determinant Using LU Factorization: Example and Test
 

# include <cppad/cppad.hpp>
# include <cppad/speed/det_by_lu.hpp>

bool JacLuDet()
{	bool ok = true;

	using namespace CppAD;

	size_t n = 2;

	// object for computing determinants
	det_by_lu<ADComplex> Det(n);

	// independent and dependent variable vectors
	CPPAD_TEST_VECTOR<ADComplex>  X(n * n);
	CPPAD_TEST_VECTOR<ADComplex>  D(1);

	// value of the independent variable
	size_t i;
	for(i = 0; i < n * n; i++)
		X[i] = Complex(int(i), -int(i));

	// set the independent variables
	Independent(X);

	// compute the determinant
	D[0]  = Det(X);

	// create the function object
	ADFun<Complex> f(X, D);

	// argument value
	CPPAD_TEST_VECTOR<Complex>     x( n * n );
	for(i = 0; i < n * n; i++)
		x[i] = Complex(2 * i, i);

	// first derivative of the determinant
	CPPAD_TEST_VECTOR<Complex> J( n * n );
	J = f.Jacobian(x);

	/*
	f(x)     = x[0] * x[3] - x[1] * x[2]
	*/
	Complex Jtrue[]  = { x[3], -x[2], -x[1], x[0] };
	for( i = 0; i < n*n; i++)
		ok &= NearEqual( Jtrue[i], J[i], 1e-10 , 1e-10 );

	return ok;
}


Input File: example/jac_lu_det.cpp
8.1.5: Hessian of Determinant Using Expansion by Minors: Example and Test
 

# include <cppad/cppad.hpp>
# include <cppad/speed/det_by_minor.hpp>
# include <complex>

typedef std::complex<double>     Complex;
typedef CppAD::AD<Complex>       ADComplex;
typedef CPPAD_TEST_VECTOR<ADComplex>   ADVector;

// ----------------------------------------------------------------------------

bool HesMinorDet()
{	bool ok = true;

	using namespace CppAD;

	size_t n = 2;

	// object for computing determinants
	det_by_minor<ADComplex> Det(n);

	// independent and dependent variable vectors
	CPPAD_TEST_VECTOR<ADComplex>  X(n * n);
	CPPAD_TEST_VECTOR<ADComplex>  D(1);

	// value of the independent variable
	size_t i;
	for(i = 0; i < n * n; i++)
		X[i] = Complex(int(i), -int(i));

	// set the independent variables
	Independent(X);

	// compute the determinant
	D[0] = Det(X); 

	// create the function object
	ADFun<Complex> f(X, D);

	// argument value
	CPPAD_TEST_VECTOR<Complex>     x( n * n );
	for(i = 0; i < n * n; i++)
		x[i] = Complex(2 * i, i);

	// second derivative (Hessian) of the determinant
	CPPAD_TEST_VECTOR<Complex> H( n * n * n * n);
	H = f.Hessian(x, 0);

	/*
	f(x)     = x[0] * x[3] - x[1] * x[2]
	f'(x)    = ( x[3], -x[2], -x[1], x[0] )
	*/
	Complex zero(0., 0.);
	Complex one(1., 0.);
	Complex Htrue[]  = { 
		zero, zero, zero,  one,
		zero, zero, -one, zero,
		zero, -one, zero, zero,
		 one, zero, zero, zero
	};
	for( i = 0; i < n*n*n*n; i++)
		ok &= Htrue[i] == H[i];

	return ok;

}


Input File: example/hes_minor_det.cpp
8.1.6: Hessian of Determinant Using LU Factorization: Example and Test
 

# include <cppad/cppad.hpp>
# include <cppad/speed/det_by_lu.hpp>

bool HesLuDet(void)
{	bool ok = true;

	using namespace CppAD;

	typedef std::complex<double> Complex;

	size_t n = 2;

	// object for computing determinants
	det_by_lu< AD<Complex> > Det(n);

	// independent and dependent variable vectors
	CPPAD_TEST_VECTOR< AD<Complex> >  X(n * n);
	CPPAD_TEST_VECTOR< AD<Complex> >  D(1);

	// value of the independent variable
	size_t i;
	for(i = 0; i < n * n; i++)
		X[i] = Complex(int(i), -int(i) );

	// set the independent variables
	Independent(X);

	D[0]  = Det(X);

	// create the function object
	ADFun<Complex> f(X, D);

	// argument value
	CPPAD_TEST_VECTOR<Complex>     x( n * n );
	for(i = 0; i < n * n; i++)
		x[i] = Complex(2 * i, i);

	// second derivative (Hessian) of the determinant
	CPPAD_TEST_VECTOR<Complex> H( n * n * n * n );
	H = f.Hessian(x, 0);

	/*
	f(x)     = x[0] * x[3] - x[1] * x[2]
	f'(x)    = ( x[3], -x[2], -x[1], x[0] )
	*/
	Complex zero(0., 0.);
	Complex one(1., 0.);
	Complex Htrue[]  = { 
		zero, zero, zero,  one,
		zero, zero, -one, zero,
		zero, -one, zero, zero,
		 one, zero, zero, zero
	};
	for( i = 0; i < n*n*n*n; i++)
		ok &= NearEqual( Htrue[i], H[i], 1e-10 , 1e-10 );

	return ok;
}


Input File: example/hes_lu_det.cpp
8.1.7: A Stiff Ode: Example and Test
Define  x : \R \rightarrow \R^2 by  \[
\begin{array}{rcl}
     x_0 (0)        & = & 1 \\
     x_1 (0)        & = & 0 \\
     x_0^\prime (t) & = & - a_0 x_0 (t) \\
     x_1^\prime (t) & = & + a_0 x_0 (t) - a_1 x_1 (t)
\end{array}
\] 
If  a_0 \gg a_1 > 0 , this is a stiff ODE and the analytic solution is  \[
\begin{array}{rcl}
x_0 (t)    & = & \exp( - a_0 t ) \\
x_1 (t)    & = & a_0 [ \exp( - a_1 t ) - \exp( - a_0 t ) ] / ( a_0 - a_1 ) 
\end{array}
\] 
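As a consistency check, differentiating the expression for  x_1 (t)  above recovers its differential equation:  \[
\begin{array}{rcl}
x_1^\prime (t) & = & a_0 [ - a_1 \exp( - a_1 t ) + a_0 \exp( - a_0 t ) ] / ( a_0 - a_1 ) \\
               & = & a_0 \exp( - a_0 t ) - a_1 \, a_0 [ \exp( - a_1 t ) - \exp( - a_0 t ) ] / ( a_0 - a_1 ) \\
               & = & + a_0 x_0 (t) - a_1 x_1 (t)
\end{array}
\] 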
The example compares Rosen34, Runge45, and Gear5 solutions using the relations above:
 

# include <cppad/cppad.hpp> 

// To print the comparison, change the 0 to 1 on the next line.
# define CppADOdeStiffPrint 0

namespace {
	// --------------------------------------------------------------
	class Fun {
	private:
		CPPAD_TEST_VECTOR<double> a;
	public:
		// constructor
		Fun(const CPPAD_TEST_VECTOR<double>& a_) : a(a_)
		{ }
		// compute f(t, x) 
		void Ode(
			const double                    &t, 
			const CPPAD_TEST_VECTOR<double> &x, 
			CPPAD_TEST_VECTOR<double>       &f)
		{	f[0]  = - a[0] * x[0];
			f[1]  = + a[0] * x[0] - a[1] * x[1]; 
		}
		// compute partial of f(t, x) w.r.t. t 
		void Ode_ind(
			const double                    &t, 
			const CPPAD_TEST_VECTOR<double> &x, 
			CPPAD_TEST_VECTOR<double>       &f_t)
		{	f_t[0] = 0.;
			f_t[1] = 0.;
		}
		// compute partial of f(t, x) w.r.t. x 
		void Ode_dep(
			const double                    &t, 
			const CPPAD_TEST_VECTOR<double> &x, 
			CPPAD_TEST_VECTOR<double>       &f_x)
		{	f_x[0] = -a[0];  
			f_x[1] = 0.;
			f_x[2] = +a[0];
			f_x[3] = -a[1];
		}
	};
	// --------------------------------------------------------------
	class RungeMethod {
	private:
		Fun F;
	public:
		// constructor
		RungeMethod(const CPPAD_TEST_VECTOR<double> &a_) : F(a_)
		{ }
		void step(
			double                     ta , 
			double                     tb , 
			CPPAD_TEST_VECTOR<double> &xa ,
			CPPAD_TEST_VECTOR<double> &xb ,
			CPPAD_TEST_VECTOR<double> &eb )
		{	xb = CppAD::Runge45(F, 1, ta, tb, xa, eb);
		}
		size_t order(void)
		{	return 5; }
	};
	class RosenMethod {
	private:
		Fun F;
	public:
		// constructor
		RosenMethod(const CPPAD_TEST_VECTOR<double> &a_) : F(a_)
		{ }
		void step(
			double                     ta , 
			double                     tb , 
			CPPAD_TEST_VECTOR<double> &xa ,
			CPPAD_TEST_VECTOR<double> &xb ,
			CPPAD_TEST_VECTOR<double> &eb )
		{	xb = CppAD::Rosen34(F, 1, ta, tb, xa, eb);
		}
		size_t order(void)
		{	return 4; }
	};
}

bool OdeStiff(void)
{	bool ok = true;     // initial return value

	CPPAD_TEST_VECTOR<double> a(2);
	a[0] = 1e3;
	a[1] = 1.;
	RosenMethod rosen(a);
	RungeMethod runge(a);
	Fun          gear(a);

	CPPAD_TEST_VECTOR<double> xi(2);
	xi[0] = 1.;
	xi[1] = 0.;

	CPPAD_TEST_VECTOR<double> eabs(2);
	eabs[0] = 1e-6;
	eabs[1] = 1e-6;

	CPPAD_TEST_VECTOR<double> ef(2);
	CPPAD_TEST_VECTOR<double> xf(2);
	CPPAD_TEST_VECTOR<double> maxabs(2);
	size_t                nstep;

	size_t k;
	for(k = 0; k < 3; k++)
	{	
		size_t M    = 5;
		double ti   = 0.;
		double tf   = 1.;
		double smin = 1e-7;
		double sini = 1e-7;
		double smax = 1.;
		double scur = .5;
		double erel = 0.;

		const char *method;
		if( k == 0 )
		{	method = "Rosen34";
			xf = CppAD::OdeErrControl(rosen, ti, tf, 
			xi, smin, smax, scur, eabs, erel, ef, maxabs, nstep);
		}
		else if( k == 1 )
		{	method = "Runge45";
			xf = CppAD::OdeErrControl(runge, ti, tf, 
			xi, smin, smax, scur, eabs, erel, ef, maxabs, nstep);
		}
		else if( k == 2 )
		{	method = "Gear5";
			xf = CppAD::OdeGearControl(gear, M, ti, tf,
			xi, smin, smax, sini, eabs, erel, ef, maxabs, nstep);
		}
		double x0 = exp(-a[0]*tf);
		ok &= CppAD::NearEqual(x0, xf[0], 0., eabs[0]);
		ok &= CppAD::NearEqual(0., ef[0], 0., eabs[0]);

		double x1 = a[0] * 
			(exp(-a[1]*tf) - exp(-a[0]*tf))/(a[0] - a[1]);
		ok &= CppAD::NearEqual(x1, xf[1], 0., eabs[1]);
		ok &= CppAD::NearEqual(0., ef[1], 0., eabs[1]);
# if CppADOdeStiffPrint
		std::cout << "method     = " << method << std::endl;
		std::cout << "nstep      = " << nstep  << std::endl;
		std::cout << "x0         = " << x0 << std::endl;
		std::cout << "xf[0]      = " << xf[0] << std::endl;
		std::cout << "x0 - xf[0] = " << x0 - xf[0] << std::endl;
		std::cout << "ef[0]      = " << ef[0] << std::endl;
		std::cout << "x1         = " << x1 << std::endl;
		std::cout << "xf[1]      = " << xf[1] << std::endl;
		std::cout << "x1 - xf[1] = " << x1 - xf[1] << std::endl;
		std::cout << "ef[1]      = " << ef[1] << std::endl;
# endif
	}

	return ok;
}


Input File: example/ode_stiff.cpp
8.1.8: Taylor's Ode Solver: An Example and Test

8.1.8.a: Purpose
This is a realistic example using two levels of taping (see 8.1.11: mul_level ). The first level of taping uses AD<double> to tape the solution of an ordinary differential equation. This solution is then differentiated with respect to a parameter vector. The second level of taping uses AD< AD<double> > to take derivatives during the solution of the differential equation. These derivatives are used in the application of Taylor's method to the solution of the ODE. The example 8.1.9: ode_taylor_adolc.cpp computes the same values using Adolc's type adouble and CppAD's type AD<adouble>.

8.1.8.b: ODE
For this example the ODE's are defined by the function  h : \R^n \times \R^n \rightarrow \R^n where  \[
     h[ x, y(t, x) ] = 
     \left( \begin{array}{c}
               x_0                     \\
               x_1 y_0 (t, x)          \\
               \vdots                  \\
               x_{n-1} y_{n-2} (t, x)
     \end{array} \right)
     = 
     \left( \begin{array}{c}
               \partial_t y_0 (t , x)      \\
               \partial_t y_1 (t , x)      \\
               \vdots                      \\
               \partial_t y_{n-1} (t , x) 
     \end{array} \right)
\] 
and the initial condition  y(0, x) = 0 . The value of  x is fixed during the solution of the ODE and the function  g : \R^n \rightarrow \R^n is used to define the ODE where  \[
     g(y) = 
     \left( \begin{array}{c}
               x_0     \\
               x_1 y_0 \\
               \vdots  \\
               x_{n-1} y_{n-2} 
     \end{array} \right)
\] 


8.1.8.c: ODE Solution
The solution for this example can be calculated by starting with the first row and then using the solution for the first row to solve the second and so on. Doing this we obtain  \[
     y(t, x ) =
     \left( \begin{array}{c}
          x_0 t                  \\
          x_1 x_0 t^2 / 2        \\
          \vdots                 \\
          x_{n-1} x_{n-2} \ldots x_0 t^n / n !
     \end{array} \right)
\] 
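This solution can be verified by differentiation: for  i \geq 1 , each component satisfies  \[
     \partial_t y_i (t, x) 
     = x_i x_{i-1} \cdots x_0 \, t^i / i ! 
     = x_i \, y_{i-1} (t, x)
\] 
which agrees with the corresponding component of  h [ x , y(t, x) ]  above (and  \partial_t y_0 (t, x) = x_0  for the first component).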


8.1.8.d: Derivative of ODE Solution
Differentiating the solution above, with respect to the parameter vector  x , we notice that  \[
\partial_x y(t, x ) =
\left( \begin{array}{cccc}
y_0 (t,x) / x_0      & 0                   & \cdots & 0      \\
y_1 (t,x) / x_0      & y_1 (t,x) / x_1     & 0      & \vdots \\
\vdots               & \vdots              & \ddots & 0      \\
y_{n-1} (t,x) / x_0  & y_{n-1} (t,x) / x_1 & \cdots & y_{n-1} (t,x) / x_{n-1}
\end{array} \right)
\] 
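Each nonzero entry above follows directly from the formula for  y(t, x) : the factor  x_j  appears exactly once in  y_i (t, x)  when  j \leq i , so  \[
\partial_{x_j} y_i (t, x) = \left\{ \begin{array}{ll}
     y_i (t, x) / x_j & {\rm if} \; j \leq i \\
     0                & {\rm otherwise}
\end{array} \right.
\] 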


8.1.8.e: Taylor's Method Using AD
A p-th order Taylor method for approximating the solution of an ordinary differential equation is  \[
     y(t + \Delta t , x) 
     \approx 
     \sum_{k=0}^p \partial_t^k y(t , x ) \frac{ \Delta t^k }{ k ! }
     =
     y^{(0)} (t , x ) + 
     y^{(1)} (t , x ) \Delta t + \cdots + 
     y^{(p)} (t , x ) \Delta t^p
\] 
where the Taylor coefficients  y^{(k)} (t, x) are defined by  \[
     y^{(k)} (t, x) = \partial_t^k y(t , x ) / k !
\] 
We define the function  z(t, x) by the equation  \[
     z ( t , x ) = g[ y ( t , x ) ] = h [ x , y( t , x ) ]
\] 
It follows that  \[
\begin{array}{rcl}
     \partial_t y(t, x) & = & z (t , x) 
     \\
      \partial_t^{k+1} y(t , x) & = & \partial_t^k z (t , x)
     \\
     y^{(k+1)} ( t , x) & = & z^{(k)} (t, x) / (k+1) 
\end{array}
\] 
where   z^{(k)} (t, x) is the k-th order Taylor coefficient for  z(t, x) . In the example below, the Taylor coefficients  \[
     y^{(0)} (t , x) , \ldots , y^{(k)} ( t , x )
\] 
are used to calculate the Taylor coefficient  z^{(k)} ( t , x ) which in turn gives the value for  y^{(k+1)} ( t , x ) .
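The recursion above can be checked with a minimal stand-alone sketch, using plain double arithmetic and no taping. The scalar ODE  y' = a y  and the function name taylor_ode_scalar are illustrative assumptions, not part of the example below:

```cpp
# include <cassert>
# include <cmath>
# include <cstddef>

// Scalar sketch of the Taylor coefficient recursion: for y'(t) = a * y(t)
// we have z = g(y) = a * y, so z^{(k)} = a * y^{(k)} and the recursion
// y^{(k+1)} = z^{(k)} / (k + 1) reproduces the Taylor series of exp(a * t).
double taylor_ode_scalar(
	double      a     ,  // coefficient that defines the ODE
	double      y0    ,  // y(0)
	double      dt    ,  // Delta t for each step
	std::size_t nstep ,  // number of steps to take
	std::size_t order )  // order of Taylor's method used
{	double y = y0;
	for(std::size_t ell = 0; ell < nstep; ell++)
	{	double y_k   = y;   // Taylor coefficient y^{(k)}, starting at k = 0
		double dt_kp = dt;  // dt^{k+1}
		for(std::size_t k = 0; k <= order; k++)
		{	double z_k  = a * y_k;              // z^{(k)} = a * y^{(k)}
			double y_kp = z_k / double(k + 1);  // y^{(k+1)} = z^{(k)} / (k+1)
			y     += y_kp * dt_kp;              // add this term to the solution
			dt_kp *= dt;                        // next power of dt
			y_k    = y_kp;                      // next Taylor coefficient
		}
	}
	return y;
}
```

With a = 1, y0 = 1, dt = 0.1, and ten steps, the result agrees with exp(1) to near machine precision once order is large enough for the truncation error to be negligible.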
 

# include <cppad/cppad.hpp>

// =========================================================================
// define types for each level
namespace { // BEGIN empty namespace
typedef CppAD::AD<double>     ADdouble;
typedef CppAD::AD< ADdouble > ADDdouble;

// -------------------------------------------------------------------------
// class definition for C++ function object that defines ODE
class Ode {
private:
	// copy of x that is set by constructor and used by g(y)
	CPPAD_TEST_VECTOR< ADdouble > x_; 
public:
	// constructor
	Ode( CPPAD_TEST_VECTOR< ADdouble > x) : x_(x)
	{ }
	// the function g(y) is evaluated with two levels of taping
	CPPAD_TEST_VECTOR< ADDdouble > operator()
	( const CPPAD_TEST_VECTOR< ADDdouble > &y) const
	{	size_t n = y.size();
		CPPAD_TEST_VECTOR< ADDdouble > g(n);
		size_t i;
		g[0] = x_[0];
		for(i = 1; i < n; i++)
			g[i] = x_[i] * y[i-1];

		return g;
	}
};

// -------------------------------------------------------------------------
// Routine that uses Taylor's method to solve ordinary differential equations
// and allows for algorithmic differentiation of the solution. 
CPPAD_TEST_VECTOR < ADdouble > taylor_ode(
	Ode                     G       ,  // function that defines the ODE
	size_t                  order   ,  // order of Taylor's method used
	size_t                  nstep   ,  // number of steps to take
	ADdouble                &dt     ,  // Delta t for each step
	CPPAD_TEST_VECTOR< ADdouble > &y_ini  )  // y(t) at the initial time
{
	// some temporary indices
	size_t i, k, ell;

	// number of variables in the ODE
	size_t n = y_ini.size();

	// copies of x and g(y) with two levels of taping
	CPPAD_TEST_VECTOR< ADDdouble >   Y(n), Z(n);

	// y, y^{(k)} , z^{(k)}, and y^{(k+1)}
	CPPAD_TEST_VECTOR< ADdouble >  y(n), y_k(n), z_k(n), y_kp(n);
	
	// initialize y
	for(i = 0; i < n; i++)
		y[i] = y_ini[i];

	// loop with respect to each step of Taylors method
	for(ell = 0; ell < nstep; ell++)
	{	// prepare to compute derivatives in ADdouble
		for(i = 0; i < n; i++)
			Y[i] = y[i];
		CppAD::Independent(Y);

		// evaluate ODE in ADDdouble
		Z = G(Y);

		// define differentiable version of g: X -> Y
		// that computes its derivatives in ADdouble
		CppAD::ADFun<ADdouble> g(Y, Z);

		// Use Taylor's method to take a step
		y_k            = y;     // initialize y^{(k)}
		ADdouble dt_kp = dt;    // initialize dt^(k+1)
		for(k = 0; k <= order; k++)
		{	// evaluate k-th order Taylor coefficient of y
			z_k = g.Forward(k, y_k);
 
			for(i = 0; i < n; i++)
			{	// convert to the (k+1)-Taylor coefficient for y
				y_kp[i] = z_k[i] / ADdouble(k + 1);

				// add the term for this Taylor coefficient
				// to the solution for y(t, x)
				y[i]    += y_kp[i] * dt_kp;
			}
			// next power of t
			dt_kp *= dt;
			// next Taylor coefficient
			y_k   = y_kp;
		}
	}
	return y;
}
} // END empty namespace
// ==========================================================================
// Routine that tests algorithmic differentiation of solutions computed
// by the routine taylor_ode.
bool ode_taylor(void)
{	// initialize the return value as true	
	bool ok = true;

	// number of components in differential equation
	size_t n = 4;

	// some temporary indices
	size_t i, j;

	// parameter vector in both double and ADdouble
	CPPAD_TEST_VECTOR<double>   x(n);
	CPPAD_TEST_VECTOR<ADdouble> X(n);
	for(i = 0; i < n; i++)
		X[i] = x[i] = double(i + 1);

	// declare the parameters as the independent variable
	CppAD::Independent(X);

	// arguments to taylor_ode 
	Ode G(X);                // function that defines the ODE
	size_t   order = n;      // order of Taylor's method used
	size_t   nstep = 2;      // number of steps to take
	ADdouble DT    = 1.;     // Delta t for each step
	// value of y(t, x) at the initial time
	CPPAD_TEST_VECTOR< ADdouble > Y_INI(n);
	for(i = 0; i < n; i++)
		Y_INI[i] = 0.;

	// integrate the differential equation
	CPPAD_TEST_VECTOR< ADdouble > Y_FINAL(n);
 	Y_FINAL = taylor_ode(G, order, nstep, DT, Y_INI);

	// define the differentiable function object f : X -> Y_FINAL
	// that computes its derivatives in double
	CppAD::ADFun<double> f(X, Y_FINAL);

	// check function values
	double check = 1.;
	double t     = nstep * Value(DT);
	for(i = 0; i < n; i++)
	{	check *= x[i] * t / double(i + 1);
		ok &= CppAD::NearEqual(Value(Y_FINAL[i]), check, 1e-10, 1e-10);
	}

	// evaluate the Jacobian of f at x
	CPPAD_TEST_VECTOR<double> jac = f.Jacobian(x);

	// check Jacobian 
	for(i = 0; i < n; i++)
	{	for(j = 0; j < n; j++)
		{	double jac_ij = jac[i * n + j]; 
			if( i < j )
				check = 0.;
			else	check = Value( Y_FINAL[i] ) / x[j];
			ok &= CppAD::NearEqual(jac_ij, check, 1e-10, 1e-10);
		}
	}
	return ok;
}


Input File: example/ode_taylor.cpp
8.1.9: Using Adolc with Taylor's Ode Solver: An Example and Test

8.1.9.a: Purpose
This is a realistic example using two levels of taping (see 8.1.11: mul_level ). The first level of taping uses Adolc's adouble type to tape the solution of an ordinary differential equation. This solution is then differentiated with respect to a parameter vector. The second level of taping uses CppAD's type AD<adouble> to take derivatives during the solution of the differential equation. These derivatives are used in the application of Taylor's method to the solution of the ODE. The example 8.1.8: ode_taylor.cpp computes the same values using AD<double> and AD< AD<double> >.

8.1.9.b: ODE
For this example the ODE's are defined by the function  h : \R^n \times \R^n \rightarrow \R^n where  \[
     h[ x, y(t, x) ] = 
     \left( \begin{array}{c}
               x_0                     \\
               x_1 y_0 (t, x)          \\
               \vdots                  \\
               x_{n-1} y_{n-2} (t, x)
     \end{array} \right)
     = 
     \left( \begin{array}{c}
               \partial_t y_0 (t , x)      \\
               \partial_t y_1 (t , x)      \\
               \vdots                      \\
               \partial_t y_{n-1} (t , x) 
     \end{array} \right)
\] 
and the initial condition  y(0, x) = 0 . The value of  x is fixed during the solution of the ODE and the function  g : \R^n \rightarrow \R^n is used to define the ODE where  \[
     g(y) = 
     \left( \begin{array}{c}
               x_0     \\
               x_1 y_0 \\
               \vdots  \\
               x_{n-1} y_{n-2} 
     \end{array} \right)
\] 


8.1.9.c: ODE Solution
The solution for this example can be calculated by starting with the first row and then using the solution for the first row to solve the second and so on. Doing this we obtain  \[
     y(t, x ) =
     \left( \begin{array}{c}
          x_0 t                  \\
          x_1 x_0 t^2 / 2        \\
          \vdots                 \\
          x_{n-1} x_{n-2} \ldots x_0 t^n / n !
     \end{array} \right)
\] 


8.1.9.d: Derivative of ODE Solution
Differentiating the solution above, with respect to the parameter vector  x , we notice that  \[
\partial_x y(t, x ) =
\left( \begin{array}{cccc}
y_0 (t,x) / x_0      & 0                   & \cdots & 0      \\
y_1 (t,x) / x_0      & y_1 (t,x) / x_1     & 0      & \vdots \\
\vdots               & \vdots              & \ddots & 0      \\
y_{n-1} (t,x) / x_0  & y_{n-1} (t,x) / x_1 & \cdots & y_{n-1} (t,x) / x_{n-1}
\end{array} \right)
\] 


8.1.9.e: Taylor's Method Using AD
A p-th order Taylor method for approximating the solution of an ordinary differential equation is  \[
     y(t + \Delta t , x) 
     \approx 
     \sum_{k=0}^p \partial_t^k y(t , x ) \frac{ \Delta t^k }{ k ! }
     =
     y^{(0)} (t , x ) + 
     y^{(1)} (t , x ) \Delta t + \cdots + 
     y^{(p)} (t , x ) \Delta t^p
\] 
where the Taylor coefficients  y^{(k)} (t, x) are defined by  \[
     y^{(k)} (t, x) = \partial_t^k y(t , x ) / k !
\] 
We define the function  z(t, x) by the equation  \[
     z ( t , x ) = g[ y ( t , x ) ] = h [ x , y( t , x ) ]
\] 
It follows that  \[
\begin{array}{rcl}
     \partial_t y(t, x) & = & z (t , x) 
     \\
      \partial_t^{k+1} y(t , x) & = & \partial_t^k z (t , x)
     \\
     y^{(k+1)} ( t , x) & = & z^{(k)} (t, x) / (k+1) 
\end{array}
\] 
where   z^{(k)} (t, x) is the k-th order Taylor coefficient for  z(t, x) . In the example below, the Taylor coefficients  \[
     y^{(0)} (t , x) , \ldots , y^{(k)} ( t , x )
\] 
are used to calculate the Taylor coefficient  z^{(k)} ( t , x ) which in turn gives the value for  y^{(k+1)} ( t , x ) .

8.1.9.f: base_adolc.hpp
The file 4.7.2: base_adolc.hpp implements the 4.7: Base type requirements where Base is adouble.

8.1.9.g: Tracking New and Delete
Adolc uses raw memory arrays that depend on the number of dependent and independent variables, hence new and delete are used to allocate this memory. The preprocessor macros 6.24.j: CPPAD_TRACK_NEW_VEC and 6.24.k: CPPAD_TRACK_DEL_VEC are used to check for errors in the use of new and delete when the example is compiled for debugging (when NDEBUG is not defined).

8.1.9.h: Configuration Requirement
This example will be compiled and tested provided that the value 2.1.o: AdolcDir is specified on the 2.1.d: configure command line.
 
# include <adolc/adouble.h>
# include <adolc/taping.h>
# include <adolc/drivers/drivers.h>

// definitions not in Adolc distribution and required to use CppAD::AD<adouble>
# include "base_adolc.hpp"

# include <cppad/cppad.hpp>
// ==========================================================================
namespace { // BEGIN empty namespace
// define types for each level
typedef adouble            ADdouble;
typedef CppAD::AD<adouble> ADDdouble;

// -------------------------------------------------------------------------
// class definition for C++ function object that defines ODE
class Ode {
private:
	// copy of x that is set by constructor and used by g(y)
	CPPAD_TEST_VECTOR< ADdouble > x_; 
public:
	// constructor
	Ode( CPPAD_TEST_VECTOR< ADdouble > x) : x_(x)
	{ }
	// the function g(y) is evaluated with two levels of taping
	CPPAD_TEST_VECTOR< ADDdouble > operator()
	( const CPPAD_TEST_VECTOR< ADDdouble > &y) const
	{	size_t n = y.size();
		CPPAD_TEST_VECTOR< ADDdouble > g(n);
		size_t i;
		g[0] = x_[0];
		for(i = 1; i < n; i++)
			g[i] = x_[i] * y[i-1];

		return g;
	}
};

// -------------------------------------------------------------------------
// Routine that uses Taylor's method to solve ordinary differential equations
// and allows for algorithmic differentiation of the solution. 
CPPAD_TEST_VECTOR < ADdouble > taylor_ode_adolc(
	Ode                     G       ,  // function that defines the ODE
	size_t                  order   ,  // order of Taylor's method used
	size_t                  nstep   ,  // number of steps to take
	ADdouble                &dt     ,  // Delta t for each step
	CPPAD_TEST_VECTOR< ADdouble > &y_ini  )  // y(t) at the initial time
{
	// some temporary indices
	size_t i, k, ell;

	// number of variables in the ODE
	size_t n = y_ini.size();

	// copies of x and g(y) with two levels of taping
	CPPAD_TEST_VECTOR< ADDdouble >   Y(n), Z(n);

	// y, y^{(k)} , z^{(k)}, and y^{(k+1)}
	CPPAD_TEST_VECTOR< ADdouble >  y(n), y_k(n), z_k(n), y_kp(n);
	
	// initialize y
	for(i = 0; i < n; i++)
		y[i] = y_ini[i];

	// loop with respect to each step of Taylors method
	for(ell = 0; ell < nstep; ell++)
	{	// prepare to compute derivatives in ADdouble
		for(i = 0; i < n; i++)
			Y[i] = y[i];
		CppAD::Independent(Y);

		// evaluate ODE in ADDdouble
		Z = G(Y);

		// define differentiable version of g: X -> Y
		// that computes its derivatives in ADdouble
		CppAD::ADFun<ADdouble> g(Y, Z);

		// Use Taylor's method to take a step
		y_k            = y;     // initialize y^{(k)}
		ADdouble dt_kp = dt;    // initialize dt^(k+1)
		for(k = 0; k <= order; k++)
		{	// evaluate k-th order Taylor coefficient of y
			z_k = g.Forward(k, y_k);
 
			for(i = 0; i < n; i++)
			{	// convert to the (k+1)-Taylor coefficient for y
				y_kp[i] = z_k[i] / ADdouble(k + 1);

				// add term for to this Taylor coefficient
				// to solution for y(t, x)
				y[i]    += y_kp[i] * dt_kp;
			}
			// next power of t
			dt_kp *= dt;
			// next Taylor coefficient
			y_k   = y_kp;
		}
	}
	return y;
}
} // END empty namespace
// ==========================================================================
// Routine that tests algorithmic differentiation of solutions computed
// by the routine taylor_ode.
bool ode_taylor_adolc(void)
{	// initialize the return value as true	
	bool ok = true;

	// number of components in differential equation
	size_t n = 4;

	// some temporary indices
	size_t i, j;

	// parameter vector in both double and ADdouble
	double *x;
	x = CPPAD_TRACK_NEW_VEC(n, x);  // track x = new double[n];
	CPPAD_TEST_VECTOR<ADdouble> X(n);
	for(i = 0; i < n; i++)
		X[i] = x[i] = double(i + 1);

	// declare the parameters as the independent variable
	int tag = 0;                     // Adolc setup
	int keep = 1;
	trace_on(tag, keep);
	for(i = 0; i < n; i++)
		X[i] <<= double(i + 1);  // X is independent for adouble type

	// arguments to taylor_ode_adolc 
	Ode G(X);                // function that defines the ODE
	size_t   order = n;      // order of Taylor's method used
	size_t   nstep = 2;      // number of steps to take
	ADdouble DT    = 1.;     // Delta t for each step
	// value of y(t, x) at the initial time
	CPPAD_TEST_VECTOR< ADdouble > Y_INI(n);
	for(i = 0; i < n; i++)
		Y_INI[i] = 0.;

	// integrate the differential equation
	CPPAD_TEST_VECTOR< ADdouble > Y_FINAL(n);
 	Y_FINAL = taylor_ode_adolc(G, order, nstep, DT, Y_INI);

	// declare the differentiable function f : X -> Y_FINAL
	// (corresponding to the tape of adouble operations)
	double *y_final;
	y_final = CPPAD_TRACK_NEW_VEC(n, y_final); // y_final = new double[n]
	for(i = 0; i < n; i++)
		Y_FINAL[i] >>= y_final[i];
	trace_off();

	// check function values
	double check = 1.;
	double t     = nstep * DT.value();
	for(i = 0; i < n; i++)
	{	check *= x[i] * t / double(i + 1);
		ok &= CppAD::NearEqual(y_final[i], check, 1e-10, 1e-10);
	}

	// memory where Jacobian will be returned
	double *jac_;
	jac_ = CPPAD_TRACK_NEW_VEC(n * n, jac_); // jac_ = new double[n*n]
	double **jac;
	jac  = CPPAD_TRACK_NEW_VEC(n, jac);      // jac = new double*[n]
	for(i = 0; i < n; i++)
		jac[i] = jac_ + i * n;

	// evaluate Jacobian of f at x
	size_t m = n;              // # dependent variables
	jacobian(tag, int(m), int(n), x, jac); 
	
	// check Jacobian 
	for(i = 0; i < n; i++)
	{	for(j = 0; j < n; j++)
		{	if( i < j )
				check = 0.;
			else	check = y_final[i] / x[j];
			ok &= CppAD::NearEqual(jac[i][j], check, 1e-10, 1e-10);
		}
	}

	CPPAD_TRACK_DEL_VEC(x);        // check usage of delete
	CPPAD_TRACK_DEL_VEC(y_final);
	CPPAD_TRACK_DEL_VEC(jac_);
	CPPAD_TRACK_DEL_VEC(jac);
	return ok;
}


Input File: example/ode_taylor_adolc.cpp
8.1.10: Example Differentiating a Stack Machine Interpreter
 

# include <cstring>
# include <cstddef>
# include <cstdlib>
# include <cctype>
# include <cassert>
# include <stack>

# include <cppad/cppad.hpp>

namespace { 
// Begin empty namespace ------------------------------------------------

bool is_number( const std::string &s )
{	char ch = s[0];
	bool number = (std::strchr("0123456789.", ch) != 0);
	return number;
}
bool is_binary( const std::string &s )
{	char ch = s[0];
	bool binary = (std::strchr("+-*/", ch) != 0);
	return binary;
}
bool is_variable( const std::string &s )
{	char ch = s[0];
	bool variable = ('a' <= ch) && (ch <= 'z');
	return variable;
}

void StackMachine( 
	std::stack< std::string >          &token_stack  ,
	CppAD::vector< CppAD::AD<double> > &variable     )
{	using std::string;
	using std::stack;

	using CppAD::AD;

	stack< AD<double> > value_stack;
	string              token;
	AD<double>          value_one;
	AD<double>          value_two;

	while( ! token_stack.empty() )
	{	string s = token_stack.top();
		token_stack.pop();

		if( is_number(s) )
		{	value_one = std::atof( s.c_str() );
			value_stack.push( value_one );
		}
		else if( is_variable(s) )
		{	value_one = variable[ size_t(s[0]) - size_t('a') ];
			value_stack.push( value_one );
		}
		else if( is_binary(s) ) 
		{	assert( value_stack.size() >= 2 );
			value_one = value_stack.top();
			value_stack.pop();
			value_two = value_stack.top();
			value_stack.pop();

			switch( s[0] )
			{
				case '+':
				value_stack.push(value_one + value_two);
				break;

				case '-':
				value_stack.push(value_one - value_two);
				break;

				case '*':
				value_stack.push(value_one * value_two);
				break;

				case '/':
				value_stack.push(value_one / value_two);
				break;

				default:
				assert(0);
			}
		}
		else if( s[0] == '=' )
		{	assert( value_stack.size() >= 1 ); 	
			assert( token_stack.size() >= 1 );
			//
			s = token_stack.top();
			token_stack.pop();
			//
			assert( is_variable( s ) );
			value_one = value_stack.top();
			value_stack.pop();
			//
			variable[ size_t(s[0]) - size_t('a') ] = value_one;
		}
		else assert(0);
	}
	return;
}

// End empty namespace -------------------------------------------------------
}

bool StackMachine(void)
{	bool ok = true;

	using std::string;
	using std::stack;

	using CppAD::AD;
	using CppAD::NearEqual;
	using CppAD::vector;

	// The user's program in the stack machine language
	const char *program[] = {
		"1.0", "a", "+", "=", "b",  // b = a + 1
		"2.0", "b", "*", "=", "c",  // c = b * 2
		"3.0", "c", "-", "=", "d",  // d = c - 3
		"4.0", "d", "/", "=", "e"   // e = d / 4
	};
	size_t n_program = sizeof( program ) / sizeof( program[0] );

	// put the program in the token stack
	stack< string > token_stack;
	size_t i = n_program;
	while(i--)
		token_stack.push( program[i] );

	// domain space vector
	size_t n = 1;
	vector< AD<double> > X(n);
	X[0] = 0.;

	// declare independent variables and start tape recording
	CppAD::Independent(X);
		
	// x[0] corresponds to a in the stack machine
	vector< AD<double> > variable(26);
	variable[0] = X[0];

	// calculate the results of the program
	StackMachine( token_stack , variable);

	// range space vector
	size_t m = 4;
	vector< AD<double> > Y(m);
	Y[0] = variable[1];   // b = a + 1
	Y[1] = variable[2];   // c = (a + 1) * 2
	Y[2] = variable[3];   // d = (a + 1) * 2 - 3
	Y[3] = variable[4];   // e = ( (a + 1) * 2 - 3 ) / 4 
	
	// create f : X -> Y and stop tape recording
	CppAD::ADFun<double> f(X, Y);

	// use forward mode to evaluate function at different argument value
	size_t p = 0;
	vector<double> x(n);
	vector<double> y(m);
	x[0] = 1.;
	y    = f.Forward(p, x);

	// check function values
	ok &= (y[0] == x[0] + 1.);
	ok &= (y[1] == (x[0] + 1.) * 2.);
	ok &= (y[2] == (x[0] + 1.) * 2. - 3.);
	ok &= (y[3] == ( (x[0] + 1.) * 2. - 3.) / 4.);

	// Use forward mode (because x is shorter than y) to calculate Jacobian
	p = 1;
	vector<double> dx(n);
	vector<double> dy(m);
	dx[0] = 1.;
	dy    = f.Forward(p, dx);
	ok   &= NearEqual(dy[0], 1., 1e-10, 1e-10);
	ok   &= NearEqual(dy[1], 2., 1e-10, 1e-10);
	ok   &= NearEqual(dy[2], 2., 1e-10, 1e-10);
	ok   &= NearEqual(dy[3], .5, 1e-10, 1e-10);

	// Use Jacobian routine (which automatically decides which mode to use)
	dy = f.Jacobian(x);
	ok   &= NearEqual(dy[0], 1., 1e-10, 1e-10);
	ok   &= NearEqual(dy[1], 2., 1e-10, 1e-10);
	ok   &= NearEqual(dy[2], 2., 1e-10, 1e-10);
	ok   &= NearEqual(dy[3], .5, 1e-10, 1e-10);

	return ok;
}

Input File: example/stack_machine.cpp
8.1.11: Using Multiple Levels of AD

8.1.11.a: Background
If f is an ADFun<Base> object, the vectors returned by 5.6.1: f.Forward , and 5.6.2: f.Reverse , have values in the base type (Base) and not AD<Base>. This reflects the fact that operations used to calculate these function values are not recorded by the tape corresponding to AD<Base> operations.

8.1.11.b: Motivation
Suppose that you use derivatives of one or more inner functions as part of the operations needed to compute an outer function. For example, the derivatives returned by f.Forward might be used as part of Taylor's method for solving ordinary differential equations. In addition, we might want to differentiate the solution of a differential equation with respect to parameters in the equation. This can be accomplished in the following way:
  1. The operations during the calculation of the function defining the differential equation could be performed using a class of the form  AD< AD<double> >.
  2. The operations during the calculation of Taylor's method could be performed using the  AD<double> class.
  3. The results of the solution of the differential equation could then be computed using the double class.


8.1.11.c: General Solution
Provided that we are currently recording  AD<double> operations, and fin is an ADFun< AD<double> > object, the operations used to compute the vectors returned by fin.Forward and fin.Reverse will be recorded on the tape corresponding to AD<double> operations.

8.1.11.d: General Procedure

8.1.11.d.a: Start ADBaseTape
The first step is to declare the independent variables using
     Independent(
x)
where x is a 6.7: SimpleVector with elements of type AD<double>. This will start recording a new tape of operations performed using AD<double> class objects.

8.1.11.d.b: Start ADDBaseTape
The next step is to declare the independent variables using
     Independent(
X)
where X is a 6.7: SimpleVector with elements of type AD< AD<double> > (for example, a CPPAD_TEST_VECTOR< AD< AD<double> > >). This will start recording a new tape of operations performed using AD< AD<double> > class objects.

8.1.11.d.c: Inner Function Calculations
The next step is to calculate the inner functions using AD< AD<double> > class objects.

8.1.11.d.d: Derivative of Inner Function
The next step is to create the ADFun< AD<double> > function object fin. This will also stop recording of operations performed using AD< AD<double> > class objects. The fin object can then be used to calculate the derivatives needed to compute the outer function.

8.1.11.d.e: Outer Function
The next step is to compute the outer function using AD<double> class objects.

8.1.11.d.f: Derivative of Outer Function
The next step is to create the ADFun<double> function object fout. This will also stop the recording of operations performed using AD<double> class objects. The fout object can then be used to calculate the derivatives of the outer function.

8.1.11.e: Example
The file 8.1.11.1: mul_level.cpp contains an example and test of this procedure. It returns true if it succeeds and false otherwise. The file 8.1.8: ode_taylor.cpp is a more complex example of using multiple tapes.
Input File: omh/mul_level.omh
8.1.11.1: Multiple Tapes: Example and Test

8.1.11.1.a: Purpose
This is an example and test of using the AD<double> type, together with the AD< AD<double> > type, for multiple levels of taping. The example computes the value  \[
     \frac{d}{dx} \left[ f^{(1)} (x) * v \right]
\] 
where  f : \R^n \rightarrow \R and  v \in \R^n . The example 5.6.2.2.2: HesTimesDir.cpp computes the same value using only one level of taping (more efficient) and the identity  \[
     \frac{d}{dx} \left[ f^{(1)} (x) * v \right] = f^{(2)} (x) * v
\] 
The example 4.7.2.1: mul_level_adolc.cpp computes the same values using Adolc's type adouble and CppAD's type AD<adouble>.
 

# include <cppad/cppad.hpp>

namespace { // put this function in the empty namespace
	// f(x) = .5 * |x|^2 = .5 * ( x[0]^2 + ... + x[n-1]^2 )
	template <class Type>
	Type f(CPPAD_TEST_VECTOR<Type> &x)
	{	Type sum;

		// check assignment of AD< AD<double> > = double
		sum  = .5;
		sum += .5;

		size_t i = x.size();
		while(i--)
			sum += x[i] * x[i];

		// check computed assignment AD< AD<double> > -= int
		sum -= 1; 
	
		// check double * AD< AD<double> > 
		return .5 * sum;
	} 
}

bool mul_level(void) 
{	bool ok = true;                          // initialize test result

	typedef CppAD::AD<double>   ADdouble;    // for one level of taping
	typedef CppAD::AD<ADdouble> ADDdouble;   // for two levels of taping
	size_t n = 5;                            // dimension for example
	size_t j;                                // a temporary index variable

	CPPAD_TEST_VECTOR<double>       x(n);
	CPPAD_TEST_VECTOR<ADdouble>   a_x(n);
	CPPAD_TEST_VECTOR<ADDdouble> aa_x(n);

	// value of the independent variables
	for(j = 0; j < n; j++)
		a_x[j] = x[j] = double(j); // x[j] = j
	Independent(a_x);                  // a_x is independent for ADdouble
	for(j = 0; j < n; j++)
		aa_x[j] = a_x[j];          // track how aa_x depends on a_x
	CppAD::Independent(aa_x);          // aa_x is independent for ADDdouble

	// compute function
	CPPAD_TEST_VECTOR<ADDdouble> aa_f(1);    // scalar valued function
	aa_f[0] = f(aa_x);                 // has only one component

	// declare inner function (corresponding to ADDdouble calculation)
	CppAD::ADFun<ADdouble> a_F(aa_x, aa_f);

	// compute f'(x) 
	size_t p = 1;                        // order of derivative of a_F
	CPPAD_TEST_VECTOR<ADdouble> a_w(1);  // weight vector for a_F
	CPPAD_TEST_VECTOR<ADdouble> a_df(n); // value of derivative
	a_w[0] = 1;                          // weighted function same as a_F
	a_df   = a_F.Reverse(p, a_w);        // gradient of f

	// declare outer function (corresponding to ADdouble calculation)
	CppAD::ADFun<double> df(a_x, a_df);

	// compute the d/dx of f'(x) * v = f''(x) * v
	CPPAD_TEST_VECTOR<double> v(n);
	CPPAD_TEST_VECTOR<double> ddf_v(n);
	for(j = 0; j < n; j++)
		v[j] = double(n - j);
	ddf_v = df.Reverse(p, v);

	// f(x)       = .5 * ( x[0]^2 + x[1]^2 + ... + x[n-1]^2 )
	// f'(x)      = (x[0], x[1], ... , x[n-1])
	// f''(x) * v = ( v[0], v[1],  ... , v[n-1] )
	for(j = 0; j < n; j++)
		ok &= CppAD::NearEqual(ddf_v[j], v[j], 1e-10, 1e-10);

	return ok;
}

Input File: example/mul_level.cpp
8.2: Utility Routines used by CppAD Examples

8.2.a: Contents
Example.cpp: 8.2.1 Program That Runs the CppAD Examples
speed_example.cpp: 8.2.2 Program That Runs the Speed Examples
LuVecAD: 8.2.3 Lu Factor and Solve with Recorded Pivoting

Input File: omh/example_list.omh
8.2.1: Program That Runs the CppAD Examples
 

// system include files used for I/O
# include <iostream>

// C style asserts
# include <cassert>

// CppAD include file
# include <cppad/cppad.hpp>

// externally compiled tests
extern bool abort_recording(void);
extern bool Abs(void);
extern bool Acos(void);
extern bool Add(void);
extern bool AddEq(void);
extern bool Asin(void);
extern bool Atan(void);
extern bool Atan2(void);
extern bool BenderQuad(void);
extern bool BoolFun(void);
extern bool vectorBool(void);
extern bool CheckNumericType(void);
extern bool CheckSimpleVector(void);
extern bool Compare(void);
extern bool CompareChange(void);
extern bool complex_poly(void);
extern bool CondExp(void);
extern bool CopyAD(void);
extern bool CopyBase(void);
extern bool Cos(void);
extern bool Cosh(void);
extern bool CppAD_vector(void);
extern bool Default(void);
extern bool Div(void);
extern bool DivEq(void);
extern bool Eq(void);
extern bool EqualOpSeq(void);
extern bool Erf(void);
extern bool ErrorHandler(void);
extern bool Exp(void);
extern bool ForOne(void);
extern bool ForTwo(void);
extern bool ForSparseJac(void);
extern bool Forward(void);
extern bool FunCheck(void);
extern bool HesLagrangian(void);
extern bool HesLuDet(void);
extern bool HesMinorDet(void);
extern bool Hessian(void);
extern bool HesTimesDir(void);
extern bool Independent(void);
extern bool Integer(void);
extern bool Interface2C(void);
extern bool interp_onetape(void);
extern bool interp_retape(void);
extern bool JacLuDet(void);
extern bool JacMinorDet(void);
extern bool Jacobian(void);
extern bool Log(void);
extern bool Log10(void);
extern bool LuFactor(void);
extern bool LuInvert(void);
extern bool LuRatio(void);
extern bool LuSolve(void);
extern bool LuVecADOk(void);
extern bool Mul(void);
extern bool MulEq(void);
extern bool mul_level(void);
extern bool mul_level_adolc(void);
extern bool nan(void);
extern bool Near_Equal(void);
extern bool NearEqualExt(void);
extern bool not_complex_ad(void);
extern bool NumericType(void);
extern bool OdeErrControl(void);
extern bool OdeErrMaxabs(void);
extern bool OdeGear(void);
extern bool OdeGearControl(void);
extern bool OdeStiff(void);
extern bool ode_taylor(void);
extern bool ode_taylor_adolc(void);
extern bool Output(void);
extern bool ParVar(void);
extern bool Poly(void);
extern bool Pow(void);
extern bool pow_int(void);
extern bool reverse_any(void);
extern bool reverse_one(void);
extern bool reverse_two(void);
extern bool RevOne(void);
extern bool RevSparseHes(void);
extern bool RevSparseJac(void);
extern bool RevTwo(void);
extern bool RombergMul(void);
extern bool RombergOne(void);
extern bool Rosen34(void);
extern bool Runge45(void);
extern bool SeqProperty(void);
extern bool SimpleVector(void);
extern bool Sin(void);
extern bool Sinh(void);
extern bool sparse_hessian(void);
extern bool sparse_jacobian(void);
extern bool Sqrt(void);
extern bool StackMachine(void);
extern bool Sub(void);
extern bool SubEq(void);
extern bool Tan(void);
extern bool Tanh(void);
extern bool TapeIndex(void);
extern bool TrackNewDel(void);
extern bool UnaryMinus(void);
extern bool UnaryPlus(void);
extern bool Value(void);
extern bool Var2Par(void);
extern bool VecAD(void);

namespace {
	// function that runs one test
	static size_t Run_ok_count    = 0;
	static size_t Run_error_count = 0;
	bool Run(bool TestOk(void), const char *name)
	{	bool ok = true;
		ok &= TestOk();
		if( ok )
		{	std::cout << "Ok:    " << name << std::endl;
			Run_ok_count++;
		}
		else
		{	std::cout << "Error: " << name << std::endl;
			Run_error_count++;
		}
		return ok;
	}
}

// main program that runs all the tests
int main(void)
{	bool ok = true;

	// This line is used by test_one.sh

	// external compiled tests
	ok &= Run( abort_recording,   "abort_recording"  );
	ok &= Run( Abs,               "Abs"              );
	ok &= Run( Acos,              "Acos"             );
	ok &= Run( Add,               "Add"              );
	ok &= Run( AddEq,             "AddEq"            );
	ok &= Run( Asin,              "Asin"             );
	ok &= Run( Atan,              "Atan"             );
	ok &= Run( Atan2,             "Atan2"            );
	ok &= Run( BenderQuad,        "BenderQuad"       );
	ok &= Run( BoolFun,           "BoolFun"          );
	ok &= Run( vectorBool,        "vectorBool"       );
	ok &= Run( CheckNumericType,  "CheckNumericType" );
	ok &= Run( CheckSimpleVector, "CheckSimpleVector");
	ok &= Run( Compare,           "Compare"          );
	ok &= Run( CompareChange,     "CompareChange"    );
	ok &= Run( complex_poly,      "complex_poly"     );
	ok &= Run( CondExp,           "CondExp"          );
	ok &= Run( CopyAD,            "CopyAD"           );
	ok &= Run( CopyBase,          "CopyBase"         );
	ok &= Run( Cos,               "Cos"              );
	ok &= Run( Cosh,              "Cosh"             );
	ok &= Run( CppAD_vector,      "CppAD_vector"     );
	ok &= Run( Default,           "Default"          );
	ok &= Run( Div,               "Div"              );
	ok &= Run( DivEq,             "DivEq"            );
	ok &= Run( Eq,                "Eq"               );
	ok &= Run( EqualOpSeq,        "EqualOpSeq"       );
	ok &= Run( Erf,               "Erf"              );
	ok &= Run( ErrorHandler,      "ErrorHandler"     );
	ok &= Run( Exp,               "Exp"              );
	ok &= Run( ForOne,            "ForOne"           );
	ok &= Run( ForTwo,            "ForTwo"           );
	ok &= Run( Forward,           "Forward"          ); 
	ok &= Run( ForSparseJac,      "ForSparseJac"     );
	ok &= Run( FunCheck,          "FunCheck"         );
	ok &= Run( HesLagrangian,     "HesLagrangian"    );
	ok &= Run( HesLuDet,          "HesLuDet"         );
	ok &= Run( HesMinorDet,       "HesMinorDet"      );
	ok &= Run( Hessian,           "Hessian"          );
	ok &= Run( HesTimesDir,       "HesTimesDir"      );
	ok &= Run( Independent,       "Independent"      );
	ok &= Run( Integer,           "Integer"          );
	ok &= Run( Interface2C,       "Interface2C"      );
	ok &= Run( interp_onetape,    "interp_onetape"   );
	ok &= Run( interp_retape,     "interp_retape"    );
	ok &= Run( JacLuDet,          "JacLuDet"         );
	ok &= Run( JacMinorDet,       "JacMinorDet"      );
	ok &= Run( Jacobian,          "Jacobian"         );
	ok &= Run( Log,               "Log"              );
	ok &= Run( Log10,             "Log10"            );
	ok &= Run( LuFactor,          "LuFactor"         );
	ok &= Run( LuInvert,          "LuInvert"         );
	ok &= Run( LuRatio,           "LuRatio"          );
	ok &= Run( LuSolve,           "LuSolve"          );
	ok &= Run( LuVecADOk,         "LuVecADOk"        );
	ok &= Run( Mul,               "Mul"              );
	ok &= Run( MulEq,             "MulEq"            );
	ok &= Run( mul_level,         "mul_level"        );
	ok &= Run( nan,               "nan"              );
	ok &= Run( Near_Equal,        "Near_Equal"       );
	ok &= Run( NearEqualExt,      "NearEqualExt"     );
	ok &= Run( not_complex_ad,    "not_complex_ad"   );
	ok &= Run( NumericType,       "NumericType"      );
	ok &= Run( OdeErrControl,     "OdeErrControl"    );
	ok &= Run( OdeErrMaxabs,      "OdeErrMaxabs"     );
	ok &= Run( OdeGear,           "OdeGear"          );
	ok &= Run( OdeGearControl,    "OdeGearControl"   );
	ok &= Run( OdeStiff,          "OdeStiff"         );
	ok &= Run( ode_taylor,        "ode_taylor"       );
	ok &= Run( Output,            "Output"           );
	ok &= Run( ParVar,            "ParVar"           );
	ok &= Run( Poly,              "Poly"             );
	ok &= Run( Pow,               "Pow"              );
	ok &= Run( pow_int,           "pow_int"          );
	ok &= Run( reverse_any,       "reverse_any"      );
	ok &= Run( reverse_one,       "reverse_one"      );
	ok &= Run( reverse_two,       "reverse_two"      );
	ok &= Run( RevOne,            "RevOne"           );
	ok &= Run( RevSparseHes,      "RevSparseHes"     );
	ok &= Run( RevSparseJac,      "RevSparseJac"     );
	ok &= Run( RevTwo,            "RevTwo"           );
	ok &= Run( RombergMul,        "RombergMul"       );
	ok &= Run( RombergOne,        "RombergOne"       );
	ok &= Run( Rosen34,           "Rosen34"          );
	ok &= Run( Runge45,           "Runge45"          );
	ok &= Run( SeqProperty,       "SeqProperty"      );
	ok &= Run( SimpleVector,      "SimpleVector"     );
	ok &= Run( Sin,               "Sin"              );
	ok &= Run( Sinh,              "Sinh"             );
	ok &= Run( sparse_hessian,    "sparse_hessian"   );
	ok &= Run( sparse_jacobian,   "sparse_jacobian"  );
	ok &= Run( Sqrt,              "Sqrt"             );
	ok &= Run( StackMachine,      "StackMachine"     );
	ok &= Run( Sub,               "Sub"              );
	ok &= Run( SubEq,             "SubEq"            );
	ok &= Run( Tan,               "Tan"              );
	ok &= Run( Tanh,              "Tanh"             );
	ok &= Run( TapeIndex,         "TapeIndex"        );
	ok &= Run( TrackNewDel,       "TrackNewDel"      );
	ok &= Run( UnaryMinus,        "UnaryMinus"       );
	ok &= Run( UnaryPlus,         "UnaryPlus"        );
	ok &= Run( Value,             "Value"            );
	ok &= Run( Var2Par,           "Var2Par"          );
	ok &= Run( VecAD,             "VecAD"            );

# ifdef CPPAD_ADOLC_EXAMPLES
	ok &= Run( mul_level_adolc,   "mul_level_adolc"  );
	ok &= Run( ode_taylor_adolc,  "ode_taylor_adolc" );
# endif
	

	// check for errors
	using std::cout;
	using std::endl;
	assert( ok || (Run_error_count > 0) );
	if( CPPAD_TRACK_COUNT() == 0 )
	{	Run_ok_count++;
		cout << "Ok:    " << "No memory leak detected" << endl;
	}
	else
	{	ok = false;
		Run_error_count++;
		cout << "Error: " << "memory leak detected" << endl;
	}
	// convert int(size_t) to avoid warning on _MSC_VER systems
	if( ok )
		cout << "All " << int(Run_ok_count) << " tests passed." << endl;
	else	cout << int(Run_error_count) << " tests failed." << endl;

	return static_cast<int>( ! ok );
}

Input File: example/example.cpp
8.2.2: Program That Runs the Speed Examples
 

# include <cppad/cppad.hpp>

// various example routines
extern bool det_of_minor(void);
extern bool det_by_lu(void);
extern bool det_by_minor(void);
extern bool ode_evaluate(void);
extern bool sparse_evaluate(void);
extern bool speed_test(void);

namespace {
	// function that runs one test
	static size_t Run_ok_count    = 0;
	static size_t Run_error_count = 0;
	bool Run(bool TestOk(void), const char *name)
	{	bool ok = true;
		using namespace std;
	
		ok &= TestOk();
	
		if( ok )
		{	std::cout << "Ok:    " << name << std::endl;
			Run_ok_count++;
		}
		else
		{	std::cout << "Error: " << name << std::endl;
			Run_error_count++;
		}
	
		return ok;
	}
}

// main program that runs all the tests
int main(void)
{	bool ok = true;
	using namespace std;

	ok &= Run(det_of_minor,      "det_of_minor"    );
	ok &= Run(det_by_minor,      "det_by_minor"    );
	ok &= Run(det_by_lu,         "det_by_lu"       );
	ok &= Run(ode_evaluate,      "ode_evaluate"    );
	ok &= Run(sparse_evaluate,   "sparse_evaluate" );
	ok &= Run(speed_test,        "speed_test"      );

	// check for memory leak in previous calculations
	if( CPPAD_TRACK_COUNT() != 0 )
		cout << "Error: memory leak detected" << endl;

	assert( ok || (Run_error_count > 0) );
	if( ok )
		cout << "All " << int(Run_ok_count) << " tests passed." << endl;
	else	cout << int(Run_error_count) << " tests failed." << endl;

	return static_cast<int>( ! ok );
}


Input File: speed/example/example.cpp
8.2.3: Lu Factor and Solve with Recorded Pivoting

8.2.3.a: Syntax
int LuVecAD(
     size_t 
n,
     size_t 
m,
     VecAD<
double> &Matrix,
     VecAD<
double> &Rhs,
     VecAD<
double> &Result,
     AD<
double> &logdet)

8.2.3.b: Purpose
Solves the linear equation  \[
     Matrix * Result = Rhs
\] 
where Matrix is an  n \times n matrix, Rhs is an  n \times m matrix, and Result is an  n \times m matrix.

The routine 6.12.1: LuSolve uses an arbitrary vector type, instead of 4.6: VecAD , to hold its elements. The pivoting operations for an ADFun object corresponding to an LuVecAD solution will change to be optimal for the matrix being factored.

It is often the case that LuSolve is faster than LuVecAD when LuSolve uses a simple vector class with 6.7.b: elements of type double , but the corresponding 5: ADFun objects have a fixed set of pivoting operations.

8.2.3.c: Storage Convention
The matrices are stored in row major order. To be specific, if  A contains the vector storage for an  n \times m matrix,  i is between zero and  n-1 , and  j is between zero and  m-1 ,  \[

	A_{i,j} = A[ i * m + j ]
\] 
(The length of  A must be equal to  n * m .)

8.2.3.d: n
is the number of rows in Matrix, Rhs, and Result.

8.2.3.e: m
is the number of columns in Rhs and Result. It is ok for m to be zero, which is reasonable when you are only interested in the determinant of Matrix.

8.2.3.f: Matrix
On input, this is an  n \times n matrix containing the variable coefficients for the equation we wish to solve. On output, the elements of Matrix have been overwritten and are not specified.

8.2.3.g: Rhs
On input, this is an  n \times m matrix containing the right hand side for the equation we wish to solve. On output, the elements of Rhs have been overwritten and are not specified. If m is zero, Rhs is not used.

8.2.3.h: Result
On input, this is an  n \times m matrix and the value of its elements does not matter. On output, the elements of Result contain the solution of the equation we wish to solve (unless the value returned by LuVecAD is equal to zero). If m is zero, Result is not used.

8.2.3.i: logdet
On input, the value of logdet does not matter. On output, it has been set to the log of the absolute value of the determinant of Matrix. To be more specific, if signdet is the value returned by LuVecAD, the determinant of Matrix is given by the formula  \[
     det = signdet \exp( logdet )
\] 
This enables LuVecAD to use logs of absolute values.

8.2.3.j: Example
The file 8.2.3.1: LuVecADOk.cpp contains an example and test of LuVecAD. It returns true if it succeeds and false otherwise.
Input File: example/lu_vec_ad.cpp
8.2.3.1: Lu Factor and Solve With Recorded Pivoting: Example and Test
 

# include <cppad/cppad.hpp>
# include "lu_vec_ad.hpp"
# include <cppad/speed/det_by_minor.hpp>

bool LuVecADOk(void)
{	bool  ok = true;

	using namespace CppAD;
	typedef AD<double> ADdouble;
	typedef CPPAD_TEST_VECTOR<ADdouble> ADVector;

	size_t              n = 3;
	size_t              m = 2;
	double a1[] = {
		3., 0., 0., // (1,1) is first  pivot
		1., 2., 1., // (2,2) is second pivot
		1., 0., .5  // (3,3) is third  pivot
	};
	double a2[] = {
		1., 2., 1., // (1,2) is second pivot
		3., 0., 0., // (2,1) is first  pivot
		1., 0., .5  // (3,3) is third  pivot
	};
	double rhs[] = {
		1., 3.,
		2., 2.,
		3., 1.
	};

	VecAD<double>       Copy    (n * n);
	VecAD<double>       Rhs     (n * m);
	VecAD<double>       Result  (n * m);
	ADdouble            logdet;
	ADdouble            signdet;

	// routine for checking determinants using expansion by minors
	det_by_minor<ADdouble> Det(n);

	// matrix we are computing the determinant of
	CPPAD_TEST_VECTOR<ADdouble> A(n * n);

	// dependent variable values
	CPPAD_TEST_VECTOR<ADdouble> Y(1 + n * m);

	size_t  i;
	size_t  j;
	size_t  k;

	// Original matrix
	for(i = 0; i < n * n; i++)
		A[i] = a1[i];

	// right hand side
	for(j = 0; j < n; j++)
		for(k = 0; k < m; k++)
			Rhs[ j * m + k ] = rhs[ j * m + k ];
		
	// Declare independent variables
	Independent(A);

	// Copy the matrix
	ADdouble index(0);
	for(i = 0; i < n*n; i++)
	{	Copy[index] = A[i];
		index += 1.;
	}

	// Solve the equation
	signdet = LuVecAD(n, m, Copy, Rhs, Result, logdet);

	// Result is the first n * m dependent variables
	index = 0.;
	for(i = 0; i < n * m; i++)
	{	Y[i] = Result[index];
		index += 1.;
	}

	// Determinant is last component of the solution
	Y[ n * m ] = signdet * exp( logdet );

	// construct f: A -> Y
	ADFun<double> f(A, Y);

	// check determinant using minors routine
	ADdouble determinant = Det( A );
	ok &= NearEqual(Y[n * m], determinant, 1e-10, 1e-10);


	// Check solution of Rhs = A * Result
	double sum;
	for(k = 0; k < m; k++)
	{	for(i = 0; i < n; i++)
		{	sum = 0.;
			for(j = 0; j < n; j++)
				sum += a1[i * n + j] * Value( Y[j * m + k] );
			ok &= NearEqual( rhs[i * m + k], sum, 1e-10, 1e-10 );
		}
	}
 
 	CPPAD_TEST_VECTOR<double> y2(1 + n * m);
 	CPPAD_TEST_VECTOR<double> A2(n * n);
 	for(i = 0; i < n * n; i++)
 		A[i] = A2[i] = a2[i];

 
 	y2          = f.Forward(0, A2);
 	determinant = Det(A);
 	ok &= NearEqual(y2[ n * m], Value(determinant), 1e-10, 1e-10);

	// Check solution of Rhs = A2 * Result
	for(k = 0; k < m; k++)
	{	for(i = 0; i < n; i++)
		{	sum = 0.;
			for(j = 0; j < n; j++)
				sum += a2[i * n + j] * y2[j * m + k];
			ok &= NearEqual( rhs[i * m + k], sum, 1e-10, 1e-10 );
		}
	}

	return ok;
}


Input File: example/lu_vec_ad_ok.cpp
8.3: List of All the CppAD Examples
5.4.1: abort_recording.cpp Abort Current Recording: Example and Test
4.4.3.1.1: Abs.cpp AD Absolute Value Function: Example and Test
4.4.2.1: Acos.cpp The AD acos Function: Example and Test
4.4.1.3.1: Add.cpp AD Binary Addition: Example and Test
4.4.1.4.1: AddEq.cpp AD Computed Assignment Addition: Example and Test
4.4.2.2: Asin.cpp The AD asin Function: Example and Test
4.4.2.3: Atan.cpp The AD atan Function: Example and Test
4.4.3.2.1: Atan2.cpp The AD atan2 Function: Example and Test
4.7.2: base_adolc.hpp Enable use of AD<Base> where Base is Adolc's adouble Type
4.7.1: base_complex.hpp Enable use of AD<Base> where Base is std::complex<double>
6.20.1: BenderQuad.cpp BenderQuad: Example and Test
4.5.3.1: BoolFun.cpp AD Boolean Functions: Example and Test
6.6.1: CheckNumericType.cpp The CheckNumericType Function: Example and Test
6.8.1: CheckSimpleVector.cpp The CheckSimpleVector Function: Example and Test
4.5.1.1: Compare.cpp AD Binary Comparison Operators: Example and Test
5.6.1.5.1: CompareChange.cpp CompareChange and Re-Tape: Example and Test
4.7.1.1: ComplexPoly.cpp Complex Polynomial: Example and Test
4.4.4.1: CondExp.cpp Conditional Expressions: Example and Test
4.2.1: CopyAD.cpp AD Copy Constructor: Example and Test
4.2.2: CopyBase.cpp AD Constructor From Base Type: Example and Test
4.4.2.4: Cos.cpp The AD cos Function: Example and Test
4.4.2.5: Cosh.cpp The AD cosh Function: Example and Test
6.23.1: CppAD_vector.cpp CppAD::vector Template Class: Example and Test
4.1.1: Default.cpp Default AD Constructor: Example and Test
9.2.2.4.1: det_by_lu.cpp Determinant Using Lu Factorization: Example and Test
9.2.2.3.1: det_by_minor.cpp Determinant Using Expansion by Minors: Example and Test
9.2.2.2.1: det_of_minor.cpp Determinant of a Minor: Example and Test
4.4.1.3.4: Div.cpp AD Binary Division: Example and Test
4.4.1.4.4: DivEq.cpp AD Computed Assignment Division: Example and Test
4.2.3: Eq.cpp AD Assignment Operator: Example and Test
4.5.5.1: EqualOpSeq.cpp EqualOpSeq: Example and Test
4.4.3.3.1: Erf.cpp The AD erf Function: Example and Test
6.1.1: ErrorHandler.cpp Replacing The CppAD Error Handler: Example and Test
8.2.1: Example.cpp Program That Runs the CppAD Examples
4.4.2.6: Exp.cpp The AD exp Function: Example and Test
5.7.2.1: ForOne.cpp First Order Partial Driver: Example and Test
5.7.5.1: ForTwo.cpp Subset of Second Order Partials: Example and Test
5.6.1.7: Forward.cpp Forward Mode: Example and Test
5.6.3.1.1: ForSparseJac.cpp Forward Mode Jacobian Sparsity: Example and Test
5.8.1: FunCheck.cpp ADFun Check and Re-Tape: Example and Test
3.1: get_started.cpp A Simple Program Using CppAD to Compute Derivatives
8.1.1.2: ipopt_cppad_simple.cpp Nonlinear Programming Using CppAD and Ipopt: Example and Test
8.1.1: ipopt_cppad_nlp Nonlinear Programming Using the CppAD Interface to Ipopt
8.1.1.3: ipopt_cppad_ode Example Simultaneous Solution of Forward and Inverse Problem
5.7.4.2: HesLagrangian.cpp Hessian of Lagrangian and ADFun Default Constructor: Example and Test
8.1.6: HesLuDet.cpp Gradient of Determinant Using LU Factorization: Example and Test
8.1.5: HesMinorDet.cpp Gradient of Determinant Using Expansion by Minors: Example and Test
5.6.2.2.2: HesTimesDir.cpp Hessian Times Direction: Example and Test
5.7.4.1: Hessian.cpp Hessian: Example and Test
5.1.1: Independent.cpp Independent and ADFun Constructor: Example and Test
4.3.2.1: Integer.cpp Convert From AD to Integer: Example and Test
8.1.2: Interface2C.cpp Interfacing to C: Example and Test
4.4.5.2: interp_onetape.cpp Interpolation With Out Retaping: Example and Test
4.4.5.3: interp_retape.cpp Interpolation With Retaping: Example and Test
8.1.4: JacLuDet.cpp Gradient of Determinant Using Lu Factorization: Example and Test
8.1.3: JacMinorDet.cpp Gradient of Determinant Using Expansion by Minors: Example and Test
5.7.1.1: Jacobian.cpp Jacobian: Example and Test
4.4.2.7: Log.cpp The AD log Function: Example and Test
4.4.2.8: Log10.cpp The AD log10 Function: Example and Test
6.12.2.1: LuFactor.cpp LuFactor: Example and Test
6.12.3.1: LuInvert.cpp LuInvert: Example and Test
6.21.1: LuRatio.cpp LuRatio: Example and Test
6.12.1.1: LuSolve.cpp LuSolve With Complex Arguments: Example and Test
8.2.3.1: LuVecADOk.cpp Lu Factor and Solve With Recorded Pivoting: Example and Test
4.4.1.3.3: Mul.cpp AD Binary Multiplication: Example and Test
4.4.1.4.3: MulEq.cpp AD Computed Assignment Multiplication: Example and Test
8.1.11.1: mul_level.cpp Multiple Tapes: Example and Test
4.7.2.1: mul_level_adolc.cpp Using Adolc with Multiple Levels of Taping: Example and Test
5.9.1.2: multi_newton.cpp Multi-Threaded Newton's Method Main Program
6.9.1: nan.cpp nan: Example and Test
4.5.2.1: NearEqualExt.cpp Compare AD with Base Objects: Example and Test
6.2.1: Near_Equal.cpp NearEqual Function: Example and Test
4.7.1.2: not_complex_ad.cpp Not Complex Differentiable: Example and Test
6.5.1: NumericType.cpp The NumericType: Example and Test
6.17.1: OdeErrControl.cpp OdeErrControl: Example and Test
6.17.2: OdeErrMaxabs.cpp OdeErrControl: Example and Test Using Maxabs Argument
6.18.1: OdeGear.cpp OdeGear: Example and Test
6.19.1: OdeGearControl.cpp OdeGearControl: Example and Test
8.1.7: OdeStiff.cpp A Stiff Ode: Example and Test
9.2.2.7.1: ode_evaluate.cpp ode_evaluate: Example and test
8.1.8: ode_taylor.cpp Taylor's Ode Solver: An Example and Test
8.1.9: ode_taylor_adolc.cpp Using Adolc with Taylor's Ode Solver: An Example and Test
4.3.3.1: Output.cpp AD Output Operator: Example and Test
4.5.4.1: ParVar.cpp AD Parameter and Variable Functions: Example and Test
6.11.1: Poly.cpp Polynomial Evaluation: Example and Test
4.4.3.4.1: Pow.cpp The AD Power Function: Example and Test
4.4.3.4.2: pow_int.cpp The Pow Integer Exponent: Example and Test
4.3.4.1: PrintFor.cpp Printing During Forward Mode: Example and Test
5.7.3.1: RevOne.cpp First Order Derivative Driver: Example and Test
5.6.3.3.1: RevSparseHes.cpp Reverse Mode Hessian Sparsity: Example and Test
5.6.3.2.1: RevSparseJac.cpp Reverse Mode Jacobian Sparsity: Example and Test
5.7.6.1: RevTwo.cpp Second Partials Reverse Driver: Example and Test
5.6.2.3.1: reverse_any.cpp Any Order Reverse Mode: Example and Test
5.6.2.1.1: reverse_one.cpp First Order Reverse Mode: Example and Test
5.6.2.2.1: reverse_two.cpp Second Order Reverse Mode: Example and Test
6.14.1: RombergMul.cpp One Dimensional Romberg Integration: Example and Test
6.13.1: RombergOne.cpp One Dimensional Romberg Integration: Example and Test
6.16.1: Rosen34.cpp Rosen34: Example and Test
6.15.1: Runge45.cpp Runge45: Example and Test
5.5.1: SeqProperty.cpp ADFun Sequence Properties: Example and Test
6.7.1: SimpleVector.cpp Simple Vector Template Class: Example and Test
4.4.2.9: Sin.cpp The AD sin Function: Example and Test
4.4.2.10: Sinh.cpp The AD sinh Function: Example and Test
9.2.2.8.1: sparse_evaluate.cpp sparse_evaluate: Example and test
5.7.8.1: sparse_hessian.cpp Sparse Hessian: Example and Test
5.7.7.1: sparse_jacobian.cpp Sparse Jacobian: Example and Test
6.4.1: speed_program.cpp Example Use of SpeedTest
6.3.1: speed_test.cpp speed_test: Example and test
4.4.2.11: Sqrt.cpp The AD sqrt Function: Example and Test
8.1.10: StackMachine.cpp Example Differentiating a Stack Machine Interpreter
4.4.1.3.2: Sub.cpp AD Binary Subtraction: Example and Test
4.4.1.4.2: SubEq.cpp AD Computed Assignment Subtraction: Example and Test
4.4.2.12: Tan.cpp The AD tan Function: Example and Test
4.4.2.13: Tanh.cpp The AD tanh Function: Example and Test
4.4.5.1: TapeIndex.cpp Taping Array Index Operation: Example and Test
6.24.1: TrackNewDel.cpp Tracking Use of New and Delete: Example and Test
4.4.1.2.1: UnaryMinus.cpp AD Unary Minus Operator: Example and Test
4.4.1.1.1: UnaryPlus.cpp AD Unary Plus Operator: Example and Test
4.3.1.1: Value.cpp Convert From AD to its Base Type: Example and Test
4.3.5.1: Var2Par.cpp Convert an AD Variable to a Parameter: Example and Test
4.6.1: VecAD.cpp AD Vectors that Record Index Operations: Example and Test
6.23.2: vectorBool.cpp CppAD::vectorBool Class: Example and Test

Input File: omh/example_list.omh
8.4: Choosing The Vector Testing Template Class

8.4.a: Syntax
CPPAD_TEST_VECTOR<Scalar>

8.4.b: Introduction
Many of the CppAD 8: examples and tests use the CPPAD_TEST_VECTOR template class to pass information. The default definition for this template class is 6.23: CppAD::vector .

8.4.c: MS Windows
The include path for boost is not defined in the Windows project files. If we are using Microsoft's compiler, the following code overrides the setting of CPPAD_BOOSTVECTOR:
 
// The next 8 lines are C++ source code.
# ifdef _MSC_VER
# if CPPAD_BOOSTVECTOR
# undef  CPPAD_BOOSTVECTOR
# define CPPAD_BOOSTVECTOR 0
# undef  CPPAD_CPPADVECTOR
# define CPPAD_CPPADVECTOR 1
# endif
# endif


8.4.d: CppAD::vector
By default CPPAD_CPPADVECTOR is true and CPPAD_TEST_VECTOR is defined by the following source code
 
// The next 3 lines are C++ source code.
# if CPPAD_CPPADVECTOR
# define CPPAD_TEST_VECTOR CppAD::vector
# endif
You can replace this definition of the preprocessor symbol CPPAD_TEST_VECTOR by any other 6.7: SimpleVector template class. This will test using your replacement template vector class with CppAD.

8.4.e: std::vector
If you specify --with-stdvector on the 2.1.d: configure command line during CppAD installation, CPPAD_STDVECTOR is true and CPPAD_TEST_VECTOR is defined by the following source code
 
// The next 4 lines are C++ source code.
# if CPPAD_STDVECTOR
# include <vector>
# define CPPAD_TEST_VECTOR std::vector
# endif
In this case CppAD will use std::vector for its examples and tests. Use of CppAD::vector, std::vector, and std::valarray with CppAD is always tested to some degree. Specifying --with-stdvector will increase the amount of std::vector testing.

8.4.f: boost::numeric::ublas::vector
If you specify a value for BoostDir on the configure command line during CppAD installation, CPPAD_BOOSTVECTOR is true and CPPAD_TEST_VECTOR is defined by the following source code
 
// The next 4 lines are C++ source code.
# if CPPAD_BOOSTVECTOR
# include <boost/numeric/ublas/vector.hpp>
# define CPPAD_TEST_VECTOR boost::numeric::ublas::vector
# endif
In this case CppAD will use Ublas vectors for its examples and tests. Use of CppAD::vector, std::vector, and std::valarray with CppAD is always tested to some degree. Specifying BoostDir will increase the amount of Ublas vector testing.

8.4.g: Deprecated
The preprocessor symbol CppADvector is defined to have the same value as CPPAD_TEST_VECTOR but its use is deprecated:
 
# define CppADvector CPPAD_TEST_VECTOR

Input File: cppad/local/test_vector.hpp
9: Appendix

9.a: Contents
Faq: 9.1 Frequently Asked Questions and Answers
speed: 9.2 Speed Test Routines
Theory: 9.3 The Theory of Derivative Calculations
glossary: 9.4 Glossary
Bib: 9.5 Bibliography
Bugs: 9.6 Known Bugs and Problems Using CppAD
WishList: 9.7 The CppAD Wish List
whats_new: 9.8 Changes and Additions to CppAD
include_deprecated: 9.9 Deprecated Include Files
License: 9.10 Your License for the CppAD Software

Input File: omh/appendix.omh
9.1: Frequently Asked Questions and Answers

9.1.a: Assignment and Independent
Why does the code sequence
     Independent(u);
     v = u[0];
behave differently from the code sequence
     v = u[0];
     Independent(u);
Before the call to 5.1: Independent , u[0] is a 9.4.h: parameter and after the call it is a variable. Thus in the first case, v is a variable and in the second case it is a parameter.

9.1.b: Bugs
What should I do if I suspect that there is a bug in CppAD ?

  1. The first step is to search this page for mention of some feature that perhaps you are interpreting as a bug (but is not). If this does not solve your problem, continue to the next step.
  2. The second step is to check the 9.8: whats_new messages from the date of the release that you are using to the current date. If the bug has been mentioned and fixed, then 2: install the current version of CppAD. If this does not solve your problem, continue to the next step.
  3. Send an e-mail message to the mailing list cppad@list.coin-or.org (http://list.coin-or.org/mailman/listinfo/cppad) with a description of the bug. Attaching a small source code sample program that demonstrates the bug is always helpful. The smaller the program, the better the bug report.


9.1.c: CompareChange
If you attempt to use the 5.6.1.5: CompareChange function when NDEBUG is true, you will get an error message stating that CompareChange is not a member of the 5: ADFun template class.

9.1.d: Complex Types
Which of the following complex types is better:
     AD< std::complex<Base> >
     std::complex< AD<Base> >
Some functions are real differentiable but not complex differentiable (for example, the 4.4.3.1.f: complex abs function ). If you have to differentiate such functions, you should use
     std::complex< AD<Base> >
If you are sure that you will not need to take any real partials of a complex valued function, it is more efficient to use
     AD< std::complex<Base> >

9.1.e: Exceptions
Why, in all the examples, do you pass back a boolean variable instead of throwing an exception ?

The examples are also used to test the correctness of CppAD and to check your installation. For these two uses, it is helpful to run all the tests and to know which ones failed. The actual code in CppAD uses the 6.1: ErrorHandler utility to signal exceptions. Specifications for redefining this action are provided.

9.1.f: Independent Variables
Is it possible to evaluate the same tape recording with different values for the independent variables ?

Yes (see 5.6.1.1: ForwardZero ).

9.1.g: Math Functions
Are there plans to add more math functions to CppAD ?

Yes. The 4.4.2: std_math_ad and 4.4.3: MathOther sections contain a list of the math functions included so far. Contact the mailing list cppad@list.coin-or.org (http://list.coin-or.org/mailman/listinfo/cppad) if you need a math function that has not yet been included.

9.1.h: Matrix Inverse
Is it possible to differentiate (with respect to the matrix elements) the computation of the inverse of a matrix where the computation of the inverse uses pivoting ?

The example routine 6.12.1: LuSolve can be used to do this because the inverse is a special case of the solution of linear equations. The examples 8.1.4: JacLuDet.cpp and 8.1.6: HesLuDet.cpp use LuSolve to compute derivatives of the determinant with respect to the components of the matrix.

9.1.i: Mode: Forward or Reverse
When evaluating derivatives, one always has a choice between forward and reverse mode. How does one decide which mode to use ?

In general, the best mode depends on the number of domain and range components in the function that you are differentiating. Each call to 5.6.1: Forward computes the derivative of all the range directions with respect to one domain direction. Each call to 5.6.2: Reverse computes the derivative of one range direction with respect to all the domain directions. The times required for (speed of) calls to Forward and Reverse are about equal. The 5.5.f: Parameter function can be used to quickly determine that some range directions have derivative zero.

9.1.j: Namespace

9.1.j.a: Test Vector Preprocessor Symbol
Why do you use CPPAD_TEST_VECTOR instead of a namespace for the CppAD 8.4: test_vector class ?

The preprocessor symbol 8.4: CPPAD_TEST_VECTOR determines which 6.7: SimpleVector template class is used for extensive testing. The default definition for CPPAD_TEST_VECTOR is the 6.23: CppAD::vector template class, but it can be changed. Note that all the preprocessor symbols that are defined or used by CppAD begin with CPPAD (some old deprecated symbols begin with CppAD).

9.1.j.b: Using
Why do I have trouble when the following command
 
	using namespace CppAD
is at the global level (not within a function or some other limited scope) ?

Some versions of # include <cmath> for gcc and Visual C++ define the standard math functions (for example, double sqrt(double x)) at the global level. It is necessary to put your using commands within the scope of a function, or some other limited scope, in order to shadow these improper global definitions.

9.1.k: Speed
How do I get the best speed performance out of CppAD ?

You should compile your code with optimization, without debugging, and with the preprocessor symbol NDEBUG defined. (The 9.2.5: speed_cppad tests do this.) Note that defining NDEBUG will turn off all of the error checking and reporting that is done using 6.1: ErrorHandler .

9.1.l: Tape Storage: Disk or Memory
Does CppAD store the tape on disk or in memory ?

CppAD uses memory to store a different tape for recording operations for each AD<Base> type that is used. If you have a very large number of calculations that are recorded on a tape, the tape will keep growing to hold the necessary information. Eventually, virtual memory may be used to store the tape and the calculations may slow down because of the necessary disk access.
Input File: omh/faq.omh
9.2: Speed Test Routines

9.2.a: Purpose
CppAD has a set of speed tests that are used to determine if certain changes improve its execution speed and to compare the C++ AD packages Adolc (http://www.math.tu-dresden.de/~adol-c/) , CppAD (http://www.coin-or.org/CppAD/) , Fadbad (http://www.imm.dtu.dk/fadbad.html/) and Sacado (http://trilinos.sandia.gov/packages/sacado/) . This section explains how you can run these tests on your computer.

9.2.b: Windows
The speed test routines have not yet been compiled or tested using the MS Windows C++ compiler. Under Windows, you can use Cygwin (http://www.cygwin.com) or MinGW with MSYS (http://www.mingw.org) to run these speed tests.

9.2.c: Contents
9.2.1: Speed Testing Main Program
9.2.2: Speed Testing Utilities
9.2.3: Speed Test Functions in Double
9.2.4: Speed Test Derivatives Using Adolc
9.2.5: Speed Test Derivatives Using CppAD
9.2.6: Speed Test Derivatives Using Fadbad
9.2.7: Speed Test Derivatives Using Sacado

Input File: omh/speed.omh
9.2.1: Speed Testing Main Program

9.2.1.a: Syntax
speed/package/package option seed retape

9.2.1.b: Purpose
A version of this program runs the correctness tests or the speed tests for one AD package identified by package .

9.2.1.c: package

9.2.1.c.a: AD Package
The command line argument package specifies one of the following AD packages: 9.2.4: adolc , 9.2.5: cppad , 9.2.6: fadbad , 9.2.7: sacado .

9.2.1.c.b: double
The value package can be double in which case the function values (instead of derivatives) are computed using double precision operations. This enables one to compare the speed of computing function values in double to the speed of the derivative computations. (It is often useful to divide the speed of the derivative computation by the speed of the function evaluation in double.)

9.2.1.c.c: profile
In the special case where package is profile, the CppAD package is compiled and run with profiling to aid in determining where it is spending most of its time.

9.2.1.d: option
The command line argument option specifies which test to run and has the following possible values: 9.2.1.d.a: correct , 9.2.1.d.b: speed , 9.2.1.2: det_minor , 9.2.1.1: det_lu , 9.2.1.5: ode , 9.2.1.3: poly , 9.2.1.4: sparse_hessian .

9.2.1.d.a: correct
If option is equal to correct, all of the correctness tests are run.

9.2.1.d.b: speed
If option is equal to speed, all of the speed tests are run.

9.2.1.e: seed
The command line argument seed is a positive integer. The random number simulator 9.2.2.1: uniform_01 is initialized with the call
     uniform_01(seed)
before any of the testing routines (listed above) are called.

9.2.1.f: retape
The command line argument retape is either true or false. If it is true, the AD operation sequence is retaped for every test repetition of each speed test. If it is false, and the particular test has a fixed operation sequence, the AD package is allowed to use one taping of the operation sequence for all the repetitions of that speed test.

9.2.1.g: Correctness Results
An output line is generated for each correctness test stating whether the test passed or failed.

9.2.1.h: Speed Results
Each speed test corresponds to three lines of output. The name of the package and test is printed on the first line, the vector of problem sizes is printed on the next line, and the rates corresponding to the different problem sizes are printed on the third line. The rate is the number of times per second that the calculation was repeated.

9.2.1.i: Contents
link_det_lu: 9.2.1.1 Speed Testing Gradient of Determinant Using Lu Factorization
link_det_minor: 9.2.1.2 Speed Testing Gradient of Determinant by Minor Expansion
link_poly: 9.2.1.3 Speed Testing Second Derivative of a Polynomial
link_sparse_hessian: 9.2.1.4 Speed Testing Sparse Hessian
link_ode: 9.2.1.5 Speed Testing Gradient of Ode Solution

Input File: speed/main.cpp
9.2.1.1: Speed Testing Gradient of Determinant Using Lu Factorization

9.2.1.1.a: Prototype
extern bool link_det_lu(
	size_t                  size      ,
	size_t                  repeat    ,
	CppAD::vector<double>  &matrix    ,
	CppAD::vector<double>  &gradient
);

9.2.1.1.b: Purpose
Each 9.2.1.c: package must define a version of this routine as specified below. This is used by the 9.2.1: speed_main program to run the corresponding speed and correctness tests.

9.2.1.1.c: Return Value
If this speed test is not yet supported by a particular package , the corresponding return value for link_det_lu should be false.

9.2.1.1.d: size
The argument size is the number of rows and columns in the matrix.

9.2.1.1.e: repeat
The argument repeat is the number of different matrices that the gradient (or determinant) is computed for.

9.2.1.1.f: retape
For this test, the operation sequence changes for each repetition. Thus the argument 9.2.1.f: retape is not present because an AD package can not use one recording of the operation sequence to compute the gradient for all of the repetitions.

9.2.1.1.g: matrix
The argument matrix is a vector with size*size elements. The input value of its elements does not matter. The output value of its elements is the last matrix that the gradient (or determinant) is computed for.

9.2.1.1.h: gradient
The argument gradient is a vector with size*size elements. The input value of its elements does not matter. The output value of its elements is the gradient of the determinant of matrix with respect to its elements.

9.2.1.1.h.a: double
In the case where package is double, only the first element of gradient is used and it is actually the determinant value (the gradient value is not computed).
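As background (not part of the link_det_lu specification above), the gradient being checked follows from Jacobi's formula: when the matrix A is nonsingular,

```latex
\frac{\partial \det (A)}{\partial A_{ij}} = \det (A) \, \left( A^{-1} \right)_{ji}
```

so an LU factorization of A yields the determinant directly from the diagonal of U, and the gradient via the inverse.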
Input File: speed/link_det_lu.cpp
9.2.1.2: Speed Testing Gradient of Determinant by Minor Expansion

9.2.1.2.a: Prototype
extern bool link_det_minor(
	size_t                  size      ,
	size_t                  repeat    ,
	bool                    retape    ,
	CppAD::vector<double>  &matrix    ,
	CppAD::vector<double>  &gradient
);

9.2.1.2.b: Purpose
Each 9.2.1.c: package must define a version of this routine as specified below. This is used by the 9.2.1: speed_main program to run the corresponding speed and correctness tests.

9.2.1.2.c: Return Value
If this speed test is not yet supported by a particular package , the corresponding return value for link_det_minor should be false.

9.2.1.2.d: size
The argument size is the number of rows and columns in the matrix.

9.2.1.2.e: repeat
The argument repeat is the number of different matrices that the gradient (or determinant) is computed for.

9.2.1.2.f: retape

9.2.1.2.f.a: true
If retape is true, the operation sequence is considered to change for each repetition. Thus an AD package can not use one recording of the operation sequence to compute the gradient for all of the repetitions.

9.2.1.2.f.b: false
If retape is false, the operation sequence is known to be the same for each repetition. Thus an AD package may use one recording of the operation sequence to compute the gradient for all of the repetitions.

9.2.1.2.g: matrix
The argument matrix is a vector with size*size elements. The input value of its elements does not matter. The output value of its elements is the last matrix that the gradient (or determinant) is computed for.

9.2.1.2.h: gradient
The argument gradient is a vector with size*size elements. The input value of its elements does not matter. The output value of its elements is the gradient of the determinant of matrix with respect to its elements.

9.2.1.2.h.a: double
In the case where package is double, only the first element of gradient is used and it is actually the determinant value (the gradient value is not computed).
Input File: speed/link_det_minor.cpp
9.2.1.3: Speed Testing Second Derivative of a Polynomial

9.2.1.3.a: Prototype
extern bool link_poly(
	size_t                  size    ,
	size_t                  repeat  ,
	bool                    retape  ,
	CppAD::vector<double>  &a       ,
	CppAD::vector<double>  &z       ,
	CppAD::vector<double>  &ddp
);

9.2.1.3.b: Purpose
Each 9.2.1.c: package must define a version of this routine as specified below. This is used by the 9.2.1: speed_main program to run the corresponding speed and correctness tests.

9.2.1.3.c: Return Value
If this speed test is not yet supported by a particular package , the corresponding return value for link_poly should be false.

9.2.1.3.d: size
The argument size is the order of the polynomial (the number of coefficients in the polynomial).

9.2.1.3.e: repeat
The argument repeat is the number of different argument values that the second derivative (or just the polynomial) will be computed at.

9.2.1.3.f: retape

9.2.1.3.f.a: true
If retape is true, the operation sequence is considered to change for each repetition. Thus an AD package can not use one recording of the operation sequence to compute the gradient for all of the repetitions.

9.2.1.3.f.b: false
If retape is false, the operation sequence is known to be the same for each repetition. Thus an AD package may use one recording of the operation sequence to compute the gradient for all of the repetitions.

9.2.1.3.g: a
The argument a is a vector with size elements. The input value of its elements does not matter. The output value of its elements is the coefficients of the polynomial that is differentiated (the i-th element is the coefficient of order i ).

9.2.1.3.h: z
The argument z is a vector with one element. The input value of the element does not matter. The output value of its element is the polynomial argument value where the last second derivative (or polynomial value) was computed.

9.2.1.3.i: ddp
The argument ddp is a vector with one element. The input value of its element does not matter. The output value of its element is the second derivative of the polynomial with respect to its argument value.

9.2.1.3.i.a: double
In the case where package is double, the output value of the element of ddp is the polynomial value (the second derivative is not computed).
Input File: speed/link_poly.cpp
9.2.1.4: Speed Testing Sparse Hessian

9.2.1.4.a: Prototype
extern bool link_sparse_hessian(
	size_t                  repeat    ,
	CppAD::vector<double>  &x         ,
	CppAD::vector<size_t>  &i         ,
	CppAD::vector<size_t>  &j         ,
	CppAD::vector<double>  &hessian
);

9.2.1.4.b: f
Given a first index vector  i and a second index vector  j , the corresponding function  f : \R^n \rightarrow \R  is defined by 9.2.2.8: sparse_evaluate and the index vectors i and j . The only non-zero entries in the Hessian of this function have the form  \[
     \DD{f}{x[i[k]]}{x[j[k]]}
\] 
for some  k  between zero and  \ell-1  .

9.2.1.4.c: repeat
The argument repeat is the number of different functions  f(x) that the Hessian is computed for. Each function corresponds to randomly chosen index vectors, i.e., for each repetition a random choice is made for  i[k] and  j[k] for  k = 0 , \ldots , \ell-1 .

9.2.1.4.d: retape
For this test, the operation sequence changes for each repetition. Thus the argument 9.2.1.f: retape is not present because an AD package can not use one recording of the operation sequence to compute the gradient for all of the repetitions.

9.2.1.4.e: x
The argument x has prototype
     CppAD::vector<double> &x
The size of the vector x determines and is equal to the value of  n . The input value of the elements of x does not matter. On output, it has been set to the argument value for which the function, or its derivative, is being evaluated. The value of this vector need not change with each repetition.

9.2.1.4.f: i
The size of the vector i determines and is equal to the value of  \ell . The input value of the elements of i does not matter. On output, it has been set to the first index vector for the last repetition. All the elements of i must be between zero and  n-1 .

9.2.1.4.g: j
The argument j is a vector with size  \ell . The input value of its elements does not matter. On output, it has been set to the second index vector for the last repetition. All the elements of j must be between zero and  n-1 .

9.2.1.4.h: hessian
The argument hessian is a vector with  n \times n elements. The input value of its elements does not matter. The output value of its elements is the Hessian of the function  f(x) that corresponds to output values of i , j , and x . To be more specific, for  k = 0 , \ldots , n-1 ,  m = 0 , \ldots , n-1 ,  \[
     \DD{f}{x[k]}{x[m]} (x) = hessian [ k * n + m ]
\] 


9.2.1.4.h.a: double
In the case where package is double, only the first element of hessian is used and it is actually the value of  f(x) (  f^{(2)} (x) is not computed).
Input File: speed/link_sparse_hessian.cpp
9.2.1.5: Speed Testing Gradient of Ode Solution

9.2.1.5.a: Prototype
extern bool link_ode(
	size_t                  size      ,
	size_t                  repeat    ,
	bool                    retape    ,
	CppAD::vector<double>  &x         ,
	CppAD::vector<double>  &gradient
);

9.2.1.5.b: Purpose
Each 9.2.1.c: package must define a version of this routine as specified below. This is used by the 9.2.1: speed_main program to run the corresponding speed and correctness tests.

9.2.1.5.c: f
The function  f : \R^n \rightarrow \R  that is differentiated is defined and computed by evaluating 9.2.2.7: ode_evaluate .

9.2.1.5.d: Return Value
If this speed test is not yet supported by a particular package , the corresponding return value for link_ode should be false.

9.2.1.5.e: size
The argument size is the number of variables in the ordinary differential equation; i.e.,  n = size .

9.2.1.5.f: repeat
The argument repeat is the number of different functions  f(x) that the gradient is computed for.

9.2.1.5.g: retape

9.2.1.5.g.a: true
If retape is true, the operation sequence is considered to change for each repetition. Thus an AD package can not use one recording of the operation sequence to compute the gradient for all of the repetitions.

9.2.1.5.g.b: false
If retape is false, the operation sequence is known to be the same for each repetition. Thus an AD package may use one recording of the operation sequence to compute the gradient for all of the repetitions.

9.2.1.5.h: x
The argument x is a vector with  n elements. The input value of the elements of x does not matter. On output, it has been set to the argument value for which the function, or its derivative, is being evaluated. The value of this vector must change with each repetition.

9.2.1.5.i: gradient
The argument gradient is a vector with  n elements. The input value of its elements does not matter. The output value of its elements is the gradient of the function  f(x) that corresponds to the output value of x . To be more specific, for  j = 0 , \ldots , n-1 ,  \[
     \D{f}{x[j]} (x) = gradient [ j ]
\] 


9.2.1.5.i.a: double
In the case where package is double, only the first element of gradient is modified and it is set to the function value.
Input File: speed/link_ode.cpp
9.2.2: Speed Testing Utilities

9.2.2.a: Speed Main Program
9.2.1: speed_main Speed Testing Main Program

9.2.2.b: Speed Utility Routines
9.2.2.4: det_by_lu Determinant Using Expansion by Lu Factorization
9.2.2.3: det_by_minor Determinant Using Expansion by Minors
9.2.2.2: det_of_minor Determinant of a Minor
9.2.2.5: det_33 Check Determinant of 3 by 3 matrix
9.2.2.6: det_grad_33 Check Gradient of Determinant of 3 by 3 matrix
9.2.2.8: sparse_evaluate Evaluate a Function That Has a Sparse Hessian
9.2.2.1: uniform_01 Simulate a [0,1] Uniform Random Variate

9.2.2.c: Library Routines
6.12.2: LuFactor LU Factorization of A Square Matrix
6.12.3: LuInvert Invert an LU Factored Equation
6.12.1: LuSolve Compute Determinant and Solve Linear Equations
6.11: Poly Evaluate a Polynomial or its Derivative

9.2.2.d: Source Code
9.2.2.4.2: det_by_lu.hpp Source: det_by_lu
9.2.2.3.2: det_by_minor.hpp Source: det_by_minor
9.2.2.6.1: det_grad_33.hpp Source: det_grad_33
9.2.2.2.2: det_of_minor.hpp Source: det_of_minor
6.12.2.2: lu_factor.hpp Source: LuFactor
6.12.3.2: lu_invert.hpp Source: LuInvert
6.12.1.2: lu_solve.hpp Source: LuSolve
6.11.2: poly.hpp Source: Poly
9.2.2.8.2: sparse_evaluate.hpp Source: sparse_evaluate
9.2.2.1.1: uniform_01.hpp Source: uniform_01

Input File: omh/speed_utility.omh
9.2.2.1: Simulate a [0,1] Uniform Random Variate

9.2.2.1.a: Syntax
# include <cppad/speed/uniform_01.hpp>
uniform_01(seed)
uniform_01(n, x)

9.2.2.1.b: Purpose
This routine is used to create random values for speed testing purposes.

9.2.2.1.c: Inclusion
The template function uniform_01 is defined in the CppAD namespace by including the file cppad/speed/uniform_01.hpp (relative to the CppAD distribution directory). It is only intended for example and testing purposes, so it is not automatically included by : cppad.hpp .

9.2.2.1.d: seed
The argument seed has prototype
     size_t 
seed
It specifies a seed for the uniform random number generator.

9.2.2.1.e: n
The argument n has prototype
     size_t 
n
It specifies the number of elements in the random vector x.

9.2.2.1.f: x
The argument x has prototype
     
Vector &x
. The input value of the elements of x does not matter. Upon return, the elements of x are set to values randomly sampled over the interval [0,1].

9.2.2.1.g: Vector
If y is a double value, the object x must support the syntax
     
x[i] = y
where i has type size_t with value less than or equal to  n-1 . This is the only requirement of the type Vector.

9.2.2.1.h: Source Code
The file 9.2.2.1.1: uniform_01.hpp contains the source code for this template function.
Input File: cppad/speed/uniform_01.hpp
9.2.2.1.1: Source: uniform_01
# ifndef CPPAD_UNIFORM_01_INCLUDED
# define CPPAD_UNIFORM_01_INCLUDED
 
# include <cstdlib>

namespace CppAD {
	inline void uniform_01(size_t seed)
	{	std::srand( (unsigned int) seed); }

	template <class Vector>
	void uniform_01(size_t n, Vector &x)
	{	static double factor = 1. / double(RAND_MAX);
		while(n--)
			x[n] = std::rand() * factor;
	}
}
# endif

Input File: omh/uniform_01_hpp.omh
9.2.2.2: Determinant of a Minor

9.2.2.2.a: Syntax
# include <cppad/speed/det_of_minor.hpp>
d = det_of_minor(a, m, n, r, c)

9.2.2.2.b: Inclusion
The template function det_of_minor is defined in the CppAD namespace by including the file cppad/speed/det_of_minor.hpp (relative to the CppAD distribution directory). It is only intended for example and testing purposes, so it is not automatically included by : cppad.hpp .

9.2.2.2.c: Purpose
This template function returns the determinant of a minor of the matrix  A using expansion by minors. The elements of the  n \times n minor  M of the matrix  A are defined, for  i = 0 , \ldots , n-1 and  j = 0 , \ldots , n-1 , by  \[
     M_{i,j} = A_{R(i), C(j)}
\]
where the function  R(i) is defined by the 9.2.2.2.h: argument r and the function  C(j) is defined by the 9.2.2.2.i: argument c .

This template function is for example and testing purposes only. Expansion by minors is chosen as an example because it uses a lot of floating point operations yet does not require much source code (on the order of m factorial floating point operations and about 70 lines of source code including comments). This is not an efficient method for computing a determinant; for example, using an LU factorization would be better.

9.2.2.2.d: Determinant of A
If the following conditions hold, the minor is the entire matrix  A and hence det_of_minor will return the determinant of  A :
  1.  n = m .
  2. for  i = 0 , \ldots , m-1 ,  r[i] = i+1 , and  r[m] = 0 .
  3. for  j = 0 , \ldots , m-1 ,  c[j] = j+1 , and  c[m] = 0 .


9.2.2.2.e: a
The argument a has prototype
     const std::vector<
Scalar>& a
and is a vector with size  m * m (see description of 9.2.2.2.k: Scalar below). The elements of the  m \times m matrix  A are defined, for  i = 0 , \ldots , m-1 and  j = 0 , \ldots , m-1 , by  \[
     A_{i,j} = a[ i * m + j]
\] 


9.2.2.2.f: m
The argument m has prototype
     size_t 
m
and is the size of the square matrix  A .

9.2.2.2.g: n
The argument n has prototype
     size_t 
n
and is the size of the square minor  M .

9.2.2.2.h: r
The argument r has prototype
     std::vector<size_t>& 
r
and is a vector with  m + 1 elements. This vector defines the function  R(i) which specifies the rows of the minor  M . To be specific, the function  R(i) for  i = 0, \ldots , n-1 is defined by  \[
\begin{array}{rcl}
     R(0)   & = & r[m]
     \\
     R(i+1) & = & r[ R(i) ]
\end{array}
\] 
All the elements of r must have value less than or equal to m. The elements of vector r are modified during the computation, and restored to their original value before the return from det_of_minor.

9.2.2.2.i: c
The argument c has prototype
     std::vector<size_t>& 
c
and is a vector with  m + 1 elements. This vector defines the function  C(j) which specifies the columns of the minor  M . To be specific, the function  C(j) for  j = 0, \ldots , n-1 is defined by  \[
\begin{array}{rcl}
     C(0)   & = & c[m]
     \\
     C(j+1) & = & c[ C(j) ]
\end{array}
\] 
All the elements of c must have value less than or equal to m. The elements of vector c are modified during the computation, and restored to their original value before the return from det_of_minor.

9.2.2.2.j: d
The result d has prototype
     
Scalar d
and is equal to the determinant of the minor  M .

9.2.2.2.k: Scalar
If x and y are objects of type Scalar and i is an object of type int, the Scalar must support the following operations:
Syntax Description Result Type
Scalar x default constructor for Scalar object.
x = i set value of x to current value of i
x = y set value of x to current value of y
x + y value of x plus y Scalar
x - y value of x minus y Scalar
x * y value of x times value of y Scalar

9.2.2.2.l: Example
The file 9.2.2.2.1: det_of_minor.cpp contains an example and test of det_of_minor.hpp. It returns true if it succeeds and false otherwise.

9.2.2.2.m: Source Code
The file 9.2.2.2.2: det_of_minor.hpp contains the source for this template function.
Input File: cppad/speed/det_of_minor.hpp
9.2.2.2.1: Determinant of a Minor: Example and Test
 
# include <vector>
# include <cstddef>
# include <cppad/speed/det_of_minor.hpp>

bool det_of_minor()
{	bool   ok = true;
	size_t i;

	// dimension of the matrix A
	size_t m = 3;
	// index vectors set so minor is the entire matrix A
	std::vector<size_t> r(m + 1);
	std::vector<size_t> c(m + 1);
	for(i= 0; i < m; i++)
	{	r[i] = i+1;
		c[i] = i+1;
	}	
	r[m] = 0;
	c[m] = 0;
	// values in the matrix A
	double  data[] = {
		1., 2., 3., 
		3., 2., 1., 
		2., 1., 2.
	};
	// construct vector a with the values of the matrix A
	std::vector<double> a(data, data + 9);

	// evaluate the determinant of A
	size_t n   = m; // minor has same dimension as A
	double det = CppAD::det_of_minor(a, m, n, r, c);

	// check the value of the determinant of A
	ok &= (det == (double) (1*(2*2-1*1) - 2*(3*2-1*2) + 3*(3*1-2*2)) );

	// minor where row 0 and column 1 are removed
	r[m] = 1;  // skip row index 0 by starting at row index 1
	c[0] = 2;  // skip column index 1 by pointing from index 0 to index 2
	// evaluate determinant of the minor 
	n   = m - 1; // dimension of the minor
	det = CppAD::det_of_minor(a, m, n, r, c);

	// check the value of the determinant of the minor
	ok &= (det == (double) (3*2-1*2) );

	return ok;
}

Input File: speed/example/det_of_minor.cpp
9.2.2.2.2: Source: det_of_minor
# ifndef CPPAD_DET_OF_MINOR_INCLUDED
# define CPPAD_DET_OF_MINOR_INCLUDED
 
namespace CppAD { // BEGIN CppAD namespace
template <class Scalar> 
Scalar det_of_minor( 
	const std::vector<Scalar>& a  , 
	size_t                     m  , 
	size_t                     n  , 
	std::vector<size_t>&       r  , 
	std::vector<size_t>&       c  )
{	
	const size_t R0 = r[m]; // R(0)
	size_t       Cj = c[m]; // C(j)    (case j = 0)
	size_t       Cj1 = m;   // C(j-1)  (case j = 0)

	// check for 1 by 1 case
	if( n == 1 ) return a[ R0 * m + Cj ];

	// initialize determinant of the minor M
	Scalar detM;
	detM = 0;

	// initialize sign of factor for next sub-minor
	int s = 1;

	// remove row with index 0 in M from all the sub-minors of M
	r[m] = r[R0];

	// for each column of M
	for(size_t j = 0; j < n; j++)
	{	// element with index (0,j) in the minor M
		Scalar M0j = a[ R0 * m + Cj ];

		// remove column with index j in M to form next sub-minor S of M
		c[Cj1] = c[Cj];

		// compute determinant of the current sub-minor S
		Scalar detS = det_of_minor(a, m, n - 1, r, c);

		// restore column Cj to representation of M as a minor of A
		c[Cj1] = Cj;

		// include this sub-minor term in the summation
		if( s > 0 )
			detM = detM + M0j * detS;
		else	detM = detM - M0j * detS;

		// advance to next column of M
		Cj1 = Cj;
		Cj  = c[Cj];
		s   = - s;		
	}

	// restore row zero to the minor representation for M
	r[m] = R0;

	// return the determinant of the minor M
	return detM;
}
} // END CppAD namespace
# endif

Input File: omh/det_of_minor_hpp.omh
9.2.2.3: Determinant Using Expansion by Minors

9.2.2.3.a: Syntax
# include <cppad/speed/det_by_minor.hpp>
det_by_minor<Scalar> det(n)
d = det(a)

9.2.2.3.b: Inclusion
The template class det_by_minor is defined in the CppAD namespace by including the file cppad/speed/det_by_minor.hpp (relative to the CppAD distribution directory). It is only intended for example and testing purposes, so it is not automatically included by : cppad.hpp .

9.2.2.3.c: Constructor
The syntax
     det_by_minor<Scalar> det(n)
constructs the object det which can be used for evaluating the determinant of n by n matrices using expansion by minors.

9.2.2.3.d: Scalar
The type Scalar must satisfy the same conditions as in the function 9.2.2.2.k: det_of_minor .

9.2.2.3.e: n
The argument n has prototype
     size_t 
n

9.2.2.3.f: det
The syntax
     
d = det(a)
returns the determinant of the matrix A using expansion by minors.

9.2.2.3.f.a: a
The argument a has prototype
     const 
Vector &a
It must be a Vector with length  n * n and with elements of type Scalar. The elements of the  n \times n matrix  A are defined, for  i = 0 , \ldots , n-1 and  j = 0 , \ldots , n-1 , by  \[
     A_{i,j} = a[ i * n + j ]
\] 


9.2.2.3.f.b: d
The return value d has prototype
     
Scalar d
It is equal to the determinant of  A .

9.2.2.3.g: Vector
If y is a Vector object, it must support the syntax
     
y[i]
where i has type size_t with value less than  n * n . This must return a Scalar value corresponding to the i-th element of the vector y. This is the only requirement of the type Vector.

9.2.2.3.h: Example
The file 9.2.2.3.1: det_by_minor.cpp contains an example and test of det_by_minor.hpp. It returns true if it succeeds and false otherwise.

9.2.2.3.i: Source Code
The file 9.2.2.3.2: det_by_minor.hpp contains the source for this template function.
Input File: cppad/speed/det_by_minor.hpp
9.2.2.3.1: Determinant Using Expansion by Minors: Example and Test
 

# include <cppad/cppad.hpp>
# include <cppad/speed/det_by_minor.hpp>

bool det_by_minor()
{	bool ok = true;

	// dimension of the matrix
	size_t n = 3;

	// construct the determinant object
	CppAD::det_by_minor<double> Det(n);

	double  a[] = {
		1., 2., 3.,  // a[0] a[1] a[2]
		3., 2., 1.,  // a[3] a[4] a[5]
		2., 1., 2.   // a[6] a[7] a[8]
	};
	CPPAD_TEST_VECTOR<double> A(9);
	size_t i;
	for(i = 0; i < 9; i++)
		A[i] = a[i];


	// evaluate the determinant
	double det = Det(A);

	double check;
	check = a[0]*(a[4]*a[8] - a[5]*a[7])
	      - a[1]*(a[3]*a[8] - a[5]*a[6])
	      + a[2]*(a[3]*a[7] - a[4]*a[6]);

	ok = det == check;

	return ok;
}


Input File: speed/example/det_by_minor.cpp
9.2.2.3.2: Source: det_by_minor
# ifndef CPPAD_DET_BY_MINOR_INCLUDED
# define CPPAD_DET_BY_MINOR_INCLUDED
 
# include <cppad/cppad.hpp>
# include <cppad/speed/det_of_minor.hpp>

// BEGIN CppAD namespace
namespace CppAD {

template <class Scalar>
class det_by_minor {
private:
	size_t              m_;

	// made mutable because modified and then restored
	mutable std::vector<size_t> r_;
	mutable std::vector<size_t> c_;

	// make mutable because its value does not matter
	mutable std::vector<Scalar> a_;
public:
	det_by_minor(size_t m) : m_(m) , r_(m + 1) , c_(m + 1), a_(m * m)
	{
		size_t i;

		// values for r and c that correspond to entire matrix
		for(i = 0; i < m; i++)
		{	r_[i] = i+1;
			c_[i] = i+1;
		}
		r_[m] = 0;
		c_[m] = 0;
	}

	template <class Vector>
	inline Scalar operator()(const Vector &x) const
	{	size_t i = m_ * m_;
		while(i--)
			a_[i] = x[i];
		return det_of_minor(a_, m_, m_, r_, c_); 
	}

};

} // END CppAD namespace
# endif

Input File: omh/det_by_minor_hpp.omh
9.2.2.4: Determinant Using Expansion by Lu Factorization

9.2.2.4.a: Syntax
# include <cppad/speed/det_by_lu.hpp>
det_by_lu<Scalar> det(n)
d = det(a)

9.2.2.4.b: Inclusion
The template class det_by_lu is defined in the CppAD namespace by including the file cppad/speed/det_by_lu.hpp (relative to the CppAD distribution directory). It is only intended for example and testing purposes, so it is not automatically included by : cppad.hpp .

9.2.2.4.c: Constructor
The syntax
     det_by_lu<Scalar> det(n)
constructs the object det which can be used for evaluating the determinant of n by n matrices using LU factorization.

9.2.2.4.d: Scalar
The type Scalar can be any 6.5: NumericType .

9.2.2.4.e: n
The argument n has prototype
     size_t 
n

9.2.2.4.f: det
The syntax
     
d = det(a)
returns the determinant of the matrix  A using LU factorization.

9.2.2.4.f.a: a
The argument a has prototype
     const 
Vector &a
It must be a Vector with length  n * n and with elements of type Scalar. The elements of the  n \times n matrix  A are defined, for  i = 0 , \ldots , n-1 and  j = 0 , \ldots , n-1 , by  \[
     A_{i,j} = a[ i * n + j ]
\] 


9.2.2.4.f.b: d
The return value d has prototype
     
Scalar d
It is equal to the determinant of  A .

9.2.2.4.g: Vector
If y is a Vector object, it must support the syntax
     
y[i]
where i has type size_t with value less than  n * n . This must return a Scalar value corresponding to the i-th element of the vector y. This is the only requirement of the type Vector.

9.2.2.4.h: Example
The file 9.2.2.4.1: det_by_lu.cpp contains an example and test of det_by_lu.hpp. It returns true if it succeeds and false otherwise.

9.2.2.4.i: Source Code
The file 9.2.2.4.2: det_by_lu.hpp contains the source for this template function.
Input File: cppad/speed/det_by_lu.hpp
9.2.2.4.1: Determinant Using Lu Factorization: Example and Test
 

# include <cppad/cppad.hpp>
# include <cppad/speed/det_by_lu.hpp>

bool det_by_lu()
{	bool ok = true;

	// dimension of the matrix
	size_t n = 3;

	// construct the determinant object
	CppAD::det_by_lu<double> Det(n);

	double  a[] = {
		1., 2., 3.,  // a[0] a[1] a[2]
		3., 2., 1.,  // a[3] a[4] a[5]
		2., 1., 2.   // a[6] a[7] a[8]
	};
	CPPAD_TEST_VECTOR<double> A(9);
	size_t i;
	for(i = 0; i < 9; i++)
		A[i] = a[i];


	// evaluate the determinant
	double det = Det(A);

	double check;
	check = a[0]*(a[4]*a[8] - a[5]*a[7])
	      - a[1]*(a[3]*a[8] - a[5]*a[6])
	      + a[2]*(a[3]*a[7] - a[4]*a[6]);

	ok = CppAD::NearEqual(det, check, 1e-10, 1e-10);

	return ok;
}


Input File: speed/example/det_by_lu.cpp
9.2.2.4.2: Source: det_by_lu
# ifndef CPPAD_DET_BY_LU_INCLUDED
# define CPPAD_DET_BY_LU_INCLUDED
 
# include <cppad/cppad.hpp>
# include <complex>

// BEGIN CppAD namespace
namespace CppAD {

// The AD complex case is used by examples but not used by speed tests.
// Must define a specialization of LeqZero, AbsGeq for the ADComplex case.
typedef std::complex<double>     Complex;
typedef CppAD::AD<Complex>     ADComplex;
CPPAD_BOOL_UNARY(Complex,  LeqZero )
CPPAD_BOOL_BINARY(Complex, AbsGeq )

template <class Scalar>
class det_by_lu {
private:
	const size_t m;
	const size_t n;
	CppADvector<Scalar> A;
	CppADvector<Scalar> B;
	CppADvector<Scalar> X;
public:
	det_by_lu(size_t n_) : m(0), n(n_), A(n_ * n_)
	{	}

	template <class Vector>
	inline Scalar operator()(const Vector &x)
	{
		using CppAD::exp;

		Scalar         logdet;
		Scalar         det;
		int          signdet;
		size_t       i;

		// copy matrix so it is not overwritten
		for(i = 0; i < n * n; i++)
			A[i] = x[i];
 
		// compute log determinant
		signdet = CppAD::LuSolve(
			n, m, A, B, X, logdet);

# if 0
		// Do not do this for speed test because it makes floating 
		// point operation sequence very simple.
		if( signdet == 0 )
			det = 0;
		else	det =  Scalar( signdet ) * exp( logdet );
# endif

		// convert to determinant
		det     = Scalar( signdet ) * exp( logdet ); 

# ifdef FADBAD
		// Fadbad requires temporaries to be set to constants
		for(i = 0; i < n * n; i++)
			A[i] = 0;
# endif

		return det;
	}
};
} // END CppAD namespace
# endif

Input File: omh/det_by_lu_hpp.omh
9.2.2.5: Check Determinant of 3 by 3 matrix

9.2.2.5.a: Syntax
# include <cppad/speed/det_33.hpp>
ok = det_33(x, d)

9.2.2.5.b: Purpose
This routine can be used to check a method for computing the determinant of a matrix.

9.2.2.5.c: Inclusion
The template function det_33 is defined in the CppAD namespace by including the file cppad/speed/det_33.hpp (relative to the CppAD distribution directory). It is only intended for example and testing purposes, so it is not automatically included by : cppad.hpp .

9.2.2.5.d: x
The argument x has prototype
     const 
Vector &x
. It contains the elements of the matrix  X in row major order; i.e.,  \[
     X_{i,j} = x [ i * 3 + j ]
\] 


9.2.2.5.e: d
The argument d has prototype
     const 
Vector &d
. It is tested to see if d[0] is equal to  \det ( X ) .

9.2.2.5.f: Vector
If y is a Vector object, it must support the syntax
     
y[i]
where i has type size_t with value less than 9. This must return a double value corresponding to the i-th element of the vector y. This is the only requirement of the type Vector. (Note that only the first element of the vector d is used.)

9.2.2.5.g: ok
The return value ok has prototype
     bool 
ok
It is true, if the determinant d[0] passes the test and false otherwise.

9.2.2.5.h: Source Code
The file 9.2.2.5.1: det_33.hpp contains the source code for this template function.
Input File: cppad/speed/det_33.hpp
9.2.2.5.1: Source: det_33
# ifndef CPPAD_DET_33_INCLUDED
# define CPPAD_DET_33_INCLUDED
 
# include <cppad/near_equal.hpp>
namespace CppAD {
template <class Vector>
	bool det_33(const Vector &x, const Vector &d)
	{	bool ok = true;
	
		// use expansion by minors to compute the determinant by hand
		double check = 0.;
		check += x[0] * ( x[4] * x[8] - x[5] * x[7] );
		check -= x[1] * ( x[3] * x[8] - x[5] * x[6] );
		check += x[2] * ( x[3] * x[7] - x[4] * x[6] );

		ok &= CppAD::NearEqual(check, d[0], 1e-10, 1e-10);
		
		return ok;
	}
}
# endif

Input File: omh/det_33_hpp.omh
9.2.2.6: Check Gradient of Determinant of 3 by 3 matrix

9.2.2.6.a: Syntax
# include <cppad/speed/det_grad_33.hpp>
ok = det_grad_33(x, g)

9.2.2.6.b: Purpose
This routine can be used to check a method for computing the gradient of the determinant of a matrix.

9.2.2.6.c: Inclusion
The template function det_grad_33 is defined in the CppAD namespace by including the file cppad/speed/det_grad_33.hpp (relative to the CppAD distribution directory). It is only intended for example and testing purposes, so it is not automatically included by : cppad.hpp .

9.2.2.6.d: x
The argument x has prototype
     const 
Vector &x
. It contains the elements of the matrix  X in row major order; i.e.,  \[
     X_{i,j} = x [ i * 3 + j ]
\] 


9.2.2.6.e: g
The argument g has prototype
     const 
Vector &g
. It contains the elements of the gradient of  \det ( X ) in row major order; i.e.,  \[
     \D{\det (X)}{X(i,j)} = g [ i * 3 + j ]
\] 


9.2.2.6.f: Vector
If y is a Vector object, it must support the syntax
     
y[i]
where i has type size_t with value less than 9. This must return a double value corresponding to the i-th element of the vector y. This is the only requirement of the type Vector.

9.2.2.6.g: ok
The return value ok has prototype
     bool 
ok
It is true, if the gradient g passes the test and false otherwise.

9.2.2.6.h: Source Code
The file 9.2.2.6.1: det_grad_33.hpp contains the source code for this template function.
Input File: cppad/speed/det_grad_33.hpp
9.2.2.6.1: Source: det_grad_33
# ifndef CPPAD_DET_GRAD_33_INCLUDED
# define CPPAD_DET_GRAD_33_INCLUDED
 
# include <cppad/near_equal.hpp>
namespace CppAD {
template <class Vector>
	bool det_grad_33(const Vector &x, const Vector &g)
	{	bool ok = true;
	
		// use expansion by minors to compute the derivative by hand
		double check[9];
		check[0] = + ( x[4] * x[8] - x[5] * x[7] );
		check[1] = - ( x[3] * x[8] - x[5] * x[6] );
		check[2] = + ( x[3] * x[7] - x[4] * x[6] );
		//
		check[3] = - ( x[1] * x[8] - x[2] * x[7] );
		check[4] = + ( x[0] * x[8] - x[2] * x[6] );
		check[5] = - ( x[0] * x[7] - x[1] * x[6] );
		//
		check[6] = + ( x[1] * x[5] - x[2] * x[4] );
		check[7] = - ( x[0] * x[5] - x[2] * x[3] );
		check[8] = + ( x[0] * x[4] - x[1] * x[3] ); 
		//
		size_t i;
		for(i = 0; i < 3 * 3; i++)
			ok &= CppAD::NearEqual(check[i], g[i], 1e-10, 1e-10);
		
		return ok;
	}
}
# endif

Input File: omh/det_grad_33_hpp.omh
9.2.2.7: Evaluate a Function Defined in Terms of an ODE

9.2.2.7.a: Syntax
# include <cppad/speed/ode_evaluate.hpp>
ode_evaluate(x, m, fm)

9.2.2.7.b: Purpose
This routine evaluates a function that is defined by the following initial value problem:  \[
\begin{array}{rcl}
     y(x, 0)                & = & b(x)
     \\
     \partial_t y ( x , t ) & = & g[ x , y(x,t) , t ]
\end{array}
\] 
where  b : \R^n \rightarrow \R^n and  g : \R^n \times \R^n \times \R \rightarrow \R^n are not further specified. A numerical method is used to solve the ODE and obtain an accurate approximation for  y(x, 1) . This in turn is used to compute values and gradients for the function  f : \R^n \rightarrow \R defined by  \[
     f(x) = y_n ( x , 1)
\] 


9.2.2.7.c: Inclusion
The template function ode_evaluate is defined in the CppAD namespace by including the file cppad/speed/ode_evaluate.hpp (relative to the CppAD distribution directory). It is only intended for example and testing purposes, so it is not automatically included by : cppad.hpp .

9.2.2.7.d: Float
The type Float must be a 6.5: NumericType . In addition, if y and z are Float objects,
     
y = exp(z)
must set y equal to the exponential of z ; i.e., the derivative of y with respect to z is equal to y .

9.2.2.7.d.a: Operation Sequence
The functions  b(x) ,  g(x, y, t) and the ODE solver are chosen so that the Float operation sequence does not depend on the value of  x .

9.2.2.7.e: x
The argument x has prototype
     const CppAD::vector<
Float> &x
It contains the argument value for which the function, or its derivative, is being evaluated. The value  n is determined by the size of the vector x .

9.2.2.7.f: m
The argument m has prototype
     size_t 
m
It is either zero or one and specifies the order of the derivative of  f(x) , with respect to  x , that is being evaluated.

9.2.2.7.g: m = 1
In the case where  m = 1 , the following extended system is solved:  \[
\begin{array}{rcl}
y(x, 0)                & = & b(x)
\\
\partial_t y ( x , t ) & = & g[ x , y(x,t) , t ]
\\
y_x (x, 0)             & = & b^{(1)} (x)
\\
\partial_t y_x (x,  t)  & = & \partial_x g[ x , y(x,t) , t ] 
                         +   \partial_y g[ x , y(x,t) , t ] y_x
\end{array}
\] 


9.2.2.7.h: fm
The argument fm has prototype
     CppAD::vector<
Float> &fm
The input value of the elements of fm does not matter.

9.2.2.7.h.a: Function
If m is zero, fm has size equal to one and fm[0] is the value of  f(x) .

9.2.2.7.h.b: Gradient
If m is one, fm has size equal to n and for  j = 0 , \ldots , n-1  \[
     \D{f}{x[j]} (x) = fm [ j ]
\] 


9.2.2.7.i: Example
The file 9.2.2.7.1: ode_evaluate.cpp contains an example and test of ode_evaluate.hpp. It returns true if it succeeds and false otherwise.

9.2.2.7.j: Source Code
The file 9.2.2.7.2: ode_evaluate.hpp contains the source code for this template function.
Input File: cppad/speed/ode_evaluate.hpp
9.2.2.7.1: ode_evaluate: Example and test
 
# include <cppad/speed/ode_evaluate.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/cppad.hpp>

bool ode_evaluate(void)
{	using CppAD::NearEqual;
	using CppAD::AD;

	bool ok = true;

	size_t n = 3;
	CppAD::vector<double>       x(n);
	CppAD::vector<double>       ym(n);
	CppAD::vector< AD<double> > X(n);
	CppAD::vector< AD<double> > Ym(1);

	// choose x
	size_t j;
	for(j = 0; j < n; j++)
	{	x[j] = double(j + 1);
		X[j] = x[j];
	}

	// declare independent variables
	Independent(X);

	// evaluate function
	size_t m = 0;
	CppAD::ode_evaluate(X, m, Ym);

	// evaluate derivative
	m = 1;
	CppAD::ode_evaluate(x, m, ym);

	// use AD to evaluate derivative
	CppAD::ADFun<double>   F(X, Ym);
	CppAD::vector<double>  dy(n);
	dy = F.Jacobian(x);

	for(j = 0; j < n; j++)
		ok &= NearEqual(ym[j], dy[j] , 1e-10, 1e-10);
 
	return ok;
}

Input File: speed/example/ode_evaluate.cpp
9.2.2.7.2: Source: ode_evaluate
# ifndef CPPAD_ODE_EVALUATE_INCLUDED
# define CPPAD_ODE_EVALUATE_INCLUDED
 
# include <cppad/vector.hpp>
# include <cppad/runge_45.hpp>

namespace CppAD {  // BEGIN CppAD namespace

template <class Float>
class ode_evaluate_fun {
private:
	const size_t m_;
	const CppAD::vector<Float> x_;
public:
	ode_evaluate_fun(size_t m, const CppAD::vector<Float> &x) 
	: m_(m), x_(x)
	{ }
	void Ode(
		const Float                  &t, 
		const CppAD::vector<Float>   &z, 
		CppAD::vector<Float>         &h) 
	{
		if( m_ == 0 )
			ode_y(t, z, h);
		if( m_ == 1 )
			ode_z(t, z, h);
	}
	void ode_y(
		const Float                  &t, 
		const CppAD::vector<Float>   &y, 
		CppAD::vector<Float>         &g) 
	{	// y_t = g(t, x, y)
		CPPAD_ASSERT_UNKNOWN( y.size() == x_.size() );

		size_t i, n = x_.size();
		Float yi1 = Float(1);
		for(i = 0; i < n; i++)
		{	g[i]  = Float(int(i+1)) * x_[i] * yi1;
			yi1   = y[i];
		}

		// solution  for this equation is
		// y_0 (t) = x_0 * t
		// y_1 (t) = x_1 * x_0 * t^2 
		// y_2 (t) = x_2 * x_1 * x_0 * t^3
		// ...
	}
	void ode_z(
		const Float                  &t , 
		const CppAD::vector<Float>   &z , 
		CppAD::vector<Float>         &h ) 
	{	// z    = [ y ; y_x ]
		// z_t  = h(t, x, z) = [ y_t , y_x_t ]
		size_t i, j, n = x_.size();
		CPPAD_ASSERT_UNKNOWN( z.size() == n + n * n );

		// y_t
		Float zi1 = Float(1);
		for(i = 0; i < n; i++)
		{	h[i] = Float(int(i+1)) * x_[i] * zi1;
			for(j = 0; j < n; j++)
				h[n + i * n + j] = 0.;
			zi1 = z[i];
		}
		size_t ij;
		Float gi_xi, gi_yi1, yi1_xj;

		// y0_x0_t
		h[n] += 1.;

		// yi_xj_t
		for(i = 1; i < n; i++)
		{	// partial g[i] w.r.t. x[i]
			gi_xi  = Float(int(i+1)) * z[i-1];
			ij     = n + i * n + i;	
			h[ij] += gi_xi;
			// partial g[i] w.r.t y[i-1] 
			gi_yi1 = Float(int(i+1)) * x_[i];
			// multiply by partial y[i-1] w.r.t x[j];
			for(j = 0; j < n; j++)
			{	ij     = n + (i-1) * n + j;
				yi1_xj = z[ij];
				ij     = n + i * n + j;
				h[ij] += gi_yi1 * yi1_xj;
			} 
		}
	}
};

template <class Float>
void ode_evaluate(
	CppAD::vector<Float> &x  , 
	size_t m                 , 
	CppAD::vector<Float> &fm )
{
	typedef CppAD::vector<Float> Vector;

	size_t n = x.size();
	size_t ell;
	CPPAD_ASSERT_KNOWN( m == 0 || m == 1,
		"ode_evaluate: m is not zero or one"
	);
	CPPAD_ASSERT_KNOWN( 
		((m==0) & (fm.size()==1) ) || ((m==1) & (fm.size()==n)),
		"ode_evaluate: the size of fm is not correct"
	);
	if( m == 0 )
		ell = n;
	else	ell = n + n * n;

	// set up the case we are integrating
	size_t M  = 10;
	Float  ti = 0.;
	Float  tf = 1.;
	Vector yi(ell);
	Vector yf(ell);

	size_t i;
	for(i = 0; i < ell; i++)
		yi[i] = Float(0);

	// construct ode equation
	ode_evaluate_fun<Float> f(m, x);

	// solve differential equation
	yf = Runge45(f, M, ti, tf, yi);

	if( m == 0 )
		fm[0] = yf[n-1];
	else
	{	for(i = 0; i < n; i++)
			fm[i] = yf[n + (n-1) * n + i];
	}
	return;
}

} // END CppAD namespace
# endif

Input File: omh/ode_evaluate.omh
9.2.2.8: Evaluate a Function That Has a Sparse Hessian

9.2.2.8.a: Syntax
# include <cppad/speed/sparse_evaluate.hpp>
sparse_evaluate(xijmfm)

9.2.2.8.b: Purpose
This routine evaluates  f(x) ,  f^{(1)} (x) , or  f^{(2)} (x) where the Hessian  f^{(2)} (x) is sparse. The function  f : \R^n \rightarrow \R depends on the index vectors i and j . The only non-zero entries in the Hessian of this function have the form \[ \DD{f}{x[i[k]]}{x[j[k]]} \] for some \( k \) between zero and \( \ell-1 \).

9.2.2.8.c: Inclusion
The template function sparse_evaluate is defined in the CppAD namespace by including the file cppad/speed/sparse_evaluate.hpp (relative to the CppAD distribution directory). It is only intended for example and testing purposes, so it is not automatically included by : cppad.hpp .

9.2.2.8.d: Float
The type Float must be a 6.5: NumericType . In addition, if y and z are Float objects,
     
y = exp(z)
must set y equal to the exponential of z ; i.e., the derivative of y with respect to z is equal to y .

9.2.2.8.e: x
The argument x has prototype
     const CppAD::vector<
Float> &x
It contains the argument value for which the function, or its derivative, is being evaluated. We use  n to denote the size of the vector x .

9.2.2.8.f: i
The argument i has prototype
      const CppAD::vector<size_t> &
i
It specifies the first index of  x for each non-zero Hessian term (see 9.2.2.8.b: purpose above). All the elements of i must be between zero and  n-1 . We use  \ell to denote the size of the vector i .

9.2.2.8.g: j
The argument j has prototype
      const CppAD::vector<size_t> &
j
and is a vector with size  \ell . It specifies the second index of  x for each non-zero Hessian term. All the elements of j must be between zero and  n-1 .

9.2.2.8.h: m
The argument m has prototype
     size_t 
m
It is between zero and two and specifies the order of the derivative of  f that is being evaluated, i.e.,  f^{(m)} (x) is evaluated.

9.2.2.8.i: fm
The argument fm has prototype
     CppAD::vector<
Float> &fm
The input value of the elements of fm does not matter.

9.2.2.8.i.a: Function
If m is zero, fm has size one and fm[0] is the value of  f(x) .

9.2.2.8.i.b: Gradient
If m is one, fm has size n and for  j = 0 , \ldots , n-1  \[
     \D{f}{x[j]} = fm [ j ]
\] 


9.2.2.8.i.c: Hessian
If m is two, fm has size n * n and for  k = 0 , \ldots , n-1 ,  p = 0 , \ldots , n-1  \[
     \DD{f}{x[k]}{x[p]} = fm [ k * n + p ]
\] 


9.2.2.8.j: Example
The file 9.2.2.8.1: sparse_evaluate.cpp contains an example and test of sparse_evaluate.hpp. It returns true if it succeeds and false otherwise.

9.2.2.8.k: Source Code
The file 9.2.2.8.2: sparse_evaluate.hpp contains the source code for this template function.
Input File: cppad/speed/sparse_evaluate.hpp
9.2.2.8.1: sparse_evaluate: Example and test
 
# include <cppad/speed/sparse_evaluate.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/cppad.hpp>

bool sparse_evaluate(void)
{	using CppAD::NearEqual;
	using CppAD::AD;

	bool ok = true;

	size_t n   = 3;
	size_t ell = 5;
	CppAD::vector<size_t>     i(ell);
	CppAD::vector<size_t>     j(ell);
	CppAD::vector<double>       x(n);
	CppAD::vector<double>       ym(n);
	CppAD::vector< AD<double> > X(n);
	CppAD::vector< AD<double> > Ym(1);

	// choose x
	size_t k;
	for(k = 0; k < n; k++)
	{	x[k] = double(k + 1);
		X[k] = x[k];
	}

	// choose i, j
	for(k = 0; k < ell; k++)
	{	i[k] = k % n;
		j[k] = (ell - k) % n;
	}

	// declare independent variables
	Independent(X);

	// evaluate function
	size_t m = 0;
	CppAD::sparse_evaluate(X, i, j, m, Ym);

	// evaluate derivative
	m = 1;
	CppAD::sparse_evaluate(x, i, j, m, ym);

	// use AD to evaluate derivative
	CppAD::ADFun<double>   F(X, Ym);
	CppAD::vector<double>     dy(n);
	dy = F.Jacobian(x);

	for(k = 0; k < n; k++)
		ok &= NearEqual(ym[k], dy[k] , 1e-10, 1e-10);
 
	return ok;
}

Input File: speed/example/sparse_evaluate.cpp
9.2.2.8.2: Source: sparse_evaluate
# ifndef CPPAD_SPARSE_EVALUATE_INCLUDED
# define CPPAD_SPARSE_EVALUATE_INCLUDED
 
# include <cppad/local/cppad_assert.hpp>
# include <cppad/check_numeric_type.hpp>
# include <cppad/vector.hpp>

namespace CppAD {
	template <class Float>
	void sparse_evaluate(
		const CppAD::vector<Float>  &x  ,
		const CppAD::vector<size_t> &i  , 
		const CppAD::vector<size_t> &j  , 
		size_t                       m  ,
		CppAD::vector<Float>       &fm  )
	{
		// check numeric type specifications
		CheckNumericType<Float>();

		size_t k;
		size_t n    = x.size();
		size_t size = 1;
		for(k = 0; k < m; k++)
			size *= n;
		CPPAD_ASSERT_KNOWN(
			fm.size() == size,
			"sparse_evaluate: size of fm not equal n^m"
		);
		for(k = 0; k < size; k++)
			fm[k] = Float(0);

		size_t ell = i.size();
		Float t;
		Float dt_i;
		Float dt_j;
		for(k = 0; k < ell; k++)
		{	t    = exp( x[i[k]] * x[j[k]] );	
			dt_i = t * x[j[k]];
			dt_j = t * x[i[k]];
			switch(m)
			{	case 0:
				fm[0] += t;
				break;

				case 1:
				fm[i[k]] += dt_i;
				fm[j[k]] += dt_j;
				break;

				case 2:
				fm[i[k] * n + i[k]] += dt_i * x[j[k]];
				fm[i[k] * n + j[k]] += t + dt_j * x[j[k]];
				fm[j[k] * n + i[k]] += t + dt_i * x[i[k]];
				fm[j[k] * n + j[k]] += dt_j * x[i[k]];
				break;
			}
		}
			
	}
}
# endif

Input File: omh/sparse_evaluate.omh
9.2.3: Speed Test Functions in Double

9.2.3.a: Purpose
CppAD has a set of speed tests that evaluate just the function values (in double precision, instead of using an AD type). This section links to the source code for the function value speed tests.

9.2.3.b: Speed
To run these tests, you must include the --with-Speed command line option during the 2.1.d: install configure command. After the Unix install 2.1.u: make command, you can run the function value speed tests with the following command (relative to the distribution directory):
     speed/double/double correct seed
where seed is a positive integer seed for the random number generator 9.2.2.1: uniform_01 . This checks that the speed tests have been built correctly. You can run the command
     speed/double/double speed seed
to see the results of all the speed tests. See 9.2.1: speed_main for more options.

9.2.3.c: C++ Compiler Flags
The C++ compiler flags used to build the double speed tests are
 
     AM_CXXFLAGS   = -O2 -DNDEBUG $(CXX_FLAGS) -DDOUBLE
where CXX_FLAGS is specified by the 2.1.d: configure command.

9.2.3.d: Contents
9.2.3.1: Double Speed: Determinant by Minor Expansion
9.2.3.2: Double Speed: Determinant Using Lu Factorization
9.2.3.3: Double Speed: Ode Solution
9.2.3.4: Double Speed: Evaluate a Polynomial
9.2.3.5: Double Speed: Sparse Hessian

Input File: omh/speed_double.omh
9.2.3.1: Double Speed: Determinant by Minor Expansion

9.2.3.1.a: link_det_minor
Routine that computes the determinant using double (no derivatives are computed in this test):
 
# include <cppad/vector.hpp>
# include <cppad/speed/det_by_minor.hpp>
# include <cppad/speed/uniform_01.hpp>

bool link_det_minor(
	size_t                     size     , 
	size_t                     repeat   , 
	bool                       retape   ,
	CppAD::vector<double>     &matrix   ,
	CppAD::vector<double>     &det      )
{
	// -----------------------------------------------------
	// setup
	CppAD::det_by_minor<double>   Det(size);
	size_t n = size * size; // number of independent variables
	
	// ------------------------------------------------------
	while(repeat--)
	{	// get the next matrix
		CppAD::uniform_01(n, matrix);

		// computation of the determinant
		det[0] = Det(matrix);
	}
	return true;
}

Input File: speed/double/det_minor.cpp
9.2.3.2: Double Speed: Determinant Using Lu Factorization

9.2.3.2.a: Specifications
See 9.2.1.1: link_det_lu .

9.2.3.2.b: Implementation
 
# include <cppad/vector.hpp>
# include <cppad/speed/det_by_lu.hpp>
# include <cppad/speed/uniform_01.hpp>

bool link_det_lu(
	size_t                           size     , 
	size_t                           repeat   , 
	CppAD::vector<double>           &matrix   ,
	CppAD::vector<double>           &det      )
{
	// -----------------------------------------------------
	// setup
	CppAD::det_by_lu<double>  Det(size);
	size_t n = size * size; // number of independent variables
	
	// ------------------------------------------------------

	while(repeat--)
	{	// get the next matrix
		CppAD::uniform_01(n, matrix);

		// computation of the determinant
		det[0] = Det(matrix);
	}
	return true;
}

Input File: speed/double/det_lu.cpp
9.2.3.3: Double Speed: Ode Solution

9.2.3.3.a: link_ode
Routine that computes the ode solution function value using double:
 
# include <cstring>
# include <cppad/vector.hpp>
# include <cppad/speed/ode_evaluate.hpp>
# include <cppad/speed/uniform_01.hpp>

// value can be true or false
# define DOUBLE_COMPUTE_GRADIENT 0

bool link_ode(
	size_t                     size       ,
	size_t                     repeat     ,
	bool                       retape     ,
	CppAD::vector<double>      &x         ,
	CppAD::vector<double>      &gradient
)
{	// -------------------------------------------------------------
	// setup

	size_t n = size;
	assert( x.size() == n );

	size_t m = 0;
	CppAD::vector<double> f(1);
# if DOUBLE_COMPUTE_GRADIENT
	m = 1;
	f.resize(n);
# endif

	while(repeat--)
	{ 	// choose next x value
		uniform_01(n, x);

		// evaluate function
		CppAD::ode_evaluate(x, m, f);

	}
	gradient[0] = f[0];
# if DOUBLE_COMPUTE_GRADIENT
	gradient    = f;
# endif
	return true;
}

Input File: speed/double/ode.cpp
9.2.3.4: Double Speed: Evaluate a Polynomial

9.2.3.4.a: link_poly
Routine that evaluates a polynomial using double (no derivatives are computed in this test):
 
# include <cppad/cppad.hpp>
# include <cppad/speed/uniform_01.hpp>

bool link_poly(
	size_t                     size     , 
	size_t                     repeat   , 
	bool                       retape   ,
	CppAD::vector<double>     &a        ,  // coefficients of polynomial
	CppAD::vector<double>     &z        ,  // polynomial argument value
	CppAD::vector<double>     &p        )  // second derivative w.r.t z  
{
	// -----------------------------------------------------
	// setup

	// ------------------------------------------------------
	while(repeat--)
	{	// get the next argument value
		CppAD::uniform_01(1, z);

		// evaluate the polynomial at the new argument value
		p[0] = CppAD::Poly(0, a, z[0]);
	}
	return true;
}

Input File: speed/double/poly.cpp
9.2.3.5: Double Speed: Sparse Hessian

9.2.3.5.a: link_sparse_hessian
Routine that computes function values for 9.2.2.8: sparse_evaluate
 
# include <cppad/vector.hpp>
# include <cppad/speed/uniform_01.hpp>

// must include cmath before sparse_evaluate so that exp is defined for double
# include <cmath>
# include <cppad/speed/sparse_evaluate.hpp>

bool link_sparse_hessian(
	size_t                     repeat   , 
	CppAD::vector<double>     &x        ,
	CppAD::vector<size_t>     &i        ,
	CppAD::vector<size_t>     &j        ,
	CppAD::vector<double>     &hessian  )
{
	// -----------------------------------------------------
	// setup
	using CppAD::vector;
	size_t order = 0;        // derivative order corresponding to function
	size_t n     = x.size(); // argument space dimension
	size_t ell   = i.size(); // size of index vectors
	vector<double> y(1);     // function value

	// temporaries
	size_t k;
	vector<double> tmp(2 * ell);

	// choose a value for x
	CppAD::uniform_01(n, x);
	
	// ------------------------------------------------------

	while(repeat--)
	{
		// get the next set of indices
		CppAD::uniform_01(2 * ell, tmp);
		for(k = 0; k < ell; k++)
		{	i[k] = size_t( n * tmp[k] );
			i[k] = std::min(n-1, i[k]);
			//
			j[k] = size_t( n * tmp[k + ell] );
			j[k] = std::min(n-1, j[k]);
		}

		// computation of the function
		CppAD::sparse_evaluate(x, i, j, order, y);
	}
	hessian[0] = y[0];

	return true;
}

Input File: speed/double/sparse_hessian.cpp
9.2.4: Speed Test Derivatives Using Adolc

9.2.4.a: Purpose
CppAD has a set of speed tests that are used to compare Adolc with other AD packages. This section links to the source code for the Adolc speed tests (any suggestions to make the Adolc results faster are welcome).

9.2.4.b: AdolcDir
To run these tests, you must include the configure command line option
     ADOLC_DIR=AdolcDir
during 2.1.o: installation . After the Unix install 2.1.u: make command, you can run the Adolc speed tests with the following command (relative to the distribution directory):
     speed/adolc/adolc correct seed
where seed is a positive integer seed for the random number generator 9.2.2.1: uniform_01 . This checks that the speed tests have been built correctly. You can run the command
     speed/adolc/adolc speed seed
to see the results of all the speed tests. See 9.2.1: speed_main for more options.

9.2.4.c: C++ Compiler Flags
The C++ compiler flags used to build the Adolc speed tests are
 
     AM_CXXFLAGS   = -O2 -DNDEBUG $(CXX_FLAGS) -DADOLC
where CXX_FLAGS is specified by the 2.1.d: configure command.

9.2.4.d: Contents
9.2.4.1: Adolc Speed: Gradient of Determinant by Minor Expansion
9.2.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization
9.2.4.3: Adolc Speed: Ode
9.2.4.4: Adolc Speed: Second Derivative of a Polynomial
9.2.4.5: Adolc Speed: Sparse Hessian

Input File: omh/speed_adolc.omh
9.2.4.1: Adolc Speed: Gradient of Determinant by Minor Expansion

9.2.4.1.a: link_det_minor
Routine that computes the gradient of determinant using Adolc:
 
# include <cppad/vector.hpp>
# include <cppad/speed/det_by_minor.hpp>
# include <cppad/speed/uniform_01.hpp>

# include <adolc/adouble.h>
# include <adolc/taping.h>
# include <adolc/interfaces.h>

bool link_det_minor(
	size_t                     size     , 
	size_t                     repeat   , 
	bool                       retape   ,
	CppAD::vector<double>     &matrix   ,
	CppAD::vector<double>     &gradient )
{
	// -----------------------------------------------------
	// setup
	int tag  = 0;         // tape identifier
	int keep = 1;         // keep forward mode results in buffer
	int m    = 1;         // number of dependent variables
	int n    = size*size; // number of independent variables
	double f;             // function value
	int j;                // temporary index

	// object for computing determinant
	typedef adouble    ADScalar;
	typedef ADScalar*  ADVector;
	CppAD::det_by_minor<ADScalar> Det(size);

	// AD value of determinant
	ADScalar   detA;

	// AD version of matrix
	ADVector   A = 0;
	A            = CPPAD_TRACK_NEW_VEC(n, A);
	
	// vectors of reverse mode weights 
	double *u = 0;
	u         = CPPAD_TRACK_NEW_VEC(m, u);
	u[0] = 1.;

	// vector with matrix value
	double *mat = 0;
	mat         = CPPAD_TRACK_NEW_VEC(n, mat);

	// vector to receive gradient result
	double *grad = 0;
	grad         = CPPAD_TRACK_NEW_VEC(n, grad);


	if( retape ) while(repeat--)
	{
		// choose a matrix
		CppAD::uniform_01(n, mat);

		// declare independent variables
		trace_on(tag, keep);
		for(j = 0; j < n; j++)
			A[j] <<= mat[j];

		// AD computation of the determinant
		detA = Det(A);

		// create function object f : A -> detA
		detA >>= f;
		trace_off();

		// get the next matrix
		CppAD::uniform_01(n, mat);

		// evaluate the determinant at the new matrix value
		zos_forward(tag, m, n, keep, mat, &f); 

		// evaluate and return gradient using reverse mode
		fos_reverse(tag, m, n, u, grad);
	}
	else
	{
		// choose a matrix
		CppAD::uniform_01(n, mat);

		// declare independent variables
		trace_on(tag, keep);
		for(j = 0; j < n; j++)
			A[j] <<= mat[j];

		// AD computation of the determinant
		detA = Det(A);

		// create function object f : A -> detA
		detA >>= f;
		trace_off();

		while(repeat--)
		{	// get the next matrix
			CppAD::uniform_01(n, mat);

			// evaluate the determinant at the new matrix value
			zos_forward(tag, m, n, keep, mat, &f); 

			// evaluate and return gradient using reverse mode
			fos_reverse(tag, m, n, u, grad);
		}
	}

	// return matrix and gradient
	for(j = 0; j < n; j++)
	{	matrix[j] = mat[j];
		gradient[j] = grad[j];
	}

	// tear down
	CPPAD_TRACK_DEL_VEC(grad);
	CPPAD_TRACK_DEL_VEC(mat);
	CPPAD_TRACK_DEL_VEC(u);
	CPPAD_TRACK_DEL_VEC(A);
	return true;
}

Input File: speed/adolc/det_minor.cpp
9.2.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization

9.2.4.2.a: Specifications
See 9.2.1.1: link_det_lu .

9.2.4.2.b: Implementation
 
# include <cppad/speed/det_by_lu.hpp>
# include <cppad/speed/uniform_01.hpp>

# include <adolc/adouble.h>
# include <adolc/taping.h>
# include <adolc/interfaces.h>

bool link_det_lu(
	size_t                     size     , 
	size_t                     repeat   , 
	CppAD::vector<double>     &matrix   ,
	CppAD::vector<double>     &gradient )
{
	// -----------------------------------------------------
	// setup
	int tag  = 0;         // tape identifier
	int keep = 1;         // keep forward mode results in buffer
	int m    = 1;         // number of dependent variables
	int n    = size*size; // number of independent variables
	double f;             // function value
	int j;                // temporary index

	// object for computing determinant
	typedef adouble    ADScalar;
	typedef ADScalar*  ADVector;
	CppAD::det_by_lu<ADScalar> Det(size);

	// AD value of determinant
	ADScalar   detA;

	// AD version of matrix
	ADVector   A = 0;
	A            = CPPAD_TRACK_NEW_VEC(n, A);
	
	// vectors of reverse mode weights 
	double *u = 0;
	u         = CPPAD_TRACK_NEW_VEC(m, u);
	u[0] = 1.;

	// vector with matrix value
	double *mat = 0;
	mat         = CPPAD_TRACK_NEW_VEC(n, mat);

	// vector to receive gradient result
	double *grad = 0;
	grad         = CPPAD_TRACK_NEW_VEC(n, grad);
	// ------------------------------------------------------
	while(repeat--)
	{	// get the next matrix
		CppAD::uniform_01(n, mat);

		// declare independent variables
		trace_on(tag, keep);
		for(j = 0; j < n; j++)
			A[j] <<= mat[j];

		// AD computation of the determinant
		detA = Det(A);

		// create function object f : A -> detA
		detA >>= f;
		trace_off();

		// evaluate and return gradient using reverse mode
		fos_reverse(tag, m, n, u, grad);
	}
	// ------------------------------------------------------

	// return matrix and gradient
	for(j = 0; j < n; j++)
	{	matrix[j] = mat[j];
		gradient[j] = grad[j];
	}
	// tear down
	CPPAD_TRACK_DEL_VEC(grad);
	CPPAD_TRACK_DEL_VEC(mat);
	CPPAD_TRACK_DEL_VEC(u);
	CPPAD_TRACK_DEL_VEC(A);

	return true;
}

Input File: speed/adolc/det_lu.cpp
9.2.4.3: Adolc Speed: Ode
Indicate that this test is not available:
 

# include <cppad/vector.hpp>

// The adolc version of this test is not yet available

bool link_ode(
	size_t                     size       ,
	size_t                     repeat     ,
	bool                       retape     ,
	CppAD::vector<double>      &x         ,
	CppAD::vector<double>      &gradient
)
{
	return false;
}

Input File: speed/adolc/ode.cpp
9.2.4.4: Adolc Speed: Second Derivative of a Polynomial

9.2.4.4.a: link_poly
Routine that computes the second derivative of a polynomial using Adolc:
 
# include <vector>

# include <cppad/speed/uniform_01.hpp>
# include <cppad/poly.hpp>
# include <cppad/vector.hpp>

# include <adolc/adouble.h>
# include <adolc/taping.h>
# include <adolc/interfaces.h>

bool link_poly(
	size_t                     size     , 
	size_t                     repeat   , 
	bool                       retape   ,
	CppAD::vector<double>     &a        ,  // coefficients of polynomial
	CppAD::vector<double>     &z        ,  // polynomial argument value
	CppAD::vector<double>     &ddp      )  // second derivative w.r.t z  
{
	// -----------------------------------------------------
	// setup
	int tag  = 0;  // tape identifier
	int keep = 1;  // keep forward mode results in buffer
	int m    = 1;  // number of dependent variables
	int n    = 1;  // number of independent variables
	int d    = 2;  // order of the derivative
	double f;      // function value
	int i;         // temporary index

	// choose a vector of polynomial coefficients
	CppAD::uniform_01(size, a);

	// AD copy of the polynomial coefficients
	std::vector<adouble> A(size);
	for(i = 0; i < int(size); i++)
		A[i] = a[i];

	// domain and range space AD values
	adouble Z, P;

	// allocate arguments to hos_forward
	double *x0 = 0;
	x0         = CPPAD_TRACK_NEW_VEC(n, x0);
	double *y0 = 0;
	y0         = CPPAD_TRACK_NEW_VEC(m, y0);
	double **x = 0;
	x          = CPPAD_TRACK_NEW_VEC(n, x);
	double **y = 0;
	y          = CPPAD_TRACK_NEW_VEC(m, y);
	for(i = 0; i < n; i++)
	{	x[i] = 0;
		x[i] = CPPAD_TRACK_NEW_VEC(d, x[i]);
	}
	for(i = 0; i < m; i++)
	{	y[i] = 0;
		y[i] = CPPAD_TRACK_NEW_VEC(d, y[i]);
	}
	// Taylor coefficient for argument
	x[0][0] = 1.;  // first order
	x[0][1] = 0.;  // second order
	

	if( retape ) while(repeat--)
	{	// choose an argument value
		CppAD::uniform_01(1, z);

		// declare independent variables
		trace_on(tag, keep);
		Z <<= z[0]; 

		// AD computation of the function value 
		P = CppAD::Poly(0, A, Z);

		// create function object f : Z -> P
		P >>= f;
		trace_off();

		// get the next argument value
		CppAD::uniform_01(1, z);
		x0[0] = z[0];

		// evaluate the polynomial at the new argument value
		hos_forward(tag, m, n, d, keep, x0, x, y0, y);

		// second derivative is twice second order Taylor coef
		ddp[0] = 2. * y[0][1];
	}
	else
	{
		// choose an argument value
		CppAD::uniform_01(1, z);

		// declare independent variables
		trace_on(tag, keep);
		Z <<= z[0]; 

		// AD computation of the function value 
		P = CppAD::Poly(0, A, Z);

		// create function object f : Z -> P
		P >>= f;
		trace_off();
		while(repeat--)
		{	// get the next argument value
			CppAD::uniform_01(1, z);
			x0[0] = z[0];

			// evaluate the polynomial at the new argument value
			hos_forward(tag, m, n, d, keep, x0, x, y0, y);

			// second derivative is twice second order Taylor coef
			ddp[0] = 2. * y[0][1];
		}
	}
	// ------------------------------------------------------
	// tear down
	CPPAD_TRACK_DEL_VEC(x0);
	CPPAD_TRACK_DEL_VEC(y0);
	for(i = 0; i < n; i++)
		CPPAD_TRACK_DEL_VEC(x[i]);
	for(i = 0; i < m; i++)
		CPPAD_TRACK_DEL_VEC(y[i]);
	CPPAD_TRACK_DEL_VEC(x);
	CPPAD_TRACK_DEL_VEC(y);

	return true;
}

Input File: speed/adolc/poly.cpp
9.2.4.5: Adolc Speed: Sparse Hessian

9.2.4.5.a: Operation Sequence
Note that the 9.4.g.b: operation sequence depends on the vectors i and j. Hence we use a different 5: ADFun object for each choice of i and j.

9.2.4.5.b: link_sparse_hessian
Routine that computes the sparse Hessian using Adolc:
 
# include <cppad/vector.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/track_new_del.hpp>
# include <cppad/speed/sparse_evaluate.hpp>

# include <adolc/adouble.h>
# include <adolc/taping.h>
# include <adolc/drivers/drivers.h>

bool link_sparse_hessian(
	size_t                     repeat   , 
	CppAD::vector<double>     &x_arg    ,
	CppAD::vector<size_t>     &i        ,
	CppAD::vector<size_t>     &j        ,
	CppAD::vector<double>     &h        )
{
	// -----------------------------------------------------
	// setup
	size_t k, m;
	size_t order = 0;         // derivative order corresponding to function
	size_t tag  = 0;          // tape identifier
	size_t keep = 1;          // keep forward mode results in buffer
	size_t n = x_arg.size();  // number of independent variables
	size_t ell = i.size();    // number of indices in i and j
	double f;                 // function value

	typedef CppAD::vector<double>  DblVector;
	typedef CppAD::vector<adouble> ADVector;
	typedef CppAD::vector<size_t>  SizeVector;

	ADVector   X(n);    // AD domain space vector
	double       *x;    // double domain space vector
	double      **H;    // Hessian 
	ADVector   Y(1);    // AD range space value
	DblVector tmp(2 * ell);       // double temporary vector

	x = 0;
	x = CPPAD_TRACK_NEW_VEC(n, x);
	H = 0;
	H = CPPAD_TRACK_NEW_VEC(n, H);
	for(k = 0; k < n; k++)
	{	H[k] = 0;
		H[k] = CPPAD_TRACK_NEW_VEC(n, H[k]);
	}

	// choose a value for x 
	CppAD::uniform_01(n, x);
	for(k = 0; k < n; k++)
		x_arg[k] = x[k];

	// ------------------------------------------------------
	while(repeat--)
	{
		// get the next set of indices
		CppAD::uniform_01(2 * ell, tmp);
		for(k = 0; k < ell; k++)
		{	i[k] = size_t( n * tmp[k] );
			i[k] = std::min(n-1, i[k]);
			//
			j[k] = size_t( n * tmp[k + ell] );
			j[k] = std::min(n-1, j[k]);
		}

		// declare independent variables
		trace_on(tag, keep);
		for(k = 0; k < n; k++)
			X[k] <<= x[k];

		// AD computation of f(x)
		CppAD::sparse_evaluate(X, i, j, order, Y);

		// create function object f : X -> Y
		Y[0] >>= f;
		trace_off();

		// evaluate and return the hessian of f
		hessian(int(tag), int(n), x, H);
	}
	for(k = 0; k < n; k++)
	{	for(m = 0; m <= k; m++)
		{	h[ k * n + m] = H[k][m];
			h[ m * n + k] = H[k][m];
		}
		CPPAD_TRACK_DEL_VEC(H[k]);
	}
	CPPAD_TRACK_DEL_VEC(H);
	CPPAD_TRACK_DEL_VEC(x);
	return true;
}

Input File: speed/adolc/sparse_hessian.cpp
9.2.5: Speed Test Derivatives Using CppAD

9.2.5.a: Purpose
CppAD has a set of speed tests that are used to determine if certain changes improve its execution speed (and to compare CppAD with other AD packages). This section links to the source code for the CppAD speed tests (any suggestions to make the CppAD results faster are welcome).

9.2.5.b: Speed
To run these tests, you must include the --with-Speed command line option during the 2.1.d: install configure command. After the Unix install 2.1.u: make command, you can run the CppAD speed tests with the following command (relative to the distribution directory):
     speed/cppad/cppad correct seed
where seed is a positive integer seed for the random number generator 9.2.2.1: uniform_01 . This checks that the speed tests have been built correctly. You can run the command
     speed/cppad/cppad speed seed
to see the results of all the speed tests. See 9.2.1: speed_main for more options.

9.2.5.c: C++ Compiler Flags
The C++ compiler flags used to build the CppAD speed tests are
 
     AM_CXXFLAGS   = -O2 -DNDEBUG $(CXX_FLAGS) -DCPPAD
where CXX_FLAGS is specified by the 2.1.d: configure command.

9.2.5.d: Contents
9.2.5.1: CppAD Speed: Gradient of Determinant by Minor Expansion
9.2.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization
9.2.5.3: CppAD Speed: Gradient of Ode Solution
9.2.5.4: CppAD Speed: Second Derivative of a Polynomial
9.2.5.5: CppAD Speed: Sparse Hessian

Input File: omh/speed_cppad.omh
9.2.5.1: CppAD Speed: Gradient of Determinant by Minor Expansion

9.2.5.1.a: link_det_minor
Routine that computes the gradient of determinant using CppAD:
 
# include <cppad/vector.hpp>
# include <cppad/speed/det_by_minor.hpp>
# include <cppad/speed/uniform_01.hpp>

bool link_det_minor(
	size_t                     size     , 
	size_t                     repeat   , 
	bool                       retape   ,
	CppAD::vector<double>     &matrix   ,
	CppAD::vector<double>     &gradient )
{
	// -----------------------------------------------------
	// setup

	// object for computing determinant
	typedef CppAD::AD<double>       ADScalar; 
	typedef CppAD::vector<ADScalar> ADVector; 
	CppAD::det_by_minor<ADScalar>   Det(size);

	size_t i;               // temporary index
	size_t m = 1;           // number of dependent variables
	size_t n = size * size; // number of independent variables
	ADVector   A(n);        // AD domain space vector
	ADVector   detA(m);     // AD range space vector
	
	// vectors of reverse mode weights 
	CppAD::vector<double> w(1);
	w[0] = 1.;

	if(retape) while(repeat--)
	{
		// choose a matrix
		CppAD::uniform_01(n, matrix);
		for( i = 0; i < size * size; i++)
			A[i] = matrix[i];
	
		// declare independent variables
		Independent(A);
	
		// AD computation of the determinant
		detA[0] = Det(A);
	
		// create function object f : A -> detA
		CppAD::ADFun<double> f(A, detA);
	
		// get the next matrix
		CppAD::uniform_01(n, matrix);
	
		// evaluate the determinant at the new matrix value
		f.Forward(0, matrix);
	
		// evaluate and return gradient using reverse mode
		gradient = f.Reverse(1, w);
	}
	else
	{
		// choose a matrix
		CppAD::uniform_01(n, matrix);
		for( i = 0; i < size * size; i++)
			A[i] = matrix[i];
	
		// declare independent variables
		Independent(A);
	
		// AD computation of the determinant
		detA[0] = Det(A);
	
		// create function object f : A -> detA
		CppAD::ADFun<double> f(A, detA);
	
		// ------------------------------------------------------
		while(repeat--)
		{	// get the next matrix
			CppAD::uniform_01(n, matrix);
	
			// evaluate the determinant at the new matrix value
			f.Forward(0, matrix);
	
			// evaluate and return gradient using reverse mode
			gradient = f.Reverse(1, w);
		}
	}
	return true;
}

Input File: speed/cppad/det_minor.cpp
9.2.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization

9.2.5.2.a: Specifications
See 9.2.1.1: link_det_lu .

9.2.5.2.b: Implementation
 
# include <cppad/vector.hpp>
# include <cppad/speed/det_by_lu.hpp>
# include <cppad/speed/uniform_01.hpp>

bool link_det_lu(
	size_t                           size     , 
	size_t                           repeat   , 
	CppAD::vector<double>           &matrix   ,
	CppAD::vector<double>           &gradient )
{
	// -----------------------------------------------------
	// setup
	typedef CppAD::AD<double>           ADScalar; 
	typedef CppAD::vector<ADScalar>     ADVector; 
	CppAD::det_by_lu<ADScalar>          Det(size);

	size_t i;               // temporary index
	size_t m = 1;           // number of dependent variables
	size_t n = size * size; // number of independent variables
	ADVector   A(n);        // AD domain space vector
	ADVector   detA(m);     // AD range space vector
	
	// vectors of reverse mode weights 
	CppAD::vector<double> w(1);
	w[0] = 1.;

	// ------------------------------------------------------

	while(repeat--)
	{	// get the next matrix
		CppAD::uniform_01(n, matrix);
		for( i = 0; i < n; i++)
			A[i] = matrix[i];

		// declare independent variables
		Independent(A);

		// AD computation of the determinant
		detA[0] = Det(A);

		// create function object f : A -> detA
		CppAD::ADFun<double> f(A, detA);

		// evaluate and return gradient using reverse mode
		gradient = f.Reverse(1, w);
	}
	return true;
}

Input File: speed/cppad/det_lu.cpp
9.2.5.3: CppAD Speed: Gradient of Ode Solution

9.2.5.3.a: link_ode
Routine that computes the gradient of the ode solution using CppAD:
 
# include <cstring>
# include <cppad/cppad.hpp>
# include <cppad/speed/ode_evaluate.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cassert>

bool link_ode(
	size_t                     size       ,
	size_t                     repeat     ,
	bool                       retape     ,
	CppAD::vector<double>      &x         ,
	CppAD::vector<double>      &gradient
)
{	// -------------------------------------------------------------
	// setup
	typedef CppAD::AD<double>       ADScalar;
	typedef CppAD::vector<ADScalar> ADVector;
	typedef CppAD::vector<double>   DblVector;

	size_t j;
	size_t m = 0;
	size_t n = size;
	assert( x.size() == n );

	ADVector  X(n);
	ADVector  Y(1);
	DblVector w(1);
	w[0] = 1.;

	if( retape ) while(repeat--)
	{ 	// choose next x value
		uniform_01(n, x);
		for(j = 0; j < n; j++)
			X[j] = x[j];

		// declare the independent variable vector
		Independent(X);

		// evaluate function
		CppAD::ode_evaluate(X, m, Y);

		// create function object f : X -> Y
		CppAD::ADFun<double>   F(X, Y);

		// use reverse mode to compute gradient
		gradient = F.Reverse(1, w);
	}
	else
	{	// choose any x value
		for(j = 0; j < n; j++)
			X[j] = 0.;

		// declare the independent variable vector
		Independent(X);

		// evaluate function
		CppAD::ode_evaluate(X, m, Y);

		// create function object f : X -> Y
		CppAD::ADFun<double>   F(X, Y);

		while(repeat--)
		{ 	// choose next x value
			uniform_01(n, x);
			// zero order forward mode to evaluate function at x
			F.Forward(0, x);
			// first order reverse mode to compute gradient
			gradient = F.Reverse(1, w);
		}
	}

	return true;
}

Input File: speed/cppad/ode.cpp
9.2.5.4: CppAD Speed: Second Derivative of a Polynomial

9.2.5.4.a: link_poly
Routine that computes the second derivative of a polynomial using CppAD:
 
# include <cppad/cppad.hpp>
# include <cppad/speed/uniform_01.hpp>

bool link_poly(
	size_t                     size     , 
	size_t                     repeat   , 
	bool                       retape   ,
	CppAD::vector<double>     &a        ,  // coefficients of polynomial
	CppAD::vector<double>     &z        ,  // polynomial argument value
	CppAD::vector<double>     &ddp      )  // second derivative w.r.t z  
{
	// -----------------------------------------------------
	// setup
	typedef CppAD::AD<double>     ADScalar; 
	typedef CppAD::vector<ADScalar> ADVector; 

	size_t i;      // temporary index
	size_t m = 1;  // number of dependent variables
	size_t n = 1;  // number of independent variables
	ADVector Z(n); // AD domain space vector
	ADVector P(m); // AD range space vector

	// choose the polynomial coefficients
	CppAD::uniform_01(size, a);

	// AD copy of the polynomial coefficients
	ADVector A(size);
	for(i = 0; i < size; i++)
		A[i] = a[i];

	// forward mode first and second differentials
	CppAD::vector<double> p(1), dp(1), dz(1), ddz(1);
	dz[0]  = 1.;
	ddz[0] = 0.;

	CppAD::ADFun<double> f;

	if( retape ) while(repeat--)
	{
		// choose an argument value
		CppAD::uniform_01(1, z);
		Z[0] = z[0];

		// declare independent variables
		Independent(Z);

		// AD computation of the function value 
		P[0] = CppAD::Poly(0, A, Z[0]);

		// create function object f : Z -> P
		f.Dependent(Z, P);

		// pre-allocate memory for three forward mode calculations
		f.capacity_taylor(3);

		// get the next argument value
		CppAD::uniform_01(1, z);

		// evaluate the polynomial at the new argument value
		p = f.Forward(0, z);

		// evaluate first order Taylor coefficient
		dp = f.Forward(1, dz);

		// second derivative is twice second order Taylor coef
		ddp     = f.Forward(2, ddz);
		ddp[0] *= 2.;
	}
	else
	{
		// choose an argument value
		CppAD::uniform_01(1, z);
		Z[0] = z[0];

		// declare independent variables
		Independent(Z);

		// AD computation of the function value 
		P[0] = CppAD::Poly(0, A, Z[0]);

		// create function object f : Z -> P
		f.Dependent(Z, P);

		while(repeat--)
		{	// sufficient memory is allocated by second repetition

			// get the next argument value
			CppAD::uniform_01(1, z);

			// evaluate the polynomial at the new argument value
			p = f.Forward(0, z);

			// evaluate first order Taylor coefficient
			dp = f.Forward(1, dz);

			// second derivative is twice second order Taylor coef
			ddp     = f.Forward(2, ddz);
			ddp[0] *= 2.;
		}
	}
	return true;
}

Input File: speed/cppad/poly.cpp
9.2.5.5: CppAD Speed: Sparse Hessian

9.2.5.5.a: Operation Sequence
Note that the 9.4.g.b: operation sequence depends on the vectors i and j. Hence we use a different 5: ADFun object for each choice of i and j.

9.2.5.5.b: Sparse Hessian
If the preprocessor symbol CPPAD_USE_SPARSE_HESSIAN is true, the routine 5.7.8: SparseHessian is used for the calculation. Otherwise, the routine 5.7.4: Hessian is used.

9.2.5.5.c: link_sparse_hessian
Routine that computes the sparse Hessian using CppAD:
 
# include <cppad/cppad.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/speed/sparse_evaluate.hpp>

// value can be true or false
# define CPPAD_USE_SPARSE_HESSIAN  1

bool link_sparse_hessian(
	size_t                     repeat   , 
	CppAD::vector<double>     &x        ,
	CppAD::vector<size_t>     &i        ,
	CppAD::vector<size_t>     &j        ,
	CppAD::vector<double>     &hessian  )
{
	// -----------------------------------------------------
	// setup
	using CppAD::AD;
	typedef CppAD::vector<double>       DblVector;
	typedef CppAD::vector< AD<double> > ADVector;
	typedef CppAD::vector<size_t>       SizeVector;

	size_t order = 0;         // derivative order corresponding to function
	size_t m = 1;             // number of dependent variables
	size_t n = x.size();      // number of independent variables
	size_t ell = i.size();    // number of indices in i and j
	ADVector   X(n);          // AD domain space vector
	ADVector   Y(m);          // AD range space vector
	DblVector  w(m);          // double range space vector
	DblVector tmp(2 * ell);   // double temporary vector

	
	// choose a value for x 
	CppAD::uniform_01(n, x);
	size_t k;
	for(k = 0; k < n; k++)
		X[k] = x[k];

	// weights for hessian calculation (only one component of f)
	w[0] = 1.;

	// ------------------------------------------------------
	while(repeat--)
	{
		// get the next set of indices
		CppAD::uniform_01(2 * ell, tmp);
		for(k = 0; k < ell; k++)
		{	i[k] = size_t( n * tmp[k] );
			i[k] = std::min(n-1, i[k]);
			//
			j[k] = size_t( n * tmp[k + ell] );
			j[k] = std::min(n-1, j[k]);
		}

		// declare independent variables
		Independent(X);	

		// AD computation of f(x)
		CppAD::sparse_evaluate< AD<double> >(X, i, j, order, Y);

		// create function object f : X -> Y
		CppAD::ADFun<double> f(X, Y);

		// evaluate and return the hessian of f
# if CPPAD_USE_SPARSE_HESSIAN
		hessian = f.SparseHessian(x, w);
# else
		hessian = f.Hessian(x, w);
# endif
	}
	return true;
}

Input File: speed/cppad/sparse_hessian.cpp
9.2.6: Speed Test Derivatives Using Fadbad

9.2.6.a: Purpose
CppAD has a set of speed tests that are used to compare Fadbad with other AD packages. This section links to the source code for the Fadbad speed tests (any suggestions to make the Fadbad results faster are welcome).

9.2.6.b: FadbadDir
To run these tests, you must include the configure command line option
     FADBAD_DIR=FadbadDir
during 2.1.p: installation . After the Unix install 2.1.u: make command, you can then run the Fadbad speed tests with the following commands (relative to the distribution directory):
     speed/fadbad/fadbad correct seed
where seed is a positive integer seed for the random number generator 9.2.2.1: uniform_01 . This will check that the speed tests have been built correctly. You can run the command
     speed/fadbad/fadbad speed seed
to see the results of all the speed tests. See 9.2.1: speed_main for more options.

9.2.6.c: C++ Compiler Flags
The C++ compiler flags used to build the Fadbad speed tests are
 
     # Note that Fadbad will not compile with the -pedantic-errors flag,
     # so we are leaving CXX_FLAGS out of this compilation.
     AM_CXXFLAGS   = -O2 -DNDEBUG -Wno-deprecated -DFADBAD


9.2.6.d: Contents
9.2.6.1: Fadbad Speed: Gradient of Determinant by Minor Expansion
9.2.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization
9.2.6.3: Fadbad Speed: Ode
9.2.6.4: Fadbad Speed: Second Derivative of a Polynomial
9.2.6.5: Fadbad Speed: Sparse Hessian

Input File: omh/speed_fadbad.omh
9.2.6.1: Fadbad Speed: Gradient of Determinant by Minor Expansion

9.2.6.1.a: link_det_minor
Routine that computes the gradient of determinant using Fadbad:
 
# include <FADBAD++/badiff.h>
# include <cppad/speed/det_by_minor.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/vector.hpp>

bool link_det_minor(
	size_t                     size     , 
	size_t                     repeat   , 
	bool                       retape   ,
	CppAD::vector<double>     &matrix   ,
	CppAD::vector<double>     &gradient )
{
	// -----------------------------------------------------
	// setup

	// object for computing determinant
	typedef fadbad::B<double>       ADScalar; 
	typedef CppAD::vector<ADScalar> ADVector; 
	CppAD::det_by_minor<ADScalar>   Det(size);

	size_t i;                // temporary index
	size_t m = 1;            // number of dependent variables
	size_t n = size * size;  // number of independent variables
	ADScalar   detA;         // AD value of the determinant
	ADVector   A(n);         // AD version of matrix 
	
	// ------------------------------------------------------
	while(repeat--)
	{	// get the next matrix
		CppAD::uniform_01(n, matrix);

		// set independent variable values
		for(i = 0; i < n; i++)
			A[i] = matrix[i];

		// compute the determinant
		detA = Det(A);

		// create function object f : A -> detA
		detA.diff(0, m);  // index 0 of m dependent variables

		// evaluate and return gradient using reverse mode
		for(i = 0; i < n; i++)
			gradient[i] = A[i].d(0); // partial detA w.r.t A[i]
	}
	// ---------------------------------------------------------
	return true;
}

Input File: speed/fadbad/det_minor.cpp
9.2.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization

9.2.6.2.a: Specifications
See 9.2.1.1: link_det_lu .

9.2.6.2.b: Implementation
 
# include <FADBAD++/badiff.h>
# include <cppad/speed/det_by_lu.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/vector.hpp>

bool link_det_lu(
	size_t                     size     , 
	size_t                     repeat   , 
	CppAD::vector<double>      &matrix   ,
	CppAD::vector<double>      &gradient )
{
	// -----------------------------------------------------
	// setup

	// object for computing determinant
	typedef fadbad::B<double>       ADScalar; 
	typedef CppAD::vector<ADScalar> ADVector; 
	CppAD::det_by_lu<ADScalar>      Det(size);

	size_t i;                // temporary index
	size_t m = 1;            // number of dependent variables
	size_t n = size * size;  // number of independent variables
	ADScalar   detA;         // AD value of the determinant
	ADVector   A(n);         // AD version of matrix 
	
	// ------------------------------------------------------
	while(repeat--)
	{	// get the next matrix
		CppAD::uniform_01(n, matrix);

		// set independent variable values
		for(i = 0; i < n; i++)
			A[i] = matrix[i];

		// compute the determinant
		detA = Det(A);

		// create function object f : A -> detA
		detA.diff(0, m);  // index 0 of m dependent variables

		// evaluate and return gradient using reverse mode
		for(i = 0; i < n; i++)
			gradient[i] = A[i].d(0); // partial detA w.r.t A[i]
	}
	// ---------------------------------------------------------
	return true;
}

Input File: speed/fadbad/det_lu.cpp
9.2.6.3: Fadbad Speed: Ode
Indicate that this test is not available:
 

// The fadbad version of this test is not yet available

# include <cstring>
# include <cppad/vector.hpp>

bool link_ode(
	size_t                     size       ,
	size_t                     repeat     ,
	bool                       retape     ,
	CppAD::vector<double>      &x         ,
	CppAD::vector<double>      &gradient
)
{
	return false;
}

Input File: speed/fadbad/ode.cpp
9.2.6.4: Fadbad Speed: Second Derivative of a Polynomial

9.2.6.4.a: link_poly
Routine that computes the second derivative of a polynomial using Fadbad:
 
# include <cppad/vector.hpp>
# include <cppad/poly.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <FADBAD++/tadiff.h>

bool link_poly(
	size_t                     size     , 
	size_t                     repeat   , 
	bool                       retape   ,
	CppAD::vector<double>     &a        ,  // coefficients of polynomial
	CppAD::vector<double>     &z        ,  // polynomial argument value
	CppAD::vector<double>     &ddp      )  // second derivative w.r.t z  
{
	// -----------------------------------------------------
	// setup
	size_t i;             // temporary index     
	fadbad::T<double>  Z; // domain space AD value
	fadbad::T<double>  P; // range space AD value

	// choose the polynomial coefficients
	CppAD::uniform_01(size, a);

	// AD copy of the polynomial coefficients
	CppAD::vector< fadbad::T<double> > A(size);
	for(i = 0; i < size; i++)
		A[i] = a[i];

	// ------------------------------------------------------
	while(repeat--)
	{	// get the next argument value
		CppAD::uniform_01(1, z);

		// independent variable value
		Z    = z[0]; // argument value
		Z[1] = 1;    // argument first order Taylor coefficient

		// AD computation of the dependent variable
		P = CppAD::Poly(0, A, Z);

		// Taylor-expand P to degree two
		P.eval(2);

		// second derivative is twice second order Taylor coefficient
		ddp[0] = 2. * P[2];

		// Free DAG corresponding to P does not seem to improve speed.
		// Probably because it gets freed the next time P is assigned.
		// P.reset();
	}
	// ------------------------------------------------------
	return true;
}

Input File: speed/fadbad/poly.cpp
9.2.6.5: Fadbad Speed: Sparse Hessian
Indicate that this test is not available:
 

// The fadbad version of this test is not yet available 
# include <cppad/vector.hpp>

bool link_sparse_hessian(
        size_t                     repeat     ,
        CppAD::vector<double>      &x         ,
        CppAD::vector<size_t>      &i         ,
        CppAD::vector<size_t>      &j         ,
        CppAD::vector<double>      &hessian
)
{
	return false;
}

Input File: speed/fadbad/sparse_hessian.cpp
9.2.7: Speed Test Derivatives Using Sacado

9.2.7.a: Purpose
CppAD has a set of speed tests that are used to compare Sacado with other AD packages. This section links to the source code for the Sacado speed tests (any suggestions to make the Sacado results faster are welcome).

9.2.7.b: SacadoDir
To run these tests, you must include the configure command line option
     SACADO_DIR=SacadoDir
during 2.1.q: installation . After the Unix install 2.1.u: make command, you can then run the Sacado speed tests with the following commands (relative to the distribution directory):
     speed/sacado/sacado correct seed
where seed is a positive integer seed for the random number generator 9.2.2.1: uniform_01 . This will check that the speed tests have been built correctly. You can run the command
     speed/sacado/sacado speed seed
to see the results of all the speed tests. See 9.2.1: speed_main for more options.

9.2.7.c: C++ Compiler Flags
The C++ compiler flags used to build the Sacado speed tests are
 
     AM_CXXFLAGS   = -O2 -DNDEBUG  $(CXX_FLAGS) -DSACADO \
      	-DRAD_EQ_ALIAS -DRAD_AUTO_AD_Const
where CXX_FLAGS is specified by the 2.1.d: configure command.

9.2.7.d: Contents
9.2.7.1: Sacado Speed: Gradient of Determinant by Minor Expansion
9.2.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization
9.2.7.3: Sacado Speed: Gradient of Ode Solution
9.2.7.4: Sacado Speed: Second Derivative of a Polynomial
9.2.7.5: Sacado Speed: Sparse Hessian

Input File: omh/speed_sacado.omh
9.2.7.1: Sacado Speed: Gradient of Determinant by Minor Expansion

9.2.7.1.a: link_det_minor
Routine that computes the gradient of determinant using Sacado:
 
# include <vector>
# include <Sacado.hpp>
# include <cppad/speed/det_by_minor.hpp>
# include <cppad/speed/uniform_01.hpp>

bool link_det_minor(
	size_t                     size     , 
	size_t                     repeat   , 
	bool                       retape   ,
	CppAD::vector<double>     &matrix   ,
	CppAD::vector<double>     &gradient )
{
	// -----------------------------------------------------
	// setup

	// object for computing determinant
	typedef Sacado::Rad::ADvar<double>    ADScalar; 
	typedef CppAD::vector<ADScalar>        ADVector; 
	CppAD::det_by_minor<ADScalar>         Det(size);

	size_t i;                // temporary index
	size_t n = size * size;  // number of independent variables
	ADScalar   detA;         // AD value of the determinant
	ADVector   A(n);         // AD version of matrix 
	
	// ------------------------------------------------------
	while(repeat--)
	{	// get the next matrix
		CppAD::uniform_01(n, matrix);

		// set independent variable values
		for(i = 0; i < n; i++)
			A[i] = matrix[i];

		// compute the determinant
		detA = Det(A);

		// Compute the gradient of detA
		ADScalar::Gradcomp();

		// return gradient using reverse mode
		for(i = 0; i < n; i++)
			gradient[i] = A[i].adj(); // partial detA w.r.t A[i]
	}
	// ---------------------------------------------------------
	return true;
}

Input File: speed/sacado/det_minor.cpp
9.2.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization

9.2.7.2.a: Specifications
See 9.2.1.1: link_det_lu .

9.2.7.2.b: Implementation
 
# include <Sacado.hpp>
# include <cppad/speed/det_by_lu.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/vector.hpp>

bool link_det_lu(
	size_t                     size     , 
	size_t                     repeat   , 
	CppAD::vector<double>     &matrix   ,
	CppAD::vector<double>     &gradient )
{
	// -----------------------------------------------------
	// setup

	// object for computing determinant
	typedef Sacado::Rad::ADvar<double> ADScalar; 
	typedef CppAD::vector<ADScalar>      ADVector; 
	CppAD::det_by_lu<ADScalar>         Det(size);

	size_t i;                // temporary index
	size_t n = size * size;  // number of independent variables
	ADScalar   detA;         // AD value of the determinant
	ADVector   A(n);         // AD version of matrix 
	
	// ------------------------------------------------------
	while(repeat--)
	{	// get the next matrix
		CppAD::uniform_01(n, matrix);

		// set independent variable values
		for(i = 0; i < n; i++)
			A[i] = matrix[i];

		// compute the determinant
		detA = Det(A);

		// compute the gradient of detA
		ADScalar::Gradcomp();

		// evaluate and return gradient using reverse mode
		for(i = 0; i < n; i++)
			gradient[i] = A[i].adj(); // partial detA w.r.t A[i]
	}
	// ---------------------------------------------------------
	return true;
}

Input File: speed/sacado/det_lu.cpp
9.2.7.3: Sacado Speed: Gradient of Ode Solution

9.2.7.3.a: link_ode
 

# include <cstring>
# include <cppad/vector.hpp>

# define ODE_TEST_AVAILABLE 0

# if ! ODE_TEST_AVAILABLE
bool link_ode(
	size_t                     size       ,
	size_t                     repeat     ,
	bool                       retape     ,
	CppAD::vector<double>      &x         ,
	CppAD::vector<double>      &gradient
)
{
	return false;
}
# else

// There appears to be a problem with the way Sacado is used below
// because the following generates a segmentation fault.  

# include <Sacado.hpp>
# include <cppad/speed/ode_evaluate.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <cppad/vector.hpp>
# include <cassert>

bool link_ode(
	size_t                     size       ,
	size_t                     repeat     ,
	bool                       retape     ,
	CppAD::vector<double>      &x         ,
	CppAD::vector<double>      &gradient
)
{	// -------------------------------------------------------------
	// setup

	// object for computing determinant
	typedef Sacado::Rad::ADvar<double>   ADScalar; 
	typedef CppAD::vector<ADScalar>      ADVector; 

	size_t j;
	size_t m = 0;
	size_t n = size;
	assert( x.size() == n );
	ADVector  X(n);
	ADVector  Y(1);
	ADScalar  last;
	
	// ------------------------------------------------------
	while(repeat--)
	{	// choose next x value
		CppAD::uniform_01(n, x);

		// set independent variable values
		for(j = 0; j < n; j++)
			X[j] = x[j];

		// evaluate function
		CppAD::ode_evaluate(X, m, Y);

		// make sure function value is last assignment
		last = Y[0];

		// compute the gradient using reverse mode
		ADScalar::Gradcomp();

		// evaluate return gradient 
		for(j = 0; j < n; j++)
			gradient[j] = X[j].adj();
	}
	// ---------------------------------------------------------
	return true;
}
# endif


Input File: speed/sacado/ode.cpp
9.2.7.4: Sacado Speed: Second Derivative of a Polynomial

9.2.7.4.a: link_poly
Routine that computes the second derivative of a polynomial using Sacado:
 
# include <cppad/vector.hpp>
# include <cppad/poly.hpp>
# include <cppad/speed/uniform_01.hpp>
# include <Sacado.hpp>

bool link_poly(
	size_t                     size     , 
	size_t                     repeat   , 
	bool                       retape   ,
	CppAD::vector<double>     &a        ,  // coefficients of polynomial
	CppAD::vector<double>     &z        ,  // polynomial argument value
	CppAD::vector<double>     &ddp      )  // second derivative w.r.t z  
{
	// -----------------------------------------------------
	// setup
	typedef Sacado::Tay::Taylor<double>  ADScalar;
	CppAD::vector<ADScalar>              A(size);

	size_t i;               // temporary index     
	ADScalar   Z;           // domain space AD value
	ADScalar   P;           // range space AD value 
	unsigned int order = 2; // order of Taylor coefficients
	Z.resize(order+1, false);
	P.resize(order+1, false);

	// choose the polynomial coefficients
	CppAD::uniform_01(size, a);

	// AD copy of the polynomial coefficients
	for(i = 0; i < size; i++)
		A[i] = a[i];

	// ------------------------------------------------------
	while(repeat--)
	{	// get the next argument value
		CppAD::uniform_01(1, z);

		// independent variable value
		Z.fastAccessCoeff(0)   = z[0]; // argument value
		Z.fastAccessCoeff(1)   = 1.;   // first order coefficient
		Z.fastAccessCoeff(2)   = 0.;   // second order coefficient

		// AD computation of the dependent variable
		P = CppAD::Poly(0, A, Z);

		// second derivative is twice second order Taylor coefficient
		ddp[0] = 2. * P.fastAccessCoeff(2);
	}
	// ------------------------------------------------------
	return true;
}

Input File: speed/sacado/poly.cpp
9.2.7.5: Sacado Speed: Sparse Hessian
Indicate that this test is not available:
 

// The sacado version of this test is not yet available 
# include <cppad/vector.hpp>

bool link_sparse_hessian(
        size_t                     repeat     ,
        CppAD::vector<double>      &x         ,
        CppAD::vector<size_t>      &i         ,
        CppAD::vector<size_t>      &j         ,
        CppAD::vector<double>      &hessian
)
{
	return false;
}

Input File: speed/sacado/sparse_hessian.cpp
9.3: The Theory of Derivative Calculations

9.3.a: Contents
9.3.1: The Theory of Forward Mode
9.3.2: The Theory of Reverse Mode
9.3.3: An Important Reverse Mode Identity

Input File: omh/theory.omh
9.3.1: The Theory of Forward Mode

9.3.1.a: Taylor Notation
In Taylor notation, each variable corresponds to a function of a single argument which we denote by t (see Section 10.2 of 9.5.c: Evaluating Derivatives ). Here and below  X(t) ,  Y(t) , and Z(t) are scalar valued functions and the corresponding p-th order Taylor coefficients row vectors are  x ,  y and  z ; i.e.,  \[
\begin{array}{lcr}
X(t) & = & x^{(0)} + x^{(1)} * t + \cdots + x^{(p)} * t^p + o( t^p ) \\
Y(t) & = & y^{(0)} + y^{(1)} * t + \cdots + y^{(p)} * t^p + o( t^p ) \\
Z(t) & = & z^{(0)} + z^{(1)} * t + \cdots + z^{(p)} * t^p + o( t^p ) 
\end{array}
\] 
For the purposes of this section, we are given  x and  y and need to determine  z .

9.3.1.b: Binary Operators

9.3.1.b.a: Addition
 \[
\begin{array}{rcl}
Z(t)   
& = & X(t)   + Y(t)  
\\
\sum_{j=0}^p z^{(j)} * t^j   
& = & \sum_{j=0}^p x^{(j)} * t^j + \sum_{j=0}^p y^{(j)} * t^j  + o( t^p )
\\
z^{(j)} & = & x^{(j)} + y^{(j)}
\end{array} 
\] 


9.3.1.b.b: Subtraction
 \[
\begin{array}{rcl}
Z(t)   
& = & X(t) - Y(t)  
\\
\sum_{j=0}^p z^{(j)} * t^j   
& = & \sum_{j=0}^p x^{(j)} * t^j - \sum_{j=0}^p y^{(j)} * t^j  + o( t^p )
\\
z^{(j)} & = & x^{(j)} - y^{(j)}
\end{array} 
\] 


9.3.1.b.c: Multiplication
 \[
\begin{array}{rcl}
Z(t)   
& = & X(t) * Y(t)  
\\
\sum_{j=0}^p z^{(j)} * t^j   
& = & \left( \sum_{j=0}^p x^{(j)} * t^j \right)
*
\left( \sum_{j=0}^p y^{(j)} * t^j \right) + o( t^p )
\\
z^{(j)} & = & \sum_{k=0}^j x^{(j-k)} * y^{(k)}
\end{array} 
\] 
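The convolution formula for  z^{(j)} can be exercised directly. A minimal sketch (taylor_mul is an illustrative name, not part of CppAD):

```cpp
# include <cassert>
# include <cstddef>
# include <vector>

// Taylor coefficients of Z(t) = X(t) * Y(t):
// z^(j) = sum_{k=0}^{j} x^(j-k) * y^(k)
std::vector<double> taylor_mul(const std::vector<double>& x,
                               const std::vector<double>& y)
{	std::vector<double> z(x.size(), 0.0);
	for (size_t j = 0; j < z.size(); j++)
		for (size_t k = 0; k <= j; k++)
			z[j] += x[j - k] * y[k];
	return z;
}
```

For example, x = (1, 1, 0) and y = (1, 2, 1), i.e. (1 + t) times (1 + t)^2, yields z = (1, 3, 3), the coefficients of (1 + t)^3 truncated at order two.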


9.3.1.b.d: Division
 \[
\begin{array}{rcl}
Z(t)   
& = & X(t) / Y(t)  
\\
X(t)
& = & Z(t) * Y(t) + o( t^p )
\\
\sum_{j=0}^p x^{(j)} * t^j   
& = & 
\left( \sum_{j=0}^p z^{(j)} * t^j \right)
*
\left( \sum_{j=0}^p y^{(j)} * t^j \right) 
+
o( t^p )
\\
x^{(j)} & = & \sum_{k=0}^j z^{(j-k)} y^{(k)}
\\
z^{(j)} & = & \frac{1}{y^{(0)}} \left( x^{(j)} - \sum_{k=1}^j z^{(j-k)} y^{(k)} \right)
\end{array}
\] 
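The last line is the division recursion: each  z^{(j)} is obtained from  x^{(j)} by subtracting the already-known convolution terms and dividing by  y^{(0)} . A minimal sketch (taylor_div is an illustrative name, not part of CppAD):

```cpp
# include <cassert>
# include <cstddef>
# include <vector>

// Taylor coefficients of Z(t) = X(t) / Y(t), assuming y[0] != 0:
// z^(j) = ( x^(j) - sum_{k=1}^{j} z^(j-k) * y^(k) ) / y^(0)
std::vector<double> taylor_div(const std::vector<double>& x,
                               const std::vector<double>& y)
{	std::vector<double> z(x.size());
	for (size_t j = 0; j < z.size(); j++)
	{	double sum = x[j];
		for (size_t k = 1; k <= j; k++)
			sum -= z[j - k] * y[k]; // remove known convolution terms
		z[j] = sum / y[0];
	}
	return z;
}
```

For x = (1, 0, 0) and y = (1, 1, 0) this gives z = (1, -1, 1), the coefficients of 1 / (1 + t) = 1 - t + t^2 - \cdots truncated at order two.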


9.3.1.c: Standard Math Functions
Suppose that  F  is a standard math function and  \[
     Z(t) = F[ X(t) ]
\]


9.3.1.c.a: Differential Equation
All of the standard math functions satisfy a differential equation of the form  \[
     B(u) * F^{(1)} (u) - A(u) * F (u)  = D(u)
\] 
We use  a ,  b and  d to denote the p-th order Taylor coefficient row vectors for  A [ X (t) ]  ,  B [ X (t) ] and  D [ X (t) ]  respectively. We assume that these coefficients are known functions of  x , the p-th order Taylor coefficients for  X(t) .

9.3.1.c.b: Taylor Coefficients Recursion Formula
Our problem here is to express  z , the p-th order Taylor coefficient row vector for  Z(t) , in terms of these other known coefficients. It follows from the formulas above that  \[
\begin{array}{rcl}
Z^{(1)} (t) 
& = & F^{(1)} [ X(t) ] * X^{(1)} (t) 
\\
B[ X(t) ] * Z^{(1)} (t) 
& = & \{ D[ X(t) ] + A[ X(t) ] * Z(t) \} * X^{(1)} (t)
\\
B[ X(t) ] * Z^{(1)} (t) & = & E(t) * X^{(1)} (t)
\end{array}
\] 
where we define  \[
E(t) =  D[X(t)] + A[X(t)] * Z(t) 
\] 
We can compute the value of  z^{(0)} using the formula  \[
     z^{(0)} = F ( x^{(0)} )
\]
Suppose by induction (on  j ) that we are given the Taylor coefficients of  E(t) up to order  j-1 ; i.e.,  e^{(k)} for  k = 0 , \ldots , j-1 and the coefficients  z^{(k)} for  k = 0 , \ldots , j . We can compute  e^{(j)} using the formula  \[
     e^{(j)} = d^{(j)} + \sum_{k=0}^j a^{(j-k)} * z^{(k)}
\] 
We need to complete the induction by finding a formula for  z^{(j+1)} . It follows from the formula for the 9.3.1.b.c: multiplication operator that   \[
\begin{array}{rcl}
\left( \sum_{k=0}^j b^{(k)} t^k \right)
*
\left( \sum_{k=1}^{j+1} k z^{(k)} * t^{k-1} \right)
& = & 
\left( \sum_{k=0}^j e^{(k)} * t^k \right) 
*
\left( \sum_{k=1}^{j+1} k x^{(k)} * t^{k-1} \right)
+
o( t^p )
\\
z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } 
\left(
     \sum_{k=0}^j e^{(k)} (j+1-k) x^{(j+1-k)} 
     - \sum_{k=1}^j b^{(k)} (j+1-k) z^{(j+1-k)}  
\right)
\\
z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } 
\left(
     \sum_{k=1}^{j+1} k x^{(k)} e^{(j+1-k)} 
     - \sum_{k=1}^j k z^{(k)}  b^{(j+1-k)} 
\right)
\end{array}
\] 
This completes the induction that computes  e^{(j)} and  z^{(j+1)} .

9.3.1.c.c: Recursion Formula for Specific Cases
9.3.1.1: Exponential Function Forward Taylor Polynomial Theory
9.3.1.2: Logarithm Function Forward Taylor Polynomial Theory
9.3.1.3: Square Root Function Forward Taylor Polynomial Theory
9.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
9.3.1.5: Arctangent Function Forward Taylor Polynomial Theory
9.3.1.6: Arcsine Function Forward Taylor Polynomial Theory
9.3.1.7: Arccosine Function Forward Taylor Polynomial Theory

Input File: omh/forward_theory.omh
9.3.1.1: Exponential Function Forward Taylor Polynomial Theory
If  F(x) = \exp(x)   \[
     1 * F^{(1)} (x) - 1 * F (x)  = 0
\] 
and in the 9.3.1.c.a: standard math function differential equation ,  A(x) = 1 ,  B(x) = 1 , and  D(x) = 0 . We use  a ,  b ,  d , and  z to denote the Taylor coefficients for  A [ X (t) ]  ,  B [ X (t) ] ,  D [ X (t) ]  , and  F [ X(t) ]  respectively. It now follows from the general 9.3.1.c.b: Taylor coefficients recursion formula that for  j = 0 , 1, \ldots ,  \[
\begin{array}{rcl}
z^{(0)} & = & \exp ( x^{(0)} )
\\
e^{(j)} 
& = & d^{(j)} + \sum_{k=0}^{j} a^{(j-k)} * z^{(k)}
\\
& = & z^{(j)}
\\
z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } 
\left(
     \sum_{k=1}^{j+1} k x^{(k)} e^{(j+1-k)} 
     - \sum_{k=1}^j k z^{(k)}  b^{(j+1-k)} 
\right)
\\
& = & \frac{1}{j+1} 
     \sum_{k=1}^{j+1} k x^{(k)} z^{(j+1-k)} 
\end{array}
\] 
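As a concrete check of this recursion (taylor_exp is an illustrative helper, not CppAD code), the coefficients of  \exp[ X(t) ] can be computed as follows:

```cpp
# include <cassert>
# include <cmath>
# include <cstddef>
# include <vector>

// Taylor coefficients of Z(t) = exp( X(t) ):
// z^(0)   = exp( x^(0) )
// z^(j+1) = (1/(j+1)) * sum_{k=1}^{j+1} k * x^(k) * z^(j+1-k)
std::vector<double> taylor_exp(const std::vector<double>& x)
{	std::vector<double> z(x.size());
	z[0] = std::exp(x[0]);
	for (size_t j = 0; j + 1 < x.size(); j++)
	{	double sum = 0.0;
		for (size_t k = 1; k <= j + 1; k++)
			sum += double(k) * x[k] * z[j + 1 - k];
		z[j + 1] = sum / double(j + 1);
	}
	return z;
}
```

With X(t) = t, i.e. x = (0, 1, 0, 0), the result is (1, 1, 1/2, 1/6), the Taylor series of exp(t).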

Input File: omh/exp_forward.omh
9.3.1.2: Logarithm Function Forward Taylor Polynomial Theory
If  F(x) = \log(x)   \[
     x * F^{(1)} (x) - 0 * F (x)  = 1
\] 
and in the 9.3.1.c.a: standard math function differential equation ,  A(x) = 0 ,  B(x) = x , and  D(x) = 1 . We use  a ,  b ,  d , and  z to denote the Taylor coefficients for  A [ X (t) ]  ,  B [ X (t) ] ,  D [ X (t) ]  , and  F [ X(t) ]  respectively. It now follows from the general 9.3.1.c.b: Taylor coefficients recursion formula that for  j = 0 , 1, \ldots ,  \[
\begin{array}{rcl}
z^{(0)} & = & \log ( x^{(0)} )
\\
e^{(j)} 
& = & d^{(j)} + \sum_{k=0}^{j} a^{(j-k)} * z^{(k)}
\\
& = & \left\{ \begin{array}{ll}
     1 & {\rm if} \; j = 0 \\
     0 & {\rm otherwise}
\end{array} \right.
\\
z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } 
\left(
     \sum_{k=1}^{j+1} k x^{(k)} e^{(j+1-k)} 
     - \sum_{k=1}^j k z^{(k)}  b^{(j+1-k)} 
\right)
\\
& = & \frac{1}{j+1} \frac{1}{ x^{(0)} } 
\left(
     (j+1) x^{(j+1) }
     - \sum_{k=1}^j k z^{(k)} x^{(j+1-k)}  
\right)
\end{array}
\] 
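The same recursion in code (taylor_log is an illustrative helper, not CppAD code):

```cpp
# include <cassert>
# include <cmath>
# include <cstddef>
# include <vector>

// Taylor coefficients of Z(t) = log( X(t) ), assuming x[0] > 0:
// z^(0)   = log( x^(0) )
// z^(j+1) = ( (j+1)*x^(j+1) - sum_{k=1}^{j} k*z^(k)*x^(j+1-k) ) / ( (j+1)*x^(0) )
std::vector<double> taylor_log(const std::vector<double>& x)
{	std::vector<double> z(x.size());
	z[0] = std::log(x[0]);
	for (size_t j = 0; j + 1 < x.size(); j++)
	{	double sum = double(j + 1) * x[j + 1];
		for (size_t k = 1; k <= j; k++)
			sum -= double(k) * z[k] * x[j + 1 - k];
		z[j + 1] = sum / (double(j + 1) * x[0]);
	}
	return z;
}
```

With x = (1, 1, 0), i.e. X(t) = 1 + t, this gives z = (0, 1, -1/2), the series for log(1 + t) truncated at order two.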

Input File: omh/log_forward.omh
9.3.1.3: Square Root Function Forward Taylor Polynomial Theory
If  F(x) = \sqrt{x}   \[
     F(x) * F^{(1)} (x) - 0 * F (x)  = 1/2
\] 
and in the 9.3.1.c.a: standard math function differential equation ,  A(x) = 0 ,  B(x) = F(x) , and  D(x) = 1/2 . We use  a ,  b ,  d , and  z to denote the Taylor coefficients for  A [ X (t) ]  ,  B [ X (t) ] ,  D [ X (t) ]  , and  F [ X(t) ]  respectively. It now follows from the general 9.3.1.c.b: Taylor coefficients recursion formula that for  j = 0 , 1, \ldots ,  \[
\begin{array}{rcl}
z^{(0)} & = & \sqrt { x^{(0)} }
\\
e^{(j)} 
& = & d^{(j)} + \sum_{k=0}^{j} a^{(j-k)} * z^{(k)}
\\
& = & \left\{ \begin{array}{ll}
     1/2 & {\rm if} \; j = 0 \\
     0   & {\rm otherwise}
\end{array} \right.
\\
z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } 
\left(
     \sum_{k=1}^{j+1} k x^{(k)} e^{(j+1-k)} 
     - \sum_{k=1}^j k z^{(k)}  b^{(j+1-k)} 
\right)
\\
& = & \frac{1}{j+1} \frac{1}{ z^{(0)} } 
\left(
     \frac{j+1}{2} x^{(j+1) }
     - \sum_{k=1}^j k z^{(k)} z^{(j+1-k)}  
\right)
\end{array}
\] 
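The same recursion in code (taylor_sqrt is an illustrative helper, not CppAD code):

```cpp
# include <cassert>
# include <cmath>
# include <cstddef>
# include <vector>

// Taylor coefficients of Z(t) = sqrt( X(t) ), assuming x[0] > 0:
// z^(0)   = sqrt( x^(0) )
// z^(j+1) = ( ((j+1)/2)*x^(j+1) - sum_{k=1}^{j} k*z^(k)*z^(j+1-k) ) / ( (j+1)*z^(0) )
std::vector<double> taylor_sqrt(const std::vector<double>& x)
{	std::vector<double> z(x.size());
	z[0] = std::sqrt(x[0]);
	for (size_t j = 0; j + 1 < x.size(); j++)
	{	double sum = 0.5 * double(j + 1) * x[j + 1];
		for (size_t k = 1; k <= j; k++)
			sum -= double(k) * z[k] * z[j + 1 - k];
		z[j + 1] = sum / (double(j + 1) * z[0]);
	}
	return z;
}
```

For x = (1, 2, 1), the coefficients of (1 + t)^2, the result is z = (1, 1, 0); i.e., the square root of that series is 1 + t.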

Input File: omh/sqrt_forward.omh
9.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory

9.3.1.4.a: Differential Equation
The 9.3.1.c.a: standard math function differential equation is  \[
     B(u) * F^{(1)} (u) - A(u) * F (u)  = D(u)
\] 
In this section we consider forward mode for the following choices:
       F(u)    \sin(u)    \cos(u)      \sinh(u)    \cosh(u)
       A(u)    0          0            0           0
       B(u)    1          1            1           1
       D(u)    \cos(u)    - \sin(u)    \cosh(u)    \sinh(u)
We use  a ,  b ,  d and  f for the Taylor coefficients of  A [ X (t) ] ,  B [ X (t) ] ,  D [ X (t) ]  , and  F [ X(t) ]  respectively. It now follows from the general 9.3.1.c.b: Taylor coefficients recursion formula that for  j = 0 , 1, \ldots ,  \[
\begin{array}{rcl}
f^{(0)} & = & D ( x^{(0)} )
\\
e^{(j)} 
& = & d^{(j)} + \sum_{k=0}^{j} a^{(j-k)} * f^{(k)}
\\
& = & d^{(j)}
\\
f^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } 
\left(
     \sum_{k=1}^{j+1} k x^{(k)} e^{(j+1-k)} 
     - \sum_{k=1}^j k f^{(k)}  b^{(j+1-k)} 
\right)
\\
& = & \frac{1}{j+1} 
     \sum_{k=1}^{j+1} k x^{(k)} d^{(j+1-k)} 
\end{array}
\] 
The formula above generates the order  j+1 coefficient of  F[ X(t) ] from the lower order coefficients for  X(t) and  D[ X(t) ] .
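Since the  D[ X(t) ] coefficients for sine are the cosine coefficients and vice versa (with a sign change), the two recursions are computed together. A minimal sketch (taylor_sin_cos is an illustrative helper, not CppAD code):

```cpp
# include <cassert>
# include <cmath>
# include <cstddef>
# include <vector>

// Coupled Taylor coefficient recursion for S(t) = sin(X(t)), C(t) = cos(X(t)):
// s^(j+1) =  (1/(j+1)) * sum_{k=1}^{j+1} k * x^(k) * c^(j+1-k)
// c^(j+1) = -(1/(j+1)) * sum_{k=1}^{j+1} k * x^(k) * s^(j+1-k)
void taylor_sin_cos(const std::vector<double>& x,
                    std::vector<double>& s, std::vector<double>& c)
{	s.assign(x.size(), 0.0);
	c.assign(x.size(), 0.0);
	s[0] = std::sin(x[0]);
	c[0] = std::cos(x[0]);
	for (size_t j = 0; j + 1 < x.size(); j++)
	{	double ss = 0.0, cc = 0.0;
		for (size_t k = 1; k <= j + 1; k++)
		{	ss += double(k) * x[k] * c[j + 1 - k]; // D(u) = cos(u) for sine
			cc -= double(k) * x[k] * s[j + 1 - k]; // D(u) = -sin(u) for cosine
		}
		s[j + 1] = ss / double(j + 1);
		c[j + 1] = cc / double(j + 1);
	}
}
```

With X(t) = t the recursion reproduces sin(t) = t - t^3/6 + \cdots and cos(t) = 1 - t^2/2 + \cdots.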
Input File: omh/sin_cos_forward.omh
9.3.1.5: Arctangent Function Forward Taylor Polynomial Theory
If  F(x) = \arctan(x)   \[
     (1 + x * x ) * F^{(1)} (x) - 0 * F (x)  = 1
\] 
and in the 9.3.1.c.a: standard math function differential equation ,  A(x) = 0 ,  B(x) = 1 + x * x  , and  D(x) = 1 . We use  a ,  b ,  d and  z to denote the Taylor coefficients for  A [ X (t) ]  ,  B [ X (t) ] ,  D [ X (t) ]  , and  F [ X(t) ]  respectively. It now follows from the general 9.3.1.c.b: Taylor coefficients recursion formula that for  j = 0 , 1, \ldots ,  \[
\begin{array}{rcl}
z^{(0)} & = & \arctan ( x^{(0)} )
\\
b^{(j)}
& = &  \left\{ \begin{array}{ll}
     1 + x^{(0)} * x^{(0)}          & {\rm if} \; j = 0 \\
     \sum_{k=0}^j x^{(k)} x^{(j-k)} & {\rm otherwise}
\end{array} \right.
\\
e^{(j)} 
& = & d^{(j)} + \sum_{k=0}^{j} a^{(j-k)} * z^{(k)}
\\
& = & \left\{ \begin{array}{ll}
     1 & {\rm if} \; j = 0 \\
     0 & {\rm otherwise}
\end{array} \right.
\\
z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } 
\left(
     \sum_{k=0}^j e^{(k)} (j+1-k) x^{(j+1-k)} 
     - \sum_{k=1}^j b^{(k)} (j+1-k) z^{(j+1-k)}  
\right)
\\
z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } 
\left(
     (j+1) x^{(j+1)}
     - \sum_{k=1}^j k z^{(k)}  b^{(j+1-k)} 
\right)
\end{array}
\] 
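Here the coefficients  b^{(j)} of  B[ X(t) ] = 1 + X(t) * X(t) are themselves a convolution, and the final recursion uses them. A minimal sketch (taylor_atan is an illustrative helper, not CppAD code):

```cpp
# include <cassert>
# include <cmath>
# include <cstddef>
# include <vector>

// Taylor coefficients of Z(t) = atan( X(t) ):
// b^(j)   = Taylor coefficients of 1 + X(t)*X(t)
// z^(j+1) = ( (j+1)*x^(j+1) - sum_{k=1}^{j} k*z^(k)*b^(j+1-k) ) / ( (j+1)*b^(0) )
std::vector<double> taylor_atan(const std::vector<double>& x)
{	size_t p = x.size();
	std::vector<double> z(p), b(p);
	z[0] = std::atan(x[0]);
	for (size_t j = 0; j < p; j++)
	{	b[j] = (j == 0) ? 1.0 : 0.0;
		for (size_t k = 0; k <= j; k++)
			b[j] += x[k] * x[j - k]; // convolution for X(t)*X(t)
	}
	for (size_t j = 0; j + 1 < p; j++)
	{	double sum = double(j + 1) * x[j + 1];
		for (size_t k = 1; k <= j; k++)
			sum -= double(k) * z[k] * b[j + 1 - k];
		z[j + 1] = sum / (double(j + 1) * b[0]);
	}
	return z;
}
```

With X(t) = t it reproduces atan(t) = t - t^3/3 + \cdots.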

Input File: omh/atan_forward.omh
9.3.1.6: Arcsine Function Forward Taylor Polynomial Theory
If  F(x)  = \arcsin(x)  it follows that  \[
     \sqrt{ 1 - x * x } * F^{(1)} (x) - 0 * F (x)  = 1
\] 
and in the 9.3.1.c.a: standard math function differential equation ,  A(x) = 0 ,  B(x) = \sqrt{1 - x * x } , and  D(x) = 1 . We use  a ,  b ,  d and  z to denote the Taylor coefficients for  A [ X (t) ]  ,  B [ X (t) ] ,  D [ X (t) ]  , and  F [ X(t) ]  respectively.

We define  Q(x) = 1 - x * x and let  q be the corresponding Taylor coefficients for  Q[ X(t) ] . It follows that  \[
q^{(j)} = \left\{ \begin{array}{ll}
     1 - x^{(0)} * x^{(0)}            & {\rm if} \; j = 0 \\
     - \sum_{k=0}^j x^{(k)} x^{(j-k)} & {\rm otherwise}
\end{array} \right.
\] 
It follows that  B[ X(t) ] = \sqrt{ Q[ X(t) ] } and from the equations for the 9.3.1.3: square root that for  j = 0 , 1, \ldots ,  \[
\begin{array}{rcl}
b^{(0)}   & = & \sqrt{ q^{(0)} }
\\
b^{(j+1)} & = &
     \frac{1}{j+1} \frac{1}{ b^{(0)} } 
     \left(
          \frac{j+1}{2} q^{(j+1) }
          - \sum_{k=1}^j k b^{(k)} b^{(j+1-k)}  
     \right)
\end{array}
\] 
It now follows from the general 9.3.1.c.b: Taylor coefficients recursion formula that for  j = 0 , 1, \ldots ,  \[
\begin{array}{rcl}
z^{(0)} & = & \arcsin ( x^{(0)} )
\\
e^{(j)} 
& = & d^{(j)} + \sum_{k=0}^{j} a^{(j-k)} * z^{(k)}
\\
& = & \left\{ \begin{array}{ll}
     1 & {\rm if} \; j = 0 \\
     0 & {\rm otherwise}
\end{array} \right.
\\
z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } 
\left(
     \sum_{k=0}^j e^{(k)} (j+1-k) x^{(j+1-k)} 
     - \sum_{k=1}^j b^{(k)} (j+1-k) z^{(j+1-k)}  
\right)
\\
z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } 
\left(
     (j+1) x^{(j+1)}
     - \sum_{k=1}^j k z^{(k)}  b^{(j+1-k)} 
\right)
\end{array}
\] 
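Putting the pieces together, the arcsine recursion first builds the  q coefficients, then the  b coefficients via the square-root recursion, then  z . A minimal sketch (taylor_asin is an illustrative helper, not CppAD code):

```cpp
# include <cassert>
# include <cmath>
# include <cstddef>
# include <vector>

// Taylor coefficients of Z(t) = asin( X(t) ), assuming |x[0]| < 1.
std::vector<double> taylor_asin(const std::vector<double>& x)
{	size_t p = x.size();
	std::vector<double> z(p), q(p), b(p);
	// q = Taylor coefficients of Q[X(t)] = 1 - X(t)*X(t)
	for (size_t j = 0; j < p; j++)
	{	q[j] = (j == 0) ? 1.0 : 0.0;
		for (size_t k = 0; k <= j; k++)
			q[j] -= x[k] * x[j - k];
	}
	// b = Taylor coefficients of sqrt( Q[X(t)] ) (square-root recursion)
	b[0] = std::sqrt(q[0]);
	for (size_t j = 0; j + 1 < p; j++)
	{	double sum = 0.5 * double(j + 1) * q[j + 1];
		for (size_t k = 1; k <= j; k++)
			sum -= double(k) * b[k] * b[j + 1 - k];
		b[j + 1] = sum / (double(j + 1) * b[0]);
	}
	// z = Taylor coefficients of asin( X(t) )
	z[0] = std::asin(x[0]);
	for (size_t j = 0; j + 1 < p; j++)
	{	double sum = double(j + 1) * x[j + 1];
		for (size_t k = 1; k <= j; k++)
			sum -= double(k) * z[k] * b[j + 1 - k];
		z[j + 1] = sum / (double(j + 1) * b[0]);
	}
	return z;
}
```

With X(t) = t it reproduces arcsin(t) = t + t^3/6 + \cdots.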

Input File: omh/asin_forward.omh
9.3.1.7: Arccosine Function Forward Taylor Polynomial Theory
If  F(x)  = \arccos(x)  it follows that  \[
     \sqrt{ 1 - x * x } * F^{(1)} (x) - 0 * F (x)  = -1
\] 
and in the 9.3.1.c.a: standard math function differential equation ,  A(x) = 0 ,  B(x) = \sqrt{1 - x * x } , and  D(x) = -1 . We use  a ,  b ,  d and  z to denote the Taylor coefficients for  A [ X (t) ]  ,  B [ X (t) ] ,  D [ X (t) ]  , and  F [ X(t) ]  respectively.

We define  Q(x) = 1 - x * x and let  q be the corresponding Taylor coefficients for  Q[ X(t) ] . It follows that  \[
q^{(j)} = \left\{ \begin{array}{ll}
     1 - x^{(0)} * x^{(0)}            & {\rm if} \; j = 0 \\
     - \sum_{k=0}^j x^{(k)} x^{(j-k)} & {\rm otherwise}
\end{array} \right.
\] 
It follows that  B[ X(t) ] = \sqrt{ Q[ X(t) ] } and from the equations for the 9.3.1.3: square root that for  j = 0 , 1, \ldots ,  \[
\begin{array}{rcl}
b^{(0)}   & = & \sqrt{ q^{(0)} }
\\
b^{(j+1)} & = &
     \frac{1}{j+1} \frac{1}{ b^{(0)} } 
     \left(
          \frac{j+1}{2} q^{(j+1) }
          - \sum_{k=1}^j k b^{(k)} b^{(j+1-k)}  
     \right)
\end{array}
\] 
It now follows from the general 9.3.1.c.b: Taylor coefficients recursion formula that for  j = 0 , 1, \ldots ,  \[
\begin{array}{rcl}
z^{(0)} & = & \arccos ( x^{(0)} )
\\
e^{(j)} 
& = & d^{(j)} + \sum_{k=0}^{j} a^{(j-k)} * z^{(k)}
\\
& = & \left\{ \begin{array}{ll}
     -1 & {\rm if} \; j = 0 \\
     0 & {\rm otherwise}
\end{array} \right.
\\
z^{(j+1)} & = & \frac{1}{j+1} \frac{1}{ b^{(0)} } 
\left(
     \sum_{k=0}^j e^{(k)} (j+1-k) x^{(j+1-k)} 
     - \sum_{k=1}^j b^{(k)} (j+1-k) z^{(j+1-k)}  
\right)
\\
z^{(j+1)} & = & - \frac{1}{j+1} \frac{1}{ b^{(0)} } 
\left(
     (j+1) x^{(j+1)}
     + \sum_{k=1}^j k z^{(k)}  b^{(j+1-k)} 
\right)
\end{array}
\] 

Input File: omh/acos_forward.omh
9.3.2: The Theory of Reverse Mode

9.3.2.a: Taylor Notation
In Taylor notation, each variable corresponds to a function of a single argument which we denote by t (see Section 10.2 of 9.5.c: Evaluating Derivatives ). Here and below  X(t) ,  Y(t) , and Z(t) are scalar valued functions and the corresponding p-th order Taylor coefficients row vectors are  x ,  y and  z ; i.e.,  \[
\begin{array}{lcr}
X(t) & = & x^{(0)} + x^{(1)} * t + \cdots + x^{(p)} * t^p + O( t^{p+1} ) \\
Y(t) & = & y^{(0)} + y^{(1)} * t + \cdots + y^{(p)} * t^p + O( t^{p+1} ) \\
Z(t) & = & z^{(0)} + z^{(1)} * t + \cdots + z^{(p)} * t^p + O( t^{p+1} ) 
\end{array}
\] 
For the purposes of this discussion, we are given the p-th order Taylor coefficient row vectors  x ,  y , and  z . In addition, we are given the partial derivatives of a scalar valued function  \[
     G ( z^{(j)} , \ldots , z^{(0)}, x, y)
\] 
We need to compute the partial derivatives of the scalar valued function  \[
     H ( z^{(j-1)} , \ldots , z^{(0)}, x, y)  = 
     G ( z^{(j)}, z^{(j-1)} , \ldots , z^{(0)}, x , y )
\] 
where  z^{(j)} is expressed as a function of the j-1-th order Taylor coefficient row vector for  Z and the vectors  x ,  y ; i.e.,  z^{(j)} above is a shorthand for  \[
     z^{(j)} ( z^{(j-1)} , \ldots , z^{(0)}, x, y )
\] 
If we do not provide a formula for a partial derivative of  H , then that partial derivative has the same value as for the function  G .

9.3.2.b: Binary Operators

9.3.2.b.a: Addition
The forward mode formula for 9.3.1.b.a: addition is  \[
     z^{(j)} =  x^{(j)} + y^{(j)}
\] 
It follows that for  k = 0 , \ldots , j and  l = 0 , \ldots , j-1  \[
\begin{array}{rcl}
\D{H}{ x^{(k)} } & = &
\D{G}{ x^{(k)} }  + \D{G}{ z^{(k)} } \\
\\
\D{H}{ y^{(k)} } & = &
\D{G}{ y^{(k)} }  + \D{G}{ z^{(k)} } 
\\
\D{H}{ z^{(l)} } & = & \D{G}{ z^{(l)} }  
\end{array} 
\] 


9.3.2.b.b: Subtraction
The forward mode formula for 9.3.1.b.b: subtraction is  \[
     z^{(j)} =  x^{(j)} - y^{(j)}
\] 
It follows that for  k = 0 , \ldots , j  \[
\begin{array}{rcl}
\D{H}{ x^{(k)} } & = &
\D{G}{ x^{(k)} }  - \D{G}{ z^{(k)} } \\
\\
\D{H}{ y^{(k)} } & = &
\D{G}{ y^{(k)} }  - \D{G}{ z^{(k)} } 
\end{array} 
\] 


9.3.2.b.c: Multiplication
The forward mode formula for 9.3.1.b.c: multiplication is  \[
     z^{(j)} = \sum_{k=0}^j x^{(j-k)} * y^{(k)}
\] 
It follows that for  k = 0 , \ldots , j  \[
\begin{array}{rcl}
\D{H}{ x^{(j-k)} } & = &
\D{G}{ x^{(j-k)} }  +
\D{G}{ z^{(j)} } y^{(k)}  
\\
\D{H}{ y^{(k)} } & = &
\D{G}{ y^{(k)} }  +
\D{G}{ z^{(j)} } x^{(j-k)}  
\end{array} 
\] 


9.3.2.b.d: Division
The forward mode formula for 9.3.1.b.d: division is  \[
z^{(j)} = 
\frac{1}{y^{(0)}} 
\left( 
     x^{(j)} - \sum_{k=1}^j z^{(j-k)} y^{(k)} 
\right)
\] 
It follows that for  k = 1 , \ldots , j  \[
\begin{array}{rcl}
\D{H}{ x^{(j)} } & = &
\D{G}{ x^{(j)} }  + \D{G}{ z^{(j)} } \frac{1}{y^{(0)}} 
\\
\D{H}{ z^{(j-k)} } & = &
\D{G}{ z^{(j-k)} }  - \D{G}{ z^{(j)} } \frac{1}{y^{(0)}} y^{(k)}
\\
\D{H}{ y^{(k)} } & = &
\D{G}{ y^{(k)} }  - \D{G}{ z^{(j)} } \frac{1}{y^{(0)}} z^{(j-k)}
\\
\D{H}{ y^{(0)} } & = &
\D{G}{ y^{(0)} }  - \D{G}{ z^{(j)} } \frac{1}{y^{(0)}} \frac{1}{y^{(0)}} 
\left( 
     x^{(j)} - \sum_{k=1}^j z^{(j-k)} y^{(k)} 
\right)
\\
& = &
\D{G}{ y^{(0)} }  - \D{G}{ z^{(j)} } \frac{1}{y^{(0)}} z^{(j)}
\end{array}
\] 


9.3.2.c: Standard Math Functions
The standard math functions have only one argument. Hence we are given the partial derivatives of a scalar valued function  \[
     G ( z^{(j)} , \ldots , z^{(0)}, x)
\] 
We need to compute the partial derivatives of the scalar valued function  \[
     H ( z^{(j-1)} , \ldots , z^{(0)}, x)  = 
     G ( z^{(j)}, z^{(j-1)} , \ldots , z^{(0)}, x)
\] 
where  z^{(j)} is expressed as a function of the j-1-th order Taylor coefficient row vector for  Z and the vector  x ; i.e.,  z^{(j)} above is a shorthand for  \[
     z^{(j)} ( z^{(j-1)} , \ldots , z^{(0)}, x )
\] 
9.3.2.1: Exponential Function Reverse Mode Theory
9.3.2.2: Logarithm Function Reverse Mode Theory
9.3.2.3: Square Root Function Reverse Mode Theory
9.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
9.3.2.5: Arctangent Function Reverse Mode Theory
9.3.2.6: Arcsine Function Reverse Mode Theory
9.3.2.7: Arccosine Function Reverse Mode Theory

Input File: omh/reverse_theory.omh
9.3.2.1: Exponential Function Reverse Mode Theory
We use the reverse theory 9.3.2.c: standard math function definition for the functions  H and  G . The forward mode formulas for the 9.3.1.1: exponential function are  \[
     z^{(j)}  =  \exp ( x^{(0)} ) 
\] 
if  j = 0 , and for  j > 0 ,  \[
     z^{(j)}  = \frac{1}{j} 
          \sum_{k=1}^{j} k x^{(k)} z^{(j-k)} 
\] 
If  j = 0 , we have the relation  \[
\begin{array}{rcl}
\D{H}{ x^{(j)} } & = & 
\D{G}{ x^{(j)} }  + \D{G}{ z^{(j)} } z^{(j)}
\end{array}
\] 
If  j > 0 , then for  k = 1 , \ldots , j  \[
\begin{array}{rcl}
\D{H}{ x^{(k)} } & = & 
\D{G}{ x^{(k)} }  + \D{G}{ z^{(j)} } \frac{1}{j}  k z^{(j-k)}
\\
\D{H}{ z^{(j-k)} } & = & 
\D{G}{ z^{(j-k)} }  + \D{G}{ z^{(j)} } \frac{1}{j}  k x^{(k)}
\end{array}
\] 

Input File: omh/exp_reverse.omh
9.3.2.2: Logarithm Function Reverse Mode Theory
We use the reverse theory 9.3.2.c: standard math function definition for the functions  H and  G . The forward mode formulas for the 9.3.1.2: logarithm function are  \[
     z^{(j)}  =  \log ( x^{(0)} ) 
\] 
for the case  j = 0 , and for  j > 0 ,  \[
z^{(j)} 
=  \frac{1}{ x^{(0)} } \frac{1}{j} 
\left(
     j x^{(j)}
     - \sum_{k=1}^{j-1} k z^{(k)} x^{(j-k)}  
\right)
\] 
If  j = 0 , we have the relation  \[
\D{H}{ x^{(j)} } = 
\D{G}{ x^{(j)} }  + \D{G}{ z^{(j)} } \frac{1}{ x^{(0)} }
\] 
If  j > 0 , then for  k = 1 , \ldots , j-1  \[
\begin{array}{rcl}
\D{H}{ x^{(0)} } & = &
\D{G}{ x^{(0)} } - \D{G}{ z^{(j)} } \frac{1}{ x^{(0)} } 
\frac{1}{ x^{(0)} } \frac{1}{j} 
\left(
     j x^{(j)}
     - \sum_{m=1}^{j-1} m z^{(m)} x^{(j-m)}  
\right)
\\
& = &
\D{G}{ x^{(0)} } - \D{G}{ z^{(j)} } \frac{1}{ x^{(0)} } z^{(j)}
\\
\D{H}{ x^{(j)} } & = & 
\D{G}{ x^{(j)} } + \D{G}{ z^{(j)} } \frac{1}{ x^{(0)} } 
\\
\D{H}{ x^{(j-k)} } & = & 
\D{G}{ x^{(j-k)} }  - 
     \D{G}{ z^{(j)} } \frac{1}{ x^{(0)} } \frac{1}{j}  k z^{(k)}
\\
\D{H}{ z^{(k)} } & = & 
\D{G}{ z^{(k)} }  - 
     \D{G}{ z^{(j)} } \frac{1}{ x^{(0)} } \frac{1}{j}  k x^{(j-k)}
\end{array}
\] 

Input File: omh/log_reverse.omh
9.3.2.3: Square Root Function Reverse Mode Theory
We use the reverse theory 9.3.2.c: standard math function definition for the functions  H and  G . The forward mode formulas for the 9.3.1.3: square root function are  \[
     z^{(j)}  =  \sqrt { x^{(0)} } 
\] 
for the case  j = 0 , and for  j > 0 ,  \[
z^{(j)}  =  \frac{1}{j} \frac{1}{ z^{(0)} } 
\left(
     \frac{j}{2} x^{(j) }
     - \sum_{\ell=1}^{j-1} \ell z^{(\ell)} z^{(j-\ell)}  
\right)
\] 
If  j = 0 , we have the relation  \[
\begin{array}{rcl}
\D{H}{ x^{(j)} } & = & 
\D{G}{ x^{(j)} }  + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(0)} }
\\
& = &
\D{G}{ x^{(j)} }  + \D{G}{ z^{(j)} } \frac{1}{2 z^{(0)} }
\end{array}
\] 
If  j > 0 , then for  k = 1, \ldots , j-1  \[
\begin{array}{rcl}
\D{H}{ z^{(0)} } & = & 
\D{G}{ z^{(0)} }  + \D{G} { z^{(j)} } \D{ z^{(j)} }{ z^{(0)} } 
\\
& = &
\D{G}{ z^{(0)} }  - 
\D{G}{ z^{(j)} }  \frac{ z^{(j)} }{ z^{(0)} }
\\
\D{H}{ x^{(j)} } & = & 
\D{G}{ x^{(j)} }  + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(j)} }
\\
& = &
\D{G}{ x^{(j)} }  + \D{G}{ z^{(j)} } \frac{1}{ 2 z^{(0)} } 
\\
\D{H}{ z^{(k)} } & = & 
\D{G}{ z^{(k)} }  + \D{G}{ z^{(j)} } \D{ z^{(j)} }{ z^{(k)} }
\\
& = &
\D{G}{ z^{(k)} }  - \D{G}{ z^{(j)} } \frac{ z^{(j-k)} }{ z^{(0)} }
\end{array}
\] 

Input File: omh/sqrt_reverse.omh
9.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
We use the reverse theory 9.3.2.c: standard math function definition for the functions  H and  G . In addition, we use the following definitions for  s and  c and the integer  \ell
Coefficients  s  c  \ell
Trigonometric Case  \sin [ X(t) ]  \cos [ X(t) ] 1
Hyperbolic Case  \sinh [ X(t) ]  \cosh [ X(t) ] -1
We use the value  \[
     z^{(j)} = ( s^{(j)} , c^{(j)} )
\] 
in the definition for  G and  H . The forward mode formulas for the 9.3.1.4: sine and cosine functions are  \[
\begin{array}{rcl}
s^{(j)}  & = & \frac{1 + \ell}{2} \sin ( x^{(0)} ) 
           +   \frac{1 - \ell}{2} \sinh ( x^{(0)} ) 
\\
c^{(j)}  & = & \frac{1 + \ell}{2} \cos ( x^{(0)} ) 
           +   \frac{1 - \ell}{2} \cosh ( x^{(0)} ) 
\end{array}
\] 
for the case  j = 0 , and for  j > 0 ,  \[
\begin{array}{rcl}
s^{(j)} & = & \frac{1}{j} 
     \sum_{k=1}^{j} k x^{(k)} c^{(j-k)}  \\
c^{(j)} & = & - \ell \frac{1}{j} 
     \sum_{k=1}^{j} k x^{(k)} s^{(j-k)} 
\end{array}
\] 
If  j = 0 , we have the relation  \[
\begin{array}{rcl}
\D{H}{ x^{(j)} } & = & 
\D{G}{ x^{(j)} }  
+ \D{G}{ s^{(j)} } c^{(0)}
- \ell \D{G}{ c^{(j)} } s^{(0)}
\end{array}
\] 
If  j > 0 , then for  k = 1, \ldots , j-1  \[
\begin{array}{rcl}
\D{H}{ x^{(k)} } & = & 
\D{G}{ x^{(k)} }  
+ \D{G}{ s^{(j)} } \frac{1}{j} k c^{(j-k)}
- \ell \D{G}{ c^{(j)} } \frac{1}{j} k s^{(j-k)}
\\
\D{H}{ s^{(j-k)} } & = & 
\D{G}{ s^{(j-k)} } - \ell \D{G}{ c^{(j)} } \frac{1}{j} k x^{(k)}
\\
\D{H}{ c^{(j-k)} } & = & 
\D{G}{ c^{(j-k)} } + \D{G}{ s^{(j)} } \frac{1}{j} k x^{(k)}
\end{array}
\] 

Input File: omh/sin_cos_reverse.omh
9.3.2.5: Arctangent Function Reverse Mode Theory
We use the reverse theory 9.3.2.c: standard math function definition for the functions  H and  G . In addition, we use  b for the p-th order Taylor coefficient row vectors corresponding to  1 + X(t) * X(t) and replace  z^{(j)} by  \[
     ( z^{(j)} , b^{(j)} )
\] 
in the definition for  G and  H . The forward mode formulas for the 9.3.1.5: arctangent function are  \[
\begin{array}{rcl}
     z^{(j)}  & = & \arctan ( x^{(0)} ) \\
     b^{(j)}  & = & 1 + x^{(0)} x^{(0)}
\end{array}
\] 
for the case  j = 0 , and for  j > 0 ,  \[
\begin{array}{rcl}
b^{(j)} & = &  
     \sum_{k=0}^j x^{(k)} x^{(j-k)} 
\\
z^{(j)} & = & \frac{1}{j} \frac{1}{ b^{(0)} } 
\left(
     j x^{(j)}
     - \sum_{k=1}^{j-1} k z^{(k)}  b^{(j-k)} 
\right)
\end{array}
\] 
If  j = 0 , we have the relation  \[
\begin{array}{rcl}
\D{H}{ x^{(j)} } & = & 
\D{G}{ x^{(j)} }  
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(0)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ x^{(0)} }
\\
& = &
\D{G}{ x^{(j)} }  
+ \D{G}{ z^{(j)} } \frac{1}{ b^{(0)} }
+ \D{G}{ b^{(j)} } 2 x^{(0)}
\end{array}
\] 
If  j > 0 , then for  k = 1, \ldots , j-1  \[
\begin{array}{rcl}
\D{H}{ b^{(0)} } & = & 
\D{G}{ b^{(0)} } 
- \D{G}{ z^{(j)} } \D{ z^{(j)} }{ b^{(0)} }
\\
& = &
\D{G}{ b^{(0)} } 
- \D{G}{ z^{(j)} } \frac{ z^{(j)} }{ b^{(0)} } 
\\
\D{H}{ x^{(j)} } & = & 
\D{G}{ x^{(j)} } 
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(j)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ x^{(j)} }
\\
& = &
\D{G}{ x^{(j)} } 
+ \D{G}{ z^{(j)} } \frac{1}{ b^{(0)} }
+ \D{G}{ b^{(j)} } 2 x^{(0)}
\\
\D{H}{ x^{(0)} } & = & 
\D{G}{ x^{(0)} } 
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(0)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ x^{(0)} }
\\
& = & 
\D{G}{ x^{(0)} } + 
\D{G}{ b^{(j)} } 2 x^{(j)}
\\
\D{H}{ x^{(k)} } & = & 
\D{G}{ x^{(k)} } 
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(k)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ x^{(k)} }
\\
& = & 
\D{G}{ x^{(k)} } 
+ \D{G}{ b^{(j)} } 2 x^{(j-k)}
\\
\D{H}{ z^{(k)} } & = & 
\D{G}{ z^{(k)} } 
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ z^{(k)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ z^{(k)} }
\\
& = & 
\D{G}{ z^{(k)} } 
- \D{G}{ z^{(j)} } \frac{k b^{(j-k)} }{ j b^{(0)} }
\\
\D{H}{ b^{(j-k)} } & = & 
\D{G}{ b^{(j-k)} } 
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ b^{(j-k)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ b^{(j-k)} }
\\
& = & 
\D{G}{ b^{(j-k)} } 
- \D{G}{ z^{(j)} } \frac{k z^{(k)} }{ j b^{(0)} } 
\end{array}
\] 

Input File: omh/atan_reverse.omh
9.3.2.6: Arcsine Function Reverse Mode Theory
We use the reverse theory 9.3.2.c: standard math function definition for the functions  H and  G . In addition, we use  q and  b for the p-th order Taylor coefficient row vectors corresponding to functions  \[
\begin{array}{rcl}
     Q(t) & = & 1 - X(t) * X(t) \\
     B(t) & = & \sqrt{ Q(t) }
\end{array}
\] 
and replace  z^{(j)} by  \[
     ( z^{(j)} , b^{(j)} )
\] 
in the definition for  G and  H . The forward mode formulas for the 9.3.1.6: asin function are  \[
\begin{array}{rcl}
     q^{(0)}  & = & 1 - x^{(0)} x^{(0)} \\
     b^{(j)}  & = & \sqrt{ q^{(0)} }    \\
     z^{(j)}  & = & \arcsin ( x^{(0)} )
\end{array}
\] 
for the case  j = 0 , and for  j > 0 ,  \[
\begin{array}{rcl}
q^{(j)} & = &  
     - \sum_{k=0}^j x^{(k)} x^{(j-k)} 
\\
b^{(j)} & = &
     \frac{1}{j} \frac{1}{ b^{(0)} } 
     \left(
          \frac{j}{2} q^{(j)}
          - \sum_{k=1}^{j-1} k b^{(k)} b^{(j-k)}  
     \right)
\\
z^{(j)} & = & \frac{1}{j} \frac{1}{ b^{(0)} } 
\left(
     j x^{(j)}
     - \sum_{k=1}^{j-1} k z^{(k)}  b^{(j-k)} 
\right)
\end{array}
\] 
If  j = 0 , we have the relation  \[
\begin{array}{rcl}
\D{H}{ x^{(j)} } & = & 
\D{G}{ x^{(j)} }  
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(0)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ q^{(0)} } \D{ q^{(0)} }{ x^{(0)} }
\\
& = &
\D{G}{ x^{(j)} }  
+ \D{G}{ z^{(j)} } \frac{1}{ b^{(0)} }
- \D{G}{ b^{(j)} } \frac{ x^{(0)} }{ b^{(0)} }
\end{array}
\] 
If  j > 0 , then for  k = 1, \ldots , j-1  \[
\begin{array}{rcl}
\D{H}{ b^{(0)} } & = & 
\D{G}{ b^{(0)} } 
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ b^{(0)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ b^{(0)} }
\\
& = &
\D{G}{ b^{(0)} } 
- \D{G}{ z^{(j)} } \frac{ z^{(j)} }{ b^{(0)} } 
- \D{G}{ b^{(j)} } \frac{ b^{(j)} }{ b^{(0)} }
\\
\D{H}{ x^{(0)} } & = & 
\D{G}{ x^{(0)} } 
+
\D{G}{ b^{(j)} } \D{ b^{(j)} }{ q^{(j)} } \D{ q^{(j)} }{ x^{(0)} }
\\
& = & 
\D{G}{ x^{(0)} } 
- \D{G}{ b^{(j)} } \frac{ x^{(j)} }{ b^{(0)} }
\\
\D{H}{ x^{(j)} } & = & 
\D{G}{ x^{(j)} } 
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(j)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ q^{(j)} } \D{ q^{(j)} }{ x^{(j)} }
\\
& = & 
\D{G}{ x^{(j)} } 
+ \D{G}{ z^{(j)} } \frac{1}{ b^{(0)} } 
- \D{G}{ b^{(j)} } \frac{ x^{(0)} }{ b^{(0)} }
\\
\D{H}{ b^{(j - k)} } & = & 
\D{G}{ b^{(j - k)} } 
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ b^{(j - k)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ b^{(j - k)} }
\\
& = &
\D{G}{ b^{(j - k)} } 
- \D{G}{ z^{(j)} } \frac{k z^{(k)} }{j b^{(0)} }
- \D{G}{ b^{(j)} } \frac{ b^{(k)} }{ b^{(0)} }
\\
\D{H}{ x^{(k)} } & = & 
\D{G}{ x^{(k)} } 
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(k)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ q^{(j)} } \D{ q^{(j)} }{ x^{(k)} }
\\
& = & 
\D{G}{ x^{(k)} } 
- \D{G}{ b^{(j)} } \frac{ x^{(j-k)} }{ b^{(0)} }
\\
\D{H}{ z^{(k)} } & = & 
\D{G}{ z^{(k)} } 
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ z^{(k)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ z^{(k)} }
\\
& = &
\D{G}{ z^{(k)} } 
- \D{G}{ z^{(j)} } \frac{k b^{(j-k)} }{ j b^{(0)} } 
\end{array}
\] 

Input File: omh/asin_reverse.omh
9.3.2.7: Arccosine Function Reverse Mode Theory
We use the reverse theory 9.3.2.c: standard math function definition for the functions  H and  G . In addition, we use  q and  b for the p-th order Taylor coefficient row vectors corresponding to functions  \[
\begin{array}{rcl}
     Q(t) & = & 1 - X(t) * X(t) \\
     B(t) & = & \sqrt{ Q(t) }
\end{array}
\] 
and replace  z^{(j)} by  \[
     ( z^{(j)} , b^{(j)} )
\] 
in the definition for  G and  H . The forward mode formulas for the 9.3.1.7: acos function are  \[
\begin{array}{rcl}
     q^{(0)}  & = & 1 - x^{(0)} x^{(0)} \\
     b^{(j)}  & = & \sqrt{ q^{(0)} }    \\
     z^{(j)}  & = & \arccos ( x^{(0)} )
\end{array}
\] 
for the case  j = 0 , and for  j > 0 ,  \[
\begin{array}{rcl}
q^{(j)} & = &  
     - \sum_{k=0}^j x^{(k)} x^{(j-k)} 
\\
b^{(j)} & = &
     \frac{1}{j} \frac{1}{ b^{(0)} } 
     \left(
          \frac{j}{2} q^{(j)}
          - \sum_{k=1}^{j-1} k b^{(k)} b^{(j-k)}  
     \right)
\\
z^{(j)} & = & - \frac{1}{j} \frac{1}{ b^{(0)} } 
\left(
     j x^{(j)}
     + \sum_{k=1}^{j-1} k z^{(k)}  b^{(j-k)} 
\right)
\end{array}
\] 
If  j = 0 , we have the relation  \[
\begin{array}{rcl}
\D{H}{ x^{(j)} } & = & 
\D{G}{ x^{(j)} }  
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(0)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ q^{(0)} } \D{ q^{(0)} }{ x^{(0)} }
\\
& = &
\D{G}{ x^{(j)} }  
- \D{G}{ z^{(j)} } \frac{1}{ b^{(0)} }
- \D{G}{ b^{(j)} } \frac{ x^{(0)} }{ b^{(0)} }
\end{array}
\] 
If  j > 0 , then for  k = 1, \ldots , j-1  \[
\begin{array}{rcl}
\D{H}{ b^{(0)} } & = & 
\D{G}{ b^{(0)} } 
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ b^{(0)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ b^{(0)} }
\\
& = &
\D{G}{ b^{(0)} } 
- \D{G}{ z^{(j)} } \frac{ z^{(j)} }{ b^{(0)} } 
- \D{G}{ b^{(j)} } \frac{ b^{(j)} }{ b^{(0)} }
\\
\D{H}{ x^{(0)} } & = & 
\D{G}{ x^{(0)} } 
+
\D{G}{ b^{(j)} } \D{ b^{(j)} }{ q^{(j)} } \D{ q^{(j)} }{ x^{(0)} }
\\
& = & 
\D{G}{ x^{(0)} } 
- \D{G}{ b^{(j)} } \frac{ x^{(j)} }{ b^{(0)} }
\\
\D{H}{ x^{(j)} } & = & 
\D{G}{ x^{(j)} } 
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(j)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ q^{(j)} } \D{ q^{(j)} }{ x^{(j)} }
\\
& = & 
\D{G}{ x^{(j)} } 
- \D{G}{ z^{(j)} } \frac{1}{ b^{(0)} } 
- \D{G}{ b^{(j)} } \frac{ x^{(0)} }{ b^{(0)} }
\\
\D{H}{ b^{(j - k)} } & = & 
\D{G}{ b^{(j - k)} } 
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ b^{(j - k)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ b^{(j - k)} }
\\
& = &
\D{G}{ b^{(j - k)} } 
- \D{G}{ z^{(j)} } \frac{k z^{(k)} }{j b^{(0)} }
- \D{G}{ b^{(j)} } \frac{ b^{(k)} }{ b^{(0)} }
\\
\D{H}{ x^{(k)} } & = & 
\D{G}{ x^{(k)} } 
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ x^{(k)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ q^{(j)} } \D{ q^{(j)} }{ x^{(k)} }
\\
& = & 
\D{G}{ x^{(k)} } 
- \D{G}{ b^{(j)} } \frac{ x^{(j-k)} }{ b^{(0)} }
\\
\D{H}{ z^{(k)} } & = & 
\D{G}{ z^{(k)} } 
+ \D{G}{ z^{(j)} } \D{ z^{(j)} }{ z^{(k)} }
+ \D{G}{ b^{(j)} } \D{ b^{(j)} }{ z^{(k)} }
\\
& = &
\D{G}{ z^{(k)} } 
- \D{G}{ z^{(j)} } \frac{k b^{(j-k)} }{ j b^{(0)} } 
\end{array}
\] 

Input File: omh/acos_reverse.omh
9.3.3: An Important Reverse Mode Identity
The theorem and the proof below are a restatement of the results on page 236 of 9.5.c: Evaluating Derivatives .

9.3.3.a: Notation
Given a function  f(u, v) where  u \in B^n we use the notation  \[
\D{f}{u} (u, v) = \left[ \D{f}{u_1} (u, v) , \cdots , \D{f}{u_n} (u, v) \right]
\] 


9.3.3.b: Reverse Sweep
When using 5.6.2.3: reverse mode we are given a function  F : B^n \rightarrow B^m , a matrix of Taylor coefficients  x \in B^{n \times p} , and a weight vector  w \in B^m . We define the functions  X : B \times B^{n \times p} \rightarrow B^n ,  W : B \times B^{n \times p} \rightarrow B , and  W_j : B^{n \times p} \rightarrow B by  \[
\begin{array}{rcl}
     X(t , x) & = & x^{(0)} + x^{(1)} t + \cdots + x^{(p-1)} t^{p-1}
     \\
     W(t, x)   & = &  w_0 F_0 [X(t, x)] + \cdots + w_{m-1} F_{m-1} [X(t, x)]
     \\
     W_j (x)   & = & \frac{1}{j!} \Dpow{j}{t} W(0, x)
\end{array}
\]
where  x^{(j)} is the j-th column of  x \in B^{n \times p} . The theorem below implies that  \[
     \D{ W_j }{ x^{(i)} } (x) = \D{ W_{j-i} }{ x^{(0)} } (x) 
\] 
A 5.6.2.3: general reverse sweep calculates the values  \[
     \D{ W_{p-1} }{ x^{(i)} } (x)  \hspace{1cm} (i = 0 , \ldots , p-1)
\] 
But the return values for a reverse sweep are specified in terms of the more useful values  \[
     \D{ W_j }{ x^{(0)} } (x)  \hspace{1cm} (j = 0 , \ldots , p-1)
\] 


9.3.3.c: Theorem
Suppose that  F : B^n \rightarrow B^m is a  p times continuously differentiable function. Define the functions  Z : B \times B^{n \times p} \rightarrow B^n ,  Y : B \times B^{n \times p }\rightarrow B^m , and  y^{(j)} : B^{n \times p }\rightarrow B^m by  \[
\begin{array}{rcl}
     Z(t, x)  & = & x^{(0)} + x^{(1)} t + \cdots + x^{(p-1)} t^{p-1}
     \\
     Y(t, x)  & = & F [ Z(t, x) ]
     \\
     y^{(j)} (x) & = & \frac{1}{j !} \Dpow{j}{t} Y(0, x) 
\end{array}
\] 
where  x^{(j)} denotes the j-th column of  x \in B^{n \times p} . It follows that for all  i, j such that  i \leq j < p ,  \[
\begin{array}{rcl}
\D{ y^{(j)} }{ x^{(i)} } (x) & = & \D{ y^{(j-i)} }{ x^{(0)} } (x)
\end{array}
\] 


9.3.3.d: Proof
It follows from the definitions that  \[
\begin{array}{rclr}
\D{ y^{(j)} }{ x^{(i)} } (x)
& = & 
\frac{1}{j ! } \D{ }{ x^{(i)} } 
     \left[ \Dpow{j}{t} (F \circ  Z) (t, x)  \right]_{t=0}
\\
& = &
\frac{1}{j ! } \left[ \Dpow{j}{t} 
     \D{ }{ x^{(i)} } (F \circ  Z) (t, x) 
\right]_{t=0}
\\
& = &
\frac{1}{j ! } \left\{ 
     \Dpow{j}{t} \left[ t^i ( F^{(1)} \circ Z ) (t, x) \right] 
\right\}_{t=0}
\end{array}
\] 
For  k > i , the k-th partial of  t^i with respect to  t is zero. Thus, the j-th partial with respect to  t is given by  \[
\begin{array}{rcl}
\Dpow{j}{t} \left[ t^i ( F^{(1)} \circ Z ) (t, x) \right] 
& = &
\sum_{k=0}^i 
\left( \begin{array}{c} j \\ k \end{array} \right)
\frac{ i ! }{ (i - k) ! } t^{i-k} \; 
\Dpow{j-k}{t} ( F^{(1)} \circ Z ) (t, x)
\\
\left\{ 
     \Dpow{j}{t} \left[ t^i ( F^{(1)} \circ Z ) (t, x) \right] 
\right\}_{t=0}
& = &
\left( \begin{array}{c} j \\ i \end{array} \right)
i ! \Dpow{j-i}{t} ( F^{(1)} \circ Z ) (t, x)
\\
& = &
\frac{ j ! }{ (j - i) ! }
\Dpow{j-i}{t} ( F^{(1)} \circ Z ) (t, x)
\\
\D{ y^{(j)} }{ x^{(i)} } (x)
& = & 
\frac{ 1 }{ (j - i) ! }
\Dpow{j-i}{t} ( F^{(1)} \circ Z ) (t, x)
\end{array}
\] 
Applying this formula to the case where  j is replaced by  j - i and  i is replaced by zero, we obtain  \[
\D{ y^{(j-i)} }{ x^{(0)} } (x)
=
\frac{ 1 }{ (j - i) ! }
\Dpow{j-i}{t} ( F^{(1)} \circ Z ) (t, x)
=
\D{ y^{(j)} }{ x^{(i)} } (x)
\] 
which completes the proof.
Input File: omh/reverse_identity.omh
9.4: Glossary

9.4.a: AD Function
Given an 5: ADFun object f there is a corresponding AD of Base 9.4.g.b: operation sequence . This operation sequence defines a function  F : B^n \rightarrow B^m  where B is the space corresponding to objects of type Base. We refer to  F as the AD function corresponding to the operation sequence and to the object f. (See the 5.8.l: FunCheck discussion for possible differences between  F(x) and the algorithm that defined the operation sequence.)

9.4.b: AD of Base
An object is called an AD of Base object if its type is either AD<Base> (see 4.1: default or 4.2.a.a: constructor ) or VecAD<Base>::reference (see 4.6: VecAD ) for some Base type.

9.4.c: AD Levels Above Base
If Base is a type, the AD levels above Base is the following sequence of types:
     AD<Base> , AD< AD<Base> > , AD< AD< AD<Base> > > , ...

9.4.d: Base Function
A function  f : B \rightarrow B is referred to as a Base function, if Base is a C++ type that represents elements of the domain and range space of f; i.e. elements of  B .

9.4.e: Base Type
If x is an AD<Base> object, Base is referred to as the base type for x.

9.4.f: Elementary Vector
The j-th elementary vector  e^j \in B^m is defined by  \[
e_i^j = \left\{ \begin{array}{ll}
     1 & {\rm if} \; i = j \\
     0 & {\rm otherwise}
\end{array} \right.
\] 


9.4.g: Operation

9.4.g.a: Atomic
An atomic Type operation is an operation that has a Type result and is not made up of other more basic operations.

9.4.g.b: Sequence
A sequence of atomic Type operations is called a Type operation sequence. A sequence of atomic 9.4.b: AD of Base operations is referred to as an AD of Base operation sequence. The abbreviated notation AD operation sequence is often used when it is not necessary to specify the base type.

9.4.g.c: Dependent
Suppose that x and y are Type objects and the result of
     x < y
has type bool (where Type is not the same as bool). If one executes the following code
     if( x < y )
          y = cos(x);
     else y = sin(x);
the choice above depends on the value of x and y and the two choices result in a different Type operation sequence. In this case, we say that the Type operation sequence depends on x and y.

9.4.g.d: Independent
Suppose that i and n are size_t objects, and x[i], y are Type objects, where Type is different from size_t. The Type sequence of operations corresponding to
     y = Type(0);
     for(i = 0; i < n; i++)
          y += x[i];
does not depend on the value of x or y. In this case, we say that the Type operation sequence is independent of y and the elements of x.

9.4.h: Parameter
All Base objects are parameters. An AD<Base> object u is currently a parameter if its value does not depend on the value of an 5.1: Independent variable vector for an 9.4.j.a: active tape . If u is a parameter, the function 4.5.4: Parameter(u) returns true and 4.5.4: Variable(u) returns false.

9.4.i: Sparsity Pattern
Given a matrix  A \in B^{m \times n} , a boolean valued  m \times n matrix  P is a sparsity pattern for  A if for  i = 0, \ldots , m-1 and  j = 0 , \ldots , n-1 ,  \[
A_{i,j} \neq 0  
\; \Rightarrow \; 
P_{i,j} = {\rm true}
\] 
Given two sparsity patterns  P and Q for a matrix A, we say that P is more efficient than Q if P has fewer true elements than Q.

9.4.j: Tape

9.4.j.a: Active
A new tape is created and becomes active after each call of the form (see 5.1: Independent )
     Independent(x)
All operations that depend on the elements of x are recorded on this active tape.

9.4.j.b: Inactive
The 9.4.g.b: operation sequence stored in a tape must be transferred to a function object using the syntax (see 5.2: ADFun<Base> f(x, y) )
     ADFun<Base> f(x, y)
or using the syntax (see 5.3: f.Dependent(x, y) )
     f.Dependent(x, y)
After such a transfer, the tape becomes inactive.

9.4.j.c: Independent Variable
While the tape is active, we refer to the elements of x as the independent variables for the tape. When the tape becomes inactive, the corresponding objects become 9.4.h: parameters .

9.4.j.d: Dependent Variables
While the tape is active, we use the term dependent variables for the tape for any objects whose value depends on the independent variables for the tape. When the tape becomes inactive, the corresponding objects become 9.4.h: parameters .

9.4.k: Taylor Coefficient
Suppose  X : B \rightarrow B^n is a  p times continuously differentiable function in some neighborhood of zero. For  k = 0 , \ldots , p , we use the column vector  x^{(k)} \in B^n for the k-th order Taylor coefficient corresponding to  X which is defined by  \[
     x^{(k)} = \frac{1}{k !} \Dpow{k}{t} X(0)
\] 
It follows that  \[
     X(t) = x^{(0)} + x^{(1)} t + \cdots + x^{(p)} t^p  + R(t)
\]
where the remainder  R(t) divided by  t^p converges to zero as  t goes to zero.

9.4.l: Variable
An AD<Base> object u is a variable if its value depends on an independent variable vector for a currently 9.4.j.a: active tape . If u is a variable, 4.5.4: Variable(u) returns true and 4.5.4: Parameter(u) returns false. For example, directly after the code sequence
     Independent(x);
     AD<double> u = x[0];
the AD<double> object u is currently a variable. Directly after the code sequence
     Independent(x);
     AD<double> u = x[0];
     u = 5;
u is currently a 9.4.h: parameter (not a variable).

Note that we often drop the word currently and just refer to an AD<Base> object as a variable or parameter.
Input File: omh/glossary.omh
9.5: Bibliography

9.5.a: Abramowitz and Stegun
Handbook of Mathematical Functions, Dover, New York.

9.5.b: The C++ Programming Language
Bjarne Stroustrup, The C++ Programming Language, Special ed., AT&T, 2000

9.5.c: Evaluating Derivatives
Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, Andreas Griewank, SIAM, Philadelphia, 2000

9.5.d: Numerical Recipes
Numerical Recipes in Fortran: The Art of Scientific Computing, Second Edition, William H. Press, William T. Vetterling, Saul A. Teukolsky, Brian R. Flannery, Cambridge University Press, 1992

9.5.e: Shampine, L.F.
Implementation of Rosenbrock Methods, ACM Transactions on Mathematical Software, Vol. 8, No. 2, June 1982.
Input File: omh/bib.omh
9.6: Known Bugs and Problems Using CppAD

9.6.a: gcc 3.4.4 -O2
There appears to be a problem with gcc version 3.4.4 under Cygwin using the compiler option -O2.

9.6.a.a: Example
If you are running gcc 3.4.4, try using the 2.1.d: configure option
 
	CPP_ERROR_WARN="-O2 -Wall -ansi -pedantic-errors -std=c++98"
If the -O2 compiler option is a problem for your compiler, you will get warnings that do not make sense when executing the make command in the Example sub-directory. In addition, the example/Example program will generate a segmentation fault.

9.6.a.b: Adolc
If you are running gcc 3.4.4 and using the 2.1.d: configure options
     ./configure \
          ADOLC_DIR=AdolcDir \
          CPP_ERROR_WARN="-Wall" \
          BOOST_DIR=BoostDir
the following warning occurs during the make command:
 
/usr/lib/gcc/i686-pc-cygwin/3.4.4/include/c++/bits/stl_uninitialized.h:82: 
warning: '__cur' might be used uninitialized in this function
This appears to be the same problem discussed in
     
http://www.cygwin.com/ml/cygwin-apps/2005-06/msg00159.html
and its follow up messages.
Input File: omh/bugs.omh
9.7: The CppAD Wish List

9.7.a: Atan2
The 4.4.3.2: atan2 function could be made faster by adding a special operator for it.

9.7.b: BenderQuad
See the 6.20.b: problem with the current BenderQuad specifications.

9.7.c: CondExp
Extend the conditional expressions 4.4.4: CondExp so that they are valid for complex types by comparing real parts. In addition, use this change to extend 6.21: LuRatio so that it works with complex AD types.

9.7.d: Exceptions
When the function 5.1: Independent is called, a new tape is created. If an exception occurs before the call to the corresponding 5: ADFun constructor or 5.3: Dependent , the tape recording will never stop. Thus, there should be a way to abort a tape recording.

9.7.e: Ipopt
  1. A speed test for 8.1.1: ipopt_cppad_nlp should be added. Then changes should be made to improve its speed.
  2. Perhaps it would help to cache the solution of the sparse Jacobian and sparse Hessian graph coloring algorithm. Then, when the sparsity pattern does not depend on the argument value, these colorings would not have to be recomputed.
  3. In the case where retape(k) is true for some k, one can still use the structure of the representation to compute a sparsity structure. Currently ipopt_cppad_nlp uses a dense sparsity structure for this case.
  4. The new_x flag could be used to avoid zero order forward mode computations. Because the same ADFun object is used at different argument values, this would require forward mode at multiple argument values (see 9.7.g: multiple arguments ).


9.7.f: Library
One could build a CppAD library for use with the type AD<double>. This would speed up compilation for the most common usage where the Base type is double.

9.7.g: Multiple Arguments
It has been suggested that computing and storing forward mode results for multiple argument values (and for multiple orders) is faster for Adolc. Perhaps CppAD should allow for forward mode at multiple argument values (perhaps multiple orders).

9.7.h: Numeric Limits
Use a multiple of std::numeric_limits<double>::epsilon() instead of 1e-10 for a small number in correctness checks; e.g., see 4.4.2.12: tan.cpp .

9.7.i: Operation Sequence
It is possible to detect if the AD of Base 9.4.g.b: operation sequence does not depend on any of the 9.4.j.c: independent variable values. This could be returned as an extra 5.5: SeqProperty .

9.7.j: Optimization

9.7.j.a: Expression Hashing
Hash codes could be used to detect expressions that have already been computed (and avoid extra entries in the operation sequence). This would also involve hash coding the constants and avoiding duplicate copies in the constant table.

9.7.j.b: Microsoft Compiler
Microsoft's Visual C++ Version 9.0 generates a warning of the form warning C4396:%...% for every template function that is declared as both a friend and inline (it thinks it is only doing this for specializations of template functions). The CPPAD_INLINE preprocessor symbol is used to convert these inline directives to empty code (if Microsoft Visual C++ is used). If it is shown to be faster and does not slow down CppAD with other compilers, non-friend functions should be used to map these operations to member functions so that both can be compiled inline.

9.7.j.c: Remove Operations From Tape
A single 5.6.3.2: RevSparseJac sweep could be used to determine which parts of the operation sequence in an 5.2: ADFun object can be removed.

9.7.k: Scripting Languages
One could develop a SWIG compatible interface to AD<double> and ADFun<double> that would make it easy to connect to the SWIG-supported languages, e.g., Python; see SWIG (http://www.swig.org/) for a description of SWIG and a list of these languages. This could also be used for faster evaluation of algorithms that have a fixed 9.4.g.b: operation sequence . This would require the 9.7.f: library wish list entry to be implemented.

9.7.l: Software Guidelines

9.7.l.a: Boost
The following is a list of some software guidelines taken from boost (http://www.boost.org/more/lib_guide.htm#Guidelines) . These guidelines are not followed by the current CppAD source code, but perhaps they should be:
  1. Names (except as noted below) should be all lowercase, with words separated by underscores. For example, acronyms should be treated as ordinary names (xml_parser instead of XML_parser).
  2. Template parameter names should begin with an uppercase letter.
  3. Use spaces rather than tabs.


9.7.m: Sparse Jacobians and Hessians
Testing 9.2.5.5: cppad_sparse_hessian.cpp with USE_CPPAD_SPARSE_HESSIAN equal to 1 (true) and 0 (false) indicates that sparse_hessian is more efficient than 5.7.4: Hessian (for large sparse cases). Create an implementation of 5.7.8: sparse_hessian that is more efficient (the initial implementation was only meant as a demonstration). For example, use arrays of index sets where, for each row (column), the set contains the non-zero column (row) indices. (Also see the 9.7.e: Ipopt wish list.)

9.7.n: Sparsity Patterns
Add option to use index sets for each variable (instead of a boolean array) for the computation of sparsity patterns. This should be more efficient for very large problems. When using arrays of booleans, use OpenMP to parallelize the computation of the sparsity patterns.

9.7.o: Speed Testing
Extend the speed tests for Adolc, Fadbad, and Sacado to run under MS Windows. Run the CppAD 9.2: speed tests on a set of different machines and operating systems.

9.7.p: Tan and Tanh
The AD tan and tanh functions are implemented using the AD sin, cos, sinh and cosh functions. They could be improved by making them atomic using the equations  \[
\begin{array}{rcl}
     \tan^{(1)} (x)  & = & 1 + \tan (x)^2 \\
     \tanh^{(1)} (x) & = & 1 - \tanh (x)^2
\end{array}
\] 
see 9.3.1.c: standard math functions .

9.7.q: Tracing
Add forward and reverse mode operation tracing to the developer documentation (perhaps it will eventually become part of the user interface and documentation).

9.7.r: VecAD
Make the assignment operation in 4.6: VecAD like assignment in 4.2: ad_copy . This will fix slicing to int when assigning a double to a VecAD&lt; AD&lt;double&gt; &gt;::reference object.

9.7.s: Vector Element Type
Change cross references from 6.7.b: elements of a specified type to 6.7.j: value_type .
Input File: omh/wish_list.omh
9.8: Changes and Additions to CppAD

9.8.a: Introduction
The sections listed below contain a list of the changes to CppAD in reverse order by date. The purpose of these sections is to assist you in learning about changes between various versions of CppAD.

9.8.b: Contents
whats_new_09: 9.8.1: Changes and Additions to CppAD During 2009
whats_new_08: 9.8.2: Changes and Additions to CppAD During 2008
whats_new_07: 9.8.3: Changes and Additions to CppAD During 2007
whats_new_06: 9.8.4: Changes and Additions to CppAD During 2006
whats_new_05: 9.8.5: Changes and Additions to CppAD During 2005
whats_new_04: 9.8.6: Changes and Additions to CppAD During 2004
whats_new_03: 9.8.7: Changes and Additions to CppAD During 2003

Input File: omh/whats_new.omh
9.8.1: Changes and Additions to CppAD During 2009

9.8.1.a: Introduction
This section contains a list of the changes to CppAD during 2009 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions of CppAD.

9.8.1.b: 01-31
Modify cppad/local/op_code.hpp to avoid incorrect warning by g++ version 4.3.2 when building pycppad (a python interface to CppAD).

9.8.1.c: 01-18
Sometimes an error occurs while taping AD operations. The 5.4: abort_recording function has been added to make it easier to recover in such cases.

Previously, the CppAD speed and comparison tests used Adolc-1.10.2. The version used in the tests has been upgraded to Adolc-2.0.0 (http://www.math.tu-dresden.de/~adol-c/) .

A discussion has been added to the documentation for 5.7.1: Jacobian about its use of 5.7.1.g: forward or reverse mode depending on which it estimates is more efficient.

A minor typo has been fixed in the description of 5.6.2.3.e: W(t, u) in 5.6.2.3: reverse_any . To be specific,  o ( t^{p-1} ) * t^{1-p} \rightarrow 0 has been replaced by  o ( t^{p-1} ) / t^{p-1} \rightarrow 0 .

9.8.1.d: 01-06
Made some minor improvements to the documentation in 5.2: FunConstruct .
Input File: omh/whats_new_09.omh
9.8.2: Changes and Additions to CppAD During 2008

9.8.2.a: Introduction
This section contains a list of the changes to CppAD during 2008 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions of CppAD.

9.8.2.b: 12-19
In the documentation for 6.10: pow_int change the integer exponent from int y to const int &y . In the implementation for 4.4.3.4: pow make the integer base case agree with the documentation; i.e., change from int x to const int &x .

9.8.2.c: 12-14
Added another test of 8.1.11: mul_level calculations (in the test_more directory).

9.8.2.d: 12-04
Extensive explanation for the 8.1.1.3.5: ipopt_cppad_ode.cpp example was provided in the section 8.1.1.3: ipopt_cppad_ode .

9.8.2.e: 11-22
The CppAD interface to the Ipopt nonlinear programming solver has been moved from example/ipopt_cppad_nlp to 8.1.1: ipopt_cppad/ipopt_cppad_nlp .

9.8.2.f: 11-21
Microsoft's Visual C++ Version 9.0 generates a warning of the form warning C4396:... for every template function that is declared as both a friend and inline (it thinks it is only doing this for specializations of template functions). The warnings are no longer generated because these inline directives are converted to empty code when Microsoft Visual C++ is used (see the 9.7.j.b: Microsoft Compiler wish list item).

9.8.2.g: 11-20
The function tanh(x) was added to the 4.4.2: std_math_ad functions. The abs and erf functions were removed from the 4.7: Base requirements . The restrictions about the Base class were removed from 4.4.3.1: abs , 4.4.3.2: atan2 , 6.21: LuRatio , 4.4.3.3: erf .

Visual Studio Version 9.0 could not handle the large number of static constants in the CppAD 4.4.3.3: erf function. This function was changed to a simpler representation that is much faster and that is differentiable at all points (not defined differently on subregions). The downside is that the new version is not as accurate (see 4.4.3.3.e: method ).

9.8.2.h: 10-27
Change prototypes for 8.1.1.3.5: ipopt_cppad_ode.cpp helper routines to use const (where appropriate).

9.8.2.i: 10-17
Major improvements to the 8.1.1.3.5: ipopt_cppad_ode.cpp example.

9.8.2.j: 10-16
Minor improvement to description of optimization argument in 8.1.1.3.5: ipopt_cppad_ode.cpp .

9.8.2.k: 09-30
Add or modify some wish list entries; see 9.7.e: Ipopt , 9.7.g: multiple arguments , 9.7.m: sparse Jacobians and Hessians , 9.7.n: sparsity patterns .

9.8.2.l: 09-26
Use parenthesis and brackets to group terms of the form  m \times I to make the documentation of 8.1.1: ipopt_cppad_nlp easier to read. Changed 8.1.1.3.5: ipopt_cppad_ode.cpp to use  y(t) for the solution of the ODE to distinguish it for  x , the vector we are optimizing with respect to.

9.8.2.m: 09-18
Changed 8.1.1.3.5: ipopt_cppad_ode.cpp to a case where  x(t) is a pair of exponential functions instead of a linear and quadratic. Fixed some of the comments in this example and included the source code in the documentation (which was missing by mistake).

9.8.2.n: 09-17
Changed 8.1.1.3.5: ipopt_cppad_ode.cpp to a case where there are two components in the ODE (instead of one). Also removed an initialization section that was only intended for tests with a specific initial value.

9.8.2.o: 09-16
Add 8.1.1.3.5: ipopt_cppad_ode.cpp , an example and test that optimizes the solution of an ODE. Change r_eval to eval_r in 8.1.1: ipopt_cppad_nlp . Fix a dimension error for u_ad in ipopt_cppad_nlp.

9.8.2.p: 09-12
Converted from storing full Hessian and Jacobian to a sparse data structure in 8.1.1: ipopt_cppad_nlp . This greatly reduced the memory requirements (and increased the speed) for sparse problems.

9.8.2.q: 09-10
Fixed more indexing bugs in 8.1.1: ipopt_cppad_nlp that affected cases where the domain index vector  J_{k, \ell} was different for different values of  k and  \ell .

In 8.1.1: ipopt_cppad_nlp , combined fg_info->domain_index() and fg_info->range_index() into a single function called fg_info->index() . Also added more error checking (if NDEBUG is not defined).

9.8.2.r: 09-09
Fixed an indexing bug in 8.1.1: ipopt_cppad_nlp . (This affected cases where the domain index vector  J_{k, \ell} was different for different values of  k and  \ell .)

9.8.2.s: 09-07
Change 8.1.1: ipopt_cppad_nlp so that the objective and constraints are expressed as a double summation of simpler functions. This is more versatile than the single summation representation.

9.8.2.t: 09-06
Checked in a major change to 8.1.1: ipopt_cppad_nlp whereby the objective and constraints can be expressed as the sum of simpler functions. This is the first step in what will eventually be a more versatile representation.

9.8.2.u: 09-05
Fix bug in 8.1.1: ipopt_cppad_nlp (not recording the function at the proper location). Here is the difference that occurred in multiple places in the ipopt_cppad/ipopt_cppad_nlp.cpp source:
 
	for(j = 0; j < n_; j++)
-		x_ad_vec[0] = x[j];
+		x_ad_vec[j] = x[j];
This did not show up in testing because there currently is no test of ipopt_cppad_nlp where the operation sequence depends on the value of  x .

Changed eval_grad_f in ipopt_cppad_nlp.cpp to be more efficient.

9.8.2.v: 09-04
The 8.1.1: ipopt_cppad_nlp interface has been changed to use a derived class object instead of a pointer to a function.

9.8.2.w: 09-03
The 8.1.1: ipopt_cppad_nlp interface has been changed to use size_t instead of Ipopt::Index.

9.8.2.x: 09-01
Back out the changes made to 8.1.1: ipopt_cppad_nlp on 08-29 (because testing proved the change to be less efficient in the case that motivated the change).

9.8.2.y: 08-29
The push_vector member function was missing from the 6.23.j: vectorBool class. This has been fixed. In addition, it seems that for some cases (or compilers) the assignment
     
x[i] = y[j]
did not work properly when both x and y had type vectorBool. This has been fixed.

The 8.1.1: ipopt_cppad_nlp example has been extended so that it allows for both scalar and vector evaluation of the objective and constraints; see the argument fg_vector in 8.1.1: ipopt_cppad_nlp . In the case where there is not a lot of common terms between the functions, the scalar evaluation may be more efficient.

9.8.2.z: 08-19
Add 6.23.g: push of a vector to the CppAD::vector template class. This makes it easy to accumulate multiple scalars and 6.7: simple vectors into one large CppAD::vector.

9.8.2.aa: 08-08
There was an indexing bug in the 8.1.1: Ipopt example that affected the retape equal to false case. This has been fixed. In addition, the missing retape documentation was added.

9.8.2.ab: 07-02
Extend the 2.1.d: configure command to check for extra libraries that are necessary for linking the ipopt example.

9.8.2.ac: 06-18
Add specifications for the Ipopt class 8.1.1: ipopt_cppad_nlp . This is only an example class; it may change with future versions of CppAD.

9.8.2.ad: 06-15
The nonlinear programming example 8.1.1.2: ipopt_cppad_simple.cpp was added. This is a preliminary version of this example.

9.8.2.ae: 06-11
The sparsity pattern for the Hessian was being calculated each time by 5.7.8: SparseHessian . This is not efficient when the pattern does not change between calls to SparseHessian. An optional sparsity pattern argument was added to SparseHessian so that it need not be recalculated each time.

9.8.2.af: 06-10
The sparsity pattern for the Jacobian was being calculated each time by 5.7.7: SparseJacobian . This is not efficient when the pattern does not change between calls to SparseJacobian. An optional sparsity pattern argument was added to SparseJacobian so that it need not be recalculated each time.

9.8.2.ag: 05-08
The 5.7.7: sparse_jacobian routine has been added.

The example in 5.7.8: sparse_hessian pointed to 5.7.4.1: Hessian.cpp instead of 5.7.8.1: sparse_hessian.cpp . This has been fixed.

9.8.2.ah: 05-03
The retape flag has been added to 9.2.1: speed_main . In addition the routines 9.2.1.2: link_det_minor , 9.2.1.3: link_poly , and 9.2.1.5: link_ode pass this flag along to the speed test implementations (because the corresponding tests have a fixed operation sequence). If this flag is false, a test implementation is allowed to just tape the operation sequence once and reuse it. The following tests use this flag: 9.2.4.1: adolc_det_minor.cpp , 9.2.5.1: cppad_det_minor.cpp , 9.2.5.3: cppad_ode.cpp , 9.2.4.4: adolc_poly.cpp , 9.2.5.4: cppad_poly.cpp .

Created a specialized zero order forward mode routine that should be faster, but it does not test out as faster under Cygwin g++ (GCC) 3.4.4.

9.8.2.ai: 04-20
Added the 9.2.2.7: ode_evaluate speed test utility in preparation for having ode speed tests. Created ode speed test for the cppad and double cases; see 9.2.1: speed_main . In addition, added the examples 9.2.2.7.1: ode_evaluate.cpp and 5.7.8.1: sparse_hessian.cpp .

Changed the 9.2.1: speed_main routines defined for each package from compute_name to link_name . For example, in speed/cppad/det_minor.cpp, the function name compute_det_minor was changed to link_det_minor.

9.8.2.aj: 04-18
Fix a problem in the 9.2.1.3: link_poly correctness test. Also add 9.2.3.5: double_sparse_hessian.cpp to the set of speed and correctness tests (now available).

9.8.2.ak: 04-10
Change all the 9.2.4: Adolc speed examples to use 6.24: TrackNewDel instead of using new and delete directly. This makes it easy to check for memory allocation errors and leaks (when NDEBUG is not defined). Also include in documentation sub functions that indicate the sparse_hessian speed test is not available for 9.2.3.5: double_sparse_hessian.cpp , 9.2.6.5: fadbad_sparse_hessian.cpp , and 9.2.7.5: sacado_sparse_hessian.cpp .

9.8.2.al: 04-06
The following 9.7: wish list entry has been completed and removed from the list: "Change private member variables names (not part of the user interface) so that they all end with an underscore."

9.8.2.am: 04-04
Fix a problem compiling the speed test 9.2.1: main program with gcc 4.3.

9.8.2.an: 03-27
Corrected 9.2.5.5: cppad_sparse_hessian.cpp so that it uses the sparse case when USE_CPPAD_SPARSE_HESSIAN is 1. Also added a wish list 9.7.m: sparse Hessian entry.

Change the name of speedtest.cpp to 6.4.1: speed_program.cpp .

9.8.2.ao: 02-05
Change the 2.2: windows install instructions to use Unix formatted files (so only two, instead of four, tarballs are necessary for each version). The Microsoft project files for speed/cppad, speed/double, and speed/example were missing. This has also been fixed.

9.8.2.ap: 02-03
There was an ambiguity problem (detected by g++ 4.3) with the following operations
     
x op y
where x and y were AD<double> and op was a member operator of that class. This has been fixed by making all such member functions friends instead of members of AD<double>.

Remove computed assignment entry from wish list (it was fixed on 9.8.3.as: 2007-05-26 ). Add 9.7.j.a: expression hashing , 9.7.f: library , and 9.7.k: scripting languages entries to the wish list.

9.8.2.aq: 01-26
The 6.12.2: LuFactor routine gave a misleading error message when the input matrix contained nan (not a number) or infinity. This has been fixed.

9.8.2.ar: 01-24
The 2.1.n: postfix directory has been added to the configure command line options.

9.8.2.as: 01-21
A sparse Hessian case was added to the 9.2: speed tests; see 9.2.1.4: sparse_hessian .

9.8.2.at: 01-20
CppAD can now be installed using yum on 2.1.a: Fedora operating systems.

9.8.2.au: 01-11
The CppAD correctness tests assume that machine epsilon is less than 1e-13. A test for this has been added to the test_more/test_more program; see 2.1.j: --with-Testing in Unix install instructions or 2.2.f: more correctness testing in Windows install instructions.

9.8.2.av: 01-08
Added a 5.7.8: sparse_hessian routine and extended 5.7.4: Hessian to allow for a weight vector w instead of just one component l.
Input File: omh/whats_new_08.omh
9.8.3: Changes and Additions to CppAD During 2007

9.8.3.a: Introduction
This section contains a list of the changes to CppAD during 2007 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions of CppAD.

9.8.3.b: 12-29
License conversions missed the copyright message at the top of the following special cases: makefile.am, makefile.in, and omh/license.omh.

9.8.3.c: 12-25
The 2: install instructions have been improved.

9.8.3.d: 12-21
The 2.1.g: --with-Documentation option on the configure command line caused an error on some systems because it attempted to copy too many files. This has been fixed by copying the directory instead of the individual files.

9.8.3.e: 12-08
By mistake, the documentation 9.10: License statement for the GPL distribution was the same as for the CPL distribution. This has been fixed.

9.8.3.f: 12-05
Change the name of the spec file from cppad-yyyymmdd.spec to cppad.spec.

9.8.3.g: 12-04
Add the capability for the RPM spec file to use a different prefix directory.

9.8.3.h: 12-03
This is the first version with the rpm spec file cppad.spec.

9.8.3.i: 12-02
Add the DESTDIR=directory option on the 2.1.v: make install command line.

9.8.3.j: 11-29
The 4.4.2: std_math_ad function sqrt did not link properly when Base was AD<double>. This has been fixed.

9.8.3.k: 11-23
The routines nan and isnan were failing on some systems because those systems define nan and/or isnan as preprocessor symbols. This has been fixed; see 6.9.c.a: macros . In addition, the example and test 6.9.1: nan.cpp has been added.

9.8.3.l: 11-18
Speed tests for tape_values branch were not better than trunk, so the good parts of that branch (but not all) were merged into the trunk.

The interface specifications for 4.7: base type requirements have been changed so that CppAD would compile with gcc 4.1.1 (which requires more definitions before use in template functions). This change of requirements is demonstrated by the 4.7.1: base_complex.hpp and 4.7.2: base_adolc.hpp examples.

The problem with newer C++ compilers requiring more definitions before use also required the user to know about float and double definitions for the standard math functions in the CppAD namespace; see 6.22: std_math_unary .

The example/test_one.sh and test_more/test_one.sh scripts were modified so that one only need specify the test file name (does not also need the test routine name). Some of the test routine declarations were changed from name() to name(void) to make this possible.

The program test_more/test_more was changed to always report the memory leak test results (same as example/example).

The 4.3.4: PrintFor function was putting an unused variable in the tape. This has been fixed.

9.8.3.m: 11-06
Added the -DRAD_EQ_ALIAS compiler flag to the 9.2.7: Sacado speed tests . In addition, compiler flag documentation was included for 9.2.7.c: Sacado and all the other speed tests.

9.8.3.n: 11-05
MS project files have been added for running the 2.2.h: cppad and 2.2.i: double speed tests.

9.8.3.o: 11-04
The cppad/config.h file was not compatible with the 2.2: Windows install procedure and the Windows projects could not find a certain include file. This has been fixed; see the definition of the 8.4.c: Windows test vector .

The 2.1: unix install procedure has been modified so that the one configure flag 2.1.k: --with-Speed builds all the possible executables related to the speed testing.

9.8.3.p: 11-03
Improve the 9.2.1: speed_main documentation and output (as well as the title for other sections under 9.2: speed ).

The subversion copy of the 2.1.d: configure script was not executable. This has been fixed.

9.8.3.q: 11-02
The instructions for downloading the current version using 2.1.1: subversion have changed. The user should now directly edit the file
 
	trunk/configure
in order to set the correct date for the installation and to build the corresponding documentation.

The 9.2: speed section has been slightly reorganized (the main program and utilities have been separated).

Add 9.2.3: speed_double for testing the speed of evaluating functions in double as opposed to gradients using AD types.

9.8.3.r: 11-01
The instructions for downloading the current version using subversion have changed. The user must now execute the command
 
	./build.sh version
in order to set the correct version number for her (or his) installation.

Add the correctness tests 2.1.e: return status to the documentation.

9.8.3.s: 10-30
The download instructions did not update current version number and this broke the links to the current tarballs. This has been fixed.

The documentation for 9.2.2.3: det_by_minor and 9.2.2.4: det_by_lu has been improved. The order of the elements in 9.2.2.2: det_of_minor has been corrected (they were transposed but this did not really matter because determinants of transposes are equal).

The makefiles in the distribution have been changed so that one can run configure from a directory other than the distribution directory.

9.8.3.t: 10-27
A 2.1.1: subversion method for downloading CppAD has been added.

The installation was broken on some systems because the 2.1.d: configure command tried to run the autoconf and automake programs. This has been fixed by adding AM_MAINTAINER_MODE to the autoconf input file.

Extend the 2.1.1: subversion methods to include a full installation and old versions.

9.8.3.u: 10-23
The 2.1.t: compiler flags environment variable has been changed from CPP_ERROR_WARN to CXX_FLAGS.

The command configure --help now prints a description of the environment variables ADOLC_DIR, FADBAD_DIR, SACADO_DIR, BOOST_DIR, and CXX_FLAGS. In addition, if the environment variables POSTFIX_DIR or CPP_ERROR_WARN are used, a message is printed saying they are no longer valid.

9.8.3.v: 10-22
The correctness checks and speed test wrappers were moved from the individual package directories to 9.2.1: speed_main . This way they do not have to be reproduced for each package. This makes it easier to add a new package, but it requires the prototype for compute_test_name to be the same for all packages.

The Sacado (http://trilinos.sandia.gov/packages/sacado/) package was added to the list of 9.2: speed tests. In addition, the discussion about how to run each of the speed tests was corrected to include the seed argument.

The PostfixDir option was removed on 9.8.4.p: 2006-12-05 but it was not removed from the 2.1.d: configure documentation. This has been fixed.

The routine 6.8: CheckSimpleVector was changed. It used to require conversion of the form
     
Scalar(i)
where i was 0 or 1. This does not work when Scalar is Sacado::Tay::Taylor&lt;double&gt;. This requirement has been changed (see 6.8.c: restrictions ) to instead require support of
     
x = i
where x has type Scalar and i has type int.

Fix include directives in 9.2.6: speed_fadbad programs det_lu, det_minor, and poly, to use FADBAD++ instead of Fadbad++ directory.

Add ADOLC_DIR, FADBAD_DIR, SACADO_DIR, and BOOST_DIR to the 2.1.d: configure help string.

9.8.3.w: 10-16
Add seed argument and improve 9.2.1: speed_main documentation.

9.8.3.x: 10-13
Fix the title in 9.2.4.2: adolc_det_lu.cpp . Add the package name to each test case result printed by 9.2.1: speed_main .

9.8.3.y: 10-05
Added an example using complex calculations for a function that is not complex differentiable (4.7.1.2: not_complex_ad.cpp ).

9.8.3.z: 10-02
Extend the 4.4.3.4: pow function to work for any case where one argument is AD<Base> and the other is double (as do the binary operators).

9.8.3.aa: 09-06
If the 6.17.f.a: method.step function returned nan (not a number), it was possible for 6.17: OdeErrControl to drop into an infinite loop. This has been fixed.

9.8.3.ab: 08-09
Let the user detect and handle the case where an ODE initial vector xi contains nan (not a number); see 6.15: Runge45 , 6.16: Rosen34 , and 6.17: OdeErrControl .

Use the || operator instead of the | operator in the nan function (the Ginac library seems to use an alias for the type bool and does not have | defined for this alias).

The file test_more/ode_err_control.cpp was using the wrong include file name since the change on 08/07. This has been fixed.

9.8.3.ac: 08-07
Sometimes an ODE solver takes too large a step and this results in invalid values for the variables being integrated. The ODE solvers 6.15: Runge45 and 6.16: Rosen34 have been modified to abort, and return 6.9: nan , when it is returned by the differential equation evaluation. The solver 6.17: OdeErrControl has been modified to try smaller steps when this happens.

Fix a 5.2.g: Sequence Constructor reference to Dependent in the documentation (it was using the 5.10: FunDeprecated one argument syntax).

Add comment about mixing debug and non-debug versions of CppAD in 6.24.k: TrackDelVec error message.

9.8.3.ad: 07-30
CppADCreateBinaryBool and CppADCreateUnaryBool have been replaced by CPPAD_BOOL_BINARY and CPPAD_BOOL_UNARY respectively. In addition, the 9.7: WishList item for conversion of all preprocessor macros to upper case has been completed and removed.

9.8.3.ae: 07-29
The preprocessor macros CppADUsageError and CppADUnknownError have been replaced by CPPAD_ASSERT_KNOWN and CPPAD_ASSERT_UNKNOWN respectively. The meaning for these macros has been included in the 6.1.2: cppad_assert section. In addition, the known argument to 6.1: ErrorHandler was wrong for the unknown case.

The 9.7: WishList item for conversion of all preprocessor macros to upper case has been changed (to an item that was previously missing).

9.8.3.af: 07-28
The preprocessor macro CPPAD_DISCRETE_FUNCTION was defined as a replacement for CppADCreateDiscrete, which has been deprecated.

9.8.3.ag: 07-26
Merge in changes made in branches/test_vector.

9.8.3.ag.a: 07-26
All occurrences of CppADvector in the files test_more/*.cpp and speed/*/*.cpp were changed to CPPAD_TEST_VECTOR. All occurrences of CppADvector in the documentation were edited to reflect the fact that it has been deprecated. The documentation index and search for deprecated items has been improved.

9.8.3.ag.b: 07-25
Deprecate the preprocessor symbol CppADvector and start changing it to 8.4: CPPAD_TEST_VECTOR .

Change all occurrences of CppADvector, in the example/*.cpp files, to CPPAD_TEST_VECTOR.

9.8.3.ah: 07-23
The 6.24: TrackNewDel macros CppADTrackNewVec, CppADTrackDelVec, and CppADTrackExtend have been deprecated. The new macro names to use are CPPAD_TRACK_NEW_VEC, CPPAD_TRACK_DEL_VEC, and CPPAD_TRACK_EXTEND respectively. This item has been removed from the 9.7.l: software guidelines section of the wish list.

The member variable 9.7.l: software guideline wish list item has been brought up to date.

9.8.3.ai: 07-22
Minor improvements to the 8.1.9: ode_taylor_adolc.cpp example.

9.8.3.aj: 07-21
The 5.9.1: openmp_run.sh example programs example_a11c.cpp, multi_newton.cpp, and sum_i_inv.cpp have been changed so that they run on more systems (are C++ standard compliant).

The IdenticalEqual function, in the 4.7: base_require specification, was changed to IdenticalEqualPar (note the 4.7.b: warning in the Base requirement specifications).

Implementation of the 4.7: base requirements for complex types were moved into the 4.7.1: base_complex.hpp example.

9.8.3.ak: 07-20
The download for CppAD was still broken. It turned out that the copyright message was missing from the file 4.7.2: base_adolc.hpp and this stopped the creation of the download files. This has been fixed. In addition, the automated testing procedure has been modified so that missing copyright messages and test program failures will be more obvious in the test log.

9.8.3.al: 07-19
The download for CppAD has been broken since the example ode_taylor_adolc.cpp was added because the example/example program was failing. This has been fixed.

9.8.3.am: 07-18
A realistic example using Adolc with CppAD 8.1.9: ode_taylor_adolc.cpp was added. The documentation for 6.24: TrackNewDel was improved.

9.8.3.an: 07-14
Add a discussion at the beginning of 8.1.8: ode_taylor.cpp example (and improve the notation used in the example).

9.8.3.ao: 07-13
Separate the include file 4.7.2: base_adolc.hpp from the 4.7.2.1: mul_level_adolc.cpp example so that it can be used by other examples.

9.8.3.ap: 06-22
Add 4.7.2.1: mul_level_adolc.cpp , an example that demonstrates using adouble for the 4.7: Base type.

The 2.1.h.a: get_started example did not build when the --with-Introduction and BOOST_DIR options were included on the 2.1.d: configure command line. In fact, some of the 9.2: speed tests also had compilation errors when BOOST_DIR was included in the configure command. This has been fixed.

A namespace reference was missing from the files speed/cppad/det_minor.cpp and speed/cppad/det_lu.cpp, which could have caused compilation errors. This has been fixed.

9.8.3.aq: 06-20
The MS project test_more/test_more.vcproj would not build because the file test_more/fun_check.cpp was missing; this has been fixed. In addition, fix warnings generated by the MS compiler when compiling the test_more/test_more.cpp file.

Add a section defining the 4.7: Base type requirements . Remove the Base type restrictions from the 9.1: Faq . Make all the prototypes for the default Base types agree with the specifications in the Base type requirements.

Fix the description of the tan function in 4.4.2: std_math_ad .

9.8.3.ar: 06-14
The routine 6.16: Rosen34 ( 6.15: Runge45 ) had a division of a size_t ( int ) by a Scalar, where Scalar was any 6.5: NumericType . Such an operation may not be valid for a particular numeric type. This has been fixed by explicitly converting the size_t to an int, then converting the int to a Scalar, and then performing the division. (The conversion of an int to any numeric type must be valid.)

9.8.3.as: 05-26
If the Base type is not double, the 4.4.1.4: computed assignment operators did not always allow for double operands. For example, if x had type AD< AD<double> >
     
x += .5;
would slice the value .5 to an int and then convert it to an AD< AD<double> >. This has been fixed.

This slicing has also been fixed in the 4.2.a.b: assignment operation. In addition, the assignment and copy operations have been grouped together in the documentation; see 4.2: ad_copy .

9.8.3.at: 05-25
Document usage of double with binary arithmetic operators, and combine all those operators into one section (4.4.1.3: ad_binary ).

The documentation for all the 4.4.1.4: computed assignment operators has been grouped together. In addition, a computed assignment wish list item has been added (it was completed and removed with the 9.8.3.as: 05-26 update.)

9.8.3.au: 05-24
Suppose that op is a binary operation and we have
     
left op right
where one of the operands was AD< AD<double> > and the other operand was double. There was a bug in this case that caused the double operand to be converted to int before being converted to AD< AD<double> >. This has been fixed.

9.8.3.av: 05-22
The Microsoft 2.2.e: examples and testing project file example/example.vcproj was missing a reference to the source code file example/reverse_two.cpp. This has been fixed.

9.8.3.aw: 05-08
Reverse mode does not work with the 4.4.3.4: pow function when the base is less than or equal to zero and the exponent is an integer. For this reason, the 6.10: pow_int function is no longer deprecated (and is used by CppAD when the exponent has type int).

9.8.3.ax: 05-05
Third and fourth order derivatives were included in the routine test_more/sqrt.cpp that tests square roots.

The return value descriptions were improved for the introduction examples: 3.2.4.e: exp_2_for1 , 3.2.6.e: exp_2_for2 , 3.3.4.d: exp_eps_for1 , and 3.3.6.e: exp_eps_for2 .

The summation index in 9.3.2.3: SqrtReverse was changed from  k to  \ell to make partial differentiation with respect to  z^{(k)} easier to understand. In addition, a sign error was corrected near the end of 9.3.2.3: SqrtReverse .

The dimension for the notation  X in 9.3.3: reverse_identity was corrected.

The word mega was added to the spelling exception list for 5.9.1: openmp_run.sh .

9.8.3.ay: 04-19
Improve connection from 9.3.3: reverse_identity theorem to 5.6.2.3: reverse_any calculations.

Improve the 5.9.1: openmp_run.sh script. It now runs all the test cases at once and, for each test, includes cases with multiple numbers of threads.

Add the 5.9.1.3: sum_i_inv.cpp OpenMP example case.

There was a typo in the 5.6.1.3.l: second order discussion (found by Kipp Martin). It has been fixed.

9.8.3.az: 04-17
Add a paragraph to 9.3.3: reverse_identity explaining how it relates to 5.6.2.3: reverse_any calculations. Add description of 5.6.2.3.i.a: first and 5.6.2.3.i.b: second order results in 5.6.2.3: reverse_any .

9.8.3.ba: 04-14
Simplify the 5.6.2: Reverse mode documentation by creating a separate 5.6.2.2: reverse_two section for second order reverse, making major changes to the description in 5.6.2.3: reverse_any , and creating a third order example 5.6.2.3.1: reverse_any.cpp for reverse mode calculations.

Improve the 9.3.3: reverse_identity proof.

9.8.3.bb: 04-11
Merge in changes made in branches/intro.

9.8.3.bb.a: 04-11
Add 3.3.7: exp_eps_rev2 and its verification routine 3.3.7.1: exp_eps_rev2.cpp .

9.8.3.bb.b: 04-10
Finished off 3.2.7: exp_2_rev2 and added 3.2.7.1: exp_2_rev2.cpp which verifies its calculations. Added second order calculations to 3.2.8: exp_2_cppad . Added 3.3.6: exp_eps_for2 and its verification routine.

9.8.3.bb.c: 04-07
Added a preliminary version of 3.2.7: exp_2_rev2 (does not yet have verification or exercises).

9.8.3.bb.d: 04-06
Fixed a problem with the Microsoft Visual Studio project file introduction/exp_apx/exp_apx.vcproj (it did not track the file name changes of the form exp_apx/exp_2_for to exp_apx/exp_2_for1 on 04-05).

Added 3.2.6: exp_2_for2 to introduction.

9.8.3.bb.e: 04-05
Use order expansions in introduction; e.g., the 3.2.6.a: second order expansion for the 3.2: exp_2 example.

9.8.3.bc: 03-31
Merge in changes made in branches/intro and remove the corresponding Introduction item from the wish list:

9.8.3.bc.a: 03-31
Create a simpler exponential approximation in the 3: introduction called 3.2: exp_2 which has a different program variable for each variable in the operation sequence.

Simplify the 3.3: exp_eps approximation using the  v_1 , \ldots , v_7 notation so that each variable directly corresponds to an index in the operation sequence (as with the 3.2: exp_2 example).

9.8.3.bc.b: 03-30
The Microsoft project file introduction/exp_apx/exp_apx.vcproj was referencing exp_apx_ad.cpp which no longer exists. It has been changed to reference exp_apx_cppad.cpp which is the new name for that file.

9.8.3.bd: 03-29
Fixed entries in this file where the year was mistakenly used for the month. To be more specific, 07-dd was changed to 03-dd for some of the entries directly below.

Corrected some places where CppAD was used instead of Adolc in the 9.2.4.4: adolc_poly.cpp documentation.

Added an Introduction and 9.7.q: Tracing entry to the wish list. (The Introduction item was completed on 9.8.3.bc: 03-31 .)

9.8.3.be: 03-20
5.9.1.1: Example A.1.1c , from the OpenMP 2.5 standards document, was added to the tests that can be run using 5.9.1: openmp_run.sh .

9.8.3.bf: 03-15
Included the changes from the openmp branch so that CppAD does not use the OpenMP threadprivate command (some systems do not support this command).

9.8.3.bf.a: 03-15
Add command line arguments to 5.9.1.2: multi_newton.cpp , and modified 5.9.1: openmp_run.sh to allow for more flexible testing.

9.8.3.bf.b: 03-14
Fixed some Microsoft compiler warnings by explicitly converting from size_t to int.

In the Microsoft compiler case, the cppad/config.h file had the wrong setting of GETTIMEOFDAY. The setting is now overridden (and always false) when the _MSC_VER preprocessor symbol is defined.

Some minor changes were made in an effort to speed up the multi-threading case.

9.8.3.bf.c: 03-13
Started a new openmp branch and created a version of CppAD that does not use the OpenMP threadprivate command (not supported on some systems).

9.8.3.bg: 03-09
Included the changes from openmp branch so that OpenMP can be used with CppAD, see 5.9: omp_max_thread . The changes dated between 9.8.3.bg.f: 02-15 and 03-28 below were made in the openmp branch and transferred to the trunk on 03-09.

9.8.3.bg.a: 03-28
The conditional include commands were missing on some include files; for example
 
	# ifndef CPPAD_BENDER_QUAD_INCLUDED
	# define CPPAD_BENDER_QUAD_INCLUDED
was missing at the beginning of the 6.20: BenderQuad include file. This has been fixed.

The 6.3.j: timing used by the speed_test routines was changed to use gettimeofday when it is available. (gettimeofday measures wall clock time, which is better in a multi-threading environment.)

Added the user multi-threading interface 5.9: omp_max_thread along with its examples which are distributed in the directory openmp.

The speed/*.hpp files have been moved to cppad/speed/*.hpp and the corresponding wish list item has been removed.

The multiple tapes with the same base type wish list item has been removed (its purpose was multi-threading, which has been implemented).

9.8.3.bg.b: 02-27
The 9.2: speed include files are currently being distributed above the cppad include directory. A wish list item to fix this has been added.

Multiple active tapes required a lot of multi-threading access management for the tapes. This was made simpler (and faster) by having at most one tape per thread.

9.8.3.bg.c: 02-22
The include command in the 6.3: speed_test documentation was
 
	# include <speed/speed_test.hpp>
but it should have been
 
	# include <cppad/speed_test.hpp>
This has been fixed.

9.8.3.bg.d: 02-17
An entry about 9.7.j: optimizing the operation sequence in an 5.2: ADFun object was added.

Change the argument syntax for 5.3: Dependent and deprecate the 5.10.c: old Dependent syntax .

9.8.3.bg.e: 02-16
Added VecAD<Base> as a valid argument type for the 4.5.4: Parameter and Variable functions. In addition, 4.6.h: size_t indexing was extended to be allowed during taping so long as the VecAD object is a parameter.

9.8.3.bg.f: 02-15
Fixed the example/test_one.sh script (it was using its old name one_test).

9.8.3.bh: 02-06
The 6.20: BenderQuad documentation was improved by adding the fact that the x and y arguments to the f.dy member function are equal to the x and y arguments to BenderQuad. Hence values depending on them can be stored as private objects in f and need not be recalculated.

9.8.3.bi: 02-04
The method for distributing the documentation needed to be changed in the top level makefile.am in order to be compatible with automake version 1.10.

9.8.3.bj: 02-03
The change on 9.8.3.bl: 02-01 introduced an allocation with new, saved as a static pointer, with no corresponding delete. This was not a bug, but it has been changed to avoid an error message when using CppAD with valgrind (http://valgrind.org/) .

The change to the pow function on 9.8.4.m: 06-12-10 did not include the necessary changes to the 5.6.3: Sparsity calculations. This has been fixed.

9.8.3.bk: 02-02
Fix minor errors and improve 2.1.k.c: profiling documentation. Also change the problem sizes used for the 9.2: speed tests.

9.8.3.bl: 02-01
There seems to be a bug in the cygwin version of g++ version 3.4.4 with the -O2 flag whereby some static variables in static member functions sometimes do not get constructed before being used. This has been avoided by using a static pointer and the new operator in cppad/local/ad.hpp.

9.8.3.bm: 01-29
The copyright message was missing from some of the distribution files for some new files added on 9.8.4.i: 06-12-15 . This resulted in the tarballs *.tgz and *.zip not existing for a period of time. The automated tests have been extended so that this should not happen again.
Input File: omh/whats_new_07.omh
9.8.4: Changes and Additions to CppAD During 2006

9.8.4.a: Introduction
This section contains a list of the changes to CppAD during 2006 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions of CppAD.

9.8.4.b: 12-24
Move exp_eps_ad to exp_eps_cppad and add exercises to the following sections: 3.3.5: exp_eps_rev1 , 3.3.8: exp_eps_cppad .

Add operation sequence indices to help track operations in 3.3.3: exp_eps_for0 , 3.3.4: exp_eps_for1 , 3.3.5: exp_eps_rev1 .

9.8.4.c: 12-23
Add exercises to the following sections: 3.1: get_started.cpp , 3.3: exp_eps , 3.3.3: exp_eps_for0 , and 3.3.4: exp_eps_for1 .

9.8.4.d: 12-22
Move 3.1: get_started.cpp below the 3: introduction directory.

Move the exponential example to the subdirectory introduction/exp_apx and change the 2.1.h: --with-Introduction unix configure option to build both the 3.1: get_started.cpp and 3.4: exp_apx_main.cpp example programs. (The --with-GetStarted configure command line option has been removed.)

9.8.4.e: 12-21
Add the 6.11.2: source code for Poly to the documentation and include 6.11: Poly in the 9.2.2: speed_utility section.

The 3.1: get_started.cpp section has been moved into the 3: Introduction and 3.1.f: exercises were added to that section. In addition, some sections have switched position between the top level : CppAD and the 9: Appendix .

9.8.4.f: 12-19
Reorganize so that the source code is below the corresponding routine in the documentation tree (instead of at the same level) for the following routines: 9.2.2.3: det_by_minor , 9.2.2.6: det_grad_33 , 9.2.2.1: uniform_01 , 9.2.2.2: det_of_minor , 9.2.2.4: det_by_lu , 6.12.3: LuInvert , 6.12.2: LuFactor , 6.12.1: LuSolve .

Separate the specifications for the source in 9.2.2: speed_utility and add cross reference to the following routine specification and implementations: 9.2.2.3: det_by_minor , 9.2.2.6: det_grad_33 , 9.2.2.1: uniform_01 , 9.2.2.2: det_of_minor , 9.2.2.4: det_by_lu , 6.12.3: LuInvert , 6.12.2: LuFactor , 6.12.1: LuSolve .

9.8.4.g: 12-18
Make the 9.2: speed source code easier to read.

Change the speed test output name det_poly to poly (as it should have been).

9.8.4.h: 12-17
The speed test 9.2.2.6: det_grad_33 was missing its documentation (this has been fixed). In addition, the titles and indexing for the speed test documentation have been improved.

Add to the specifications that each repeated test corresponds to a different matrix in 9.2.1.1: det_lu and 9.2.1.2: det_minor . In addition, modify all the speed tests so that they abide by this rule.

Change some references from the deprecated name CppAD.h to new name cppad.hpp.

Change 9.2.4.1: adolc_det_minor.cpp and 9.2.5.1: cppad_det_minor.cpp to tape once and reuse operation sequence for each repeated matrix in the test.

Add the 9.2.1.3: poly speed test for all three packages. In addition, correct a missing include in 6.11: poly routine.

9.8.4.i: 12-15
The wish list item to simplify and better organize the speed tests has been completed:
9.2: speed/ template functions that are speed tested
speed/example example usage of speed template functions
9.2.4: speed/adolc Adolc drivers for the template functions
9.2.5: speed/cppad CppAD drivers for the template functions
9.2.6: speed/fadbad Fadbad drivers for the template functions
2.1.k.c: speed/profile profiling version of CppAD drivers

9.8.4.j: 12-13
Next step for the speed wish list item: remove speed_cppad from the documentation and replace it with speed/cppad; see 9.2.5: speed_cppad for the new CppAD speed test routines.

9.8.4.k: 12-12
Started the speed wish list item by moving the adolc directory to speed/adolc and fadbad to speed/fadbad.

9.8.4.l: 12-11
Started the speed wish list item by creating the speed/example directory and moving the relevant examples from example/*.cpp and speed_example/*.cpp to speed/example/*.cpp. In addition, the relevant include files have been moved from example/*.hpp to speed/*.hpp.

A new 6.3: speed_test routine was added to the library.

9.8.4.m: 12-10
The 4.4.3.4: pow function was changed to be an AD<Base> 9.4.g.a: atomic operation. This function used to return a nan if x is negative because it was implemented as
     pow(x, y) = exp( log(x) * y )
This has been fixed so that the function and its derivatives are now calculated properly when x is less than zero. The 4.4.3.4: pow documentation was improved and the 4.4.3.4.1: Pow.cpp example was changed to test more cases and to use the same variable names as in the documentation.

9.8.4.n: 12-09
A speed wish list item was added to the wish list.

The prototype for int arguments in binary operations (for example 4.4.1.3: addition ) was documented as const int & but was actually just plain int. This has been fixed. (Later changed to double.)

9.8.4.o: 12-07
Fix bug in the subversion installation instructions; see bug report (http://list.coin-or.org/pipermail/cppad/2006q4/000076.html) .

Some of the automatically generated makefile.in files had an improper license statement in the GPL license version. This has been fixed.

9.8.4.p: 12-05
Add the unix installation 2.1.g: --with-Documentation option and remove the PostfixDir option.

Create a fixed 9.8: whats_new section above the section for each particular year. Also improve the CppAD distribution README file.

9.8.4.q: 12-03
The include file directory CppAD was changed to be all lower case; i.e., cppad. If you are using a Unix system, see 9.9: include_deprecated . This completes the following 9.7: WishList items (which were removed):
  1. File and directory names should only contain lowercase letters, numbers underscores and possibly one period. The leading character must be alphabetic.
  2. C++ header files should have the .hpp extension.


9.8.4.r: 12-02
Put explanation of version numbering in the download instructions.

Correct some file name references under the Windows heading in 9.2.5: speed_cppad .

9.8.4.s: 12-01
All of the Makefile.am and Makefile files were changed to lower case; i.e., makefile.am and makefile.

Fix compiler warning while compiling cppad/RombergOne/ (mistake occurred during the 9.8.4.u: 11-29 change).

9.8.4.t: 11-30
Cygwin packages, and other system packages, should not have a dash in the version number. See cygwin package file naming (http://cygwin.com/setup.html#naming) or, to quote the rpm file naming convention (http://www.rpm.org/max-rpm/ch-rpm-file-format.html) : "The only restriction placed on the version is that it cannot contain a dash '-'." As per the acceptable package naming conventions for cygwin, CppAD version numbering has been changed from yy-mm-dd format to yyyymmdd; i.e., cppad-06-11-30 was changed to cppad-20061130.

9.8.4.u: 11-29
There was a problem using 6.13: RombergOne with floating point types other than double. This has been fixed.

9.8.4.v: 11-28
The 2: installation download files were not being built because Makefile.am referenced Doc when it should have referenced doc. This has been fixed.

9.8.4.w: 11-23
A Version Numbering entry was added to the 9.7: WishList (this was completed on 9.8.4.t: 11-30 ).

9.8.4.x: 11-18
The example routine that computes determinants using expansion by minors DetOfMinor was changed to 9.2.2.2: det_of_minor , in preparation for more formal speed comparisons with other packages. To be specific, its documentation was improved, its dependence on the rest of CppAD was removed (it no longer includes : CppAD.h ).

9.8.4.y: 11-12
The 2.1.i: example and 2.1.j: test_more programs were changed to print out the number of tests that passed or failed instead of just "All the tests passed" or "At least one of the tests failed".

The windows project files for 2.2.e: examples and testing should have been changed to use lower case file names as part of the 11-08 change below. This has been fixed.

9.8.4.z: 11-08
Move the Example directory to example and change all its files to use lower case names.

9.8.4.aa: 11-06
Move the TestMore directory to test_more and change all its files to use lower case names.

9.8.4.ab: 11-05
Remove references in the 9.2.5: speed_cppad tests to the Memory and Size functions because they have been 5.10: deprecated .

Correct some references to var_size that should have been 5.5.h: size_var .

9.8.4.ac: 11-04
Put text written to standard output in the documentation for the 3.1.h: get_started.cpp and 4.3.4.1.b: PrintFor.cpp examples. (Now documentation can be built from a subversion checkout without needing to execute automake.) The PrintFor.cpp and speedtest.cpp examples were missing in 8.3: ListAllExamples (which has been fixed).

Move the Speed directory to speed and change all its files to use lower case names.

9.8.4.ad: 11-02
The print_for directory was referenced as PrintFor in the root CppAD Makefile.am; this has been fixed.

The documentation for the Adolc helper routines AllocVec and AllocMat were not being included. This has been fixed.

Move the GetStarted directory to get_started and change all its files to use lower case names.

9.8.4.ae: 11-01
Move the PrintFor directory to print_for and change all its files to use lower case names.

9.8.4.af: 10-31
Move the SpeedExample directory to speed_cppad_example and change all its files to use lower case names.

9.8.4.ag: 10-29
Move the Adolc directory to adolc and change all its files to use lower case names.

Change all the file in the omh directory to use lower case names.

The file Makefile.am in the distribution directory had the CPL copyright message in the GPL version. This has been fixed.

9.8.4.ah: 10-28
The copyright message in the script files example/OneTest and TestMore/OneTest were GPL (in the CPL distribution). This has been fixed by moving them to example/OneTest.sh and TestMore/OneTest.sh so that the distribution automatically edits the copyright message.

9.8.4.ai: 10-27
Change the 5.7.4.2: HesLagrangian.cpp example so that it computes the Lagrangian two ways. One is simpler and the other can be used to avoid re-taping the operation sequence.

9.8.4.aj: 10-26
Change 5.7.4.2: HesLagrangian.cpp example so that it modifies the independent variable vector between the call to 5.1: Independent and the ADFun<Base> 5.2: constructor .

9.8.4.ak: 10-25
A subversion install procedure was added to the documentation.

Fix definition of preprocessor symbol PACKAGE_STRING in Speed/Speed.cpp (broken by change on 10-18).

Added the example 5.7.4.2: HesLagrangian.cpp which computes the Hessian of a Lagrangian.

9.8.4.al: 10-18
Document and fix possible conflicts for 7: preprocessor symbols that do not begin with CppAD or CPPAD_.

Include a default value for the file cppad/config.h in the subversion repository.

9.8.4.am: 10-16
Fix bug when using 6.17: OdeErrControl with the type AD< AD<double> >.

9.8.4.an: 10-10
Add the 4.3.5: Var2Par function so it is possible to obtain the 4.3.1: Value of a variable. Move the Discrete.cpp example to 4.4.5.1: TapeIndex.cpp . Fix the Microsoft project file so that the Windows install 2.2.e: examples and testing works properly (it was missing the 8.1.10: StackMachine.cpp example).

9.8.4.ao: 09-30
These changes were grouped together because it took a while for Coin-Or to review the dual licensing version and because it was not possible to get the nightly build changed:
  1. Change shell scripts to use *.sh extension.
  2. Two versions, one with CPL and other with GPL license.
  3. Change subversion version of CppAD from GPL to CPL copyright.
  4. Change all files in cppad/local to use lower case and *.hpp extension.
  5. CppAD_vector.h was generating a warning on version 4 of gcc. This has been fixed.
  6. Change the preprocessor # define commands in cppad/local/*.hpp to use upper case names.
  7. Add the 8.1.10: StackMachine.cpp example.


9.8.4.ap: 08-17
Some error messages occurred while executing
 
	valgrind --tool=memcheck example/example
	valgrind --tool=memcheck TestMore/TestMore

These were not really bugs, but they have been fixed to avoid this conflict between CppAD and valgrind (http://valgrind.org/) .

9.8.4.aq: 07-14
Some improvements were made to the 3: Introduction , 3.3.1: exp_eps.hpp and 3.3.5: exp_eps_rev1 sections.

9.8.4.ar: 07-12
Use a drop down menu for the navigation links, instead of a separate frame for the navigation links, for each section in the documentation.

9.8.4.as: 06-29
Newer versions of the gcc compiler generated an error because 4.4.3.3: erf was using 4.4.4: CondExp before it was defined. This was found by Kasper Kristensen and his fix has been included in the CppAD distribution.

9.8.4.at: 06-22
The 5: ADFun operation f(x, y) no longer executes a zero order 5.6.1: Forward operation when a new operation sequence is stored in f. In addition, the syntax for this operation was changed to f.Dependent(y) (see 5.3: Dependent ).

9.8.4.au: 06-19
The changes listed under 06-17 and 06-18 were made in the branches/ADFun branch of the CppAD subversion repository. They did not get merged into the trunk and become part of the distribution until 06-19. This accomplished the following goal, which was removed from the 9.7: WishList :

"We would like to be able to erase the function values so that 5: ADFun objects use less memory. We may even want to erase the AD operation sequence so that 5: ADFun objects use even less memory and can be used for a subsequent AD operation sequence."

9.8.4.au.a: 06-17
Added 5.6.1.6: capacity_taylor which can be used to control the amount of memory used to store 5.6.1: Forward results. Also 5.10: deprecated taylor_size, and defined 5.6.1.4: size_taylor in its place.

9.8.4.au.b: 06-18
Added the 5.2: ADFun default constructor and the ability to 5.3: store a new operation sequence in an ADFun object without having to use ADFun pointers together with new and delete.

9.8.4.av: 06-17
The location where the distribution files are stored has changed and this broke the Download Current Version links for the unix and windows installation. This has been fixed.

The compiling instructions for the 9.2.5: speed_cppad routines have been improved.

The 4.3.1: Value function has been extended to allow for 9.4.h: parameter arguments even if the corresponding tape is in the Recording state.

The 6.20: BenderQuad documentation and example have been improved by changing Vector to BAvector to emphasize that it corresponds to a vector of Base objects.

9.8.4.aw: 06-15
Change 6.20: BenderQuad to use Base instead of AD<Base> wherever possible. This allows for more calculations to be done in the base type; i.e., it is more efficient.

9.8.4.ax: 06-09
Add a size check (size one) for the 6.20.g: function value argument, g in BenderQuad.

9.8.4.ay: 06-07
Some major changes were made to the notation in 3.1: get_started.cpp (to make it easier to start using CppAD).

In the 3: Introduction example,  exp_eps was changed to  {\rm exp\_eps} .

9.8.4.az: 06-05
Change 6.20: BenderQuad  F_y (x, y) to  H(x,y) so it applies in a more general setting. As part of this interface change, fun.fy was changed to fun.h.

9.8.4.ba: 06-02
Newer versions of the gcc compiler generated a warning for possible use of an uninitialized pointer. This was found by Michael Tautschnig and his fix has been included in the CppAD distribution.

9.8.4.bb: 05-31
The interface to 6.20: BenderQuad has been changed. Now all the function evaluation routines are member functions of one class object. This makes it easy for them to share common data.

9.8.4.bc: 05-29
Change statement of command syntax to be in the same browser frame as the command documentation (for all the commands with a syntax statement). Now when a user links to a specific heading in a command's documentation, the syntax for that command is automatically included. Before, the user needed to follow another link to see the command syntax.

9.8.4.bd: 05-27
Added 6.20: BenderQuad for computing the Hessian of Bender's reduced objective function.

Added special specifications for resize(0) to 6.23: CppAD_vector .

9.8.4.be: 05-03
The g++ (GCC) 4.1.0 (Red Hat 4.1.0-3) compiler reported an error because certain functions were used before being defined (version 3.4.4 did not complain about this). This has been fixed.

9.8.4.bf: 04-29
Change all of the example and test driver programs so that they return error codes; i.e., zero for no error and one for an error.

Add more discussion and a reference for the 9.6.a: gcc 3.4.4 -O2 bug.

9.8.4.bg: 04-28
Improve the 3.1: get_started.cpp example and move it so that it is visible at the top level of the documentation.

9.8.4.bh: 04-26
The programs in 3: Introduction have been converted to automated tests that return true or false, with the driver program 3.4: Introduction .

9.8.4.bi: 04-25
Add an 3: Introduction section to the documentation (replaces old example that was part of the 9.3: Theory section).

9.8.4.bj: 04-19
A discussion was added near the end of the 5.8: FunCheck documentation. And the cross references to the 5.6.1.5: CompareChange discussion were changed to the FunCheck discussion.

An operation sequence entry was added to the 9.7: WishList .

9.8.4.bk: 04-18
The new definitions for 9.4.b: AD of Base and 9.4.g.b: operation sequence have been used throughout the documentation.

Add the 5.8: FunCheck section for checking that a sequence of operations is as intended.

9.8.4.bl: 04-17
The documentation for 6.4: SpeedTest and 6.11: Poly was improved.

Definitions were added for an atomic 9.4.g: operation and for an operation sequence being dependent and independent of the values of specific operands.

The definition of AD sequence of operations was made abstract and moved to the glossary as 9.4.g.b: Type operation sequence .

9.8.4.bm: 04-15
The 8.1.11: mul_level example was moved from 5: ADFun to 8.1: General . The documentation for 6.4: SpeedTest was improved.

9.8.4.bn: 04-14
Documentation and examples were improved for the following routines: 5.7.5: ForTwo , 5.7.6: RevTwo . In addition, the computation in RevTwo was made more efficient (it used to possibly calculate some first order partials that were not used).

9.8.4.bo: 04-13
Documentation and examples were improved for the following routines: 5.7.1: Jacobian , 5.7.2: ForOne , 5.7.3: RevOne , and 5.7.4: Hessian .

9.8.4.bp: 04-08
In the case where 5.5.g: use_VecAD is true, the 5.6.3.1: ForSparseJac calculation is only valid for the current independent variable values. In this case, the sparsity pattern can be (and has been) made more efficient; i.e., fewer true values (because it only applies to the current 5.6.1.1: ForwardZero ).

The conversion from 4.6.d: VecAD<Base>::reference to 4: AD gave a compile error (this has been fixed). A code example exercising this fix:
 
	VecAD<double> V(1);
	AD<double> zero = 0;
	V[zero] = 1.;
	static_cast< AD<double> > ( V[zero] );


9.8.4.bq: 04-06
The 5.6.3.1: ForSparseJac , 5.6.3.2: RevSparseJac , 5.6.3.3: RevSparseHes sparsity results are now valid for all independent variable values (provided that the AD operation sequence does not use any VecAD<Base> operands). In addition, the ForSparseJac, 5.6.3.2: RevSparseJac and 5.6.3.3: RevSparseHes documentation and examples were improved.

The 5.5.g: use_VecAD member function was added to 5: ADFun objects.

The var_size member function was changed to 5.5.h: size_var (this is not backward compatible, but var_size was just added on 9.8.4.bt: 04-03 ).

9.8.4.br: 04-05
The documentation and example for 5.6.1.5: CompareChange were improved and moved to be part of the 5.6.1: Forward section.

9.8.4.bs: 04-04
The documentation and examples for 5.6.2: Reverse were improved and split into 5.6.2.1: reverse_one and 5.6.2.3: reverse_any .

9.8.4.bt: 04-03
Create separate sections for the 5.6.1.1: zero and 5.6.1.2: ForwardOne first order case of 5.6.1: Forward mode.

The ADFun 5.10.f: Size member function has been deprecated (use 5.6.1.4: size_taylor instead).

The 5.6.2: Reverse member function is now declared, and documented as, const; i.e., it does not affect the state of the ADFun object.

Change the examples that use 5.6.2: Reverse to use the same return value notation as the documentation; i.e., dw.

9.8.4.bu: 04-02
The member functions of 5: ADFun that return properties of AD of Base 9.4.g.b: operation sequence have been grouped into the 5.5: SeqProperty section. In addition, the 5.5.1: SeqProperty.cpp example has been added.

The 5.6.1.5: CompareChange function documentation was improved and moved to a separate section.

Group the documentation for the 5: ADFun member functions that 5.6: evaluate functions and derivative values .

Remove the old Fun.cpp example and extend 5.1.1: Independent.cpp so that it demonstrates using different choices for the 6.7: SimpleVector type.

9.8.4.bv: 04-01
Move the 5.2: ADFun Constructor to its own separate section, improve its documentation, and use 5.1.1: Independent.cpp for its example.

The following member functions of 5: ADFun have been 5.10: deprecated : Order, Memory.

The wish list entry for Memory usage was updated on 04-01. The request was implemented on 9.8.4.au: 06-19 and the entry was removed from the wish list.

9.8.4.bw: 03-31
Add examples for the 4.5.4: Parameter, Variable and 5.1: Independent functions.

Move the 4.5.4: Parameter and Variable functions from the 5: ADFun section to the 4: AD section.

In the examples for the 4: AD sections, refer to the range space vector instead of the dependent variable vector because some of the components may not be 9.4.l: variables .

9.8.4.bx: 03-30
Move the 6.21: LuRatio section below 6.12: LuDetAndSolve .

Move the definition of an AD of Base 9.4.g.b: operation sequence from the glossary to the 4: AD section.

Improve the definition of tape state.

Add mention of taping to 4.4.3.3: Erf , 4.5.3: BoolFun , 4.5.2: NearEqualExt , and 4.4.3.4: Pow .

Change the definition for 4.6.d: VecAD<Base>::reference so that it stands out of the text better.

9.8.4.by: 03-29
Mention the 4.6.d: VecAD<Base>::reference case in documentation and examples for 4.4.3.1: abs , 4.4.3.2: atan2 , 4.4.3.3: erf , and 4.4.3.4: pow .

Fix a bug in the derivative computation for abs(x) when x had type AD< AD<double> > and x had value zero.

Fix a bug using non-zero AD indices for 4.6: VecAD vectors while the tape is in the empty state.

Extend 4.4.3.3: erf to include float, double, and VecAD<Base>::reference.

9.8.4.bz: 03-28
Mention the 4.6.d: VecAD<Base>::reference case in documentation and examples for 4.4.1.1: UnaryPlus , 4.4.1.2: UnaryMinus , 4.4.1.3: ad_binary , 4.4.1.4: compute_assign , and 4.4.2: std_math_ad .

9.8.4.ca: 03-27
Extend and improve the 4.6.d.a: VecAD exceptions .

Mention the 4.6.d: VecAD<Base>::reference case and generally improve 4.4.1.3: addition documentation and examples.

9.8.4.cb: 03-26
Improve documentation and examples for 4.6: VecAD and change its element type from VecADelem<Base> to VecAD_reference<Base> (so that it looks more like 4.6.d: VecAD<Base>::reference ).

Mention the 4.6.d: VecAD<Base>::reference case and generally improve 4.3.1: Value , 4.3.3: Output and 4.2.a.b: assignment documentation and examples.

Extend 4.3.2: Integer and 4.3.4: PrintFor to include the 4.6.d: VecAD<Base>::reference case (and mention in documentation and examples).

9.8.4.cc: 03-24
Move 4.6: VecAD and 6.21: LuRatio from the old ExtendDomain section to 4: AD .

9.8.4.cd: 03-23
Improve documentation and examples for 4.4.4: CondExp and 4.4.5: Discrete . Move both of these sections from ExtendDomain to 4.4: ADValued .

9.8.4.ce: 03-22
The documentation sections under 4: AD have been organized into a new set of sub-groups.

9.8.4.cf: 03-18
The documentation and example for 4.3.4: PrintFor have been improved. The sections below 4: AD in the documentation have been organized into subgroups.

9.8.4.cg: 03-17
The documentation and examples have been improved for the following functions: 4.5.3: BoolFun , and 4.5.2: NearEqualExt .

9.8.4.ch: 03-16
Improve the documentation and example for the 4.4.3.4: pow function. This includes splitting out and generalizing the integer case 6.10: pow_int .

Copies of the atan2 function were included in the CppAD namespace for the float and double types.

9.8.4.ci: 03-15
Improve the b: introduction to CppAD.

9.8.4.cj: 03-11
The file cppad/local/MathOther.h had a file name case error that prevented the documentation from building and tests from running (except under Cygwin which is not really case sensitive). This has been fixed.

The term AD of Base 9.4.g.b: operation sequence has been defined. It will be used to improve the user's understanding of exactly how an 5: ADFun object is related to the C++ algorithm.

9.8.4.ck: 03-10
The math functions that are not under 4.4.2: std_math_ad have been grouped under 4.4.3: MathOther .

The documentation and examples have been improved for the following functions: 4.4.3.1: abs , 4.4.3.2: atan2 .

9.8.4.cl: 03-09
The examples 4.4.2.4: Cos.cpp , 4.4.2.5: Cosh.cpp , 4.4.2.6: Exp.cpp , 4.4.2.7: Log.cpp , 4.4.2.8: Log10.cpp , 4.4.2.9: Sin.cpp , 4.4.2.10: Sinh.cpp , 4.4.2.11: Sqrt.cpp have been improved.

9.8.4.cm: 03-07
The tan function has been added to CppAD.

The examples 4.4.2.1: Acos.cpp , 4.4.2.2: Asin.cpp and 4.4.2.3: Atan.cpp have been improved.

9.8.4.cn: 03-05
The AD standard math unary functions documentation has been grouped together with improved documentation in 4.4.2: std_math_ad .

9.8.4.co: 02-28
The 4.3.3: Output and 4.4.3.1: Abs documentation and example have been improved. Minor improvements were also made to the 8.2.3: LuVecAD documentation.

9.8.4.cp: 02-25
The 4.5.1: Compare documentation and example have been improved.

9.8.4.cq: 02-24
The documentation and examples have been improved for the following sections: 4.4.1.3: division , 4.4.1.4: -= , 4.4.1.4: *= , and 4.4.1.4: /= .

9.8.4.cr: 02-23
The 4.4.1.3: multiplication documentation and example have been improved.

9.8.4.cs: 02-21
The 4.4.1.3: subtraction documentation and example have been improved.

There was a bug in 5.7.6: RevTwo that was not detected by the 5.7.6.1: RevTwo.cpp test. This bug was reported by Kasper Kristensen (http://list.coin-or.org/pipermail/cppad/2006-February/000020.html) . A test that detects this problem was added as TestMore/RevTwo.cpp and the problem has been fixed.

9.8.4.ct: 02-15
The 4.4.1.4: += documentation and example have been improved.

9.8.4.cu: 02-14
The 4.4.1.3: addition documentation and example have been improved.

9.8.4.cv: 02-13
Combine the old binary operator and computed assignment documentation into 4.4.1: Arithmetic documentation.

The documentation and examples have been improved for the following sections: 4.2.a.b: assignment , 4.4.1.1: UnaryPlus , 4.4.1.2: UnaryMinus .

9.8.4.cw: 02-11
The documentation and examples have been improved for the following sections: 4.1: Default , 4.2: ad_copy , and 4.3.1: Value .

9.8.4.cx: 02-10
This is the beginning of a pass to improve the documentation: the CopyBase section (formerly FromBase, now part of 4.2: ad_copy ) and the 4.2.a.a: AD copy constructor section (formerly Copy) have been modified.

Some of the error messaging during 5: ADFun construction has been improved.

9.8.4.cy: 02-04
There was a read past the end of an array in 6.23.f: CppAD::vector::push_back . This has been fixed and, in addition, 6.24: TrackNewDel is now used to perform and check the memory allocation in CppAD::vector.

The routines 6.15: Runge45 and 6.16: Rosen34 used static vectors to avoid recalculation on each call. These have been changed to plain vectors so they are not reported as memory leaks by 6.24.m: TrackCount .

9.8.4.cz: 01-20
Add 9.7.l: software guidelines to the wish list.

9.8.4.da: 01-18
Improve the definition for 9.4.h: parameters and 9.4.l: variables . Remove unnecessary reference to parameter and variable in documentation for 5.1: Independent .

9.8.4.db: 01-08
The aclocal program is part of the automake and autoconf system. It often generates warnings of the form:
     /usr/share/aclocal/...: warning: underquoted definition of ...
The shell script file FixAclocal, which attempts to fix these warnings, was added to the distribution.

9.8.4.dc: 01-07
Change the CppAD error handler from using the macros defined in cppad/CppADError.h to using a class defined in 6.1: cppad/error_handler.hpp . The macros CppADUnknownError and CppADUsageError have been deprecated (they are temporarily still available in the file cppad/local/CppADError.h).

9.8.4.dd: 01-02
Add the sed script Speed/gprof.sed to aid in the display of the 2.1.k.c: profiling output.

Make the following source code files easier to understand: Add.h, Sub.h, Mul.h, Div.h (in the directory cppad/local).

9.8.4.de: 01-05
Make the following source code files easier to understand: RevSparseHes.h, Reverse.h, Fun.h, Forward.h, ForSparseJac.h, RevSparseJac.h (in the directory cppad/local).
Input File: omh/whats_new_06.omh
9.8.5: Changes and Additions to CppAD During 2005

9.8.5.a: 12-24
Fix a memory leak that could occur during the 5.6.3.1: ForSparseJac calculations.

9.8.5.b: 12-23
The buffers that are used to do 5.6.3.2: RevSparseJac and 5.6.3.3: RevSparseHes calculations are now freed directly after use.

The 6.24.1: TrackNewDel.cpp example was missing from the Windows install 2.2.e: examples and testing project file. This has been fixed.

9.8.5.c: 12-22
The buffer that is used to do 5.6.2: Reverse mode calculations is now freed directly after use. This reduces the memory requirements attached to an 5: ADFun object.

9.8.5.d: 12-20
Buffers that are used to store the tape information corresponding to the AD<Base> type are now freed when the corresponding 5: ADFun object is constructed. This reduces memory requirements and actually improved the results of the 9.2.5: speed_cppad tests.

The 9.2.5: speed_cppad test program now outputs the version of CppAD at the top (to help when comparing output between different versions).

9.8.5.e: 12-19
The 6.24: TrackNewDel routines were added for tracking memory allocation and deletion with new[] and delete[]. This is in preparation for making CppAD more efficient in its use of memory. The bug mentioned on 9.8.5.p: 12-01 resurfaced and the corresponding routine was changed as follows:
 
	static ADTape<Base> *Tape(void)
	{	// If we return &tape, instead of creating and returning ptr,
		// there seems to be a bug in g++ with -O2 option.
		static ADTape<Base> tape;
		static ADTape<Base> *ptr = &tape;
		return ptr;
	}


9.8.5.f: 12-16
The 6.2: NearEqual function documentation for the relative error case was changed to
     |x - y| <= r * ( |x| + |y| )
so that there is no problem with division by zero when x and y are both zero (the code was changed to that form as well). The std::abs function replaced the direct computation of the complex norms (for the complex case in NearEqual). In addition, more extensive testing was done in 6.2.1: Near_Equal.cpp .

9.8.5.g: 12-15
Extend 6.2: NearEqual and 4.5.2: NearEqualExt to cover more cases while converting them from a library function in lib/CppADlib.a and a utility in example/NearEqualExt.h to template functions in cppad/near_equal.hpp and cppad/local/NearEqualExt.h. This is another step along the way of removing the entire CppADlib.a library.

The change on 9.8.5.h: 12-14 broke the Microsoft project files example/Example.sln and TestMore/TestMore.sln used during CppAD 2.2: installation on Windows . This has been fixed.

Move lib/SpeedTest.cpp to cppad/speed_test.hpp. This was the last change necessary in order to remove the CppAD library, so remove all commands related to building and linking CppADlib.a. The corresponding entry has been removed from the 9.7: WishList .

One of the entries in the 9.7: WishList corresponded to the 4.3.2: Integer function. It has also been removed (because it is already implemented).

9.8.5.h: 12-14
Extend 4.4.3.3: erf to cover more cases while converting it from a function in lib/CppADlib.a to a template function in cppad/local/Erf.h. This is one step along the way of removing the entire CppADlib.a library.

9.8.5.i: 12-11
Group routines that extend the domain for which an 5: ADFun object is useful into the ExtendDomain section.

Add an example of a C callable routine that computes derivatives using CppAD (see 8.1.2: Interface2C.cpp ).

9.8.5.j: 12-08
Split out 6.12.2: LuFactor with the ratio argument to a separate function called 6.21: LuRatio . This needed to be done because 6.21: LuRatio is more restrictive and should not be part of the general template 6: library .

9.8.5.k: 12-07
Improve 6.8: CheckSimpleVector so that it tests element assignment. Change 6.8.1: CheckSimpleVector.cpp so that it provides an example and test of a case where a simple vector returns a type different from the element type and the element assignment returns void.

9.8.5.l: 12-06
The specifications for a 6.7: SimpleVector template class were extended so that the return type of an element access is not necessarily the same as the type of the elements. This enables us to include std::vector<bool> which packs multiple elements into a single storage location and returns a special type on element access (not the same as bool). To be more specific, if x is a std::vector<bool> object and i has type size_t, x[i] does not have type bool.

Add a Home icon, that links to the CppAD home page (http://www.coin-or.org/CppAD/) , to the top left of the navigation frame (left frame) for each documentation section.

9.8.5.m: 12-05
The 5.6.3.3: RevSparseHes reverse mode Hessian sparsity calculation has been added.

The definition of a 9.4.i: sparsity pattern has been corrected to properly correspond to the more efficient form mentioned under 9.8.5.s: whats_new_05 below.

The dates in this file used to correspond to local time for when the change was checked into the subversion repository (http://projects.coin-or.org/CppAD/browser) . From now on the dates in this file will correspond to the first version of CppAD where the change appears; i.e., the date in the unix and windows download file names CppAD-yy-mm-dd.

9.8.5.n: 12-03
There was a bug in the 5.6.3.2: RevSparseJac reverse mode sparsity patterns when used with 4.6: VecAD calculations. This bug was fixed and the calculations were made more efficient (fewer true entries).

9.8.5.o: 12-02
There was a bug in the 5.6.3.1: ForSparseJac forward mode sparsity patterns when used with 4.6: VecAD calculations. This bug was fixed and the calculations were made more efficient (fewer true entries).

9.8.5.p: 12-01
The speed test of 8.2.3: LuVecAD has been reinstated. It appears that there is some sort of bug in the gcc compiler with the -O2 option whereby the following member function
 
	static ADTape<Base> *Tape(void)
	{	static ADTape<Base> tape;
		return &tape;
	}
(in cppad/local/AD.h) would sometimes return a null value (during 4.6: VecAD operations). A speed improvement in cppad/local/ExtendBuffer.h seems to prevent this problem. This fix is not well understood; i.e., we should watch to see if this problem reoccurs.

The source code for 8.2.3.1: LuVecADOk.cpp was mistakenly used for speed_cppad/LuSolveSpeed.cpp. This has been fixed.

9.8.5.q: 11-23
The speed test of 8.2.3: LuVecAD has been commented out because it sometimes generates a segmentation fault. Here is an explanation:

If X is a 4.6: VecAD object and y is a Base object, X[y] uses a pointer from the element back to the original vector. Optimizing compilers might reorder operations so that the vector is destroyed before the element is used. This can be avoided by changing the syntax for 4.6: VecAD objects to use set and get member functions.

9.8.5.r: 11-22
A much better 4.6.1: example for using 4.6: VecAD vectors has been provided. In addition, a bug in the computation of derivatives using VecAD vectors has been fixed.

CppAD now checks the domain dimension during 5.1: Independent and the range dimension during 5: ADFun construction (provided that -DNDEBUG is not defined). If either of these dimensions is zero, the CppADUsageError macro is invoked.

9.8.5.s: 11-20
The sparsity pattern routines 5.6.3.1: ForSparseJac and 5.6.3.2: RevSparseJac have been modified so that they are relative to the Jacobian at a single argument value. This enables us to return more efficient 9.4.i: sparsity patterns .

An extra 4.6.d.a: exception has been added to the use of 4.6: VecAD elements. This makes VecAD somewhat more efficient.

9.8.5.t: 11-19
Improve the output messages generated during execution of the 2.1.d: configure command.

Put a try and catch block around all of the uses of new so that, if a memory allocation error occurs, it will generate a CppADUsageError message.

The 3.1: get_started.cpp example has been simplified so that it is easier to understand.

9.8.5.u: 11-15
Fix a memory leak in both the 5.6.3.1: ForSparseJac and 5.6.3.2: RevSparseJac calculations.

9.8.5.v: 11-12
Add reverse mode 5.6.3.2: Jacobian sparsity calculation.

9.8.5.w: 11-09
Add prototype documentation for 6.12.1.l: logdet in the 6.12.1: LuSolve function.

Add the optional ratio argument to the 6.12.2: LuFactor routine. (This has since been moved to a separate routine called 6.21: LuRatio .)

9.8.5.x: 11-07
Remove some blank lines from the example files listed directly below (under 11-06). Comments about computing the entire Jacobian from the 5.6.3.1.i: entire sparsity pattern were added.

9.8.5.y: 11-06
The cases of std::vector, std::valarray, and CppAD::vector were folded into the standard example and tests format for the following cases: 5.7.6.1: RevTwo.cpp , 5.7.3.1: RevOne.cpp , Reverse.cpp, 5.7.4.1: Hessian.cpp , 5.7.1.1: Jacobian.cpp , 5.6.1.7: Forward.cpp , 5.7.5.1: ForTwo.cpp , 5.7.2.1: ForOne.cpp , Fun.cpp (Fun.cpp has since been replaced by 5.1.1: Independent.cpp , Reverse.cpp has since been replaced by 5.6.2.1.1: reverse_one.cpp and reverse_any.cpp).

9.8.5.z: 11-01
Add forward mode 5.6.3.1: Jacobian sparsity calculation.

9.8.5.aa: 10-20
Add 9.4.i: sparsity patterns to the wish list.

9.8.5.ab: 10-18
The Unix install 2.1.d: configure command was missing the -- before the prefix command line argument.

9.8.5.ac: 10-14
The template class 6.23: CppAD_vector uses a try/catch block during the allocation of memory (for error reporting). This may slow down memory allocation, so it is now replaced by simple memory allocation when the preprocessor variable NDEBUG is defined.

The specialization of CppAD::vector<bool> was moved to 6.23.j: vectorBool so that CppAD::vector<bool> does not pack one bit per value (which can be slow to access).

9.8.5.ad: 10-12
Change the 2.1.d: configure script so that compilation of the 3.1: get_started.cpp and 4.3.4.1: PrintFor.cpp examples is optional.

One of the dates in the Unix installation extraction discussion was out of date. This has been fixed.

9.8.5.ae: 10-06
Change the Unix install configure script so that it reports information using the same order and notation as its 2.1.d: documentation .

Some compiler errors in the 6.19.1: OdeGearControl.cpp and 8.1.7: OdeStiff.cpp examples were fixed.

9.8.5.af: 09-29
Add a specialization to 6.23: CppAD_vector for the CppAD::vector<bool> case. A test for the push_back member function as well as a 6.8: CheckSimpleVector test has been added to 6.23.1: CppAD_vector.cpp . The source code for this template vector class, cppad/vector.hpp, has been removed from the documentation.

9.8.5.ag: 09-27
Add the 2.1.f: PrefixDir and PostfixDir (PostfixDir has since been removed) options to the configure command line. This gives the user more control over the location where CppAD is installed.

9.8.5.ah: 09-24
The stiff ODE routines, 6.18: OdeGear and 6.19: OdeGearControl , were added to the 6: library and the library was reorganized. In addition, a comparison of various ODE solvers on a stiff problem, 8.1.7: OdeStiff.cpp , was added.

9.8.5.ai: 09-20
The Microsoft compiler project files example/Example.vcproj and TestMore/TestMore.vcproj were not up to date. This has been fixed. In addition, the example 6.5.1: NumericType.cpp has been added.

Make the building of the Example, TestMore, and Speed, directories optional during the 2.1.d: configure command. The 2.1: Unix installation instructions were overhauled to make the larger set of options easy to understand.

9.8.5.aj: 09-14
Added the 6.5: NumericType concept and made the following library routines require this concept for their floating point template parameter type: 6.12.1: LuSolve , 6.12.2: LuFactor , 6.13: RombergOne , 6.14: RombergMul , 6.15: Runge45 , 6.16: Rosen34 , and 6.17: OdeErrControl . This is more restrictive than the previous requirements for these routines, but it enables future changes to the implementation of these routines (for optimization purposes) without affecting their specifications.

9.8.5.ak: 09-09
Add the 4.4.1.1: UnaryPlus operator and move the Neg examples and tests to 4.4.1.2: UnaryMinus .

9.8.5.al: 09-07
Change name of distribution files from CppAD.unix.tar.gz and CppAD.dos.tar.gz to CppAD-yy-mm-dd.tar.gz and CppAD-yy-mm-dd.zip (the *.zip file uses pkzip compression).

9.8.5.am: 08-30
The maxabs argument has been added to the 6.17: OdeErrControl function so that it can be used with relative errors where components of the ODE solution may be zero (some of the time). In addition, some of the rest of the OdeErrControl documentation has been improved.

The documentation for replacing defaults in CppAD error macros has been improved.

9.8.5.an: 08-24
Changed Romberg to 6.13: RombergOne and added 6.14: RombergMul . In addition, added missing entries to 8.3: ListAllExamples and reorganized 6: library .

9.8.5.ao: 08-20
Backed out the addition of the Romberg integration routine (at this point we are uncertain of the interface that is most useful in the context of AD).

9.8.5.ap: 08-19
Added a Romberg integration routine in which the argument types are template parameters (for use with AD types).

9.8.5.aq: 08-15
The Microsoft project files example/Example.vcproj and TestMore/TestMore.vcproj were missing some necessary routines. In addition, Speed/Speed.vcproj was generating a warning. This has been fixed.

9.8.5.ar: 08-14
An 4.3.2: Integer conversion function has been added.

The 4.3.1.1: Value.cpp example has been improved and the old example has been moved into the TestMore directory.

9.8.5.as: 08-13
The 4.4.2: AD standard math unary functions sinh and cosh have been added. In addition, more correctness testing ( 2.1.j: unix , 2.2.f: windows ) has been added for the sin and cos functions.

The 6.17: OdeErrControl routine could lock in an infinite loop. This has been fixed and a test case has been added to check for this problem.

9.8.5.at: 08-07
The 4.4.4: conditional expression function has been changed from just CondExp to CondExpLt, CondExpLe, CondExpEq, CondExpGe, CondExpGt. This should make code with conditional expressions easier to understand. In addition, it should reduce the number of tape operations because fewer temporaries need to be created for comparisons. The old CondExp function has been deprecated.

9.8.5.au: 07-21
Remove unnecessary no-op that was left in tape for the 4.4.2: AD standard math unary functions acos, asin, atan, cos.

Improve the index entries in the documentation that correspond to the cppad/local directory source code.

9.8.5.av: 07-19
The 9.7: WishList and 9.6: Bugs information were moved out of this section and into their own separate sections.

A discussion of 4.6.k: VecAD speed and memory was added as well as an entry in the 9.7: WishList to make it more efficient.

9.8.5.aw: 07-15
The BOOST_DIR and CPP_ERROR_WARN 2.1.d: configure options were not properly implemented for compiling the lib sub-directory. This has been fixed.

Some compiler warnings in the file lib/ErrFun.cpp, which computes the 4.4.3.3: erf function, have been fixed.

9.8.5.ax: 07-11
The 6.23.f: push_back function has been added to the CppAD::vector template class.

It appears that the TestMore/Runge45.cpp file was missing an include of example/NearEqualExt.h. This has been fixed.

9.8.5.ay: 07-08
The documentation for 5.6.1: Forward and 5.6.2: Reverse has been improved.

9.8.5.az: 07-05
The 6.16.1: Rosen34.cpp example mixed the 6.23: CppAD::vector and CppADvector vector types. This caused the compilation of the examples to fail when CppADvector was defined as something other than CppAD::vector (found by Jon Pearce). This has been fixed.

The 6.8: CheckSimpleVector run time code has been improved so that it is only run once per case that is being checked.

Simple Vector concept checking (6.8: CheckSimpleVector ) was added to the routines: 5.7.2: ForOne , 5.7.5: ForTwo , 5.6.1: Forward , 5: ADFun , 5.7.4: Hessian , 5.1: Independent , 5.7.1: Jacobian , 5.7.3: RevOne , 5.7.6: RevTwo , and 5.6.2: Reverse .

9.8.5.ba: 07-04
Simple Vector concept checking (6.8: CheckSimpleVector ) was added to the routines: 6.12.2: LuFactor , 6.12.1: LuSolve , 6.12.3: LuInvert , 6.17: OdeErrControl , 6.15: Runge45 , and 6.16: Rosen34 .

The previous version of the routine 6.17: OdeErrControl was mistakenly in the global namespace. It has been moved to the CppAD namespace (where all the other 6: library routines are).

The previous distribution (version 05-07-02) was missing the file cppad/local/Default.h. This has been fixed.

9.8.5.bb: 07-03
Added 6.8: CheckSimpleVector , a C++ concept checking utility that checks if a vector type has all the necessary conditions to be a 6.7: SimpleVector class with a specific element type.

9.8.5.bc: 07-02
Version 7 of Microsoft's C++ compiler supports the standard declaration for a friend template function. Version 6 did not and CppAD used macros to substitute the empty string for <Base>, < AD<Base> >, and < VecAD<Base> > in these declarations. These macro substitutions have been removed because Version 6 of Microsoft's C++ compiler is no longer supported by CppAD.

The copy base section was split into the 4.1: default constructor and the construction from the base type. The construction from base type has been extended to include any type that is convertible to the base type. As a special case, this provides the previous wish list item of a constructor from an arbitrary Base to an AD< AD<Base> >, AD< AD< AD<Base> > >, etc.

9.8.5.bd: 07-01
The permissions were set as executable for many of the non-executable files in the distribution; for example, the README file. This has been fixed.

9.8.5.be: 06-25
Some improvements were made to the README, AUTHORS, COPYING, and INSTALL files. In addition, the file UWCopy040507.html (../UWCopy040507.html) which contains the University of Washington's copyright policy (see Section 2) was added to the distribution.

9.8.5.bf: 06-24
The List2Vector 8.2: example utility is no longer used and has been removed.

9.8.5.bg: 06-18
CppAD is now supported by Microsoft Visual C++ version 7 or higher. The version 6 project files *.dsw and *.dsp have been replaced by the version 7 project files *.sln and *.vcproj.

9.8.5.bh: 06-14
A new 4.4.4.1: CondExp example has been added and the old 4.4.4: CondExp example has been moved to the TestMore directory (it is now only a test).

9.8.5.bi: 06-13
The changes made on 06-06 do not run under Microsoft Visual C++ version 6.0 (even though they are within the C++ standard). Preliminary testing under version 7 indicates that Microsoft has fixed this problem in later versions of their C++ compiler.

9.8.5.bj: 06-06
Converted the routines 5.6.1: Forward and 5.6.2: Reverse to allow for any 6.7: SimpleVector instead of just CppADvector. In addition, separated the syntax of the function call from the prototype for each of the arguments. This was also done for all the easy to use 5.7: Drivers as well as the 5.1: Independent function and the 5: ADFun constructor.

Add a section containing a list of 8.3: all the examples .

9.8.5.bk: 05-19
A significant improvement in speed was obtained by moving the buffer extension into a separate function and inlining the rest of the code that puts operators in the tape. For example, here is part of the speed test output before this change:
 
	Tape of Expansion by Minors Determinant: Length = 350, Memory = 6792
	size = 5 rate = 230
	size = 4 rate = 1,055
	size = 3 rate = 3,408
	size = 2 rate = 7,571
	size = 1 rate = 13,642
and here is the same output after this change:
 
	Tape of Expansion by Minors Determinant: Length = 350, Memory = 6792
	size = 5 rate = 448
	size = 4 rate = 2,004
	size = 3 rate = 5,761
	size = 2 rate = 10,221
	size = 1 rate = 14,734
Note that your results will vary depending on operating system and machine.

9.8.5.bl: 05-18
Change name of OdeControl to 6.17: OdeErrControl and improve its documentation.

Correct the syntax for the 4.5.4: Parameter and Variable functions.

9.8.5.bm: 05-16
Change 6.17: OdeErrControl so that the method returns its order instead of passing the order as a separate argument to OdeErrControl.

Add the argument scur to OdeErrControl, and improve OdeErrControl's choice of step size and its documentation.

9.8.5.bn: 05-12
Using profiling, the 4.4.1.3: multiplication operator was shown to take a significant amount of time. It was reorganized to make it faster. The profiling indicated an improvement, so the same change was made to the 4.4.1.3: ad_binary and 4.4.1.4: compute_assign operators.

9.8.5.bo: 05-06
The documentation for 6.7: SimpleVector and 6.2: NearEqual were changed to use more syntax (what the user enters) and simpler prototypes (the compiler oriented description of the arguments). In addition, exercises were added at the end of the 6.7: SimpleVector , 6.23: CppAD_vector , and 6.2: NearEqual documentation.

There was an undesired division by zero in the file TestMore/VecUnary.cpp that just happened to pass the corresponding 6.2: NearEqual check. The NearEqual routine has been changed to return false if either of the values being compared is infinite or not a number. In addition, the division by zero has been removed from the TestMore/VecUnary.cpp test.

9.8.5.bp: 05-01
The doubly linked list was also removed from the 4.6: VecAD internal data structure because this method of coding is simpler and makes VecAD more like the rest of CppAD.

9.8.5.bq: 04-21
The profiling indicated that the destructor for an AD object was using a significant amount of time. The internal data structure of an AD object had a doubly linked list that pointed to the current variables, and this list was modified when an AD object was destroyed. In order to speed up AD operations in general, the internal data structure of an AD object has been changed so that this list is no longer necessary (a tape id number is used in its place).

During the process above, the function 4.5.4: Variable was added.

9.8.5.br: 04-20
Add 2.1.k.c: profiling to the speed tests.

9.8.5.bs: 04-19
Remove an extra (unnecessary) semi-colon from the file cppad/local/Operator.h.

9.8.5.bt: 03-26
The new routine 6.17: OdeErrControl does automatic step size control for the ODE solvers.

9.8.5.bu: 03-23
The routine 6.16: Rosen34 is an improved stiff integration method that has an optional error estimate in the calling sequence. You must change all your calls to OdeImplicit to use Rosen34 (but do not need to change other arguments because the error estimate is optional).

9.8.5.bv: 03-22
The routine 6.15: Runge45 is an improved Runge-Kutta method that has an optional error estimate in the calling sequence. You must change all your calls to OdeRunge to use Runge45 (but do not need to change other arguments because the error estimate is optional).

9.8.5.bw: 03-09
Some extra semi-colons (empty statements) were generating warnings on some compilers. The ones that occurred after the macros CppADStandardMathBinaryFun, CppADCompareMember, CppADBinaryMember, and CppADFoldBinaryOperator have been removed.

9.8.5.bx: 03-04
A new multiple-level AD example, 8.1.11: mul_level , was added.

9.8.5.by: 03-01
An option that specifies error and warning 2.1.t: flags for all the C++ compile commands was added to the 2.1: Unix installation instructions .

9.8.5.bz: 02-24
The routine 6.12.1: LuSolve was split into 6.12.2: LuFactor and 6.12.3: LuInvert . This enables one to efficiently solve equations where the matrix does not change and the right hand side for one equation depends on the left hand side for a previous equation.

An extra requirement was added to the 6.7: SimpleVector template class: there must be a typedef for value_type, which is the type of the elements in the vector.

Under Mandrake Linux 10.1, some template friend declarations were failing because the corresponding operations were not declared before being indicated as friends (found by Jean-Pierre Dussault (mailto:Jean-Pierre.Dussault@Usherbrooke.ca) ). This has been fixed.

9.8.5.ca: 01-08
The 4.4.3.3: erf function was added. The implementation of this function used conditional expressions ( 4.4.4: CondExp ), and sometimes the expression that was not valid in a region caused a division by zero. For this reason, the check and abort on division by zero has been removed.
Input File: omh/whats_new_05.omh
9.8.6: Changes and Additions to CppAD During 2004

9.8.6.a: Introduction
This section contains a list of the changes plus future plans for CppAD during 2004 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions.

9.8.6.b: 12-11
The documentation for the CppAD error macros was improved. The package title in : cppad was changed. The documentation for 6.23: CppAD::vector was improved and the corresponding source code cppad/vector.hpp was included.

9.8.6.c: 12-09
The 6.12.1: LuSolve and OdeRunge source code was modified to make it more in line with the introduction to C++ AD book (OdeRunge has been replaced by 6.15: Runge45 ). In addition, the examples OdeRunge.cpp and 6.12.1.1: LuSolve.cpp were modified to make them simpler. (The more complex version of OdeRunge.cpp was moved to the TestMore directory.)

9.8.6.d: 12-03
The 6.11: Poly documentation and source code were modified to make them more in line with the introduction to C++ AD book.

9.8.6.e: 11-17
The change to Autoconf and Automake on 9.8.6.ah: 08-24 mistakenly dropped the -Wall compiler switch (all warnings). This has been added back and the corresponding warnings have been fixed.

9.8.6.f: 11-16
The 11-15 Debug version would not compile under Visual C++ version 7.0 because a declaration of LessThanOrZero was missing. This has been fixed.

9.8.6.g: 11-15
The 5.7.2: ForOne and 5.7.3: RevOne easy to use 5.7: drivers were added.

9.8.6.h: 11-14
The notation in the 5: ADFun sections was changed to make the 5.6.1: Forward and 5.6.2: Reverse routines easier to use.

9.8.6.i: 11-13
The Taylor coefficient vector and matrix notation was folded into just 9.4.k: Taylor coefficients .

9.8.6.j: 11-12
If NDEBUG is not defined during compile time, all AD<Base> 4.5.1: comparison operations are checked during 5.6.1.1: zero order forward mode calculations. The 5.6.1.5: CompareChange function returns the number of comparison operations that have changed.

9.8.6.k: 11-10
The 3.1: get_started.cpp example was changed to use the 5.7.1: Jacobian driver. In addition, more 11: index entries, that point to the 5.7: easy to use drivers , were added.

9.8.6.l: 11-04
The Microsoft Visual Studio project file example/Example.dsp was missing some new examples that needed to be linked in (see 2.2: InstallWindows ). This has been fixed.

9.8.6.m: 11-02
The 2.1: unix installation required the user to touch the files to get the dates in proper order. This is no longer necessary.

9.8.6.n: 11-01
Some of the dependency directories and files, for example PrintFor/.deps and PrintFor/.deps/PrintFor.Po had an extra ? at the end of their names. This seems to have been fixed by using a newer version of the autoconf and automake tools.

9.8.6.o: 10-29
Add the example and test 6.7.1: SimpleVector.cpp to the 6.7: SimpleVector documentation.

The specifications for e: preprocessor symbols state that all the CppAD preprocessor symbols begin with CppAD (so they do not conflict with other packages). Some preprocessor symbols in the file cppad/config.h began with WITH_. This has been fixed.

9.8.6.p: 10-28
The examples 8.1.6: HesLuDet.cpp , 8.1.5: HesMinorDet.cpp , 8.1.4: JacLuDet.cpp , and 8.1.3: JacMinorDet.cpp used the negative of a size_t value. The value has been changed to an int.

The 6.23: CppAD::vector template class was converted into a library routine so it can be used separately from the rest of CppAD.

9.8.6.q: 10-27
The 4.3.4: PrintFor example was moved to its own directory because the conversion from VC 6.0 to VC 7.0 projects did not work when there were multiple executables in one project file. The 2: install instructions were modified to reflect this change.

9.8.6.r: 10-21
One declaration (for the 4.3.1: Value function) was missing from the file cppad/local/Declare.h. This has been added and CppAD should now compile and run under both Microsoft VC 6.0 and 7.0.

9.8.6.s: 10-19
The current version of CppAD has a problem compiling under Microsoft Visual C++ version 7.0 (it compiles and works under version 6.0). The problem appears to be due to a closer agreement between VC 7.0 and the C++ standard for declaring template functions as friends. Some friend declarations were removed and others were made more specific in order to migrate to a version that will compile and run using VC 7.0.

9.8.6.t: 10-16
The example 4.5.1.1: Compare.cpp displayed the text from 4.5.3.1: BoolFun.cpp by mistake. This has been fixed.

The 4.5.1: Compare operators have been extended to work with int operands.

9.8.6.u: 10-06
The test TapeDetLu was added to speed_cppad/DetLuSpeed.cpp and TapeDetMinor was added to speed_cppad/DetMinorSpeed.cpp. These tests just tape the calculations without computing any derivatives. Using this, and the other tests, one can separate the taping time from the derivative calculation time.

The 2.2: windows installation steps do not build a config.h file. Hence a default config.h file was added to the distribution for use with Microsoft Visual Studio.

The Distribute section of the developer documentation was brought up to date.

Links to the ADOLC and FADBAD download pages were added to the 2.1: unix installation instructions.

9.8.6.v: 09-29
The include files for the 6: library are now included by the root file cppad/cppad.hpp. They can still be included individually without the rest of the CppAD package.

9.8.6.w: 09-26
The routine OdeRunge was modified so that it will now integrate functions of complex arguments. This was done by removing all uses of greater-than and less-than comparisons. (OdeRunge has been replaced by 6.15: Runge45 ).

The changes on 9.8.6.y: 09-21 did not fix all the file date and time problems; i.e., automake was still running in response to the 2.1: unix installation make command.

9.8.6.x: 09-23
There was a reference to B that should have been X in the description of the 6.12.1.k: X argument of LuSolve. This has been fixed.

9.8.6.y: 09-21
The 4.4.4: CondExp function has been modified so that it works properly for AD< AD<Base> > types; i.e., it now works for multiple levels of taping.

The dates of the files aclocal.m4 and config.h.in were later than the date of the top level Makefile.am. This caused the make command during the 2.1: unix installation to try to run autoconf and this did not work on systems with very old versions of autoconf. This has been fixed.

9.8.6.z: 09-13
The examples that are specific to an operation were moved to be below that operation in the documentation tree. For example 4.4.1.3.1: Add.cpp is below 4.4.1.3: ad_binary in the documentation tree.

9.8.6.aa: 09-10
The version released on 04-09-09 did not have the new file PrintFor.h in cppad/local. This has been fixed.

The Base type requirements were simplified.

The 2.1: Unix installation instructions were modified so just one make command was executed at the top level. This was necessary because the order of the makes is now important (as previously suggested, the makes did not work properly).

9.8.6.ab: 09-09
The 4.3.4: PrintFor function was added so that users can debug the computation of function values at arguments that are different from those used when taping.

9.8.6.ac: 09-07
In the 2.1: Unix installation instructions, ./ was placed in front of current directory program names; for example, ./GetStarted instead of GetStarted (because some unix systems do not have the current directory in the default executable path).

9.8.6.ad: 09-04
A library containing the 6.4: SpeedTest and 6.2: NearEqual object files was added to the distribution.

All of the include files of the form <cppad/library/name.h> were moved to <cppad/name.h>.

9.8.6.ae: 09-02
Some more messages were added to the output of configure during the 2.1: Unix installation .

The suggested compression program during 2.2: Windows installation was changed from 7-zip (http://www.7-zip.org) to WinZip (http://www.winzip.com) .

9.8.6.af: 08-27
The error messages printed by the default version of the CppAD error macros had YY-MM-DD in place of the date for the current version. This has been fixed.

All the correctness tests are now compiled with the -g command line option (the speed tests are still compiled with -O2 -DNDEBUG).

The 2: installation instructions for Unix and Windows were split into separate pages.

9.8.6.ag: 08-25
The 2: installation now automates the replacement of 6.23: CppAD::vector by either the std::vector or boost::numeric::ublas::vector.

9.8.6.ah: 08-24
This date marks the first release that uses the Gnu tools Autoconf and Automake. This automates the building of the make files for the 2: installation and is the standard way to distribute open source software. This caused some organizational changes, for example, the 3.1: GetStarted example now has its own directory and the distribution directory is named
     cppad-yy-mm-dd
where yy-mm-dd is the year, month and date of the distribution. (Note the distribution directory is different from the directory where CppAD is finally installed.)

9.8.6.ai: 08-12
Move OdeExplicit into the cppad/library/ directory. In addition, change it so that the vector type was a template argument; i.e., works for any type of vector (not just CppADvector).

9.8.6.aj: 07-31
Move 6.12.1: LuSolve into the cppad/library/ directory. In addition, change it so that the vector type was a template argument; i.e., works for any type of vector (not just CppADvector).

9.8.6.ak: 07-08
The file cppad/example/NearEqual.h has been moved to cppad/example/NearEqualExt.h because it contains extensions of the 6.2: NearEqual routine to AD types.

9.8.6.al: 07-07
The double and std::complex<double> cases for the 6.2: NearEqual routine arguments has been moved to the general purpose 6: library .

9.8.6.am: 07-03
The CppAD error macro names CppADExternalAssert and CppADInternalAssert were changed to CppADUsageError and CppADUnknownError. The 6.4: SpeedTest routine was changed to use CppADUsageError instead of a C assert.

9.8.6.an: 07-02
The 6.4: SpeedTest output was improved so that the columns of values line up. Previously, this was not the case when the number of digits in the size changed.

9.8.6.ao: 06-29
Added code to trap and report memory allocation errors during new operations.

9.8.6.ap: 06-25
A discussion of the order dependence of the 4.2.a.b: assignment operator and the 5.1: independent function was added to the 9.1.a: Faq . In addition, a similar discussion was added to the documentation for the 5.1: Independent function.

The definition of a 9.4.h: parameter and 9.4.l: variable were changed to reflect the fact that these are time dependent (current) properties of an AD<Base> object.

9.8.6.aq: 06-12
All of the 4.4.1: arithmetic operators (except for the unary operators) can now accept int arguments. The documentation for these arguments has been changed to reflect this. In addition, the corresponding test cases have been changed to test this and to test high order derivative cases. The old versions of these tests were moved into the cppad/Test directory.

9.8.6.ar: 06-04
The 4.4.3.2: atan2 function was added.

9.8.6.as: 06-03
The asin and acos 4.4.2: AD standard math unary functions were added.

There was a bug in the reverse mode theory and calculation of derivatives of 4.4.2: sqrt for fourth and higher orders. This has been fixed. In addition, the following examples have been changed so that they test derivatives up to fifth order: 4.4.2.2: asin , 4.4.2.3: atan , 4.4.2.4: cos , 4.4.2.6: exp , 4.4.2.7: log , 4.4.2.9: sin , 4.4.2.11: sqrt .

9.8.6.at: 06-01
There was a bug in the 4.4.2: atan function 5.6.1: forward mode calculations for Taylor coefficient orders greater than two. This has been fixed.

9.8.6.au: 05-30
The 4.4.2.9: sin and 4.4.2.4: cos examples were changed so that they tested higher order derivatives.

9.8.6.av: 05-29
The forward mode recursion formulas for each of the 9.3.1.c.c: standard math functions have been split into separate sections.

A roman (instead of italic) font was used for the name of each of the standard math functions in the assumption statements below the section for the standard math functions. For example,  \sin(x) instead of  sin(x) .

9.8.6.aw: 05-26
In the documentation for 6.11: Poly , the reference to example/Poly.h was corrected to cppad/library/Poly.h.

In the documentation for 6.4: SpeedTest , the reference to Lib/SpeedTest.h was corrected to cppad/library/SpeedTest.h. In addition, the example case was corrected.

In 5.6.2: Reverse , the definition for  U(t, u) had  t^p-1 where it should have had  t^{p-1} . This has been fixed.

9.8.6.ax: 05-25
The special case where the second argument to the 4.4.3.4: pow function is an int has been added.

9.8.6.ay: 05-14
Change all of the include syntax
     # include "filename"
to the syntax
     # include <filename>
so that examples and other uses better reflect how one would use CppAD after it was installed in a standard include directory; for example /usr/local/include/cppad.

The user documentation was moved from the directory cppad/User to the directory cppad/Doc.

The directory cppad/Lib was moved to cppad/library to reflect the fact that it is not what one expects in a standard lib directory or a standard include directory.

9.8.6.az: 05-12
The string YY-MM-DD in the preprocessor symbol CppADVersion was not being replaced by the current date during distribution. This resulted in the CppADExternalAssert macro printing YY-MM-DD where it should have printed the date of distribution. This has been fixed.

All of the include commands of the form
     # include "include/name.h"
     # include "lib/name.h"
have been changed to the form
     # include "cppad/include/name.h"
     # include "cppad/lib/name.h"
This will avoid mistakenly loading a file from another package that is in the set of directories being searched by the compiler. It is therefore necessary to specify that the directory above the CppAD directory be searched by the compiler. For example, if CppAD is in /usr/local/cppad, you must specify that /usr/local be searched by the compiler. Note that if /usr/local/cppad/ is no longer searched, you will have to change
     # include "cppad.hpp"
to
     # include "cppad/cppad.hpp"

The Windows nmake file Speed/Speed.mak was out of date. This has been fixed.

9.8.6.ba: 05-09
Move 6.11: Poly and 6.4: SpeedTest into the cppad/Lib directory and the CppAD namespace.

9.8.6.bb: 05-07
The 4.4.1.3.4: divide operator tests were extended to include a second order derivative calculation using reverse mode.

The 6.11: Poly routine was modified to be more efficient in the derivative case. In addition, it was changed to use an arbitrary vector for the coefficients (not just a CppADvector).

9.8.6.bc: 05-04
A reloading of the database caused the files include/atan.h and include/cos.h to be mistakenly started with lower case letters. These have been moved to include/Atan.h and include/Cos.h respectively.

9.8.6.bd: 05-03
The 5.6.2: Reverse mode calculations for 4.4.4: conditional expressions were mistakenly left out. This has been fixed.

9.8.6.be: 04-29
The unary functions, such as 4.4.2: sin and 4.4.2: cos , were not defined for elements of an 4.6: VecAD vector. This has been fixed.

9.8.6.bf: 04-28
The operator 6.23.h: << was added to the default 8.4: test_vector template class.

A FADBAD correctness and speed comparison with CppAD was added.

9.8.6.bg: 04-25
Factor out common sub-expressions in order to make 8.2.3: LuVecAD faster.

Convert description from C++ Automatic Differentiation to C++ Algorithmic Differentiation.

9.8.6.bh: 04-24
The 4.6: VecAD element class is no longer a derived class of the 4: AD class. This enabled a decrease in tape memory and an increase in the speed for 4.6: VecAD operations.

The 4.4.2: log10 function was added.

9.8.6.bi: 04-22
Add 4.4.4: CondExp and use it to speed up 8.2.3: LuVecAD .

9.8.6.bj: 04-21
Use 4.4.3.1: abs to speed up 8.2.3: LuVecAD .

9.8.6.bk: 04-20
The 4.4.3.1: absolute value function was added.

The value n for OdeExplicit and OdeImplicit is deduced from the argument x0 and is not passed as a separate argument. The documentation has been fixed to reflect this.

9.8.6.bl: 04-19
The 4.4.1.4: += operator did not function correctly when the left hand operand was a 9.4.h: parameter and the right hand operand was a variable (found by Mike Dodds (mailto:magister@u.washington.edu) ). This has been fixed.

9.8.6.bm: 04-09
Adding special operators for using parameters to index VecAD objects increased the speed and reduced the memory requirements (by about 20%) for the 4.6: VecAD case in the speed_cppad/LuSolveSpeed.cpp test.

The 4.6: VecAD objects were not being handled correctly by the 5.6.2: Reverse function. The VecAD test was extended to demonstrate the problem and the problem was fixed (it is now part of TestMore/VecAD).

9.8.6.bn: 04-08
The example 8.2.3.1: LuVecADOk.cpp uses 4.6: VecAD to execute different pivoting operations during the solution of linear equations without having to retape.

The speed test speed_cppad/LuSolveSpeed.cpp has been added. It shows that the initial implementation of 4.6: VecAD is slow (and uses a lot of memory). In fact, it is faster to use 6.12.1: LuSolve and retape for each set of equations than it is to use 8.2.3: LuVecAD and not have to retape. This test will help us improve the speed of 8.2.3: LuVecAD .

9.8.6.bo: 04-07
There were bugs in the assignment to 4.6: VecAD elements during taping that have been fixed. In addition, an example of taping the pivoting operations in an 8.2.3: Lu factorization has been added.

9.8.6.bp: 04-03
Added size_t indexing to the 4.6: VecAD class.

Fixed a bug connected to the 4.6: VecAD class and erasing the tape.

9.8.6.bq: 04-02
Some memory savings were achieved with regard to equal parameter values being stored in the tape. There was a bug in this logic when a parameter in an AD< AD<Base> > class had values that were variables in the AD<Base> class. This has been fixed.

9.8.6.br: 04-01
The name of the class that tapes indexing operations was changed from ADVec to 4.6: VecAD . This class was extended so that the value of elements in these vectors can be variables (need not be 9.4.h: parameters ).

9.8.6.bs: 03-30
Do some simple searching of the parameter table during taping to avoid multiple copies of parameters on the tape (use less tape memory).

9.8.6.bt: 03-28
The 4.6: ADVec class, a vector class that tapes indexing operations, is now available. It is currently restricted by the fact that all the values in the vector must be 9.4.h: parameters .

9.8.6.bu: 03-25
The internal taping structure has been changed to have variable length instructions. This is to save memory on the tape. In addition, it may help in the implementation of the vector class that tracks indexing. (A now functioning version of this class is described in 4.6: VecAD .)

9.8.6.bv: 03-18
A change was made to the way parameter values are stored on the tape. This resulted in a significant savings in the amount of memory required.

9.8.6.bw: 03-17
Change the return type for 6.4: SpeedTest from const char * to std::string. The memory required for the largest test cases was added to the 9.2.5: speed_cppad tests output.

9.8.6.bx: 03-15
The comparison between ADOLC and CppAD for the DetLuADOLC.cpp/ example was returning an error (because it was checking for exact equality of calculated derivatives instead of nearly equal). This has been fixed.

9.8.6.by: 03-12
The user defined unary functions were removed and the user defined 4.4.5: discrete functions were added. These discrete functions add the capability of conditional expressions (alternate calculations) being included in an 5: ADFun object.

9.8.6.bz: 03-11
The classes 9.2.2.3: det_by_minor and 9.2.2.4: det_by_lu were added and used to simplify the examples that compute determinants.

9.8.6.ca: 03-09
The routines Grad and Hess have been removed. You should use 5.7.1: Jacobian and 5.7.4: Hessian instead.

9.8.6.cb: 03-07
The driver routines 5.7.4: Hessian and 5.7.6: RevTwo have been added. They compute specialized subsets of the second order partials.

Documentation errors in 5.7.5: ForTwo and 5.6.2: Reverse were fixed. The 8: example documentation was reorganized.

9.8.6.cc: 03-06
The driver 5.7.5: ForTwo has been added. It uses forward mode to compute a subset of the second order partials.

Split all of the "example" and "test" index entries that come from cppad/example/*.cpp into sorted subheadings.

9.8.6.cd: 03-05
The Grad routine, which only computed first derivatives of scalar valued functions, has been replaced by the 5.7.1: Jacobian routine which computes the derivative of vector valued functions.

9.8.6.ce: 03-04
The bug reported on 9.8.6.cl: 02-17 was present in all the operators. These have all been fixed and tests for all the operators have been added to the cppad/Test directory.

The 5.5.f: f.Parameter() function was added so that one can count how many components of the range space depend on the value of the domain space components. This helps when deciding whether to use forward or reverse mode.

9.8.6.cf: 03-03
Special operators were added to distinguish the cases where one of the operands is a 9.4.h: parameter . This reduced the amount of branching that is necessary when executing 5.6.1: Forward and 5.6.2: Reverse calculations.

The 5.1: Independent and 5.5.f: Parameter functions were moved below 5: ADFun in the documentation.

9.8.6.cg: 03-01
The DetLuADOLC.cpp, DetLu case was added to the ADOLC comparison tests.

9.8.6.ch: 02-29
Under certain optimization flag values, and on certain systems, an error was reported by the ADOLC correctness comparison. It turned out that CppAD was not initializing a particular index when debugging was turned off. This has been fixed.

9.8.6.ci: 02-28
A set of routines for comparing CppAD with ADOLC has been added to the distribution. In addition, documentation for compiling and linking the 8: Examples and 9.2.5: Speed Tests has been added.

9.8.6.cj: 02-21
If you use the user defined unary atomic functions there is a restriction on the order of the derivatives that can be calculated. This restriction was documented in the user defined unary function 5.6.1: Forward and 5.6.2: Reverse . (These unary functions were removed on 9.8.6.by: 03-12 .)

9.8.6.ck: 02-20
A user interface to arbitrary order 5.6.2: reverse mode calculations was implemented. In addition, the 5: ADFun member functions Rev and RevTwo were removed because it is easier to use the uniform syntax below:
Old Syntax            Uniform Syntax
r1 = f.Rev(v)         r1 = f.Reverse(1, v)
q1 = f.RevTwo(v)      r2 = f.Reverse(2, v)
                      q1[i] == r2[2 * i + 1]


The 9.3: Theory section has been completely changed so that it corresponds to the arbitrary order calculations. (Some of this change was made when the arbitrary forward mode interface was added on 9.8.6.cn: 04-02-15 .)

The directory cppad/Test has been added. It contains tests cases that are not intended as examples.

9.8.6.cl: 02-17
There was a bug in the way CppAD handled the parameters zero and one when they were variables on a lower level tape; i.e., x might be a parameter on an AD< AD<Base> > tape and its value might be a variable on the AD<Base> tape. This bug in the multiply and divide routines has been fixed.

There was a bug that in some cases reported a divide by zero error when the numerator was zero. This has been fixed.

9.8.6.cm: 02-16
A bug in 5.6.1: Forward prevented the calculation of derivatives with order higher than two. In addition, the checking for user errors in the use of Forward was also faulty. This has been fixed.

The Microsoft project file example\Example.dsp was out of date. This has been fixed.

The example that 8.1.11: tapes derivative calculations has been changed to an application of 8.1.8: Taylor's method for solving ordinary differential equations.

9.8.6.cn: 02-15
A user interface to arbitrary order 5.6.1: forward mode calculations was implemented. In addition, the 5: ADFun member functions Arg, For and ForTwo were removed because it is easier to use the uniform syntax below:
Old Syntax            Uniform Syntax
v0 = f.Arg(u0)        v0 = f.Forward(0, u0)
v1 = f.For(u1)        v1 = f.Forward(1, u1)
v2 = f.For(u2)        v2 = f.Forward(2, u2)

9.8.6.co: 02-12
All of the derivative calculations are now done using arbitrary order Taylor arithmetic routines. The 9.3: Theory section was changed to document this method of calculation.

9.8.6.cp: 02-01
The definition of a 9.4.k: Taylor coefficient was changed to include the factorial factor. This change was also made to the output specifications for the FunForTwo routine.

9.8.6.cq: 01-29
There were some bugs in the FunArg function that were fixed.
  1. If one of the dependent variables was a 9.4.h: parameter , FunArg did not set its value properly. (All its derivatives are zero and this was handled properly.)
  2. The user defined unary functions were not computed correctly.
The specifications for the usage and unknown CppAD error macros were modified so that they could be used without side effects.

9.8.6.cr: 01-28
Some corrections and improvements were made to the documentation including: CppADvector was placed before its use, a reference to Ode_ind and Ode_dep was fixed in OdeImplicit.

9.8.6.cs: 01-22
The specifications for the routine FunForTwo was changed to use 9.4.k: Taylor coefficients . This makes the interface to CppAD closer to the interface for ADOLC (http://www.math.tu-dresden.de/~adol-c/) .
Input File: omh/whats_new_04.omh
9.8.7: Changes and Additions to CppAD During 2003

9.8.7.a: Introduction
This section contains a list of the changes made to CppAD during 2003 (in reverse order by date). The purpose of this section is to assist you in learning about changes between various versions.

9.8.7.b: 12-24
Some references to double should have been references to the 9.4.e: base type (in reverse mode and in the Grad and Hess functions). This has been fixed.

9.8.7.c: 12-22
The preprocessor symbol WIN32 was being used to determine if one was using Microsoft's C++ compiler. This symbol is predefined by the MinGW (http://www.mingw.org) version of the GNU C++ compiler and hence CppAD had errors during installation using MinGW. This has been fixed by using the preprocessor symbol _MSC_VER to determine if one is using the Microsoft C++ compiler.

9.8.7.d: 12-14
The extended system solvers OdeOne and OdeTwo have been removed from the distribution. In addition, the interface to the ODE solvers has been simplified.

9.8.7.e: 12-13
Remove the CppADCreateTape macro and have the tapes created and grow automatically.

9.8.7.f: 12-12
The old method where one directly accesses the tape has been removed and the following functions are no longer available:
     size_t TapeName.Independent(AD<Base> &indvar)
     size_t TapeName.Record(size_t order)
     size_t TapeName.Stop(void)
     bool Dependent(const AD<Base> &var) const
     bool TapeName.Dependent(const AD<Base> &var) const
     size_t TapeName.Total(void) const
     size_t TapeName.Required(void) const
     size_t TapeName.Erase(void)
     TapeState TapeName.State(void) const
     size_t TapeName.Order(void) const
     size_t TapeName.Required(void) const
     bool Parameter(CppADvector< AD<Base> > &u)
     TapeName.Forward(indvar)
     TapeName.Reverse(var)
     TapeName.Partial(var)
     TapeName.ForwardTwo(indvar)
     TapeName.ReverseTwo(var)
     TapeName.PartialTwo(var)

9.8.7.g: 12-10
The change on 9.8.7.i: 12-01 made the taping process simpler if one does not directly access CppADCreateTape. The 8: examples were changed to not use TapeName. The following examples were skipped because they document the functions that access TapeName: DefFun.cpp, For.cpp, for_two.cpp, Rev.cpp, and rev_two.cpp.

9.8.7.h: 12-05
There was a bug in f.Rev and f.RevTwo when two dependent variables were always equal and shared the same location in the tape. This has been fixed.

The ODE Example was changed to tape the solution (and not use OdeOne or OdeTwo). This is simpler to use and the resulting speed tests gave much faster results.

9.8.7.i: 12-01
The following function has been added:
     void Independent(const CppADvector<Base> &x)
which will declare the independent variables and begin recording AD<Base> operations (see 5.1: Independent ). The 5: ADFun constructor was modified so that it stops the recording and erases that tape as well as creates the 5: ADFun object. In addition, the tape no longer needs to be specified in the constructor.

9.8.7.j: 11-21
Add StiffZero to set of ODE solvers.

9.8.7.k: 11-20
The AbsGeq and LeqZero in 6.12.1: LuSolve were changed to template functions so they could have default definitions in the case where the <= and >= operators are defined. This made the double and AD<double> use of LuSolve simpler because the user need not worry about these functions. On the other hand, it made the std::complex and AD<std::complex> use of LuSolve more complex.

The member function names for the fun argument to ODE were changed from fun.f to fun.Ode and from fun.g to fun.Ode_ini.

9.8.7.l: 11-16
The 1: table of contents was reorganized to provide a better grouping of the documentation.

The 6.12.1: LuSolve utility is now part of the distribution and not just an example; i.e., it is automatically included by cppad.hpp.

9.8.7.m: 11-15
The ODE solver was modified so that it can be used with any type (not just an AD type). This was useful for the speed testing. It is also useful for determining what the integrator steps should be before starting the tape.

The template argument Type was changed to Base wherever it was the 9.4.e: base type of an AD class.

9.8.7.n: 11-14
A speed_cppad/OdeSpeed.cpp test was added and some changes were made to the ODE interface in order to make it faster. The most significant change was in the specifications for the ODE function object fun.

9.8.7.o: 11-12
The user defined unary function example example/UnaryFun.cpp was incorrect. It has been corrected and extended.

9.8.7.p: 11-11
The 6.23: CppAD::vector template class is now used where the std::vector template class was previously used. You can replace the CppAD::vector class with a vector template class of your choosing during the 2: Install procedure.

9.8.7.q: 11-06
The documentation for 8.1.11: taping derivative calculations was improved as well as the corresponding example. In order to make this simpler, the example tape name DoubleTape was changed to ADdoubleTape (and the other example tape names were also changed).

9.8.7.r: 11-04
The ODE utility was changed from an example to part of the distribution. In addition, it was extended so that it now supports taping the solution of the differential equations (case order equal zero) or solving the extended set of differential equations for both first and second derivatives (cases order equal one and two). In addition, an initial condition that depends on the parameter values is also allowed.

9.8.7.s: 11-02
It is now legal to differentiate a 9.4.h: parameter with respect to an 9.4.j.c: independent variable (parameter derivatives are always equal to zero). This is an extension of the Reverse, Partial, ReverseTwo, and PartialTwo functions.

9.8.7.t: 10-21
All the CppAD include files, except cppad.hpp were moved into an include subdirectory.

9.8.7.u: 10-16
The 5: ADFun template class was added so that one can save a tape recording and use it as a differentiable function. The ADFun object supports directional derivatives in both 5.6.1: Forward and 5.6.2: Reverse mode whereas the tape only supports partial derivatives.

9.8.7.v: 10-14
The sqrt function was added to the 4.4.2: AD standard math unary functions . In addition, a definition of the power function for the 4.4.3.4.f: standard types was automatically included in the CppAD namespace.

The 4.3.1: Value function was changed so that it can be called when the tape is in the Empty state.

9.8.7.w: 10-10
The atan function was added to the 4.4.2: AD standard math unary functions .

9.8.7.x: 10-06
In the notation below, zero and one are parameters that are exactly equal to zero and one. If the variables z and x were related in any of the following ways, they can share the same record on the tape because they will have the same derivatives.
     z = x + zero        z = x * one
     z = zero + x        z = one * x
     z = x - zero        z = x / one
Furthermore, in the following cases, the result z is a parameter (equal to zero) and need not be recorded in the tape:
     z = x * zero        z = zero / x
     z = zero * x
The 4.4.1: arithmetic operators were all checked to make sure they did not add to the tape in these special cases. The total record count for the program in the Example directory was 552 before this change and 458 after.

9.8.7.y: 10-05
The process of converting the tape to operators was completed. In order to make this conversion, the binary user defined functions were removed. (Bob Goddard (http://www.apl.washington.edu/people/professional_staff/goddard_r.html) suggested a very nice way to keep the unary functions.) Another significant change was made to the user interface during this procedure, the standard math library functions are now part of the CppAD distribution and not defined by the user.

The function TapeName.Total was added to make it easy to track how many tape records are used by the test suite. This will help with future optimization of the CppAD recording process.

There was a bug (found by Mike Dodds (mailto:magister@u.washington.edu) ) in the error checking of the TapeName.Erase function. If Erase was called twice in a row, and NDEBUG was false during compilation, the program would abort. This has been fixed.

9.8.7.z: 09-30
A process of changing the tape from storing partial derivatives to storing operators has been started. This will make the tape smaller and it will enable the computation of higher derivatives without having to tape the tape (see 8.1.11: mul_level ). The Add, Subtract, Multiply and Divide operators have been converted. The user defined functions are presenting some difficulties, so this process has not yet been completed.

There was a bug in reverse mode when a dependent variable was exactly equal to an independent variable. In this case, it was possible for it to be located before some of the other independent variables on the tape. These other independent variable partials were not initialized to zero before the reverse calculation and hence had whatever value was left by the previous calculation. This has been fixed and the 4.2.3: Eq.cpp example has been changed to test for this case.

The following tape functions were changed to be declared const because they do not modify the tape in any way: State, Order, Required, Dependent, and 4.5.4: Parameter .

9.8.7.aa: 09-20
The functions Grad and Hess were changed to use function objects instead of function pointers.

9.8.7.ab: 09-19
The higher order constructors (in standard valarray) were removed from the ODE example in order to avoid memory allocation of temporaries (and hence increase speed). In addition, the function objects in the ODE examples were changed to be const.

9.8.7.ac: 09-18
An ordinary differential equation solver was added. In addition, the extended system to differentiate the solution was included.

9.8.7.ad: 09-15
The linked list of AD variables was not being maintained correctly by the AD destructor. This was fixed by having the destructor use RemoveFromVarList to remove variables from the list. (RemoveFromVarList is a private AD member function not visible to the user.)

9.8.7.ae: 09-14
There is a new Faq question about evaluating derivatives at multiple values for the 9.1.f: independent variables .

9.8.7.af: 09-13
An example that uses AD< AD<double> > to compute higher derivatives was added.

The name GaussEliminate was changed to 6.12.1: LuSolve to better reflect the solution method.

9.8.7.ag: 09-06
Changed the 3.1: get_started.cpp and 4.7.1.1: ComplexPoly.cpp examples so they use a template function with both base type and AD type arguments. (The resulting code is simpler and a good use of templates.)

9.8.7.ah: 09-05
A 3.1: getting started example was added and the organization of the 8: Examples was changed.

9.8.7.ai: 09-04
The AbsOfDoubleNotDefine flag is no longer used and it was removed from the Windows 2: install instructions.

The 03-09-03 distribution did not have the proper date attached to it. The distribution script has been changed so that attaching the proper date is automated (i.e., this should not happen again).

A 9.1: Frequently Asked Questions and Answers section was started.

9.8.7.aj: 09-03
Added the 4.3.1: Value function which returns the 9.4.e: base type value corresponding to an AD object.

9.8.7.ak: 08-23
A new version of Cygwin was installed on the development system (this may affect the timing tests reported in this document). In addition, 6.12.1: LuSolve was changed to use back substitution instead of reduction to an identity matrix. This reduced the number of floating point operations corresponding to evaluation of the determinant. The following results correspond to the speed test of DetLu on a 9 by 9 matrix:
Version double Rate AD<double> Rate Gradient Rate Hessian Rate Tape Length
03-08-20 8,524 5,278 4,260 2,450 532
03-08-23 7,869 4,989 4,870 2,637 464

9.8.7.al: 08-22
The 4.4.1.2: unary minus operator was added to the AD operations.

9.8.7.am: 08-19
The standard math function examples were extended to include the complex case.

The 6.12.1: LuSolve routine was changed to use std::vector<Base> & arguments in place of Base * arguments. This removes the need to use new and delete with LuSolve.

When testing the speed of the change to using standard vectors, it was noticed that the LuSolve routine was much slower (see times for 03-08-16 below). This was due to computing the determinant instead of the log of the determinant. Converting back to the log of the determinant regained the high speeds. The following results correspond to the speed test of DetLu on a 9 by 9 matrix:
Version double Rate AD<double> Rate Gradient Rate Hessian Rate Tape Length
03-08-16 9,509 5,565 3,587 54 537
03-08-19 8,655 5,313 4,307 2,495 532

9.8.7.an: 08-17
The macro CppADTapeOverflow was added so that CppAD can check for tape overflow even when the NDEBUG preprocessor flag is defined.

9.8.7.ao: 08-16
The 6.12.1: LuSolve routine was extended to handle complex arguments. Because the complex absolute value function is nowhere differentiable, this required allowing for user defined 4.5.3: boolean valued functions with AD arguments . The examples 6.12.1.1: LuSolve.cpp and GradLu.cpp were converted to a complex case.

9.8.7.ap: 08-11
The routine 6.12.1: LuSolve was made more efficient so that it is more useful as a tool for differentiating linear algebra calculations. The following results correspond to the speed test of DetLu on a 9 by 9 matrix:
Version double Rate AD<double> Rate Gradient Rate Hessian Rate Tape Length
03-08-10 49,201 7,787 2,655 1,809 824
03-08-11 35,178 12,681 4,521 2,541 540
In addition the corresponding test case 6.12.1.1: LuSolve.cpp was changed to a Hilbert matrix case.

9.8.7.aq: 08-10
A 4.7.1.1: complex polynomial example was added.

The documentation and type conversion in 6.12.1: LuSolve was improved.

The absolute value function was removed from the examples because some systems do not yet properly support double abs(double x).

9.8.7.ar: 08-07
Because the change to the multiplication operator had such a large positive effect, all of the 4.4.1: arithmetic operators were modified to reduce the amount of information in the tape (where possible).

9.8.7.as: 08-06
During Lu factorization, certain elements of the matrix are known to be zero or one and do not depend on the variables. The 4.4.1.3: multiplication operator was modified to take advantage of this fact. This reduced the size of the tape and increased the speed for the calculation of the gradient and Hessian for the Lu determinant test of a 5 by 5 matrix as follows:
Version Tape Length Gradient Rate Hessian Rate
03-08-05 176 11,362 1,149
03-08-06 167 12,780 10,625

9.8.7.at: 08-05
Fixed a mistake in the calculation of the sign of the determinant in the 6.12.1: LuSolve example.

9.8.7.au: 08-04
Added the compiler flag
 
	AbsOfDoubleNotDefined
to the make files so that it could be removed on systems where the function
 
	double abs(double x)
was defined in math.h.

9.8.7.av: 08-03
The Grad and Hess functions were modified to handle the case where the function does not depend on the independent variables.

The 6.12.1: LuSolve example was added to show how one can differentiate linear algebra calculations. In addition, it was used to add another set of 9.2.5: speed tests .

The standard Math functions were added both as examples of defining atomic operations and to support mathematical operations for the AD<double> case.

The 4.3.3: << operator was added to the AD template class for output to streams.

9.8.7.aw: 08-01
The 4.4.1: computed assignment operators were added to the AD template class.

The name of the Speed/SpeedTest program was changed to 9.2.5: Speed/Speed . In addition, Speed/SpeedRun was changed to Speed/SpeedTest.

9.8.7.ax: 07-30
The 4.2.a.b: assignment operator was changed so that it returns a reference to the target. This allows for statements of the form
 
	x = y = z;
i.e., multiple assignments.

9.8.7.ay: 07-29
If the 4.2.a.a: AD copy constructor or 4.2.a.b: assignment operator used an 9.4.j.c: independent variable for its source value, the result was also an independent variable. This has been fixed so that the result is a dependent variable in these cases.

9.8.7.az: 07-26
The AD<Base> data structure was changed to include a doubly linked list of variables. This enabled the 4.2.a.a: AD copy constructor and 4.2.a.b: assignment operator to create multiple references to the same place in the tape. This reduced the size of the tape and increased the speed for the calculation of the gradient and Hessian for the determinant of a 5 by 5 matrix as follows:
Version Tape Length Gradient Rate Hessian Rate
03-07-22 1668 1,363 53
03-07-26 436 3,436 213

9.8.7.ba: 07-22
The facility was added so that the user can define binary functions together with their derivatives. (This facility has been removed because it is better to define binary functions using AD variables.)

The Windows version make file directive /I ..\.. in example\Example.mak and Speed\Speed.mak was changed to /I .. (as it should have been).

9.8.7.bb: 07-20
The facility was added so that the user can define unary functions, together with their derivatives. For example, the standard math functions such as 4.4.2.6: exp are good candidates for such definitions. (This feature has since been replaced; the standard math functions are now part of the AD types, see 4: AD .)

The first Alpha for the Windows 2: installation was released.

9.8.7.bc: 07-18
Computing the determinant of a minor of a matrix 9.2.2.2: det_of_minor was documented as a realistic example using CppAD.

9.8.7.bd: 07-16
Fixed some non-standard constructions that caused problems with the installation on other machines.

Compiled and ran the tests under Microsoft Windows. (The Windows release should not take much more work.)

9.8.7.be: 07-14
This is the first alpha release of CppAD and it is being released under the 9.10: Gnu Public License . It is intended for use on Unix systems. A Microsoft release is intended in the near future.
Input File: omh/whats_new_03.omh
9.9: Deprecated Include Files

9.9.a: Purpose
The following is a list of deprecated include file names and the corresponding names that should be used. For example, if your program uses the deprecated preprocessor command
 
	# include <CppAD/CppAD.h>
you should change it to the command
 
	# include <cppad/cppad.hpp>


9.9.b: Linking New Files to Deprecated Commands
On Unix systems, references in your source code of the form
 
	# include <CppAD/name.h>
will refer to the older versions of CppAD unless you perform the following steps (this only needs to be done once, not for every install):
     cd PrefixDir/include
     sudo mv CppAD CppAD.old
     sudo ln -s cppad CppAD
where 2.1.f: PrefixDir is the prefix directory corresponding to your 2.1: Unix installation . This will link from the deprecated commands to the commands that should be used:
Deprecated    Should Use    Documentation
CppAD/CheckNumericType.h    cppad/check_numeric_type.hpp    6.6: CheckNumericType
CppAD/CheckSimpleVector.h    cppad/check_simple_vector.hpp    6.8: CheckSimpleVector
CppAD/CppAD.h    cppad/cppad.hpp    : CppAD
CppAD/CppAD_vector.h    cppad/vector.hpp    6.23: CppAD_vector
CppAD/ErrorHandler.h    cppad/error_handler.hpp    6.1: ErrorHandler
CppAD/LuFactor.h    cppad/lu_factor.hpp    6.12.2: LuFactor
CppAD/LuInvert.h    cppad/lu_invert.hpp    6.12.3: LuInvert
CppAD/LuSolve.h    cppad/lu_solve.hpp    6.12.1: LuSolve
CppAD/NearEqual.h    cppad/near_equal.hpp    6.2: NearEqual
CppAD/OdeErrControl.h    cppad/ode_err_control.hpp    6.17: OdeErrControl
CppAD/OdeGear.h    cppad/ode_gear.hpp    6.18: OdeGear
CppAD/OdeGearControl.h    cppad/ode_gear_control.hpp    6.19: OdeGearControl
CppAD/Poly.h    cppad/poly.hpp    6.11: Poly
CppAD/RombergMul.h    cppad/romberg_mul.hpp    6.14: RombergMul
CppAD/RombergOne.h    cppad/romberg_one.hpp    6.13: RombergOne
CppAD/Rosen34.h    cppad/rosen_34.hpp    6.16: Rosen34
CppAD/Runge45.h    cppad/runge_45.hpp    6.15: Runge45
CppAD/SpeedTest.h    cppad/speed_test.hpp    6.4: SpeedTest
CppAD/TrackNewDel.h    cppad/track_new_del.hpp    6.24: TrackNewDel

Input File: omh/include_deprecated.omh
9.10: Your License for the CppAD Software


Common Public License Version 1.0

THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS COMMON PUBLIC
LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM
CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT.

1. DEFINITIONS

"Contribution" means:

    a) in the case of the initial Contributor, the initial code and
documentation distributed under this Agreement, and

    b) in the case of each subsequent Contributor:

    i) changes to the Program, and

    ii) additions to the Program;

    where such changes and/or additions to the Program originate from and are
distributed by that particular Contributor. A Contribution 'originates' from a
Contributor if it was added to the Program by such Contributor itself or anyone
acting on such Contributor's behalf. Contributions do not include additions to
the Program which: (i) are separate modules of software distributed in
conjunction with the Program under their own license agreement, and (ii) are not
derivative works of the Program.

"Contributor" means any person or entity that distributes the Program.

"Licensed Patents " mean patent claims licensable by a Contributor which are
necessarily infringed by the use or sale of its Contribution alone or when
combined with the Program.

"Program" means the Contributions distributed in accordance with this Agreement.

"Recipient" means anyone who receives the Program under this Agreement,
including all Contributors.

2. GRANT OF RIGHTS

    a) Subject to the terms of this Agreement, each Contributor hereby grants
Recipient a non-exclusive, worldwide, royalty-free copyright license to
reproduce, prepare derivative works of, publicly display, publicly perform,
distribute and sublicense the Contribution of such Contributor, if any, and such
derivative works, in source code and object code form.

    b) Subject to the terms of this Agreement, each Contributor hereby grants
Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed
Patents to make, use, sell, offer to sell, import and otherwise transfer the
Contribution of such Contributor, if any, in source code and object code form.
This patent license shall apply to the combination of the Contribution and the
Program if, at the time the Contribution is added by the Contributor, such
addition of the Contribution causes such combination to be covered by the
Licensed Patents. The patent license shall not apply to any other combinations
which include the Contribution. No hardware per se is licensed hereunder.

    c) Recipient understands that although each Contributor grants the licenses
to its Contributions set forth herein, no assurances are provided by any
Contributor that the Program does not infringe the patent or other intellectual
property rights of any other entity. Each Contributor disclaims any liability to
Recipient for claims brought by any other entity based on infringement of
intellectual property rights or otherwise. As a condition to exercising the
rights and licenses granted hereunder, each Recipient hereby assumes sole
responsibility to secure any other intellectual property rights needed, if any.
For example, if a third party patent license is required to allow Recipient to
distribute the Program, it is Recipient's responsibility to acquire that license
before distributing the Program.

    d) Each Contributor represents that to its knowledge it has sufficient
copyright rights in its Contribution, if any, to grant the copyright license set
forth in this Agreement.

3. REQUIREMENTS

A Contributor may choose to distribute the Program in object code form under its
own license agreement, provided that:

    a) it complies with the terms and conditions of this Agreement; and

    b) its license agreement:

    i) effectively disclaims on behalf of all Contributors all warranties and
conditions, express and implied, including warranties or conditions of title and
non-infringement, and implied warranties or conditions of merchantability and
fitness for a particular purpose;

    ii) effectively excludes on behalf of all Contributors all liability for
damages, including direct, indirect, special, incidental and consequential
damages, such as lost profits;

    iii) states that any provisions which differ from this Agreement are offered
by that Contributor alone and not by any other party; and

    iv) states that source code for the Program is available from such
Contributor, and informs licensees how to obtain it in a reasonable manner on or
through a medium customarily used for software exchange. 

When the Program is made available in source code form:

    a) it must be made available under this Agreement; and

    b) a copy of this Agreement must be included with each copy of the Program. 

Contributors may not remove or alter any copyright notices contained within the
Program.

Each Contributor must identify itself as the originator of its Contribution, if
any, in a manner that reasonably allows subsequent Recipients to identify the
originator of the Contribution.

4. COMMERCIAL DISTRIBUTION

Commercial distributors of software may accept certain responsibilities with
respect to end users, business partners and the like. While this license is
intended to facilitate the commercial use of the Program, the Contributor who
includes the Program in a commercial product offering should do so in a manner
which does not create potential liability for other Contributors. Therefore, if
a Contributor includes the Program in a commercial product offering, such
Contributor ("Commercial Contributor") hereby agrees to defend and indemnify
every other Contributor ("Indemnified Contributor") against any losses, damages
and costs (collectively "Losses") arising from claims, lawsuits and other legal
actions brought by a third party against the Indemnified Contributor to the
extent caused by the acts or omissions of such Commercial Contributor in
connection with its distribution of the Program in a commercial product
offering. The obligations in this section do not apply to any claims or Losses
relating to any actual or alleged intellectual property infringement. In order
to qualify, an Indemnified Contributor must: a) promptly notify the Commercial
Contributor in writing of such claim, and b) allow the Commercial Contributor to
control, and cooperate with the Commercial Contributor in, the defense and any
related settlement negotiations. The Indemnified Contributor may participate in
any such claim at its own expense.

For example, a Contributor might include the Program in a commercial product
offering, Product X. That Contributor is then a Commercial Contributor. If that
Commercial Contributor then makes performance claims, or offers warranties
related to Product X, those performance claims and warranties are such
Commercial Contributor's responsibility alone. Under this section, the
Commercial Contributor would have to defend claims against the other
Contributors related to those performance claims and warranties, and if a court
requires any other Contributor to pay any damages as a result, the Commercial
Contributor must pay those damages.

5. NO WARRANTY

EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR
IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE,
NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each
Recipient is solely responsible for determining the appropriateness of using and
distributing the Program and assumes all risks associated with its exercise of
rights under this Agreement, including but not limited to the risks and costs of
program errors, compliance with applicable laws, damage to or loss of data,
programs or equipment, and unavailability or interruption of operations.

6. DISCLAIMER OF LIABILITY

EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY
CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST
PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS
GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

7. GENERAL

If any provision of this Agreement is invalid or unenforceable under applicable
law, it shall not affect the validity or enforceability of the remainder of the
terms of this Agreement, and without further action by the parties hereto, such
provision shall be reformed to the minimum extent necessary to make such
provision valid and enforceable.

If Recipient institutes patent litigation against a Contributor with respect to
a patent applicable to software (including a cross-claim or counterclaim in a
lawsuit), then any patent licenses granted by that Contributor to such Recipient
under this Agreement shall terminate as of the date such litigation is filed. In
addition, if Recipient institutes patent litigation against any entity
(including a cross-claim or counterclaim in a lawsuit) alleging that the Program
itself (excluding combinations of the Program with other software or hardware)
infringes such Recipient's patent(s), then such Recipient's rights granted under
Section 2(b) shall terminate as of the date such litigation is filed.

All Recipient's rights under this Agreement shall terminate if it fails to
comply with any of the material terms or conditions of this Agreement and does
not cure such failure in a reasonable period of time after becoming aware of
such noncompliance. If all Recipient's rights under this Agreement terminate,
Recipient agrees to cease use and distribution of the Program as soon as
reasonably practicable. However, Recipient's obligations under this Agreement
and any licenses granted by Recipient relating to the Program shall continue and
survive.

Everyone is permitted to copy and distribute copies of this Agreement, but in
order to avoid inconsistency the Agreement is copyrighted and may only be
modified in the following manner. The Agreement Steward reserves the right to
publish new versions (including revisions) of this Agreement from time to time.
No one other than the Agreement Steward has the right to modify this Agreement.
IBM is the initial Agreement Steward. IBM may assign the responsibility to serve
as the Agreement Steward to a suitable separate entity. Each new version of the
Agreement will be given a distinguishing version number. The Program (including
Contributions) may always be distributed subject to the version of the Agreement
under which it was received. In addition, after a new version of the Agreement
is published, Contributor may elect to distribute the Program (including its
Contributions) under the new version. Except as expressly stated in Sections
2(a) and 2(b) above, Recipient receives no rights or licenses to the
intellectual property of any Contributor under this Agreement, whether
expressly, by implication, estoppel or otherwise. All rights in the Program not
expressly granted under this Agreement are reserved.

This Agreement is governed by the laws of the State of New York and the
intellectual property laws of the United States of America. No party to this
Agreement will bring a legal action under this Agreement more than one year
after the cause of action arose. Each party waives its rights to a jury trial in
any resulting litigation.


Input File: omh/license.omh
10: Alphabetic Listing of Cross Reference Tags
A
5.4: abort_recording
Abort Recording of an Operation Sequence
5.4.1: abort_recording.cpp
Abort Current Recording: Example and Test
4.4.3.1: abs
AD Absolute Value Function
4.4.3.1.1: Abs.cpp
AD Absolute Value Function: Example and Test
4.4.2.1: Acos.cpp
The AD acos Function: Example and Test
9.3.1.7: AcosForward
Arccosine Function Forward Taylor Polynomial Theory
9.3.2.7: AcosReverse
Arccosine Function Reverse Mode Theory
4: AD
AD Objects
4.4.1.3: ad_binary
AD Binary Arithmetic Operators
4.2: ad_copy
AD Copy Constructor and Assignment Operator
4.4.1.3.1: Add.cpp
AD Binary Addition: Example and Test
4.4.1.4.1: AddEq.cpp
AD Computed Assignment Addition: Example and Test
5: ADFun
ADFun Objects
9.2.4.2: adolc_det_lu.cpp
Adolc Speed: Gradient of Determinant Using Lu Factorization
9.2.4.1: adolc_det_minor.cpp
Adolc Speed: Gradient of Determinant by Minor Expansion
9.2.4.3: adolc_ode.cpp
Adolc Speed: Ode
9.2.4.4: adolc_poly.cpp
Adolc Speed: Second Derivative of a Polynomial
9.2.4.5: adolc_sparse_hessian.cpp
Adolc Speed: Sparse Hessian
4.4: ADValued
AD Valued Operations and Functions
9: Appendix
Appendix
4.4.1: Arithmetic
AD Arithmetic Operators and Computed Assignments
4.4.2.2: Asin.cpp
The AD asin Function: Example and Test
9.3.1.6: AsinForward
Arcsine Function Forward Taylor Polynomial Theory
9.3.2.6: AsinReverse
Arcsine Function Reverse Mode Theory
4.4.2.3: Atan.cpp
The AD atan Function: Example and Test
4.4.3.2: atan2
AD Two Argument Inverse Tangent Function
4.4.3.2.1: Atan2.cpp
The AD atan2 Function: Example and Test
9.3.1.5: AtanForward
Arctangent Function Forward Taylor Polynomial Theory
9.3.2.5: AtanReverse
Arctangent Function Reverse Mode Theory
B
4.7.2: base_adolc.hpp
Enable use of AD<Base> where Base is Adolc's adouble Type
4.7.1: base_complex.hpp
Enable use of AD<Base> where Base is std::complex<double>
4.7: base_require
AD<Base> Requirements for Base Type
6.20: BenderQuad
Computing Jacobian and Hessian of Bender's Reduced Objective
6.20.1: BenderQuad.cpp
BenderQuad: Example and Test
9.5: Bib
Bibliography
4.5.3: BoolFun
AD Boolean Functions
4.5.3.1: BoolFun.cpp
AD Boolean Functions: Example and Test
4.5: BoolValued
Bool Valued Operations and Functions with AD Arguments
9.6: Bugs
Known Bugs and Problems Using CppAD
C
5.6.1.6: capacity_taylor
Controlling taylor_ Coefficients Memory Allocation
6.6: CheckNumericType
Check NumericType Class Concept
6.6.1: CheckNumericType.cpp
The CheckNumericType Function: Example and Test
6.8: CheckSimpleVector
Check Simple Vector Concept
6.8.1: CheckSimpleVector.cpp
The CheckSimpleVector Function: Example and Test
4.5.1: Compare
AD Binary Comparison Operators
4.5.1.1: Compare.cpp
AD Binary Comparison Operators: Example and Test
5.6.1.5: CompareChange
Comparison Changes During Zero Order Forward Mode
5.6.1.5.1: CompareChange.cpp
CompareChange and Re-Tape: Example and Test
4.7.1.1: ComplexPoly.cpp
Complex Polynomial: Example and Test
4.4.1.4: compute_assign
AD Computed Assignment Operators
4.4.4: CondExp
AD Conditional Expressions
4.4.4.1: CondExp.cpp
Conditional Expressions: Example and Test
4.3: Convert
Conversion and Printing of AD Objects
4.2.1: CopyAD.cpp
AD Copy Constructor: Example and Test
4.2.2: CopyBase.cpp
AD Constructor From Base Type: Example and Test
4.4.2.4: Cos.cpp
The AD cos Function: Example and Test
4.4.2.5: Cosh.cpp
The AD cosh Function: Example and Test
: CppAD
cppad-20090131.0: A Package for Differentiation of C++ Algorithms
6.1.2: cppad_assert
CppAD Assertions During Execution
9.2.5.2: cppad_det_lu.cpp
CppAD Speed: Gradient of Determinant Using Lu Factorization
9.2.5.1: cppad_det_minor.cpp
CppAD Speed: Gradient of Determinant by Minor Expansion
9.2.5.3: cppad_ode.cpp
CppAD Speed: Gradient of Ode Solution
9.2.5.4: cppad_poly.cpp
CppAD Speed: Second Derivative of a Polynomial
9.2.5.5: cppad_sparse_hessian.cpp
CppAD Speed: Sparse Hessian
6.23: CppAD_vector
The CppAD::vector Template Class
6.23.1: CppAD_vector.cpp
CppAD::vector Template Class: Example and Test
D
4.1: Default
AD Default Constructor
4.1.1: Default.cpp
Default AD Constructor: Example and Test
5.3: Dependent
Stop Recording and Store Operation Sequence
9.2.2.5: det_33
Check Determinant of 3 by 3 matrix
9.2.2.5.1: det_33.hpp
Source: det_33
9.2.2.4: det_by_lu
Determinant Using Expansion by Lu Factorization
9.2.2.4.1: det_by_lu.cpp
Determinant Using Lu Factorization: Example and Test
9.2.2.4.2: det_by_lu.hpp
Source: det_by_lu
9.2.2.3: det_by_minor
Determinant Using Expansion by Minors
9.2.2.3.1: det_by_minor.cpp
Determinant Using Expansion by Minors: Example and Test
9.2.2.3.2: det_by_minor.hpp
Source: det_by_minor
9.2.2.6: det_grad_33
Check Gradient of Determinant of 3 by 3 matrix
9.2.2.6.1: det_grad_33.hpp
Source: det_grad_33
9.2.2.2: det_of_minor
Determinant of a Minor
9.2.2.2.1: det_of_minor.cpp
Determinant of a Minor: Example and Test
9.2.2.2.2: det_of_minor.hpp
Source: det_of_minor
4.4.5: Discrete
Discrete AD Functions
4.4.1.3.4: Div.cpp
AD Binary Division: Example and Test
4.4.1.4.4: DivEq.cpp
AD Computed Assignment Division: Example and Test
9.2.3.2: double_det_lu.cpp
Double Speed: Determinant Using Lu Factorization
9.2.3.1: double_det_minor.cpp
Double Speed: Determinant by Minor Expansion
9.2.3.3: double_ode.cpp
Double Speed: Ode Solution
9.2.3.4: double_poly.cpp
Double Speed: Evaluate a Polynomial
9.2.3.5: double_sparse_hessian.cpp
Double Speed: Sparse Hessian
5.7: Drivers
First and Second Derivatives: Easy Drivers
E
4.2.3: Eq.cpp
AD Assignment Operator: Example and Test
4.5.5: EqualOpSeq
Check if Equal and Correspond to Same Operation Sequence
4.5.5.1: EqualOpSeq.cpp
EqualOpSeq: Example and Test
4.4.3.3: erf
The AD Error Function
4.4.3.3.1: Erf.cpp
The AD erf Function: Example and Test
6.1: ErrorHandler
Replacing the CppAD Error Handler
6.1.1: ErrorHandler.cpp
Replacing The CppAD Error Handler: Example and Test
8: Example
Examples
8.2.1: Example.cpp
Program That Runs the CppAD Examples
5.9.1.1: example_a11c.cpp
A Simple Parallel Loop
8.2: ExampleUtility
Utility Routines used by CppAD Examples
4.4.2.6: Exp.cpp
The AD exp Function: Example and Test
3.2: exp_2
Second Order Exponential Approximation
3.2.2: exp_2.cpp
exp_2: Test
3.2.1: exp_2.hpp
exp_2: Implementation
3.2.8: exp_2_cppad
exp_2: CppAD Forward and Reverse Sweeps
3.2.3: exp_2_for0
exp_2: Operation Sequence and Zero Order Forward Mode
3.2.3.1: exp_2_for0.cpp
exp_2: Verify Zero Order Forward Sweep
3.2.4: exp_2_for1
exp_2: First Order Forward Mode
3.2.4.1: exp_2_for1.cpp
exp_2: Verify First Order Forward Sweep
3.2.6: exp_2_for2
exp_2: Second Order Forward Mode
3.2.6.1: exp_2_for2.cpp
exp_2: Verify Second Order Forward Sweep
3.2.5: exp_2_rev1
exp_2: First Order Reverse Mode
3.2.5.1: exp_2_rev1.cpp
exp_2: Verify First Order Reverse Sweep
3.2.7: exp_2_rev2
exp_2: Second Order Reverse Mode
3.2.7.1: exp_2_rev2.cpp
exp_2: Verify Second Order Reverse Sweep
3.4: exp_apx_main.cpp
Run the exp_2 and exp_eps Tests
3.3: exp_eps
An Epsilon Accurate Exponential Approximation
3.3.2: exp_eps.cpp
exp_eps: Test of exp_eps
3.3.1: exp_eps.hpp
exp_eps: Implementation
3.3.8: exp_eps_cppad
exp_eps: CppAD Forward and Reverse Sweeps
3.3.3: exp_eps_for0
exp_eps: Operation Sequence and Zero Order Forward Sweep
3.3.3.1: exp_eps_for0.cpp
exp_eps: Verify Zero Order Forward Sweep
3.3.4: exp_eps_for1
exp_eps: First Order Forward Sweep
3.3.4.1: exp_eps_for1.cpp
exp_eps: Verify First Order Forward Sweep
3.3.6: exp_eps_for2
exp_eps: Second Order Forward Mode
3.3.6.1: exp_eps_for2.cpp
exp_eps: Verify Second Order Forward Sweep
3.3.5: exp_eps_rev1
exp_eps: First Order Reverse Sweep
3.3.5.1: exp_eps_rev1.cpp
exp_eps: Verify First Order Reverse Sweep
3.3.7: exp_eps_rev2
exp_eps: Second Order Reverse Sweep
3.3.7.1: exp_eps_rev2.cpp
exp_eps: Verify Second Order Reverse Sweep
9.3.1.1: ExpForward
Exponential Function Forward Taylor Polynomial Theory
9.3.2.1: ExpReverse
Exponential Function Reverse Mode Theory
F
9.2.6.2: fadbad_det_lu.cpp
Fadbad Speed: Gradient of Determinant Using Lu Factorization
9.2.6.1: fadbad_det_minor.cpp
Fadbad Speed: Gradient of Determinant by Minor Expansion
9.2.6.3: fadbad_ode.cpp
Fadbad Speed: Ode
9.2.6.4: fadbad_poly.cpp
Fadbad Speed: Second Derivative of a Polynomial
9.2.6.5: fadbad_sparse_hessian.cpp
Fadbad Speed: Sparse Hessian
9.1: Faq
Frequently Asked Questions and Answers
5.7.2: ForOne
First Order Partial Derivative: Driver Routine
5.7.2.1: ForOne.cpp
First Order Partial Driver: Example and Test
5.6.3.1: ForSparseJac
Jacobian Sparsity Pattern: Forward Mode
5.6.3.1.1: ForSparseJac.cpp
Forward Mode Jacobian Sparsity: Example and Test
5.7.5: ForTwo
Forward Mode Second Partial Derivative Driver
5.7.5.1: ForTwo.cpp
Subset of Second Order Partials: Example and Test
5.6.1: Forward
Forward Mode
5.6.1.7: Forward.cpp
Forward Mode: Example and Test
5.6.1.3: ForwardAny
Any Order Forward Mode
5.6.1.2: ForwardOne
First Order Forward Mode: Derivative Values
9.3.1: ForwardTheory
The Theory of Forward Mode
5.6.1.1: ForwardZero
Zero Order Forward Mode: Function Values
5.8: FunCheck
Check an ADFun Sequence of Operations
5.8.1: FunCheck.cpp
ADFun Check and Re-Tape: Example and Test
5.2: FunConstruct
Construct an ADFun Object and Stop Recording
5.10: FunDeprecated
ADFun Object Deprecated Member Functions
5.6: FunEval
Evaluate ADFun Functions, Derivatives, and Sparsity Patterns
G
8.1: General
General Examples
3.1: get_started.cpp
A Simple Program Using CppAD to Compute Derivatives
9.4: glossary
Glossary
H
5.7.4.2: HesLagrangian.cpp
Hessian of Lagrangian and ADFun Default Constructor: Example and Test
8.1.6: HesLuDet.cpp
Gradient of Determinant Using LU Factorization: Example and Test
8.1.5: HesMinorDet.cpp
Gradient of Determinant Using Expansion by Minors: Example and Test
5.7.4: Hessian
Hessian: Easy Driver
5.7.4.1: Hessian.cpp
Hessian: Example and Test
5.6.2.2.2: HesTimesDir.cpp
Hessian Times Direction: Example and Test
I
9.9: include_deprecated
Deprecated Include Files
5.1: Independent
Declare Independent Variables and Start Recording
5.1.1: Independent.cpp
Independent and ADFun Constructor: Example and Test
2: Install
CppAD Download, Test, and Installation Instructions
2.1: InstallUnix
Unix Download, Test and Installation
2.2: InstallWindows
Windows Download and Test
4.3.2: Integer
Convert From AD to Integer
4.3.2.1: Integer.cpp
Convert From AD to Integer: Example and Test
8.1.2: Interface2C.cpp
Interfacing to C: Example and Test
4.4.5.2: interp_onetape.cpp
Interpolation Without Retaping: Example and Test
4.4.5.3: interp_retape.cpp
Interpolation With Retaping: Example and Test
3: Introduction
An Introduction by Example to Algorithmic Differentiation
8.1.1: ipopt_cppad_nlp
Nonlinear Programming Using the CppAD Interface to Ipopt
8.1.1.3: ipopt_cppad_ode
Example Simultaneous Solution of Forward and Inverse Problem
8.1.1.3.5: ipopt_cppad_ode.cpp
ipopt_cppad_nlp ODE Example Source Code
8.1.1.3.1: ipopt_cppad_ode_forward
An ODE Forward Problem Example
8.1.1.3.2: ipopt_cppad_ode_inverse
An ODE Inverse Problem Example
8.1.1.3.4: ipopt_cppad_ode_represent
ipopt_cppad_nlp ODE Problem Representation
8.1.1.3.3: ipopt_cppad_ode_simulate
Simulating ODE Measurement Values
8.1.1.2: ipopt_cppad_simple.cpp
Nonlinear Programming Using CppAD and Ipopt: Example and Test
8.1.1.1: ipopt_cppad_windows
Linking the CppAD Interface to Ipopt in Visual Studio 9.0
J
8.1.4: JacLuDet.cpp
Gradient of Determinant Using Lu Factorization: Example and Test
8.1.3: JacMinorDet.cpp
Gradient of Determinant Using Expansion by Minors: Example and Test
5.7.1: Jacobian
Jacobian: Driver Routine
5.7.1.1: Jacobian.cpp
Jacobian: Example and Test
L
6: library
The CppAD General Purpose Library
9.10: License
Your License for the CppAD Software
9.2.1.1: link_det_lu
Speed Testing Gradient of Determinant Using Lu Factorization
9.2.1.2: link_det_minor
Speed Testing Gradient of Determinant by Minor Expansion
9.2.1.5: link_ode
Speed Testing Gradient of Ode Solution
9.2.1.3: link_poly
Speed Testing Second Derivative of a Polynomial
9.2.1.4: link_sparse_hessian
Speed Testing Sparse Hessian
8.3: ListAllExamples
List of All the CppAD Examples
4.4.2.7: Log.cpp
The AD log Function: Example and Test
4.4.2.8: Log10.cpp
The AD log10 Function: Example and Test
9.3.1.2: LogForward
Logarithm Function Forward Taylor Polynomial Theory
9.3.2.2: LogReverse
Logarithm Function Reverse Mode Theory
6.12.2.2: lu_factor.hpp
Source: LuFactor
6.12.3.2: lu_invert.hpp
Source: LuInvert
6.12.1.2: lu_solve.hpp
Source: LuSolve
6.12: LuDetAndSolve
Compute Determinants and Solve Equations by LU Factorization
6.12.2: LuFactor
LU Factorization of a Square Matrix
6.12.2.1: LuFactor.cpp
LuFactor: Example and Test
6.12.3: LuInvert
Invert an LU Factored Equation
6.12.3.1: LuInvert.cpp
LuInvert: Example and Test
6.21: LuRatio
LU Factorization of a Square Matrix and Stability Calculation
6.21.1: LuRatio.cpp
LuRatio: Example and Test
6.12.1: LuSolve
Compute Determinant and Solve Linear Equations
6.12.1.1: LuSolve.cpp
LuSolve With Complex Arguments: Example and Test
8.2.3: LuVecAD
Lu Factor and Solve With Recorded Pivoting
8.2.3.1: LuVecADOk.cpp
Lu Factor and Solve With Recorded Pivoting: Example and Test
M
4.4.3: MathOther
Other AD Math Functions
4.4.1.3.3: Mul.cpp
AD Binary Multiplication: Example and Test
8.1.11: mul_level
Using Multiple Levels of AD
8.1.11.1: mul_level.cpp
Multiple Tapes: Example and Test
4.7.2.1: mul_level_adolc.cpp
Using Adolc with Multiple Levels of Taping: Example and Test
4.4.1.4.3: MulEq.cpp
AD Computed Assignment Multiplication: Example and Test
5.9.1.2.1: multi_newton
Multi-Threaded Newton's Method Routine
5.9.1.2: multi_newton.cpp
Multi-Threaded Newton's Method Main Program
5.9.1.2.2: multi_newton.hpp
OpenMP Multi-Threading Newton's Method Source Code
N
6.9: nan
Obtain Nan and Determine if a Value is Nan
6.9.1: nan.cpp
nan: Example and Test
6.2.1: Near_Equal.cpp
NearEqual Function: Example and Test
6.2: NearEqual
Determine if Two Values Are Nearly Equal
4.5.2: NearEqualExt
Compare AD and Base Objects for Nearly Equal
4.5.2.1: NearEqualExt.cpp
Compare AD with Base Objects: Example and Test
4.7.1.2: not_complex_ad.cpp
Not Complex Differentiable: Example and Test
6.5: NumericType
Definition of a Numeric Type
6.5.1: NumericType.cpp
The NumericType: Example and Test
O
9.2.2.7: ode_evaluate
Evaluate a Function Defined in Terms of an ODE
9.2.2.7.1: ode_evaluate.cpp
ode_evaluate: Example and Test
9.2.2.7.2: ode_evaluate.hpp
Source: ode_evaluate
8.1.8: ode_taylor.cpp
Taylor's Ode Solver: An Example and Test
8.1.9: ode_taylor_adolc.cpp
Using Adolc with Taylor's Ode Solver: An Example and Test
6.17: OdeErrControl
An Error Controller for ODE Solvers
6.17.1: OdeErrControl.cpp
OdeErrControl: Example and Test
6.17.2: OdeErrMaxabs.cpp
OdeErrControl: Example and Test Using Maxabs Argument
6.18: OdeGear
An Arbitrary Order Gear Method
6.18.1: OdeGear.cpp
OdeGear: Example and Test
6.19: OdeGearControl
An Error Controller for Gear's Ode Solvers
6.19.1: OdeGearControl.cpp
OdeGearControl: Example and Test
8.1.7: OdeStiff.cpp
A Stiff Ode: Example and Test
5.9: omp_max_thread
OpenMP Maximum Thread Number
5.9.1: openmp_run.sh
Compile and Run the OpenMP Test
4.3.3: Output
AD Output Stream Operator
4.3.3.1: Output.cpp
AD Output Operator: Example and Test
P
4.5.4: ParVar
Is an AD Object a Parameter or Variable
4.5.4.1: ParVar.cpp
AD Parameter and Variable Functions: Example and Test
6.11: Poly
Evaluate a Polynomial or its Derivative
6.11.1: Poly.cpp
Polynomial Evaluation: Example and Test
6.11.2: poly.hpp
Source: Poly
4.4.3.4: pow
The AD Power Function
4.4.3.4.1: Pow.cpp
The AD Power Function: Example and Test
6.10: pow_int
The Integer Power Function
4.4.3.4.2: pow_int.cpp
The Pow Integer Exponent: Example and Test
7: preprocessor
Preprocessor Definitions Used by CppAD
4.3.4: PrintFor
Printing AD Values During Forward Mode
4.3.4.1: PrintFor.cpp
Printing During Forward Mode: Example and Test
R
5.6.2: Reverse
Reverse Mode
5.6.2.3: reverse_any
Any Order Reverse Mode
5.6.2.3.1: reverse_any.cpp
Any Order Reverse Mode: Example and Test
9.3.3: reverse_identity
An Important Reverse Mode Identity
5.6.2.1: reverse_one
First Order Reverse Mode
5.6.2.1.1: reverse_one.cpp
First Order Reverse Mode: Example and Test
5.6.2.2: reverse_two
Second Order Reverse Mode
5.6.2.2.1: reverse_two.cpp
Second Order Reverse Mode: Example and Test
9.3.2: ReverseTheory
The Theory of Reverse Mode
5.7.3: RevOne
First Order Derivative: Driver Routine
5.7.3.1: RevOne.cpp
First Order Derivative Driver: Example and Test
5.6.3.3: RevSparseHes
Hessian Sparsity Pattern: Reverse Mode
5.6.3.3.1: RevSparseHes.cpp
Reverse Mode Hessian Sparsity: Example and Test
5.6.3.2: RevSparseJac
Jacobian Sparsity Pattern: Reverse Mode
5.6.3.2.1: RevSparseJac.cpp
Reverse Mode Jacobian Sparsity: Example and Test
5.7.6: RevTwo
Reverse Mode Second Partial Derivative Driver
5.7.6.1: RevTwo.cpp
Second Partials Reverse Driver: Example and Test
6.14: RombergMul
Multi-dimensional Romberg Integration
6.14.1: RombergMul.cpp
Multi-dimensional Romberg Integration: Example and Test
6.13: RombergOne
One Dimensional Romberg Integration
6.13.1: RombergOne.cpp
One Dimensional Romberg Integration: Example and Test
6.16: Rosen34
A 3rd and 4th Order Rosenbrock ODE Solver
6.16.1: Rosen34.cpp
Rosen34: Example and Test
6.15: Runge45
An Embedded 4th and 5th Order Runge-Kutta ODE Solver
6.15.1: Runge45.cpp
Runge45: Example and Test
S
9.2.7.2: sacado_det_lu.cpp
Sacado Speed: Gradient of Determinant Using Lu Factorization
9.2.7.1: sacado_det_minor.cpp
Sacado Speed: Gradient of Determinant by Minor Expansion
9.2.7.3: sacado_ode.cpp
Sacado Speed: Gradient of Ode Solution
9.2.7.4: sacado_poly.cpp
Sacado Speed: Second Derivative of a Polynomial
9.2.7.5: sacado_sparse_hessian.cpp
Sacado Speed: Sparse Hessian
5.5: SeqProperty
ADFun Sequence Properties
5.5.1: SeqProperty.cpp
ADFun Sequence Properties: Example and Test
6.7: SimpleVector
Definition of a Simple Vector
6.7.1: SimpleVector.cpp
Simple Vector Template Class: Example and Test
4.4.2.9: Sin.cpp
The AD sin Function: Example and Test
9.3.1.4: SinCosForward
Trigonometric and Hyperbolic Sine and Cosine Forward Theory
9.3.2.4: SinCosReverse
Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
4.4.2.10: Sinh.cpp
The AD sinh Function: Example and Test
5.6.1.4: size_taylor
Number of Taylor Coefficients, Per Variable, Currently Stored
5.6.3: Sparse
Calculating Sparsity Patterns
9.2.2.8: sparse_evaluate
Evaluate a Function That Has a Sparse Hessian
9.2.2.8.1: sparse_evaluate.cpp
sparse_evaluate: Example and Test
9.2.2.8.2: sparse_evaluate.hpp
Source: sparse_evaluate
5.7.8: sparse_hessian
Sparse Hessian: Easy Driver
5.7.8.1: sparse_hessian.cpp
Sparse Hessian: Example and Test
5.7.7: sparse_jacobian
Sparse Jacobian: Easy Driver
5.7.7.1: sparse_jacobian.cpp
Sparse Jacobian: Example and Test
9.2: speed
Speed Test Routines
9.2.4: speed_adolc
Speed Test Derivatives Using Adolc
9.2.5: speed_cppad
Speed Test Derivatives Using CppAD
9.2.3: speed_double
Speed Test Functions in Double
8.2.2: speed_example.cpp
Program That Runs the Speed Examples
9.2.6: speed_fadbad
Speed Test Derivatives Using Fadbad
9.2.1: speed_main
Speed Testing Main Program
6.4.1: speed_program.cpp
Example Use of SpeedTest
9.2.7: speed_sacado
Speed Test Derivatives Using Sacado
6.3: speed_test
Run One Speed Test and Return Results
6.3.1: speed_test.cpp
speed_test: Example and Test
9.2.2: speed_utility
Speed Testing Utilities
6.4: SpeedTest
Run One Speed Test and Print Results
4.4.2.11: Sqrt.cpp
The AD sqrt Function: Example and Test
9.3.1.3: SqrtForward
Square Root Function Forward Taylor Polynomial Theory
9.3.2.3: SqrtReverse
Square Root Function Reverse Mode Theory
8.1.10: StackMachine.cpp
Example Differentiating a Stack Machine Interpreter
4.4.2: std_math_ad
AD Standard Math Unary Functions
6.22: std_math_unary
Float and Double Standard Math Unary Functions
4.4.1.3.2: Sub.cpp
AD Binary Subtraction: Example and Test
4.4.1.4.2: SubEq.cpp
AD Computed Assignment Subtraction: Example and Test
2.1.1: subversion
Using Subversion To Download Source Code
5.9.1.3: sum_i_inv.cpp
Sum of 1/i Main Program
T
4.4.2.12: Tan.cpp
The AD tan Function: Example and Test
4.4.2.13: Tanh.cpp
The AD tanh Function: Example and Test
4.4.5.1: TapeIndex.cpp
Taping Array Index Operation: Example and Test
8.4: test_vector
Choosing The Vector Testing Template Class
9.3: Theory
The Theory of Derivative Calculations
6.24: TrackNewDel
Routines That Track Use of New and Delete
6.24.1: TrackNewDel.cpp
Tracking Use of New and Delete: Example and Test
U
4.4.1.2: UnaryMinus
AD Unary Minus Operator
4.4.1.2.1: UnaryMinus.cpp
AD Unary Minus Operator: Example and Test
4.4.1.1: UnaryPlus
AD Unary Plus Operator
4.4.1.1.1: UnaryPlus.cpp
AD Unary Plus Operator: Example and Test
9.2.2.1: uniform_01
Simulate a [0,1] Uniform Random Variate
9.2.2.1.1: uniform_01.hpp
Source: uniform_01
V
4.3.1: Value
Convert From an AD Type to its Base Type
4.3.1.1: Value.cpp
Convert From AD to its Base Type: Example and Test
4.3.5: Var2Par
Convert an AD Variable to a Parameter
4.3.5.1: Var2Par.cpp
Convert an AD Variable to a Parameter: Example and Test
4.6: VecAD
AD Vectors that Record Index Operations
4.6.1: VecAD.cpp
AD Vectors that Record Index Operations: Example and Test
6.23.2: vectorBool.cpp
CppAD::vectorBool Class: Example and Test
W
9.8: whats_new
Changes and Additions to CppAD
9.8.7: whats_new_03
Changes and Additions to CppAD During 2003
9.8.6: whats_new_04
Changes and Additions to CppAD During 2004
9.8.5: whats_new_05
Changes and Additions to CppAD During 2005
9.8.4: whats_new_06
Changes and Additions to CppAD During 2006
9.8.3: whats_new_07
Changes and Additions to CppAD During 2007
9.8.2: whats_new_08
Changes and Additions to CppAD During 2008
9.8.1: whats_new_09
Changes and Additions to CppAD During 2009
9.7: WishList
The CppAD Wish List

11: Keyword Index
!=
     AD operator 4.5.1: AD Binary Comparison Operators
     example 4.5.1.1: AD Binary Comparison Operators: Example and Test
*
     AD example 4.4.1.3.3: AD Binary Multiplication: Example and Test
     binary operator 4.4.1.3: AD Binary Arithmetic Operators
     computed assignment 4.4.1.4: AD Computed Assignment Operators
*=
     AD example 4.4.1.4.3: AD Computed Assignment Multiplication: Example and Test
+
     AD example 4.4.1.3.1: AD Binary Addition: Example and Test
     AD unary operator 4.4.1.1: AD Unary Plus Operator
     binary operator 4.4.1.3: AD Binary Arithmetic Operators
     computed assignment 4.4.1.4: AD Computed Assignment Operators
+=
     AD example 4.4.1.4.1: AD Computed Assignment Addition: Example and Test
-
     AD example 4.4.1.3.2: AD Binary Subtraction: Example and Test
     AD unary operator 4.4.1.2: AD Unary Minus Operator
     binary operator 4.4.1.3: AD Binary Arithmetic Operators
     computed assignment 4.4.1.4: AD Computed Assignment Operators
-=
     AD example 4.4.1.4.2: AD Computed Assignment Subtraction: Example and Test
/
     AD example 4.4.1.3.4: AD Binary Division: Example and Test
     binary operator 4.4.1.3: AD Binary Arithmetic Operators
     computed assignment 4.4.1.4: AD Computed Assignment Operators
/=
     AD example 4.4.1.4.4: AD Computed Assignment Division: Example and Test
<
     AD operator 4.5.1: AD Binary Comparison Operators
     example 4.5.1.1: AD Binary Comparison Operators: Example and Test
<<
     AD example 4.3.3.1: AD Output Operator: Example and Test
     AD output 4.3.3: AD Output Stream Operator
<=
     AD operator 4.5.1: AD Binary Comparison Operators
     example 4.5.1.1: AD Binary Comparison Operators: Example and Test
==
     AD operator 4.5.1: AD Binary Comparison Operators
     example 4.5.1.1: AD Binary Comparison Operators: Example and Test
>
     AD operator 4.5.1: AD Binary Comparison Operators
     example 4.5.1.1: AD Binary Comparison Operators: Example and Test
>=
     AD operator 4.5.1: AD Binary Comparison Operators
     example 4.5.1.1: AD Binary Comparison Operators: Example and Test
[]
     CppAD vector 6.23.e: The CppAD::vector Template Class: Element Access
     vector 6.7.k: Definition of a Simple Vector: Element Access
A
A.1.1c
     OpenMP example 5.9.1.1: A Simple Parallel Loop
AD : cppad-20090131.0: A Package for Differentiation of C++ Algorithms
     arithmetic operator 4.4.1: AD Arithmetic Operators and Computed Assignments
     assignment 4.2: AD Copy Constructor and Assignment Operator
     binary compare operator 4.5.1: AD Binary Comparison Operators
     computed assignment 4.4.1: AD Arithmetic Operators and Computed Assignments
     convert from 4.3: Conversion and Printing of AD Objects
     convert to Base 4.3.1: Convert From an AD Type to its Base Type
     convert to integer 4.3.2: Convert From AD to Integer
     copy 4.2: AD Copy Constructor and Assignment Operator
     default construct 4.1.1: Default AD Constructor: Example and Test
     default construct 4.1: AD Default Constructor
     Ipopt 8.1.1: Nonlinear Programming Using the CppAD Interface to Ipopt
     introduction 3: An Introduction by Example to Algorithmic Differentiation
     level 9.4.c: Glossary: AD Levels Above Base
     multiple level 8.1.11.1: Multiple Tapes: Example and Test
     multiple level 8.1.11: Using Multiple Levels of AD
     object 4: AD Objects
     stream output 4.3.3: AD Output Stream Operator
     unary minus operator 4.4.1.2: AD Unary Minus Operator
     unary plus operator 4.4.1.1: AD Unary Plus Operator
ADFun
     CompareChange 5.6.1.5: Comparison Changes During Zero Order Forward Mode
     check 5.8: Check an ADFun Sequence of Operations
     construct 5.2: Construct an ADFun Object and Stop Recording
     Dependent deprecated 5.10.c: ADFun Object Deprecated Member Functions: Dependent
     Domain 5.5: ADFun Sequence Properties
     evaluate 5.6: Evaluate ADFun Functions, Derivatives, and Sparsity Patterns
     example 5.8.1: ADFun Check and Re-Tape: Example and Test
     Memory deprecated 5.10.e: ADFun Object Deprecated Member Functions: Memory
     OpenMP 5.2.h: Construct an ADFun Object and Stop Recording: OpenMP
     Order deprecated 5.10.d: ADFun Object Deprecated Member Functions: Order
     object 5: ADFun Objects
     operation sequence 5.3: Stop Recording and Store Operation Sequence
     Parameter 5.5: ADFun Sequence Properties
     Range 5.5: ADFun Sequence Properties
     Size deprecated 5.10.f: ADFun Object Deprecated Member Functions: Size
     size_var 5.5: ADFun Sequence Properties
     taylor_size deprecated 5.10.g: ADFun Object Deprecated Member Functions: taylor_size
     use_VecAD 5.5: ADFun Sequence Properties
Adolc
     adouble as Base 4.7.2: Enable use of AD<Base> where Base is Adolc's adouble Type
     multiple level 4.7.2.1: Using Adolc with Multiple Levels of Taping: Example and Test
     ODE 8.1.9: Using Adolc with Taylor's Ode Solver: An Example and Test
     unix 2.1.o: Unix Download, Test and Installation: AdolcDir
Algorithmic Differentiation
     introduction 3: An Introduction by Example to Algorithmic Differentiation
Automatic Differentiation
     introduction 3: An Introduction by Example to Algorithmic Differentiation
abort
     example 5.4.1: Abort Current Recording: Example and Test
     operation sequence 5.4: Abort Recording of an Operation Sequence
     recording 5.4.1: Abort Current Recording: Example and Test
above 9.4.c: Glossary: AD Levels Above Base
abs
     AD 4.4.3.1: AD Absolute Value Function
     example 4.4.3.1.1: AD Absolute Value Function: Example and Test
     float and double 6.22: Float and Double Standard Math Unary Functions
absolute
     AD value 4.4.3.1: AD Absolute Value Function
     difference 6.2: Determine if Two Values Are Nearly Equal
aclocal 9.8.4.db: Changes and Additions to CppAD During 2006: 01-08
acos
     AD 4.4.2: AD Standard Math Unary Functions
     AD example 4.4.2.1: The AD acos Function: Example and Test
     float and double 6.22: Float and Double Standard Math Unary Functions
     forward theory 9.3.1.7: Arccosine Function Forward Taylor Polynomial Theory
     reverse theory 9.3.2.7: Arccosine Function Reverse Mode Theory
active 9.4.j.a: Glossary: Tape.Active
ad 9.4.c: Glossary: AD Levels Above Base
   9.4.b: Glossary: AD of Base
   9.4.a: Glossary: AD Function
add
     *= example 4.4.1.4.3: AD Computed Assignment Multiplication: Example and Test
     += example 4.4.1.4.1: AD Computed Assignment Addition: Example and Test
     -= example 4.4.1.4.2: AD Computed Assignment Subtraction: Example and Test
     /= example 4.4.1.4.4: AD Computed Assignment Division: Example and Test
     AD example 4.4.1.3.1: AD Binary Addition: Example and Test
     binary operator 4.4.1.3: AD Binary Arithmetic Operators
     computed assignment 4.4.1.4: AD Computed Assignment Operators
adolc
     link_det_lu 9.2.4.2.b: Adolc Speed: Gradient of Determinant Using Lu Factorization: Implementation
     speed lu 9.2.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization
     speed minor 9.2.4.1: Adolc Speed: Gradient of Determinant by Minor Expansion
     speed polynomial 9.2.4.4: Adolc Speed: Second Derivative of a Polynomial
     speed sparse Hessian 9.2.4.5: Adolc Speed: Sparse Hessian
     speed test 9.2.4: Speed Test Derivatives Using Adolc
adouble
     as Base 4.7.2: Enable use of AD<Base> where Base is Adolc's adouble Type
algorithm
     example 3.3: An Epsilon Accurate Exponential Approximation
     example 3.2: Second Order Exponential Approximation
algorithmic differentiation : cppad-20090131.0: A Package for Differentiation of C++ Algorithms
any
     order reverse mode 5.6.2.3: Any Order Reverse Mode
arithmetic
     AD operator 4.4.1: AD Arithmetic Operators and Computed Assignments
array
     tape index operation 4.4.5.1: Taping Array Index Operation: Example and Test
asin
     AD 4.4.2: AD Standard Math Unary Functions
     AD example 4.4.2.2: The AD asin Function: Example and Test
     float and double 6.22: Float and Double Standard Math Unary Functions
     forward theory 9.3.1.6: Arcsine Function Forward Taylor Polynomial Theory
     reverse theory 9.3.2.6: Arcsine Function Reverse Mode Theory
assert
     error handler 6.1: Replacing the CppAD Error Handler
     error macro 6.1.2: CppAD Assertions During Execution
assign
     *= example 4.4.1.4.3: AD Computed Assignment Multiplication: Example and Test
     += example 4.4.1.4.1: AD Computed Assignment Addition: Example and Test
     -= example 4.4.1.4.2: AD Computed Assignment Subtraction: Example and Test
     /= example 4.4.1.4.4: AD Computed Assignment Division: Example and Test
     conditional 4.4.4: AD Conditional Expressions
assignment
     AD 4.2.3: AD Assignment Operator: Example and Test
     AD 4.2: AD Copy Constructor and Assignment Operator
     AD computed 4.4.1: AD Arithmetic Operators and Computed Assignments
     AD computed add example 4.4.1.4.1: AD Computed Assignment Addition: Example and Test
     AD computed divide example 4.4.1.4.4: AD Computed Assignment Division: Example and Test
     AD computed multiply example 4.4.1.4.3: AD Computed Assignment Multiplication: Example and Test
     AD computed subtract example 4.4.1.4.2: AD Computed Assignment Subtraction: Example and Test
     CppAD vector 6.23.d: The CppAD::vector Template Class: Assignment
     multiple 4.4.1.4.g: AD Computed Assignment Operators: Result
     operator 9.1.a: Frequently Asked Questions and Answers: Assignment and Independent
     operator 4.4.1.4: AD Computed Assignment Operators
     vector 6.7.g: Definition of a Simple Vector: Assignment
atan
     AD 4.4.2: AD Standard Math Unary Functions
     AD example 4.4.2.3: The AD atan Function: Example and Test
     float and double 6.22: Float and Double Standard Math Unary Functions
     forward theory 9.3.1.5: Arctangent Function Forward Taylor Polynomial Theory
     reverse theory 9.3.2.5: Arctangent Function Reverse Mode Theory
atan2 9.7.a: The CppAD Wish List: Atan2
     AD 4.4.3.2: AD Two Argument Inverse Tangent Function
     AD 4.4.3: Other AD Math Functions
     AD example 4.4.3.2.1: The AD atan2 Function: Example and Test
atomic 9.4.g.a: Glossary: Operation.Atomic
automatic differentiation : cppad-20090131.0: A Package for Differentiation of C++ Algorithms
B
Base
     Adolc's adouble 4.7.2: Enable use of AD<Base> where Base is Adolc's adouble Type
     convert to AD 4.2: AD Copy Constructor and Assignment Operator
     double complex 4.7.1: Enable use of AD<Base> where Base is std::complex<double>
     from AD 4.3.1: Convert From an AD Type to its Base Type
     require 4.7: AD<Base> Requirements for Base Type
     require 4.b: AD Objects: Base Type Requirements
BenderQuad 6.20: Computing Jacobian and Hessian of Bender's Reduced Objective
     example 6.20.1: BenderQuad: Example and Test
base 9.4.e: Glossary: Base Type
     9.4.d: Glossary: Base Function
     9.4.c: Glossary: AD Levels Above Base
     9.4.b: Glossary: AD of Base
     convert to AD 4.2.2: AD Constructor From Base Type: Example and Test
binary
     AD bool 4.5.3: AD Boolean Functions
     AD compare operator 4.5.1: AD Binary Comparison Operators
     operator 4.4.1.3: AD Binary Arithmetic Operators
bool
     AD function 4.5.3: AD Boolean Functions
     CppAD::vector 6.23.2: CppAD::vectorBool Class: Example and Test
boost
     unix 2.1.r: Unix Download, Test and Installation: BoostDir
bug
     gcc 3.4 9.6.a: Known Bugs and Problems Using CppAD: gcc 3.4.4 -O2
bugs
     reporting 9.1.b: Frequently Asked Questions and Answers: Bugs
     using CppAD 9.6: Known Bugs and Problems Using CppAD
C
C
     interface to 8.1.2: Interfacing to C: Example and Test
C++
     algorithm derivative : cppad-20090131.0: A Package for Differentiation of C++ Algorithms
     numerical template library 6: The CppAD General Purpose Library
CheckNumericType 6.6.1: The CheckNumericType Function: Example and Test
CheckSimpleVector 6.8.1: The CheckSimpleVector Function: Example and Test
CompareChange 9.1.c: Frequently Asked Questions and Answers: CompareChange
     ADFun 5.6.1.5: Comparison Changes During Zero Order Forward Mode
CondExp 9.7.c: The CppAD Wish List: CondExp
        4.4.4.1: Conditional Expressions: Example and Test
     Base require 4.7.e: AD<Base> Requirements for Base Type: CondExp
CPPAD_ASSERT_KNOWN 6.1.2.c.a: CppAD Assertions During Execution: Restriction.Known
CPPAD_ASSERT_UNKNOWN 6.1.2.c.b: CppAD Assertions During Execution: Restriction.Unknown
CPPAD_BOOL_BINARY 4.5.3.k: AD Boolean Functions: Create Binary
CPPAD_BOOL_UNARY 4.5.3.g: AD Boolean Functions: Create Unary
CPPAD_DISCRETE_FUNCTION 4.4.5.i: Discrete AD Functions: Create AD Version
CPPAD_TEST_VECTOR 9.1.j.a: Frequently Asked Questions and Answers: Namespace.Test Vector Preprocessor Symbol
                  8.4: Choosing The Vector Testing Template Class
CPPAD_TRACK_COUNT 6.24.m.a: Routines That Track Use of New and Delete: TrackCount.Macro
CPPAD_TRACK_DEL_VEC 6.24.k.a: Routines That Track Use of New and Delete: TrackDelVec.Macro
CPPAD_TRACK_EXTEND 6.24.l.a: Routines That Track Use of New and Delete: TrackExtend.Macro
CPPAD_TRACK_NEW_VEC 6.24.j.a: Routines That Track Use of New and Delete: TrackNewVec.Macro
CppAD : cppad-20090131.0: A Package for Differentiation of C++ Algorithms
     install windows 2.2: Windows Download and Test
     namespace f: cppad-20090131.0: A Package for Differentiation of C++ Algorithms: Namespace
     nonlinear programming 8.1.1: Nonlinear Programming Using the CppAD Interface to Ipopt
     OpenMP 5.9: OpenMP Maximum Thread Number
     preprocessor symbol e: cppad-20090131.0: A Package for Differentiation of C++ Algorithms: Preprocessor Symbols
     tar file 2.1.c.c: Unix Download, Test and Installation: Download.Unix Tar Files
     unix install 2.1: Unix Download, Test and Installation
     zip file 2.2.b: Windows Download and Test: Download
CppAD::vector
     example 6.23.1: CppAD::vector Template Class: Example and Test
CppAD::vectorBool
     example 6.23.2: CppAD::vectorBool Class: Example and Test
CppADCreateDiscrete
     deprecated 4.4.5.m: Discrete AD Functions: Deprecated
CppADTrackCount 6.24.m.b: Routines That Track Use of New and Delete: TrackCount.Deprecated
CppADTrackDelVec 6.24.k.b: Routines That Track Use of New and Delete: TrackDelVec.Deprecated
CppADTrackExtend 6.24.l.b: Routines That Track Use of New and Delete: TrackExtend.Deprecated
CppADTrackNewVec 6.24.j.b: Routines That Track Use of New and Delete: TrackNewVec.Deprecated
CppADvector
     deprecated 8.4.g: Choosing The Vector Testing Template Class: Deprecated
calculate
     forward mode 5.6.1.3: Any Order Forward Mode
capacity
     Forward 5.6.1.6: Controlling taylor_ Coefficients Memory Allocation
capacity_taylor 5.6.1.6: Controlling taylor_ Coefficients Memory Allocation
central difference 8.1.2: Interfacing to C: Example and Test
check
     ADFun 5.8: Check an ADFun Sequence of Operations
     determinant correct 9.2.2.6: Check Gradient of Determinant of 3 by 3 matrix
     determinant correct 9.2.2.5: Check Determinant of 3 by 3 matrix
     numeric 6.6: Check NumericType Class Concept
     simple vector 6.8: Check Simple Vector Concept
class
     simple vector 6.7: Definition of a Simple Vector
     template CppAD vector 6.23: The CppAD::vector Template Class
coefficient 9.4.k: Glossary: Taylor Coefficient
compare
     AD binary operator 4.5.1: AD Binary Comparison Operators
     AD example 4.5.1.1: AD Binary Comparison Operators: Example and Test
     change 5.6.1.5.1: CompareChange and Re-Tape: Example and Test
compile
     OpenMP example 5.9.1: Compile and Run the OpenMP Test
     unix flags 2.1.t: Unix Download, Test and Installation: CompilerFlags
complex
     double Base 4.7.1: Enable use of AD<Base> where Base is std::complex<double>
     faq 9.1.d: Frequently Asked Questions and Answers: Complex Types
     LuSolve 6.12.1.1: LuSolve With Complex Arguments: Example and Test
     polynomial 4.7.1.2: Not Complex Differentiable: Example and Test
     polynomial 4.7.1.1: Complex Polynomial: Example and Test
computed
     *= example 4.4.1.4.3: AD Computed Assignment Multiplication: Example and Test
     += example 4.4.1.4.1: AD Computed Assignment Addition: Example and Test
     -= example 4.4.1.4.2: AD Computed Assignment Subtraction: Example and Test
     /= example 4.4.1.4.4: AD Computed Assignment Division: Example and Test
     AD assignment 4.4.1: AD Arithmetic Operators and Computed Assignments
     AD assignment add example 4.4.1.4.1: AD Computed Assignment Addition: Example and Test
     AD assignment divide example 4.4.1.4.4: AD Computed Assignment Division: Example and Test
     AD assignment multiply example 4.4.1.4.3: AD Computed Assignment Multiplication: Example and Test
     AD assignment subtract example 4.4.1.4.2: AD Computed Assignment Subtraction: Example and Test
concept
     C++ 6.c: The CppAD General Purpose Library: C++ Concepts
     check numeric 6.6: Check NumericType Class Concept
     check simple vector 6.8: Check Simple Vector Concept
conditional
     expression 4.4.4: AD Conditional Expressions
configure 2.1.d: Unix Download, Test and Installation: Configure
     postfix directory 2.1.n: Unix Download, Test and Installation: PostfixDir
     prefix directory 2.1.f: Unix Download, Test and Installation: PrefixDir
construct
     AD default 4.1.1: Default AD Constructor: Example and Test
     ADFun 5.2: Construct an ADFun Object and Stop Recording
     default 4.1: AD Default Constructor
     from base type 4.2.2: AD Constructor From Base Type: Example and Test
constructor
     AD 4.2: AD Copy Constructor and Assignment Operator
     copy vector 6.7.e: Definition of a Simple Vector: Copy Constructor
     element 6.7.f: Definition of a Simple Vector: Element Constructor and Destructor
     numeric 6.5.c: Definition of a Numeric Type: Constructor From Integer
     numeric 6.5.b: Definition of a Numeric Type: Default Constructor
     numeric copy 6.5.d: Definition of a Numeric Type: Copy Constructor
     size vector 6.7.d: Definition of a Simple Vector: Sizing Constructor
     vector default 6.7.c: Definition of a Simple Vector: Default Constructor
control
     ODE error 6.17: An Error Controller for ODE Solvers
     Ode Gear 6.19: An Error Controller for Gear's Ode Solvers
convert
     AD to Base 4.3.1: Convert From an AD Type to its Base Type
     AD to integer 4.3.2: Convert From AD to Integer
     from AD 4.3: Conversion and Printing of AD Objects
     to AD 4.2: AD Copy Constructor and Assignment Operator
copy
     AD object 4.2.1: AD Copy Constructor: Example and Test
     numeric constructor 6.5.d: Definition of a Numeric Type: Copy Constructor
     vector constructor 6.7.e: Definition of a Simple Vector: Copy Constructor
correct
     determinant check 9.2.2.6: Check Gradient of Determinant of 3 by 3 matrix
     determinant check 9.2.2.5: Check Determinant of 3 by 3 matrix
cos
     AD 4.4.2: AD Standard Math Unary Functions
     AD example 4.4.2.4: The AD cos Function: Example and Test
     float and double 6.22: Float and Double Standard Math Unary Functions
     forward 9.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
     reverse 9.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
cosh
     AD 4.4.2: AD Standard Math Unary Functions
     AD example 4.4.2.5: The AD cosh Function: Example and Test
     float and double 6.22: Float and Double Standard Math Unary Functions
     forward 9.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
     reverse 9.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
cppad
     link_det_lu 9.2.5.2.b: CppAD Speed: Gradient of Determinant Using Lu Factorization: Implementation
     profile speed 2.1.k.c: Unix Download, Test and Installation: --with-Speed.profile
     speed lu 9.2.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization
     speed minor 9.2.5.1: CppAD Speed: Gradient of Determinant by Minor Expansion
     speed minor 9.2.3.1: Double Speed: Determinant by Minor Expansion
     speed ode gradient 9.2.5.3: CppAD Speed: Gradient of Ode Solution
     speed polynomial 9.2.5.4: CppAD Speed: Second Derivative of a Polynomial
     speed polynomial 9.2.3.4: Double Speed: Evaluate a Polynomial
     speed sparse Hessian 9.2.5.5: CppAD Speed: Sparse Hessian
     speed test 9.2.5: Speed Test Derivatives Using CppAD
     speed test 9.2.1: Speed Testing Main Program
     test speed 2.2.h: Windows Download and Test: CppAD Speed Test
     test speed 2.1.k.a: Unix Download, Test and Installation: --with-Speed.cppad
cppad.hpp
     include d: cppad-20090131.0: A Package for Differentiation of C++ Algorithms: Include File
cppad.spec 2.1.b: Unix Download, Test and Installation: RPM
D
Dependent 5.3: Stop Recording and Store Operation Sequence
     ADFun deprecated 5.10.c: ADFun Object Deprecated Member Functions: Dependent
     example 5.8.1: ADFun Check and Re-Tape: Example and Test
     OpenMP 5.2.h: Construct an ADFun Object and Stop Recording: OpenMP
Domain
     ADFun 5.5.1: ADFun Sequence Properties: Example and Test
     ADFun 5.5: ADFun Sequence Properties
default
     constructor 4.1: AD Default Constructor
     numeric constructor 6.5.b: Definition of a Numeric Type: Default Constructor
     vector constructor 6.7.c: Definition of a Simple Vector: Default Constructor
delete
     example 6.24.1: Tracking Use of New and Delete: Example and Test
     track 6.24: Routines That Track Use of New and Delete
dependent 9.4.j.d: Glossary: Tape.Dependent Variables
          9.4.g.c: Glossary: Operation.Dependent
deprecated
     CppADCreateDiscrete 4.4.5.m: Discrete AD Functions: Deprecated
     CppADvector 8.4.g: Choosing The Vector Testing Template Class: Deprecated
     Dependent ADFun 5.10.c: ADFun Object Deprecated Member Functions: Dependent
     include file 9.9: Deprecated Include Files
     Memory ADFun 5.10.e: ADFun Object Deprecated Member Functions: Memory
     Order ADFun 5.10.d: ADFun Object Deprecated Member Functions: Order
     Size ADFun 5.10.f: ADFun Object Deprecated Member Functions: Size
     taylor_size ADFun 5.10.g: ADFun Object Deprecated Member Functions: taylor_size
derivative
     directional abs 4.4.3.1.g: AD Absolute Value Function: Directional Derivative
     directional example 4.4.3.1.1: AD Absolute Value Function: Example and Test
     easy 5.7.3: First Order Derivative: Driver Routine
     example 5.7.3.1: First Order Derivative Driver: Example and Test
     first order driver 5.7.3: First Order Derivative: Driver Routine
     forward mode 5.6.1.3: Any Order Forward Mode
     polynomial template 6.11: Evaluate a Polynomial or its Derivative
     reverse mode 5.6.2.3: Any Order Reverse Mode
     reverse mode 5.6.2.2: Second Order Reverse Mode
     reverse mode 5.6.2.1: First Order Reverse Mode
destructor
     element 6.7.f: Definition of a Simple Vector: Element Constructor and Destructor
det_33 9.2.2.5: Check Determinant of 3 by 3 matrix
     source 9.2.2.5.1: Source: det_33
det_by_lu 9.2.2.4: Determinant Using Expansion by Lu Factorization
     source 9.2.2.4.2: Source: det_by_lu
det_by_minor
     source 9.2.2.3.2: Source: det_by_minor
det_grad_33 9.2.2.6: Check Gradient of Determinant of 3 by 3 matrix
     source 9.2.2.6.1: Source: det_grad_33
det_lu
     speed test 9.2.1.1: Speed Testing Gradient of Determinant Using Lu Factorization
det_minor
     speed test 9.2.1.2: Speed Testing Gradient of Determinant by Minor Expansion
det_of_minor 9.2.2.2: Determinant of a Minor
     example 9.2.2.2.1: Determinant of a Minor: Example and Test
     source 9.2.2.2.2: Source: det_of_minor
determinant 6.21.h.f: LU Factorization of A Square Matrix and Stability Calculation: LU.Determinant
            6.12.2.h.f: LU Factorization of A Square Matrix: LU.Determinant
            6.12: Compute Determinants and Solve Equations by LU Factorization
     by minors 9.2.2.4.1: Determinant Using Lu Factorization: Example and Test
     by minors 9.2.2.3.1: Determinant Using Expansion by Minors: Example and Test
     check correct 9.2.2.6: Check Gradient of Determinant of 3 by 3 matrix
     check correct 9.2.2.5: Check Determinant of 3 by 3 matrix
     Lu 8.2.3: Lu Factor and Solve with Recorded Pivoting
     Lu 6.12.1: Compute Determinant and Solve Linear Equations
     Lu factor 6.21: LU Factorization of A Square Matrix and Stability Calculation
     Lu factor 6.12.2: LU Factorization of A Square Matrix
     lu factor 9.2.2.4: Determinant Using Expansion by Lu Factorization
     matrix minor 9.2.2.2: Determinant of a Minor
     minor expansion 9.2.2.3: Determinant Using Expansion by Minors
difference
     absolute 6.2: Determine if Two Values Are Nearly Equal
     central 8.1.2: Interfacing to C: Example and Test
     relative 6.2: Determine if Two Values Are Nearly Equal
differential
     equation 6.18: An Arbitrary Order Gear Method
     equation 6.16: A 3rd and 4th Order Rosenbrock ODE Solver
     equation 6.15: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
     ODE error control 6.17: An Error Controller for ODE Solvers
     Ode Gear control 6.19: An Error Controller for Gear's Ode Solvers
dimension
     multi Romberg integration 6.14: Multi-dimensional Romberg Integration
direction
     times Hessian 5.6.2.2.2: Hessian Times Direction: Example and Test
directional
     derivative abs 4.4.3.1.g: AD Absolute Value Function: Directional Derivative
     derivative example 4.4.3.1.1: AD Absolute Value Function: Example and Test
directory
     configure postfix 2.1.n: Unix Download, Test and Installation: PostfixDir
     configure prefix 2.1.f: Unix Download, Test and Installation: PrefixDir
discrete
     AD function 4.4.5: Discrete AD Functions
disk
     tape 9.1.l: Frequently Asked Questions and Answers: Tape Storage: Disk or Memory
divide
     AD example 4.4.1.3.4: AD Binary Division: Example and Test
     binary operator 4.4.1.3: AD Binary Arithmetic Operators
     computed assignment 4.4.1.4: AD Computed Assignment Operators
documentation
     install 2.1.g: Unix Download, Test and Installation: --with-Documentation
double
     complex Base 4.7.1: Enable use of AD<Base> where Base is std::complex<double>
     convert to AD 4.2: AD Copy Constructor and Assignment Operator
     link_det_lu 9.2.3.2.b: Double Speed: Determinant Using Lu Factorization: Implementation
     speed lu 9.2.3.2.a: Double Speed: Determinant Using Lu Factorization: Specifications
     speed ode 9.2.3.3: Double Speed: Ode Solution
     speed sparse hessian 9.2.3.5: Double Speed: Sparse Hessian
     speed test 9.2.3: Speed Test Functions in Double
     test speed 2.2.i: Windows Download and Test: Double Speed Test
     test speed 2.1.k.b: Unix Download, Test and Installation: --with-Speed.double
download
     subversion 2.1.1: Using Subversion To Download Source Code
     unix 2.1.c: Unix Download, Test and Installation: Download
     windows 2.2.b: Windows Download and Test: Download
driver
     easy 5.7: First and Second Derivatives: Easy Drivers
     easy derivative 5.7.3: First Order Derivative: Driver Routine
     easy partial 5.7.6: Reverse Mode Second Partial Derivative Driver
     easy partial 5.7.5: Forward Mode Second Partial Derivative Driver
     easy partial 5.7.2: First Order Partial Derivative: Driver Routine
     first order derivative 5.7.3: First Order Derivative: Driver Routine
     first order partial 5.7.2: First Order Partial Derivative: Driver Routine
     Hessian 5.7.4: Hessian: Easy Driver
     Jacobian 5.7.1: Jacobian: Driver Routine
     second order partial 5.7.6: Reverse Mode Second Partial Derivative Driver
     second order partial 5.7.5: Forward Mode Second Partial Derivative Driver
E
EqualOpSeq 4.5.5: Check if Equal and Correspond to Same Operation Sequence
     Base require 4.7.f: AD<Base> Requirements for Base Type: EqualOpSeq
     example 4.5.5.1: EqualOpSeq: Example and Test
ErrorHandler 9.1.e: Frequently Asked Questions and Answers: Exceptions
easy
     derivative 5.7.3: First Order Derivative: Driver Routine
     driver 5.7: First and Second Derivatives: Easy Drivers
     partial 5.7.6: Reverse Mode Second Partial Derivative Driver
     partial 5.7.5: Forward Mode Second Partial Derivative Driver
     partial 5.7.2: First Order Partial Derivative: Driver Routine
efficient
     sparsity 9.4.i: Glossary: Sparsity Pattern
elementary 9.4.f: Glossary: Elementary Vector
equal
     near 6.2: Determine if Two Values Are Nearly Equal
     operation sequence 4.5.5: Check if Equal and Correspond to Same Operation Sequence
equation
     differential 6.18: An Arbitrary Order Gear Method
     differential 6.16: A 3rd and 4th Order Rosenbrock ODE Solver
     differential 6.15: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
     Lu factor 6.21: LU Factorization of A Square Matrix and Stability Calculation
     Lu factor 6.12.2: LU Factorization of A Square Matrix
     Lu invert 6.12.3: Invert an LU Factored Equation
     linear 6.12.1: Compute Determinant and Solve Linear Equations
     linear 6.12: Compute Determinants and Solve Equations by LU Factorization
     ODE error control 6.17: An Error Controller for ODE Solvers
     Ode Gear control 6.19: An Error Controller for Gear's Ode Solvers
     solve linear 8.2.3: Lu Factor and Solve with Recorded Pivoting
erf 9.8.2.g: Changes and Additions to CppAD During 2008: 11-20
     AD function 4.4.3.3: The AD Error Function
     example 4.4.3.3.1: The AD erf Function: Example and Test
error
     AD function 4.4.3.3: The AD Error Function
     assert macro 6.1.2: CppAD Assertions During Execution
     control ODE 6.17: An Error Controller for ODE Solvers
     Gear Ode 6.19: An Error Controller for Gear's Ode Solvers
     handler 6.1.1: Replacing The CppAD Error Handler: Example and Test
     handler 6.1: Replacing the CppAD Error Handler
evaluate
     ADFun 5.6: Evaluate ADFun Functions, Derivatives, and Sparsity Patterns
example 8: Examples
     AD acos 4.4.2.1: The AD acos Function: Example and Test
     AD add 4.4.1.3.1: AD Binary Addition: Example and Test
     AD asin 4.4.2.2: The AD asin Function: Example and Test
     AD assignment 4.2.3: AD Assignment Operator: Example and Test
     AD atan 4.4.2.3: The AD atan Function: Example and Test
     AD atan2 4.4.3.2.1: The AD atan2 Function: Example and Test
     AD bool 4.5.3.1: AD Boolean Functions: Example and Test
     AD compare 4.5.1.1: AD Binary Comparison Operators: Example and Test
     AD computed assignment add 4.4.1.4.1: AD Computed Assignment Addition: Example and Test
     AD computed assignment divide 4.4.1.4.4: AD Computed Assignment Division: Example and Test
     AD computed assignment multiply 4.4.1.4.3: AD Computed Assignment Multiplication: Example and Test
     AD computed assignment subtract 4.4.1.4.2: AD Computed Assignment Subtraction: Example and Test
     AD cos 4.4.2.4: The AD cos Function: Example and Test
     AD cosh 4.4.2.5: The AD cosh Function: Example and Test
     AD divide 4.4.1.3.4: AD Binary Division: Example and Test
     AD exp 4.4.2.6: The AD exp Function: Example and Test
     AD log 4.4.2.7: The AD log Function: Example and Test
     AD log10 4.4.2.8: The AD log10 Function: Example and Test
     AD multiply 4.4.1.3.3: AD Binary Multiplication: Example and Test
     AD output 4.3.3.1: AD Output Operator: Example and Test
     AD pow 4.4.3.4.1: The AD Power Function: Example and Test
     AD sin 4.4.2.9: The AD sin Function: Example and Test
     AD sinh 4.4.2.10: The AD sinh Function: Example and Test
     AD sqrt 4.4.2.11: The AD sqrt Function: Example and Test
     ADFun 5.8.1: ADFun Check and Re-Tape: Example and Test
     ADFun default constructor 5.7.4.2: Hessian of Lagrangian and ADFun Default Constructor: Example and Test
     abort 5.4.1: Abort Current Recording: Example and Test
     abs 4.4.3.1.1: AD Absolute Value Function: Example and Test
     algorithm 3.3: An Epsilon Accurate Exponential Approximation
     algorithm 3.2: Second Order Exponential Approximation
     BenderQuad 6.20.1: BenderQuad: Example and Test
     CompareChange 5.6.1.5.1: CompareChange and Re-Tape: Example and Test
     CondExp 4.4.4.1: Conditional Expressions: Example and Test
     CppAD::vector 6.23.1: CppAD::vector Template Class: Example and Test
     CppAD::vectorBool 6.23.2: CppAD::vectorBool Class: Example and Test
     check NumericType 6.6.1: The CheckNumericType Function: Example and Test
     check SimpleVector 6.8.1: The CheckSimpleVector Function: Example and Test
     compile OpenMP 5.9.1: Compile and Run the OpenMP Test
     complex 6.12.1.1: LuSolve With Complex Arguments: Example and Test
     complex polynomial 4.7.1.1: Complex Polynomial: Example and Test
     construct from base 4.2.2: AD Constructor From Base Type: Example and Test
     copy AD object 4.2.1: AD Copy Constructor: Example and Test
     Dependent 5.8.1: ADFun Check and Re-Tape: Example and Test
     Domain 5.5.1: ADFun Sequence Properties: Example and Test
     default AD construct 4.1.1: Default AD Constructor: Example and Test
     delete 6.24.1: Tracking Use of New and Delete: Example and Test
     derivative 5.7.3.1: First Order Derivative Driver: Example and Test
     det_of_minor 9.2.2.2.1: Determinant of a Minor: Example and Test
     determinant by minors 9.2.2.4.1: Determinant Using Lu Factorization: Example and Test
     determinant by minors 9.2.2.3.1: Determinant Using Expansion by Minors: Example and Test
     EqualOpSeq 4.5.5.1: EqualOpSeq: Example and Test
     erf 4.4.3.3.1: The AD erf Function: Example and Test
     error handler 6.1.1: Replacing The CppAD Error Handler: Example and Test
     Forward 5.6.1.7: Forward Mode: Example and Test
     FunCheck 5.8.1: ADFun Check and Re-Tape: Example and Test
     first order reverse 5.6.2.1.1: First Order Reverse Mode: Example and Test
     forward mode 3.3.6: exp_eps: Second Order Forward Mode
     forward mode 3.2.6: exp_2: Second Order Forward Mode
     forward mode 3.2.4: exp_2: First Order Forward Mode
     general 8.1: General Examples
     gradient 8.1.6: Gradient of Determinant Using LU Factorization: Example and Test
     gradient 8.1.5: Gradient of Determinant Using Expansion by Minors: Example and Test
     gradient 8.1.4: Gradient of Determinant Using Lu Factorization: Example and Test
     gradient 8.1.3: Gradient of Determinant Using Expansion by Minors: Example and Test
     Hessian 5.7.4.1: Hessian: Example and Test
     Hessian of Lagrangian 5.7.4.2: Hessian of Lagrangian and ADFun Default Constructor: Example and Test
     Independent 5.1.1: Independent and ADFun Constructor: Example and Test
     Integer 4.3.2.1: Convert From AD to Integer: Example and Test
     interpreter 8.1.10: Example Differentiating a Stack Machine Interpreter
     ipopt_cppad_nlp ode source 8.1.1.3.5: ipopt_cppad_nlp ODE Example Source Code
     Jacobian 5.7.1.1: Jacobian: Example and Test
     LU 8.1.6: Gradient of Determinant Using LU Factorization: Example and Test
     Lu 8.1.4: Gradient of Determinant Using Lu Factorization: Example and Test
     Lu record pivot 8.2.3.1: Lu Factor and Solve With Recorded Pivoting: Example and Test
     LuFactor 6.12.2.1: LuFactor: Example and Test
     LuInvert 6.12.3.1: LuInvert: Example and Test
     LuRatio 6.21.1: LuRatio: Example and Test
     LuSolve 6.12.1.1: LuSolve With Complex Arguments: Example and Test
     minors expansion 8.1.5: Gradient of Determinant Using Expansion by Minors: Example and Test
     minors expansion 8.1.3: Gradient of Determinant Using Expansion by Minors: Example and Test
     multi-thread 5.9.1.2.2: OpenMP Multi-Threading Newton's Method Source Code
     NearEqual 6.2.1: NearEqual Function: Example and Test
     NearEqualExt 4.5.2.1: Compare AD with Base Objects: Example and Test
     NumericType 6.5.1: The NumericType: Example and Test
     nan 6.9.1: nan: Example and Test
     new 6.24.1: Tracking Use of New and Delete: Example and Test
     nonlinear, programming 8.1.1.2: Nonlinear Programming Using CppAD and Ipopt: Example and Test
     not complex differentiable 4.7.1.2: Not Complex Differentiable: Example and Test
     ODE 8.1.8: Taylor's Ode Solver: An Example and Test
     OdeErrControl 6.17.2: OdeErrControl: Example and Test Using Maxabs Argument
     OdeErrControl 6.17.1: OdeErrControl: Example and Test
     OdeGear 6.18.1: OdeGear: Example and Test
     OdeGearControl 6.19.1: OdeGearControl: Example and Test
     OpenMP 5.9.1.2.2: OpenMP Multi-Threading Newton's Method Source Code
     OpenMP A.1.1c 5.9.1.1: A Simple Parallel Loop
     OpenMP Newton's method 5.9.1.2.1: Multi-Threaded Newton's Method Routine
     OpenMP program 5.9.1.3: Sum of 1/i Main Program
     OpenMP program 5.9.1.2: Multi-Threaded Newton's Method Main Program
     ode forward 8.1.1.3.1: An ODE Forward Problem Example
     ode inverse 8.1.1.3.2: An ODE Inverse Problem Example
     ode_evaluate 9.2.2.7.1: ode_evaluate: Example and test
     operation sequence 3.3.3: exp_eps: Operation Sequence and Zero Order Forward Sweep
     operation sequence 3.2.3: exp_2: Operation Sequence and Zero Order Forward Mode
     Parameter 5.5.1: ADFun Sequence Properties: Example and Test
     Parameter 4.5.4.1: AD Parameter and Variable Functions: Example and Test
     partial 5.7.2.1: First Order Partial Driver: Example and Test
     polynomial 6.11.1: Polynomial Evaluation: Example and Test
     pow int 4.4.3.4.2: The Pow Integer Exponent: Example and Test
     print forward mode 4.3.4.1: Printing During Forward Mode: Example and Test
     Range 5.5.1: ADFun Sequence Properties: Example and Test
     Romberg 6.14.1: One Dimensional Romberg Integration: Example and Test
     Romberg 6.13.1: One Dimensional Romberg Integration: Example and Test
     Rosen34 6.16.1: Rosen34: Example and Test
     Runge45 6.15.1: Runge45: Example and Test
     re-tape 5.6.1.5.1: CompareChange and Re-Tape: Example and Test
     reverse any order 5.6.2.3.1: Any Order Reverse Mode: Example and Test
     reverse mode 3.3.7: exp_eps: Second Order Reverse Sweep
     reverse mode 3.3.5: exp_eps: First Order Reverse Sweep
     reverse mode 3.2.7: exp_2: Second Order Reverse Mode
     reverse mode 3.2.5: exp_2: First Order Reverse Mode
     run all 8.2.1: Program That Runs the CppAD Examples
     SpeedTest 6.4.1: Example Use of SpeedTest
     second order reverse 5.6.2.2.1: Second Order Reverse Mode: Example and Test
     second partial 5.7.6.1: Second Partials Reverse Driver: Example and Test
     second partial 5.7.5.1: Subset of Second Order Partials: Example and Test
     simple 3.1: A Simple Program Using CppAD to Compute Derivatives
     simple vector 6.7.1: Simple Vector Template Class: Example and Test
     size_var 5.5.1: ADFun Sequence Properties: Example and Test
     sparse Hessian 5.7.8.1: Sparse Hessian: Example and Test
     sparse Jacobian 5.7.7.1: Sparse Jacobian: Example and Test
     sparse_evaluate 9.2.2.8.1: sparse_evaluate: Example and test
     sparsity forward 5.6.3.1.1: Forward Mode Jacobian Sparsity: Example and Test
     sparsity Hessian 5.6.3.3.1: Reverse Mode Hessian Sparsity: Example and Test
     sparsity reverse 5.6.3.2.1: Reverse Mode Jacobian Sparsity: Example and Test
     speed program 8.2.2: Program That Runs the Speed Examples
     speed utility 2.2.j: Windows Download and Test: Speed Utility Example
     speed utility 2.1.k.d: Unix Download, Test and Installation: --with-Speed.example
     speed_test 6.3.1: speed_test: Example and test
     stiff ode 8.1.7: A Stiff Ode: Example and Test
     subtract 4.4.1.3.2: AD Binary Subtraction: Example and Test
     tan 4.4.2.12: The AD tan Function: Example and Test
     tanh 4.4.2.13: The AD tanh Function: Example and Test
     unary minus 4.4.1.2.1: AD Unary Minus Operator: Example and Test
     unary plus 4.4.1.1.1: AD Unary Plus Operator: Example and Test
     unix 2.1.i: Unix Download, Test and Installation: --with-Example
     Value 4.3.1.1: Convert From AD to its Base Type: Example and Test
     Var2Par 4.3.5.1: Convert an AD Variable to a Parameter: Example and Test
     Variable 4.5.4.1: AD Parameter and Variable Functions: Example and Test
     VecAD 4.6.1: AD Vectors that Record Index Operations: Example and Test
     windows 2.2.e: Windows Download and Test: Examples and Testing
exception
     error handler 6.1: Replacing the CppAD Error Handler
     test 9.1.e: Frequently Asked Questions and Answers: Exceptions
exercise
     CppAD::vector 6.23.l: The CppAD::vector Template Class: Exercise
     NearEqual 6.2.k: Determine if Two Values Are Nearly Equal: Exercise
     numeric type 6.5.h: Definition of a Numeric Type: Exercise
     simple vector 6.7.m: Definition of a Simple Vector: Exercise
exp
     AD 4.4.2: AD Standard Math Unary Functions
     AD example 4.4.2.6: The AD exp Function: Example and Test
     example 3.3: An Epsilon Accurate Exponential Approximation
     example 3.2: Second Order Exponential Approximation
     float and double 6.22: Float and Double Standard Math Unary Functions
     forward theory 9.3.1.1: Exponential Function Forward Taylor Polynomial Theory
     reverse theory 9.3.2.1: Exponential Function Reverse Mode Theory
exp_2 3.2: Second Order Exponential Approximation
     first order 3.3.6.1: exp_eps: Verify Second Order Forward Sweep
     first order 3.3.4.1: exp_eps: Verify First Order Forward Sweep
     first order 3.2.4.1: exp_2: Verify First Order Forward Sweep
     forward mode 3.2.6: exp_2: Second Order Forward Mode
     forward mode 3.2.4: exp_2: First Order Forward Mode
     implementation 3.2.1: exp_2: Implementation
     operation sequence 3.2.3: exp_2: Operation Sequence and Zero Order Forward Mode
     reverse mode 3.2.7.1: exp_2: Verify Second Order Reverse Sweep
     reverse mode 3.2.5.1: exp_2: Verify First Order Reverse Sweep
     reverse mode 3.2.7: exp_2: Second Order Reverse Mode
     reverse mode 3.2.5: exp_2: First Order Reverse Mode
     second order 3.2.6.1: exp_2: Verify Second Order Forward Sweep
     test 3.2.2: exp_2: Test
     zero order 3.2.3.1: exp_2: Verify Zero Order Forward Sweep
exp_apx
     main test 3.4: Run the exp_2 and exp_eps Tests
     unix 2.1.h.b: Unix Download, Test and Installation: --with-Introduction.exp_apx
exp_eps 3.3: An Epsilon Accurate Exponential Approximation
     forward mode 3.3.6: exp_eps: Second Order Forward Mode
     implementation 3.3.1: exp_eps: Implementation
     operation sequence 3.3.3: exp_eps: Operation Sequence and Zero Order Forward Sweep
     reverse 3.3.7.1: exp_eps: Verify Second Order Reverse Sweep
     reverse 3.3.5.1: exp_eps: Verify First Order Reverse Sweep
     reverse mode 3.3.7: exp_eps: Second Order Reverse Sweep
     reverse mode 3.3.5: exp_eps: First Order Reverse Sweep
     test 3.3.2: exp_eps: Test of exp_eps
     zero order 3.3.3.1: exp_eps: Verify Zero Order Forward Sweep
expansion 8.1.5: Gradient of Determinant Using Expansion by Minors: Example and Test
          8.1.3: Gradient of Determinant Using Expansion by Minors: Example and Test
     first order 3.3.4.a: exp_eps: First Order Forward Sweep: First Order Expansion
     first order 3.2.4.a: exp_2: First Order Forward Mode: First Order Expansion
     minor determinant 9.2.2.3: Determinant Using Expansion by Minors
     second order 3.3.6.a: exp_eps: Second Order Forward Mode: Second Order Expansion
     second order 3.2.6.a: exp_2: Second Order Forward Mode: Second Order Expansion
     zero order 3.2.3.b: exp_2: Operation Sequence and Zero Order Forward Mode: Zero Order Expansion
exponent
     AD function 4.4.3.4: The AD Power Function
     integer 6.10: The Integer Power Function
expression
     conditional 4.4.4: AD Conditional Expressions
F
FAQ 9.1: Frequently Asked Questions and Answers
Fadbad
     unix 2.1.p: Unix Download, Test and Installation: FadbadDir
Fedora
     install 2.1.a: Unix Download, Test and Installation: Fedora
ForSparseJac 5.6.3.1.1: Forward Mode Jacobian Sparsity: Example and Test
             5.6.3.1: Jacobian Sparsity Pattern: Forward Mode
Forward 5.6.1.7: Forward Mode: Example and Test
     capacity 5.6.1.6: Controlling taylor_ Coefficients Memory Allocation
     order one 5.6.1.2: First Order Forward Mode: Derivative Values
     order zero 5.6.1.1: Zero Order Forward Mode: Function Values
FunCheck 5.8: Check an ADFun Sequence of Operations
     example 5.8.1: ADFun Check and Re-Tape: Example and Test
factor
     lu determinant 9.2.2.4: Determinant Using Expansion by Lu Factorization
     matrix 6.12: Compute Determinants and Solve Equations by LU Factorization
fadbad
     link_det_lu 9.2.6.2.b: Fadbad Speed: Gradient of Determinant Using Lu Factorization: Implementation
     speed lu 9.2.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization
     speed minor 9.2.6.1: Fadbad Speed: Gradient of Determinant by Minor Expansion
     speed polynomial 9.2.6.4: Fadbad Speed: Second Derivative of a Polynomial
     speed test 9.2.6: Speed Test Derivatives Using Fadbad
features
     new 9.7: The CppAD Wish List
file
     deprecated include 9.9: Deprecated Include Files
first
     derivative 5.7.1: Jacobian: Driver Routine
     order derivative driver 5.7.3: First Order Derivative: Driver Routine
     order exp_eps 3.3.6.1: exp_eps: Verify Second Order Forward Sweep
     order exp_eps 3.3.4.1: exp_eps: Verify First Order Forward Sweep
     order exp_2 3.2.4.1: exp_2: Verify First Order Forward Sweep
     order expansion 3.3.4.a: exp_eps: First Order Forward Sweep: First Order Expansion
     order expansion 3.2.4.a: exp_2: First Order Forward Mode: First Order Expansion
     order forward 3.3.4: exp_eps: First Order Forward Sweep
     order partial driver 5.7.2: First Order Partial Derivative: Driver Routine
     order reverse 3.3.5: exp_eps: First Order Reverse Sweep
     order reverse 3.2.5: exp_2: First Order Reverse Mode
     order reverse mode 5.6.2.1: First Order Reverse Mode
flags
     unix compile 2.1.t: Unix Download, Test and Installation: CompilerFlags
forward 9.1.i: Frequently Asked Questions and Answers: Mode: Forward or Reverse
     acos theory 9.3.1.7: Arccosine Function Forward Taylor Polynomial Theory
     asin theory 9.3.1.6: Arcsine Function Forward Taylor Polynomial Theory
     atan theory 9.3.1.5: Arctangent Function Forward Taylor Polynomial Theory
     cos 9.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
     cosh 9.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
     easy driver 5.7: First and Second Derivatives: Easy Drivers
     exp theory 9.3.1.1: Exponential Function Forward Taylor Polynomial Theory
     first order 3.3.4: exp_eps: First Order Forward Sweep
     first order 3.2.4: exp_2: First Order Forward Mode
     log theory 9.3.1.2: Logarithm Function Forward Taylor Polynomial Theory
     mode 5.6.1.3: Any Order Forward Mode
     mode example 3.3.6: exp_eps: Second Order Forward Mode
     mode example 3.2.6: exp_2: Second Order Forward Mode
     mode example 3.2.4: exp_2: First Order Forward Mode
     mode print 4.3.4.1: Printing During Forward Mode: Example and Test
     mode print 4.3.4: Printing AD Values During Forward Mode
     ode example 8.1.1.3.1: An ODE Forward Problem Example
     print 2.2.g: Windows Download and Test: Printing During Forward Mode
     print 2.1.l: Unix Download, Test and Installation: --with-PrintFor
     second order 3.3.6: exp_eps: Second Order Forward Mode
     second order 3.2.6: exp_2: Second Order Forward Mode
     sin 9.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
     sinh 9.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
     sparsity Jacobian 5.6.3.1: Jacobian Sparsity Pattern: Forward Mode
     sqrt theory 9.3.1.3: Square Root Function Forward Taylor Polynomial Theory
     zero order 5.6.1.5: Comparison Changes During Zero Order Forward Mode
     zero order 3.3.3: exp_eps: Operation Sequence and Zero Order Forward Sweep
     zero order 3.2.3: exp_2: Operation Sequence and Zero Order Forward Mode
free
     install CppAD 2.2: Windows Download and Test
     unix install 2.1: Unix Download, Test and Installation
function 9.4.d: Glossary: Base Function
         9.4.a: Glossary: AD Function
     AD Bool valued 4.5: Bool Valued Operations and Functions with AD Arguments
     AD bool 4.5.3: AD Boolean Functions
     AD valued 4.4: AD Valued Operations and Functions
     discrete AD 4.4.5: Discrete AD Functions
     error AD 4.4.3.3: The AD Error Function
     math 9.1.g: Frequently Asked Questions and Answers: Math Functions
     ode_evaluate 9.2.2.7: Evaluate a Function Defined in Terms of an ODE
     sparse_evaluate 9.2.2.8: Evaluate a Function That Has a Sparse Hessian
G
Gear
     Ode 6.18: An Arbitrary Order Gear Method
GetStarted
     windows 2.2.c: Windows Download and Test: Getting Started
GreaterThanZero
     Base require 4.7.i: AD<Base> Requirements for Base Type: Ordered
     Base require 4.7.i: AD<Base> Requirements for Base Type: Ordered
gcc 3.4.4
     bug 9.6.a: Known Bugs and Problems Using CppAD: gcc 3.4.4 -O2
general
     example 8.1: General Examples
get_started
     unix 2.1.h.a: Unix Download, Test and Installation: --with-Introduction.get_started
getstarted 3.1: A Simple Program Using CppAD to Compute Derivatives
gradient 8.1.6: Gradient of Determinant Using LU Factorization: Example and Test
         8.1.5: Gradient of Determinant Using Expansion by Minors: Example and Test
         8.1.4: Gradient of Determinant Using Lu Factorization: Example and Test
         8.1.3: Gradient of Determinant Using Expansion by Minors: Example and Test
     ode speed cppad 9.2.5.3: CppAD Speed: Gradient of Ode Solution
     ode speed sacado 9.2.7.3: Sacado Speed: Gradient of Ode Solution
H
HesLagrangian 5.7.4.2: Hessian of Lagrangian and ADFun Default Constructor: Example and Test
Hessian 5.7.4.1: Hessian: Example and Test
     Bender 6.20: Computing Jacobian and Hessian of Bender's Reduced Objective
     driver 5.7.4: Hessian: Easy Driver
     sparse 5.7.8.1: Sparse Hessian: Example and Test
     sparse speed adolc 9.2.4.5: Adolc Speed: Sparse Hessian
     sparse speed cppad 9.2.5.5: CppAD Speed: Sparse Hessian
     times direction 5.6.2.2.2: Hessian Times Direction: Example and Test
handler
     error 6.1.1: Replacing The CppAD Error Handler: Example and Test
     error 6.1: Replacing the CppAD Error Handler
hasnan 6.9: Obtain Nan and Determine if a Value is Nan
hessian
     sparse 5.7.8: Sparse Hessian: Easy Driver
I
Identical
     Base require 4.7.g: AD<Base> Requirements for Base Type: Identical
Independent 5.1: Declare Independent Variables and Start Recording
     example 5.1.1: Independent and ADFun Constructor: Example and Test
     OpenMP 5.1.g: Declare Independent Variables and Start Recording: OpenMP
Integer 4.3.2.1: Convert From AD to Integer: Example and Test
        4.3.2: Convert From AD to Integer
     Base require 4.7.h: AD<Base> Requirements for Base Type: Integer
Ipopt
     AD 8.1.1: Nonlinear Programming Using the CppAD Interface to Ipopt
     unix 2.1.s: Unix Download, Test and Installation: IpoptDir
imag() 4.7.1.2: Not Complex Differentiable: Example and Test
implementation
     exp_2 3.2.1: exp_2: Implementation
     exp_eps 3.3.1: exp_eps: Implementation
inactive 9.4.j.b: Glossary: Tape.Inactive
include
     cppad.hpp d: cppad-20090131.0: A Package for Differentiation of C++ Algorithms: Include File
     deprecated file 9.9: Deprecated Include Files
independent 9.4.j.c: Glossary: Tape.Independent Variable
            9.4.g.d: Glossary: Operation.Independent
            9.1.a: Frequently Asked Questions and Answers: Assignment and Independent
independent variable 9.1.f: Frequently Asked Questions and Answers: Independent Variables
index
     AD record 4.6: AD Vectors that Record Index Operations
     tape array operation 4.4.5.1: Taping Array Index Operation: Example and Test
install 2: CppAD Download, Test, and Installation Instructions
     documentation 2.1.g: Unix Download, Test and Installation: --with-Documentation
     Fedora 2.1.a: Unix Download, Test and Installation: Fedora
     unix CppAD 2.1: Unix Download, Test and Installation
     windows CppAD 2.2: Windows Download and Test
int
     numeric constructor 6.5.c: Definition of a Numeric Type: Constructor From Integer
integer
     pow 6.10: The Integer Power Function
integrate
     multi-dimensional Romberg 6.14: Multi-dimensional Romberg Integration
     Romberg 6.13: One Dimensional Romberg Integration
interface
     to 8.1.2: Interfacing to C: Example and Test
interpolate
     example 4.4.5.3: Interpolation With Retaping: Example and Test
     example 4.4.5.2: Interpolation With Out Retaping: Example and Test
     test 4.4.5.3: Interpolation With Retaping: Example and Test
     test 4.4.5.2: Interpolation With Out Retaping: Example and Test
interpreter
     example 8.1.10: Example Differentiating a Stack Machine Interpreter
introduction b: cppad-20090131.0: A Package for Differentiation of C++ Algorithms: Introduction
     AD 3: An Introduction by Example to Algorithmic Differentiation
     unix 2.1.h: Unix Download, Test and Installation: --with-Introduction
     windows 2.2.d: Windows Download and Test: Introduction
inverse
     AD tan 4.4.3.2: AD Two Argument Inverse Tangent Function
     matrix 9.1.h: Frequently Asked Questions and Answers: Matrix Inverse
     ode example 8.1.1.3.2: An ODE Inverse Problem Example
ipopt
     AD example 8.1.1.2: Nonlinear Programming Using CppAD and Ipopt: Example and Test
ipopt_cppad_nlp 9.8.2.y: Changes and Additions to CppAD During 2008: 08-29
     ode example source 8.1.1.3.5: ipopt_cppad_nlp ODE Example Source Code
     ode representation 8.1.1.3.4: ipopt_cppad_nlp ODE Problem Representation
isnan 6.9: Obtain Nan and Determine if a Value is Nan
     macro 6.9.c.a: Obtain Nan and Determine if a Value is Nan: Include.Macros
J
Jacobian 5.7.1.1: Jacobian: Example and Test
     Bender 6.20: Computing Jacobian and Hessian of Bender's Reduced Objective
     driver 5.7.1: Jacobian: Driver Routine
     sparse 5.7.7.1: Sparse Jacobian: Example and Test
jacobian
     sparse 5.7.7: Sparse Jacobian: Easy Driver
K
Kutta
     ODE 6.15: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
L
LessThanOrZero
     Base require 4.7.i: AD<Base> Requirements for Base Type: Ordered
LU 8.1.6: Gradient of Determinant Using LU Factorization: Example and Test
Lu 8.1.4: Gradient of Determinant Using Lu Factorization: Example and Test
     linear equation 8.2.3: Lu Factor and Solve with Recorded Pivoting
     record pivot 8.2.3.1: Lu Factor and Solve With Recorded Pivoting: Example and Test
LuFactor 6.12.2: LU Factorization of A Square Matrix
     source 6.12.2.2: Source: LuFactor
LuInvert 6.12.3: Invert an LU Factored Equation
     source 6.12.3.2: Source: LuInvert
LuRatio 6.21: LU Factorization of A Square Matrix and Stability Calculation
LuSolve 6.12.1: Compute Determinant and Solve Linear Equations
     source 6.12.1.2: Source: LuSolve
LuVecAD 8.2.3: Lu Factor and Solve with Recorded Pivoting
level
     AD 9.4.c: Glossary: AD Levels Above Base
     multiple AD 8.1.11.1: Multiple Tapes: Example and Test
     multiple AD 8.1.11: Using Multiple Levels of AD
     multiple Adolc 4.7.2.1: Using Adolc with Multiple Levels of Taping: Example and Test
levels 9.4.c: Glossary: AD Levels Above Base
library
     numerical C++ template 6: The CppAD General Purpose Library
linear
     equation 6.12.1: Compute Determinant and Solve Linear Equations
     equation 6.12: Compute Determinants and Solve Equations by LU Factorization
     invert Lu equation 6.12.3: Invert an LU Factored Equation
     Lu factor equation 6.21: LU Factorization of A Square Matrix and Stability Calculation
     Lu factor equation 6.12.2: LU Factorization of A Square Matrix
     solve equation 8.2.3: Lu Factor and Solve with Recorded Pivoting
link_det_lu 9.2.1.1: Speed Testing Gradient of Determinant Using Lu Factorization
     adolc 9.2.4.2.b: Adolc Speed: Gradient of Determinant Using Lu Factorization: Implementation
     cppad 9.2.5.2.b: CppAD Speed: Gradient of Determinant Using Lu Factorization: Implementation
     double 9.2.3.2.b: Double Speed: Determinant Using Lu Factorization: Implementation
     fadbad 9.2.6.2.b: Fadbad Speed: Gradient of Determinant Using Lu Factorization: Implementation
     sacado 9.2.7.2.b: Sacado Speed: Gradient of Determinant Using Lu Factorization: Implementation
link_det_minor 9.2.7.1.a: Sacado Speed: Gradient of Determinant by Minor Expansion: link_det_minor
               9.2.6.1.a: Fadbad Speed: Gradient of Determinant by Minor Expansion: link_det_minor
               9.2.5.1.a: CppAD Speed: Gradient of Determinant by Minor Expansion: link_det_minor
               9.2.4.1.a: Adolc Speed: Gradient of Determinant by Minor Expansion: link_det_minor
               9.2.3.1.a: Double Speed: Determinant by Minor Expansion: link_det_minor
               9.2.1.2: Speed Testing Gradient of Determinant by Minor Expansion
link_ode 9.2.7.3.a: Sacado Speed: Gradient of Ode Solution: link_ode
         9.2.5.3.a: CppAD Speed: Gradient of Ode Solution: link_ode
         9.2.3.3.a: Double Speed: Ode Solution: link_ode
         9.2.1.5: Speed Testing Gradient of Ode Solution
link_poly 9.2.7.4.a: Sacado Speed: Second Derivative of a Polynomial: link_poly
          9.2.6.4.a: Fadbad Speed: Second Derivative of a Polynomial: link_poly
          9.2.5.4.a: CppAD Speed: Second Derivative of a Polynomial: link_poly
          9.2.4.4.a: Adolc Speed: Second Derivative of a Polynomial: link_poly
          9.2.3.4.a: Double Speed: Evaluate a Polynomial: link_poly
          9.2.1.3: Speed Testing Second Derivative of a Polynomial
link_sparse_hessian 9.2.5.5.c: CppAD Speed: Sparse Hessian: link_sparse_hessian
                    9.2.4.5.b: Adolc Speed: Sparse Hessian: link_sparse_hessian
                    9.2.3.5.a: Double Speed: Sparse Hessian: link_sparse_hessian
                    9.2.1.4: Speed Testing Sparse Hessian
log
     AD 4.4.2: AD Standard Math Unary Functions
     AD example 4.4.2.7: The AD log Function: Example and Test
     float and double 6.22: Float and Double Standard Math Unary Functions
     forward theory 9.3.1.2: Logarithm Function Forward Taylor Polynomial Theory
     reverse theory 9.3.2.2: Logarithm Function Reverse Mode Theory
log10
     AD 4.4.2: AD Standard Math Unary Functions
     AD example 4.4.2.8: The AD log10 Function: Example and Test
     float and double 6.22: Float and Double Standard Math Unary Functions
lu
     factor determinant 9.2.2.4: Determinant Using Expansion by Lu Factorization
     speed adolc 9.2.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization
     speed cppad 9.2.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization
     speed double 9.2.3.2.a: Double Speed: Determinant Using Lu Factorization: Specifications
     speed fadbad 9.2.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization
     speed sacado 9.2.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization
M
Memory
     ADFun deprecated 5.10.e: ADFun Object Deprecated Member Functions: Memory
macro
     error assert 6.1.2: CppAD Assertions During Execution
     isnan 6.9.c.a: Obtain Nan and Determine if a Value is Nan: Include.Macros
     nan 6.9.c.a: Obtain Nan and Determine if a Value is Nan: Include.Macros
math
     AD other 4.4.3: Other AD Math Functions
     AD unary 4.4.2: AD Standard Math Unary Functions
     Base require 4.7.k: AD<Base> Requirements for Base Type: Standard Math Unary
     functions 9.1.g: Frequently Asked Questions and Answers: Math Functions
     standard function 9.1.j.b: Frequently Asked Questions and Answers: Namespace.Using
     unary 6.22: Float and Double Standard Math Unary Functions
matrix
     determinant 6.12: Compute Determinants and Solve Equations by LU Factorization
     factor 6.12: Compute Determinants and Solve Equations by LU Factorization
     inverse 9.1.h: Frequently Asked Questions and Answers: Matrix Inverse
     linear equation 6.12: Compute Determinants and Solve Equations by LU Factorization
     minor determinant 9.2.2.2: Determinant of a Minor
maxabs
     OdeErrControl 6.17.2: OdeErrControl: Example and Test Using Maxabs Argument
measurement
     simulate ode 8.1.1.3.3: Simulating ODE Measurement Values
memory
     control 5.6.1.6: Controlling taylor_ Coefficients Memory Allocation
     tape 9.1.l: Frequently Asked Questions and Answers: Tape Storage: Disk or Memory
     track 6.24: Routines That Track Use of New and Delete
minor
     expansion determinant 9.2.2.3: Determinant Using Expansion by Minors
     matrix determinant 9.2.2.2: Determinant of a Minor
     speed adolc 9.2.4.1: Adolc Speed: Gradient of Determinant by Minor Expansion
     speed cppad 9.2.5.1: CppAD Speed: Gradient of Determinant by Minor Expansion
     speed double 9.2.3.1: Double Speed: Determinant by Minor Expansion
     speed fadbad 9.2.6.1: Fadbad Speed: Gradient of Determinant by Minor Expansion
     speed sacado 9.2.7.1: Sacado Speed: Gradient of Determinant by Minor Expansion
minors 8.1.5: Gradient of Determinant Using Expansion by Minors: Example and Test
       8.1.3: Gradient of Determinant Using Expansion by Minors: Example and Test
minus
     AD example 4.4.1.3.2: AD Binary Subtraction: Example and Test
     AD unary operator 4.4.1.2: AD Unary Minus Operator
     binary operator 4.4.1.3: AD Binary Arithmetic Operators
     computed assignment 4.4.1.4: AD Computed Assignment Operators
mode 9.1.i: Frequently Asked Questions and Answers: Mode: Forward or Reverse
     any order reverse 5.6.2.3: Any Order Reverse Mode
     example forward 3.3.6: exp_eps: Second Order Forward Mode
     example forward 3.2.6: exp_2: Second Order Forward Mode
     example forward 3.2.4: exp_2: First Order Forward Mode
     example reverse 3.2.7: exp_2: Second Order Reverse Mode
     example reverse 3.2.5: exp_2: First Order Reverse Mode
     first order reverse 5.6.2.1: First Order Reverse Mode
     forward 5.6.1.3: Any Order Forward Mode
     reverse example 3.3.7: exp_eps: Second Order Reverse Sweep
     reverse example 3.3.5: exp_eps: First Order Reverse Sweep
     second order reverse 5.6.2.2: Second Order Reverse Mode
multi
     dimensional Romberg integration 6.14: Multi-dimensional Romberg Integration
multi-thread
     example 5.9.1.2.2: OpenMP Multi-Threading Newton's Method Source Code
     Newton's method 5.9.1.2.1: Multi-Threaded Newton's Method Routine
multi_newton
     source 5.9.1.2.2: OpenMP Multi-Threading Newton's Method Source Code
multiple
     AD level 8.1.11.1: Multiple Tapes: Example and Test
     AD level 8.1.11: Using Multiple Levels of AD
     Adolc 4.7.2.1: Using Adolc with Multiple Levels of Taping: Example and Test
     assignment 4.4.1.4.g: AD Computed Assignment Operators: Result
     thread 5.9: OpenMP Maximum Thread Number
multiply
     AD example 4.4.1.3.3: AD Binary Multiplication: Example and Test
     binary operator 4.4.1.3: AD Binary Arithmetic Operators
     computed assignment 4.4.1.4: AD Computed Assignment Operators
N
NDEBUG 9.1.k: Frequently Asked Questions and Answers: Speed
       9.1.c: Frequently Asked Questions and Answers: CompareChange
       6.24.j: Routines That Track Use of New and Delete: TrackNewVec
       6.7.m: Definition of a Simple Vector: Exercise
     CompareChange 5.6.1.5.f: Comparison Changes During Zero Order Forward Mode: Restrictions
NearEqual 6.2: Determine if Two Values Are Nearly Equal
     AD with Base 4.5.2: Compare AD and Base Objects for Nearly Equal
     example 6.2.1: NearEqual Function: Example and Test
NearEqualExt
     example 4.5.2.1: Compare AD with Base Objects: Example and Test
NumericType
     example 6.5.1: The NumericType: Example and Test
namespace 9.1.j: Frequently Asked Questions and Answers: Namespace
     CppAD f: cppad-20090131.0: A Package for Differentiation of C++ Algorithms: Namespace
nan 6.9: Obtain Nan and Determine if a Value is Nan
     example 6.9.1: nan: Example and Test
     macro 6.9.c.a: Obtain Nan and Determine if a Value is Nan: Include.Macros
new
     example 6.24.1: Tracking Use of New and Delete: Example and Test
     features 9.7: The CppAD Wish List
     track 6.24: Routines That Track Use of New and Delete
nonlinear
     programming CppAD 8.1.1: Nonlinear Programming Using the CppAD Interface to Ipopt
numeric
     check 6.6: Check NumericType Class Concept
     type 6.5: Definition of a Numeric Type
numerical
     C++ template library 6: The CppAD General Purpose Library
O
ODE
     control error 6.17: An Error Controller for ODE Solvers
     Rosenbrock 6.16: A 3rd and 4th Order Rosenbrock ODE Solver
     Runge-Kutta 6.15: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
     Taylor 8.1.8: Taylor's Ode Solver: An Example and Test
     Taylor Adolc 8.1.9: Using Adolc with Taylor's Ode Solver: An Example and Test
Ode
     Gear 6.18: An Arbitrary Order Gear Method
OdeErrControl 6.17: An Error Controller for ODE Solvers
     example 6.17.2: OdeErrControl: Example and Test Using Maxabs Argument
     example 6.17.1: OdeErrControl: Example and Test
     maxabs 6.17.2: OdeErrControl: Example and Test Using Maxabs Argument
OdeGear 6.18: An Arbitrary Order Gear Method
     example 6.18.1: OdeGear: Example and Test
OdeGearControl 6.19: An Error Controller for Gear's Ode Solvers
     example 6.19.1: OdeGearControl: Example and Test
OpenMP
     ADFun 5.2.h: Construct an ADFun Object and Stop Recording: OpenMP
     CppAD 5.9: OpenMP Maximum Thread Number
     compile example 5.9.1: Compile and Run the OpenMP Test
     Dependent 5.2.h: Construct an ADFun Object and Stop Recording: OpenMP
     example 5.9.1.2.2: OpenMP Multi-Threading Newton's Method Source Code
     example 5.9.1.1: A Simple Parallel Loop
     example program 5.9.1.3: Sum of 1/i Main Program
     example program 5.9.1.2: Multi-Threaded Newton's Method Main Program
     Independent 5.1.g: Declare Independent Variables and Start Recording: OpenMP
     Newton's method 5.9.1.2.1: Multi-Threaded Newton's Method Routine
     TrackCount 6.24.m.c: Routines That Track Use of New and Delete: TrackCount.OpenMP
     TrackNewDel 6.24.f.a: Routines That Track Use of New and Delete: oldptr.OpenMP
Order
     ADFun deprecated 5.10.d: ADFun Object Deprecated Member Functions: Order
object
     ADFun 5: ADFun Objects
ode
     forward example 8.1.1.3.1: An ODE Forward Problem Example
     gradient speed cppad 9.2.5.3: CppAD Speed: Gradient of Ode Solution
     gradient speed sacado 9.2.7.3: Sacado Speed: Gradient of Ode Solution
     inverse example 8.1.1.3.2: An ODE Inverse Problem Example
     ipopt_cppad_nlp example source 8.1.1.3.5: ipopt_cppad_nlp ODE Example Source Code
     ipopt_cppad_nlp representation 8.1.1.3.4: ipopt_cppad_nlp ODE Problem Representation
     simulate measurement 8.1.1.3.3: Simulating ODE Measurement Values
     speed double 9.2.3.3: Double Speed: Ode Solution
     speed test 9.2.1.5: Speed Testing Gradient of Ode Solution
     stiff 8.1.7: A Stiff Ode: Example and Test
ode_evaluate
     example 9.2.2.7.1: ode_evaluate: Example and test
     function 9.2.2.7: Evaluate a Function Defined in Terms of an ODE
     source 9.2.2.7.2: Source: ode_evaluate
of 9.4.b: Glossary: AD of Base
omp_max_thread 5.9: OpenMP Maximum Thread Number
one
     order Forward 5.6.1.2: First Order Forward Mode: Derivative Values
operation 9.4.g: Glossary: Operation
     AD Bool valued 4.5: Bool Valued Operations and Functions with AD Arguments
     AD valued 4.4: AD Valued Operations and Functions
     equal sequence 4.5.5: Check if Equal and Correspond to Same Operation Sequence
     optimize sequence 9.7.j.c: The CppAD Wish List: Optimization.Remove Operations From Tape
     sequence 9.7.i: The CppAD Wish List: Operation Sequence
     sequence abort 5.4: Abort Recording of an Operation Sequence
     sequence example 3.3.3: exp_eps: Operation Sequence and Zero Order Forward Sweep
     sequence example 3.2.3: exp_2: Operation Sequence and Zero Order Forward Mode
     sequence store 5.3: Stop Recording and Store Operation Sequence
operator
     AD arithmetic 4.4.1: AD Arithmetic Operators and Computed Assignments
     AD binary compare 4.5.1: AD Binary Comparison Operators
     assignment 9.1.a: Frequently Asked Questions and Answers: Assignment and Independent
     assignment 4.4.1.4: AD Computed Assignment Operators
     binary 4.4.1.3: AD Binary Arithmetic Operators
optimize
     operation sequence 9.7.j.c: The CppAD Wish List: Optimization.Remove Operations From Tape
order
     first exp_eps 3.3.6.1: exp_eps: Verify Second Order Forward Sweep
     first exp_eps 3.3.4.1: exp_eps: Verify First Order Forward Sweep
     first exp_2 3.2.4.1: exp_2: Verify First Order Forward Sweep
     first expansion 3.3.4.a: exp_eps: First Order Forward Sweep: First Order Expansion
     first expansion 3.2.4.a: exp_2: First Order Forward Mode: First Order Expansion
     first forward 3.3.4: exp_eps: First Order Forward Sweep
     first reverse 3.3.5: exp_eps: First Order Reverse Sweep
     first reverse 3.2.5: exp_2: First Order Reverse Mode
     one Forward 5.6.1.2: First Order Forward Mode: Derivative Values
     second exp_2 3.2.6.1: exp_2: Verify Second Order Forward Sweep
     second expansion 3.3.6.a: exp_eps: Second Order Forward Mode: Second Order Expansion
     second expansion 3.2.6.a: exp_2: Second Order Forward Mode: Second Order Expansion
     second reverse 3.3.7: exp_eps: Second Order Reverse Sweep
     second reverse 3.2.7: exp_2: Second Order Reverse Mode
     zero exp_2 3.2.3.1: exp_2: Verify Zero Order Forward Sweep
     zero exp_eps 3.3.3.1: exp_eps: Verify Zero Order Forward Sweep
     zero expansion 3.2.3.b: exp_2: Operation Sequence and Zero Order Forward Mode: Zero Order Expansion
     zero Forward 5.6.1.1: Zero Order Forward Mode: Function Values
     zero forward 3.3.3: exp_eps: Operation Sequence and Zero Order Forward Sweep
     zero forward 3.2.3: exp_2: Operation Sequence and Zero Order Forward Mode
other
     AD math 4.4.3: Other AD Math Functions
output
     AD 4.3.3: AD Output Stream Operator
     AD example 4.3.3.1: AD Output Operator: Example and Test
P
PACKAGE
     preprocessor symbol 7: Preprocessor Definitions Used by CppAD
Parameter 4.5.4: Is an AD Object a Parameter or Variable
     ADFun 5.5.1: ADFun Sequence Properties: Example and Test
     ADFun 5.5: ADFun Sequence Properties
     example 4.5.4.1: AD Parameter and Variable Functions: Example and Test
Poly 6.11: Evaluate a Polynomial or its Derivative
     source 6.11.2: Source: Poly
parameter 9.4.h: Glossary: Parameter
     convert from variable 4.3.5: Convert an AD Variable to a Parameter
partial
     easy 5.7.6: Reverse Mode Second Partial Derivative Driver
     easy 5.7.5: Forward Mode Second Partial Derivative Driver
     easy 5.7.2: First Order Partial Derivative: Driver Routine
     example 5.7.2.1: First Order Partial Driver: Example and Test
     first order driver 5.7.2: First Order Partial Derivative: Driver Routine
     second 5.7.6.1: Second Partials Reverse Driver: Example and Test
     second 5.7.5.1: Subset of Second Order Partials: Example and Test
     second order driver 5.7.6: Reverse Mode Second Partial Derivative Driver
     second order driver 5.7.5: Forward Mode Second Partial Derivative Driver
pattern 9.4.i: Glossary: Sparsity Pattern
     forward Jacobian 5.6.3.1: Jacobian Sparsity Pattern: Forward Mode
     reverse Hessian 5.6.3.3: Hessian Sparsity Pattern: Reverse Mode
     reverse Jacobian 5.6.3.2: Jacobian Sparsity Pattern: Reverse Mode
     sparsity 9.4.i: Glossary: Sparsity Pattern
     sparsity 5.6.3: Calculating Sparsity Patterns
plus
     *= example 4.4.1.4.3: AD Computed Assignment Multiplication: Example and Test
     += example 4.4.1.4.1: AD Computed Assignment Addition: Example and Test
     -= example 4.4.1.4.2: AD Computed Assignment Subtraction: Example and Test
     /= example 4.4.1.4.4: AD Computed Assignment Division: Example and Test
     AD example 4.4.1.3.1: AD Binary Addition: Example and Test
     AD unary operator 4.4.1.1: AD Unary Plus Operator
     binary operator 4.4.1.3: AD Binary Arithmetic Operators
     computed assignment 4.4.1.4: AD Computed Assignment Operators
polynomial 6.11: Evaluate a Polynomial or its Derivative
     complex 4.7.1.2: Not Complex Differentiable: Example and Test
     complex 4.7.1.1: Complex Polynomial: Example and Test
     example 6.11.1: Polynomial Evaluation: Example and Test
     speed adolc 9.2.4.4: Adolc Speed: Second Derivative of a Polynomial
     speed cppad 9.2.5.4: CppAD Speed: Second Derivative of a Polynomial
     speed double 9.2.3.4: Double Speed: Evaluate a Polynomial
     speed fadbad 9.2.6.4: Fadbad Speed: Second Derivative of a Polynomial
     speed sacado 9.2.7.4: Sacado Speed: Second Derivative of a Polynomial
     speed test 9.2.1.3: Speed Testing Second Derivative of a Polynomial
postfix
     configure directory 2.1.n: Unix Download, Test and Installation: PostfixDir
pow
     AD 4.4.3.4: The AD Power Function
     AD example 4.4.3.4.1: The AD Power Function: Example and Test
     Base require 4.7.j: AD<Base> Requirements for Base Type: pow
     int 4.4.3.4.2: The Pow Integer Exponent: Example and Test
     integer 6.10: The Integer Power Function
prefix
     configure directory 2.1.f: Unix Download, Test and Installation: PrefixDir
preprocessor
     symbol 7: Preprocessor Definitions Used by CppAD
     symbol CppAD e: cppad-20090131.0: A Package for Differentiation of C++ Algorithms: Preprocessor Symbols
     symbols 9.1.j.a: Frequently Asked Questions and Answers: Namespace.Test Vector Preprocessor Symbol
print
     example forward mode 4.3.4.1: Printing During Forward Mode: Example and Test
     forward mode 4.3.4: Printing AD Values During Forward Mode
     forward mode 2.2.g: Windows Download and Test: Printing During Forward Mode
     forward mode 2.1.l: Unix Download, Test and Installation: --with-PrintFor
problem
     using CppAD 9.6: Known Bugs and Problems Using CppAD
profile
     cppad speed 2.1.k.c: Unix Download, Test and Installation: --with-Speed.profile
program
     OpenMP example 5.9.1.3: Sum of 1/i Main Program
     OpenMP example 5.9.1.2: Multi-Threaded Newton's Method Main Program
     speed example 8.2.2: Program That Runs the Speed Examples
programming
     nonlinear 8.1.1: Nonlinear Programming Using the CppAD Interface to Ipopt
     nonlinear example 8.1.1.2: Nonlinear Programming Using CppAD and Ipopt: Example and Test
push_back
     CppAD vector 6.23.f: The CppAD::vector Template Class: push_back
push_vector
     CppAD 6.23.g: The CppAD::vector Template Class: push_vector
Q
quotient
     AD example 4.4.1.3.4: AD Binary Division: Example and Test
R
Range
     ADFun 5.5.1: ADFun Sequence Properties: Example and Test
     ADFun 5.5: ADFun Sequence Properties
RevSparseHes 5.6.3.3.1: Reverse Mode Hessian Sparsity: Example and Test
             5.6.3.3: Hessian Sparsity Pattern: Reverse Mode
RevSparseJac 5.6.3.2.1: Reverse Mode Jacobian Sparsity: Example and Test
             5.6.3.2: Jacobian Sparsity Pattern: Reverse Mode
Romberg
     example 6.14.1: One Dimensional Romberg Integration: Example and Test
     example 6.13.1: One Dimensional Romberg Integration: Example and Test
     Integrate 6.13: One Dimensional Romberg Integration
     multi-dimensional integrate 6.14: Multi-dimensional Romberg Integration
Rosen34 6.16: A 3rd and 4th Order Rosenbrock ODE Solver
     example 6.16.1: Rosen34: Example and Test
Rosenbrock
     ODE 6.16: A 3rd and 4th Order Rosenbrock ODE Solver
Runge
     ODE 6.15: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
Runge45 6.15: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
     example 6.15.1: Runge45: Example and Test
random
     uniform vector 9.2.2.1: Simulate a [0,1] Uniform Random Variate
re-tape
     example 5.6.1.5.1: CompareChange and Re-Tape: Example and Test
real() 4.7.1.2: Not Complex Differentiable: Example and Test
realistic
     example 8.1: General Examples
record
     AD index 4.6: AD Vectors that Record Index Operations
     avoid 9.1.f: Frequently Asked Questions and Answers: Independent Variables
     example 4.3.1.1: Convert From AD to its Base Type: Example and Test
recording
     abort 5.4.1: Abort Current Recording: Example and Test
     abort 5.4: Abort Recording of an Operation Sequence
     start 5.1: Declare Independent Variables and Start Recording
     stop 5.3: Stop Recording and Store Operation Sequence
     stop tape 5.2: Construct an ADFun Object and Stop Recording
reference
     VecAD<Base> 4.6.d: AD Vectors that Record Index Operations: VecAD<Base>::reference
relative
     difference 6.2: Determine if Two Values Are Nearly Equal
replace
     error handler 6.1: Replacing the CppAD Error Handler
representation
     ipopt_cppad_nlp ode 8.1.1.3.4: ipopt_cppad_nlp ODE Problem Representation
require
     Base type 4.7: AD<Base> Requirements for Base Type
resize
     vector 6.7.i: Definition of a Simple Vector: Resize
retape
     interpolate 4.4.5.3: Interpolation With Retaping: Example and Test
     interpolate 4.4.5.2: Interpolation With Out Retaping: Example and Test
reverse 9.1.i: Frequently Asked Questions and Answers: Mode: Forward or Reverse
     acos theory 9.3.2.7: Arccosine Function Reverse Mode Theory
     any order 5.6.2.3.1: Any Order Reverse Mode: Example and Test
     any order mode 5.6.2.3: Any Order Reverse Mode
     asin theory 9.3.2.6: Arcsine Function Reverse Mode Theory
     atan theory 9.3.2.5: Arctangent Function Reverse Mode Theory
     cos 9.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
     cosh 9.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
     easy driver 5.7: First and Second Derivatives: Easy Drivers
     exp theory 9.3.2.1: Exponential Function Reverse Mode Theory
     exp_2 3.2.7.1: exp_2: Verify Second Order Reverse Sweep
     exp_2 3.2.5.1: exp_2: Verify First Order Reverse Sweep
     exp_eps 3.3.7.1: exp_eps: Verify Second Order Reverse Sweep
     exp_eps 3.3.5.1: exp_eps: Verify First Order Reverse Sweep
     first order 5.6.2.1.1: First Order Reverse Mode: Example and Test
     first order 3.3.5: exp_eps: First Order Reverse Sweep
     first order 3.2.5: exp_2: First Order Reverse Mode
     first order mode 5.6.2.1: First Order Reverse Mode
     log theory 9.3.2.2: Logarithm Function Reverse Mode Theory
     mode example 3.3.7: exp_eps: Second Order Reverse Sweep
     mode example 3.3.5: exp_eps: First Order Reverse Sweep
     mode example 3.2.7: exp_2: Second Order Reverse Mode
     mode example 3.2.5: exp_2: First Order Reverse Mode
     second order 5.6.2.2.1: Second Order Reverse Mode: Example and Test
     second order 3.3.7: exp_eps: Second Order Reverse Sweep
     second order 3.2.7: exp_2: Second Order Reverse Mode
     second order mode 5.6.2.2: Second Order Reverse Mode
     sin 9.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
     sinh 9.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
     sparse Hessian 5.6.3.3: Hessian Sparsity Pattern: Reverse Mode
     sparse Jacobian 5.6.3.2: Jacobian Sparsity Pattern: Reverse Mode
     sqrt theory 9.3.2.3: Square Root Function Reverse Mode Theory
rpm
     cppad.spec 2.1.b: Unix Download, Test and Installation: RPM
run
     exp_apx test 3.4: Run the exp_2 and exp_eps Tests
S
Sacado
     unix 2.1.q: Unix Download, Test and Installation: SacadoDir
Size
     ADFun deprecated 5.10.f: ADFun Object Deprecated Member Functions: Size
SparseHessian 5.7.8: Sparse Hessian: Easy Driver
SparseJacobian 5.7.7: Sparse Jacobian: Easy Driver
SpeedTest 6.4: Run One Speed Test and Print Results
     example 6.4.1: Example Use of SpeedTest
sacado
     link_det_lu 9.2.7.2.b: Sacado Speed: Gradient of Determinant Using Lu Factorization: Implementation
     speed lu 9.2.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization
     speed minor 9.2.7.1: Sacado Speed: Gradient of Determinant by Minor Expansion
     speed ode gradient 9.2.7.3: Sacado Speed: Gradient of Ode Solution
     speed polynomial 9.2.7.4: Sacado Speed: Second Derivative of a Polynomial
     speed test 9.2.7: Speed Test Derivatives Using Sacado
second
     derivative 5.7.4: Hessian: Easy Driver
     order exp_2 3.2.6.1: exp_2: Verify Second Order Forward Sweep
     order expansion 3.3.6.a: exp_eps: Second Order Forward Mode: Second Order Expansion
     order expansion 3.2.6.a: exp_2: Second Order Forward Mode: Second Order Expansion
     order partial driver 5.7.6: Reverse Mode Second Partial Derivative Driver
     order partial driver 5.7.5: Forward Mode Second Partial Derivative Driver
     order reverse 3.3.7: exp_eps: Second Order Reverse Sweep
     order reverse 3.2.7: exp_2: Second Order Reverse Mode
     order reverse mode 5.6.2.2: Second Order Reverse Mode
     partial 5.7.6.1: Second Partials Reverse Driver: Example and Test
     partial 5.7.5.1: Subset of Second Order Partials: Example and Test
sequence 9.4.g.b: Glossary: Operation.Sequence
     equal operation 4.5.5: Check if Equal and Correspond to Same Operation Sequence
     example operation 3.3.3: exp_eps: Operation Sequence and Zero Order Forward Sweep
     example operation 3.2.3: exp_2: Operation Sequence and Zero Order Forward Mode
     operation 9.7.i: The CppAD Wish List: Operation Sequence
     operation abort 5.4: Abort Recording of an Operation Sequence
     operation store 5.3: Stop Recording and Store Operation Sequence
     optimize operations 9.7.j.c: The CppAD Wish List: Optimization.Remove Operations From Tape
simple
     example 3.1: A Simple Program Using CppAD to Compute Derivatives
     vector 6.7: Definition of a Simple Vector
     vector check 6.8: Check Simple Vector Concept
     vector example 6.7.1: Simple Vector Template Class: Example and Test
simulate
     ode measurement 8.1.1.3.3: Simulating ODE Measurement Values
sin
     AD 4.4.2: AD Standard Math Unary Functions
     AD example 4.4.2.9: The AD sin Function: Example and Test
     float and double 6.22: Float and Double Standard Math Unary Functions
     forward 9.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
     reverse 9.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
sinh
     AD 4.4.2: AD Standard Math Unary Functions
     AD example 4.4.2.10: The AD sinh Function: Example and Test
     float and double 6.22: Float and Double Standard Math Unary Functions
     forward 9.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
     reverse 9.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
size
     vector 6.7.h: Definition of a Simple Vector: Size
     vector constructor 6.7.d: Definition of a Simple Vector: Sizing Constructor
size_var
     ADFun 5.5.1: ADFun Sequence Properties: Example and Test
     ADFun 5.5: ADFun Sequence Properties
solve
     Lu factor 6.21: LU Factorization of A Square Matrix and Stability Calculation
     Lu factor 6.12.2: LU Factorization of A Square Matrix
     linear equation 8.2.3: Lu Factor and Solve with Recorded Pivoting
     linear equation 6.12.1: Compute Determinant and Solve Linear Equations
     ODE 6.16: A 3rd and 4th Order Rosenbrock ODE Solver
     ODE 6.15: An Embedded 4th and 5th Order Runge-Kutta ODE Solver
source
     det_33 9.2.2.5.1: Source: det_33
     det_by_lu 9.2.2.4.2: Source: det_by_lu
     det_by_minor 9.2.2.3.2: Source: det_by_minor
     det_grad_33 9.2.2.6.1: Source: det_grad_33
     det_of_minor 9.2.2.2.2: Source: det_of_minor
     ipopt_cppad_nlp ode example 8.1.1.3.5: ipopt_cppad_nlp ODE Example Source Code
     LuFactor 6.12.2.2: Source: LuFactor
     LuInvert 6.12.3.2: Source: LuInvert
     LuSolve 6.12.1.2: Source: LuSolve
     multi_newton 5.9.1.2.2: OpenMP Multi-Threading Newton's Method Source Code
     ode_evaluate 9.2.2.7.2: Source: ode_evaluate
     Poly 6.11.2: Source: Poly
     sparse_evaluate 9.2.2.8.2: Source: sparse_evaluate
     uniform_01 9.2.2.1.1: Source: uniform_01
sparse
     Hessian example 5.7.8.1: Sparse Hessian: Example and Test
     Jacobian example 5.7.7.1: Sparse Jacobian: Example and Test
     Hessian speed adolc 9.2.4.5: Adolc Speed: Sparse Hessian
     Hessian speed cppad 9.2.5.5: CppAD Speed: Sparse Hessian
     Hessian speed double 9.2.3.5: Double Speed: Sparse Hessian
     reverse Hessian 5.6.3.3: Hessian Sparsity Pattern: Reverse Mode
     reverse Jacobian 5.6.3.2: Jacobian Sparsity Pattern: Reverse Mode
     speed test 9.2.1.4: Speed Testing Sparse Hessian
sparse_evaluate
     example 9.2.2.8.1: sparse_evaluate: Example and test
     function 9.2.2.8: Evaluate a Function That Has a Sparse Hessian
     source 9.2.2.8.2: Source: sparse_evaluate
sparsity 9.4.i: Glossary: Sparsity Pattern
     forward example 5.6.3.1.1: Forward Mode Jacobian Sparsity: Example and Test
     forward Jacobian 5.6.3.1: Jacobian Sparsity Pattern: Forward Mode
     Hessian 5.6.3.3.1: Reverse Mode Hessian Sparsity: Example and Test
     pattern 9.4.i: Glossary: Sparsity Pattern
     pattern 5.6.3: Calculating Sparsity Patterns
     reverse example 5.6.3.2.1: Reverse Mode Jacobian Sparsity: Example and Test
speed 9.1.k: Frequently Asked Questions and Answers: Speed
     adolc lu 9.2.4.2: Adolc Speed: Gradient of Determinant Using Lu Factorization
     adolc minor 9.2.4.1: Adolc Speed: Gradient of Determinant by Minor Expansion
     adolc polynomial 9.2.4.4: Adolc Speed: Second Derivative of a Polynomial
     adolc sparse Hessian 9.2.4.5: Adolc Speed: Sparse Hessian
     avoid taping 9.1.f: Frequently Asked Questions and Answers: Independent Variables
     cppad lu 9.2.5.2: CppAD Speed: Gradient of Determinant Using Lu Factorization
     cppad minor 9.2.5.1: CppAD Speed: Gradient of Determinant by Minor Expansion
     cppad ode gradient 9.2.5.3: CppAD Speed: Gradient of Ode Solution
     cppad polynomial 9.2.5.4: CppAD Speed: Second Derivative of a Polynomial
     cppad sparse Hessian 9.2.5.5: CppAD Speed: Sparse Hessian
     cppad test 2.2.h: Windows Download and Test: CppAD Speed Test
     cppad test 2.1.k.a: Unix Download, Test and Installation: --with-Speed.cppad
     double lu 9.2.3.2.a: Double Speed: Determinant Using Lu Factorization: Specifications
     double minor 9.2.3.1: Double Speed: Determinant by Minor Expansion
     double ode 9.2.3.3: Double Speed: Ode Solution
     double polynomial 9.2.3.4: Double Speed: Evaluate a Polynomial
     double sparse Hessian 9.2.3.5: Double Speed: Sparse Hessian
     double test 2.2.i: Windows Download and Test: Double Speed Test
     double test 2.1.k.b: Unix Download, Test and Installation: --with-Speed.double
     example program 8.2.2: Program That Runs the Speed Examples
     fadbad lu 9.2.6.2: Fadbad Speed: Gradient of Determinant Using Lu Factorization
     fadbad minor 9.2.6.1: Fadbad Speed: Gradient of Determinant by Minor Expansion
     fadbad polynomial 9.2.6.4: Fadbad Speed: Second Derivative of a Polynomial
     profile cppad 2.1.k.c: Unix Download, Test and Installation: --with-Speed.profile
     sacado lu 9.2.7.2: Sacado Speed: Gradient of Determinant Using Lu Factorization
     sacado minor 9.2.7.1: Sacado Speed: Gradient of Determinant by Minor Expansion
     sacado ode gradient 9.2.7.3: Sacado Speed: Gradient of Ode Solution
     sacado polynomial 9.2.7.4: Sacado Speed: Second Derivative of a Polynomial
     test 9.2: Speed Test Routines
     test adolc 9.2.4: Speed Test Derivatives Using Adolc
     test cppad 9.2.5: Speed Test Derivatives Using CppAD
     test cppad 9.2.1: Speed Testing Main Program
     test det_lu 9.2.1.1: Speed Testing Gradient of Determinant Using Lu Factorization
     test det_minor 9.2.1.2: Speed Testing Gradient of Determinant by Minor Expansion
     test double 9.2.3: Speed Test Functions in Double
     test fadbad 9.2.6: Speed Test Derivatives Using Fadbad
     test ode 9.2.1.5: Speed Testing Gradient of Ode Solution
     test polynomial 9.2.1.3: Speed Testing Second Derivative of a Polynomial
     test sacado 9.2.7: Speed Test Derivatives Using Sacado
     test sparse 9.2.1.4: Speed Testing Sparse Hessian
     test unix 2.1.k: Unix Download, Test and Installation: --with-Speed
     test windows 9.2.b: Speed Test Routines: Windows
     utility 9.2.2: Speed Testing Utilities
     utility example 2.2.j: Windows Download and Test: Speed Utility Example
     utility example 2.1.k.d: Unix Download, Test and Installation: --with-Speed.example
speed_test 6.3: Run One Speed Test and Return Results
     example 6.3.1: speed_test: Example and test
sqrt
     AD 4.4.2: AD Standard Math Unary Functions
     AD example 4.4.2.11: The AD sqrt Function: Example and Test
     float and double 6.22: Float and Double Standard Math Unary Functions
     forward theory 9.3.1.3: Square Root Function Forward Taylor Polynomial Theory
     reverse theory 9.3.2.3: Square Root Function Reverse Mode Theory
standard
     AD math unary 4.4.2: AD Standard Math Unary Functions
     math function 9.1.j.b: Frequently Asked Questions and Answers: Namespace.Using
     math unary 6.22: Float and Double Standard Math Unary Functions
start
     recording 5.1: Declare Independent Variables and Start Recording
     using CppAD 3.1: A Simple Program Using CppAD to Compute Derivatives
status
     test return 2.1.e: Unix Download, Test and Installation: Testing Return Status
std::vector
     unix 2.1.m: Unix Download, Test and Installation: --with-stdvector
stiff
     ODE 6.16: A 3rd and 4th Order Rosenbrock ODE Solver
     Ode 6.18: An Arbitrary Order Gear Method
     ode 8.1.7: A Stiff Ode: Example and Test
storage
     tape 9.1.l: Frequently Asked Questions and Answers: Tape Storage: Disk or Memory
stream
     AD output 4.3.3: AD Output Stream Operator
subtract
     AD example 4.4.1.3.2: AD Binary Subtraction: Example and Test
     binary operator 4.4.1.3: AD Binary Arithmetic Operators
     computed assignment 4.4.1.4: AD Computed Assignment Operators
subversion
     download 2.1.1: Using Subversion To Download Source Code
symbol
     preprocessor 7: Preprocessor Definitions Used by CppAD
     preprocessor CppAD e: cppad-20090131.0: A Package for Differentiation of C++ Algorithms: Preprocessor Symbols
symbols
     preprocessor 9.1.j.a: Frequently Asked Questions and Answers: Namespace.Test Vector Preprocessor Symbol
T
Taylor
     ODE 8.1.8: Taylor's Ode Solver: An Example and Test
     ODE Adolc 8.1.9: Using Adolc with Taylor's Ode Solver: An Example and Test
TrackCount 6.24.m: Routines That Track Use of New and Delete: TrackCount
     OpenMP 6.24.m.c: Routines That Track Use of New and Delete: TrackCount.OpenMP
TrackDelVec 6.24.k: Routines That Track Use of New and Delete: TrackDelVec
TrackExtend 6.24.l: Routines That Track Use of New and Delete: TrackExtend
TrackNewDel
     OpenMP 6.24.f.a: Routines That Track Use of New and Delete: oldptr.OpenMP
TrackNewVec 6.24.j: Routines That Track Use of New and Delete: TrackNewVec
tan
     AD 4.4.2: AD Standard Math Unary Functions
     AD inverse 4.4.3.2: AD Two Argument Inverse Tangent Function
     example 4.4.2.12: The AD tan Function: Example and Test
     float and double 6.22: Float and Double Standard Math Unary Functions
tanh
     AD 4.4.2: AD Standard Math Unary Functions
     example 4.4.2.13: The AD tanh Function: Example and Test
     float and double 6.22: Float and Double Standard Math Unary Functions
tape 9.4.j: Glossary: Tape
     AD index 4.6: AD Vectors that Record Index Operations
     abort recording 5.4: Abort Recording of an Operation Sequence
     array index operation 4.4.5.1: Taping Array Index Operation: Example and Test
     avoid 9.1.f: Frequently Asked Questions and Answers: Independent Variables
     interpolate 4.4.5.3: Interpolation With Retaping: Example and Test
     interpolate 4.4.5.2: Interpolation Without Retaping: Example and Test
     optimize 9.7.j.c: The CppAD Wish List: Optimization.Remove Operations From Tape
     stop recording 5.3: Stop Recording and Store Operation Sequence
     stop recording 5.2: Construct an ADFun Object and Stop Recording
     storage 9.1.l: Frequently Asked Questions and Answers: Tape Storage: Disk or Memory
taping
     Value during 4.3.5.1: Convert an AD Variable to a Parameter: Example and Test
tar
     CppAD file 2.1.c.c: Unix Download, Test and Installation: Download.Unix Tar Files
taylor 9.4.k: Glossary: Taylor Coefficient
taylor_size
     ADFun deprecated 5.10.g: ADFun Object Deprecated Member Functions: taylor_size
template
     CppAD vector class 6.23: The CppAD::vector Template Class
     numerical C++ library 6: The CppAD General Purpose Library
     polynomial derivative 6.11: Evaluate a Polynomial or its Derivative
     simple vector class 6.7: Definition of a Simple Vector
test 2: CppAD Download, Test, and Installation Instructions
     AD acos 4.4.2.1: The AD acos Function: Example and Test
     AD add 4.4.1.3.1: AD Binary Addition: Example and Test
     AD asin 4.4.2.2: The AD asin Function: Example and Test
     AD assignment 4.2.3: AD Assignment Operator: Example and Test
     AD atan 4.4.2.3: The AD atan Function: Example and Test
     AD atan2 4.4.3.2.1: The AD atan2 Function: Example and Test
     AD bool 4.5.3.1: AD Boolean Functions: Example and Test
     AD compare 4.5.1.1: AD Binary Comparison Operators: Example and Test
     AD computed assignment add 4.4.1.4.1: AD Computed Assignment Addition: Example and Test
     AD computed assignment divide 4.4.1.4.4: AD Computed Assignment Division: Example and Test
     AD computed assignment multiply 4.4.1.4.3: AD Computed Assignment Multiplication: Example and Test
     AD computed assignment subtract 4.4.1.4.2: AD Computed Assignment Subtraction: Example and Test
     AD cos 4.4.2.4: The AD cos Function: Example and Test
     AD cosh 4.4.2.5: The AD cosh Function: Example and Test
     AD divide 4.4.1.3.4: AD Binary Division: Example and Test
     AD exp 4.4.2.6: The AD exp Function: Example and Test
     AD log 4.4.2.7: The AD log Function: Example and Test
     AD log10 4.4.2.8: The AD log10 Function: Example and Test
     AD multiply 4.4.1.3.3: AD Binary Multiplication: Example and Test
     AD output 4.3.3.1: AD Output Operator: Example and Test
     AD pow 4.4.3.4.1: The AD Power Function: Example and Test
     AD sin 4.4.2.9: The AD sin Function: Example and Test
     AD sinh 4.4.2.10: The AD sinh Function: Example and Test
     AD sqrt 4.4.2.11: The AD sqrt Function: Example and Test
     ADFun 5.8.1: ADFun Check and Re-Tape: Example and Test
     ADFun default constructor 5.7.4.2: Hessian of Lagrangian and ADFun Default Constructor: Example and Test
     abort 5.4.1: Abort Current Recording: Example and Test
     abs 4.4.3.1.1: AD Absolute Value Function: Example and Test
     adolc speed 9.2.4: Speed Test Derivatives Using Adolc
     BenderQuad 6.20.1: BenderQuad: Example and Test
     CompareChange 5.6.1.5.1: CompareChange and Re-Tape: Example and Test
     CondExp 4.4.4.1: Conditional Expressions: Example and Test
     CppAD::vector 6.23.1: CppAD::vector Template Class: Example and Test
     CppAD::vectorBool 6.23.2: CppAD::vectorBool Class: Example and Test
     check NumericType 6.6.1: The CheckNumericType Function: Example and Test
     check SimpleVector 6.8.1: The CheckSimpleVector Function: Example and Test
     complex 6.12.1.1: LuSolve With Complex Arguments: Example and Test
     complex polynomial 4.7.1.1: Complex Polynomial: Example and Test
     construct from base 4.2.2: AD Constructor From Base Type: Example and Test
     copy AD object 4.2.1: AD Copy Constructor: Example and Test
     cppad speed 9.2.5: Speed Test Derivatives Using CppAD
     cppad speed 9.2.1: Speed Testing Main Program
     cppad speed 2.2.h: Windows Download and Test: CppAD Speed Test
     cppad speed 2.1.k.a: Unix Download, Test and Installation: --with-Speed.cppad
     Dependent 5.8.1: ADFun Check and Re-Tape: Example and Test
     Domain 5.5.1: ADFun Sequence Properties: Example and Test
     default AD construct 4.1.1: Default AD Constructor: Example and Test
     delete 6.24.1: Tracking Use of New and Delete: Example and Test
     derivative 5.7.3.1: First Order Derivative Driver: Example and Test
     det_lu speed 9.2.1.1: Speed Testing Gradient of Determinant Using Lu Factorization
     det_minor speed 9.2.1.2: Speed Testing Gradient of Determinant by Minor Expansion
     det_of_minor 9.2.2.2.1: Determinant of a Minor: Example and Test
     determinant by lu 9.2.2.4.1: Determinant Using Lu Factorization: Example and Test
     determinant by minors 9.2.2.3.1: Determinant Using Expansion by Minors: Example and Test
     double speed 9.2.3: Speed Test Functions in Double
     double speed 2.2.i: Windows Download and Test: Double Speed Test
     double speed 2.1.k.b: Unix Download, Test and Installation: --with-Speed.double
     EqualOpSeq 4.5.5.1: EqualOpSeq: Example and Test
     erf 4.4.3.3.1: The AD erf Function: Example and Test
     error handler 6.1.1: Replacing The CppAD Error Handler: Example and Test
     exception 9.1.e: Frequently Asked Questions and Answers: Exceptions
     exp_2 3.2.2: exp_2: Test
     exp_apx main 3.4: Run the exp_2 and exp_eps Tests
     exp_eps 3.3.2: exp_eps: Test of exp_eps
     Forward 5.6.1.7: Forward Mode: Example and Test
     FunCheck 5.8.1: ADFun Check and Re-Tape: Example and Test
     fadbad speed 9.2.6: Speed Test Derivatives Using Fadbad
     first order reverse 5.6.2.1.1: First Order Reverse Mode: Example and Test
     gradient 8.1.6: Gradient of Determinant Using LU Factorization: Example and Test
     gradient 8.1.5: Gradient of Determinant Using Expansion by Minors: Example and Test
     gradient 8.1.4: Gradient of Determinant Using Lu Factorization: Example and Test
     gradient 8.1.3: Gradient of Determinant Using Expansion by Minors: Example and Test
     Hessian 5.7.4.1: Hessian: Example and Test
     Hessian of Lagrangian 5.7.4.2: Hessian of Lagrangian and ADFun Default Constructor: Example and Test
     Independent 5.1.1: Independent and ADFun Constructor: Example and Test
     Integer 4.3.2.1: Convert From AD to Integer: Example and Test
     interpreter 8.1.10: Example Differentiating a Stack Machine Interpreter
     Jacobian 5.7.1.1: Jacobian: Example and Test
     LU 8.1.6: Gradient of Determinant Using LU Factorization: Example and Test
     Lu 8.1.4: Gradient of Determinant Using Lu Factorization: Example and Test
     Lu record pivot 8.2.3.1: Lu Factor and Solve With Recorded Pivoting: Example and Test
     LuFactor 6.12.2.1: LuFactor: Example and Test
     LuInvert 6.12.3.1: LuInvert: Example and Test
     LuRatio 6.21.1: LuRatio: Example and Test
     LuSolve 6.12.1.1: LuSolve With Complex Arguments: Example and Test
     minors expansion 8.1.5: Gradient of Determinant Using Expansion by Minors: Example and Test
     minors expansion 8.1.3: Gradient of Determinant Using Expansion by Minors: Example and Test
     NearEqual 6.2.1: NearEqual Function: Example and Test
     NearEqualExt 4.5.2.1: Compare AD with Base Objects: Example and Test
     NumericType 6.5.1: The NumericType: Example and Test
     nan 6.9.1: nan: Example and Test
     new 6.24.1: Tracking Use of New and Delete: Example and Test
     not complex differentiable 4.7.1.2: Not Complex Differentiable: Example and Test
     ODE 8.1.8: Taylor's Ode Solver: An Example and Test
     OdeErrControl 6.17.2: OdeErrControl: Example and Test Using Maxabs Argument
     OdeErrControl 6.17.1: OdeErrControl: Example and Test
     OdeGear 6.18.1: OdeGear: Example and Test
     OdeGearControl 6.19.1: OdeGearControl: Example and Test
     ode speed 9.2.1.5: Speed Testing Gradient of Ode Solution
     ode_evaluate 9.2.2.7.1: ode_evaluate: Example and test
     Parameter 5.5.1: ADFun Sequence Properties: Example and Test
     Parameter 4.5.4.1: AD Parameter and Variable Functions: Example and Test
     partial 5.7.2.1: First Order Partial Driver: Example and Test
     polynomial 6.11.1: Polynomial Evaluation: Example and Test
     polynomial speed 9.2.1.3: Speed Testing Second Derivative of a Polynomial
     pow int 4.4.3.4.2: The Pow Integer Exponent: Example and Test
     Range 5.5.1: ADFun Sequence Properties: Example and Test
     Romberg 6.14.1: One Dimensional Romberg Integration: Example and Test
     Romberg 6.13.1: One Dimensional Romberg Integration: Example and Test
     Rosen34 6.16.1: Rosen34: Example and Test
     Runge45 6.15.1: Runge45: Example and Test
     re-tape 5.6.1.5.1: CompareChange and Re-Tape: Example and Test
     return status 2.1.e: Unix Download, Test and Installation: Testing Return Status
     reverse any order 5.6.2.3.1: Any Order Reverse Mode: Example and Test
     sacado speed 9.2.7: Speed Test Derivatives Using Sacado
     second order reverse 5.6.2.2.1: Second Order Reverse Mode: Example and Test
     second partial 5.7.6.1: Second Partials Reverse Driver: Example and Test
     second partial 5.7.5.1: Subset of Second Order Partials: Example and Test
     simple vector 6.7.1: Simple Vector Template Class: Example and Test
     size_var 5.5.1: ADFun Sequence Properties: Example and Test
     sparse Hessian 5.7.8.1: Sparse Hessian: Example and Test
     sparse Jacobian 5.7.7.1: Sparse Jacobian: Example and Test
     sparse speed 9.2.1.4: Speed Testing Sparse Hessian
     sparse_evaluate 9.2.2.8.1: sparse_evaluate: Example and test
     sparsity forward 5.6.3.1.1: Forward Mode Jacobian Sparsity: Example and Test
     sparsity Hessian 5.6.3.3.1: Reverse Mode Hessian Sparsity: Example and Test
     sparsity reverse 5.6.3.2.1: Reverse Mode Jacobian Sparsity: Example and Test
     speed 9.2: Speed Test Routines
     speed 6.4.1: Example Use of SpeedTest
     speed 6.3.1: speed_test: Example and test
     speed 6.4: Run One Speed Test and Print Results
     speed 6.3: Run One Speed Test and Return Results
     speed windows 9.2.b: Speed Test Routines: Windows
     stiff ode 8.1.7: A Stiff Ode: Example and Test
     subtract 4.4.1.3.2: AD Binary Subtraction: Example and Test
     tan 4.4.2.12: The AD tan Function: Example and Test
     tanh 4.4.2.13: The AD tanh Function: Example and Test
     unary minus 4.4.1.2.1: AD Unary Minus Operator: Example and Test
     unary plus 4.4.1.1.1: AD Unary Plus Operator: Example and Test
     unix 2.1.i: Unix Download, Test and Installation: --with-Example
     unix speed 2.1.k: Unix Download, Test and Installation: --with-Speed
     Value 4.3.1.1: Convert From AD to its Base Type: Example and Test
     Var2Par 4.3.5.1: Convert an AD Variable to a Parameter: Example and Test
     Variable 4.5.4.1: AD Parameter and Variable Functions: Example and Test
     VecAD 4.6.1: AD Vectors that Record Index Operations: Example and Test
     vector 8.4: Choosing The Vector Testing Template Class
     windows 2.2.e: Windows Download and Test: Examples and Testing
test more
     unix 2.1.j: Unix Download, Test and Installation: --with-TestMore
     windows 2.2.f: Windows Download and Test: More Correctness Testing
theory
     acos forward 9.3.1.7: Arccosine Function Forward Taylor Polynomial Theory
     acos reverse 9.3.2.7: Arccosine Function Reverse Mode Theory
     asin forward 9.3.1.6: Arcsine Function Forward Taylor Polynomial Theory
     asin reverse 9.3.2.6: Arcsine Function Reverse Mode Theory
     atan forward 9.3.1.5: Arctangent Function Forward Taylor Polynomial Theory
     atan reverse 9.3.2.5: Arctangent Function Reverse Mode Theory
     cos 9.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
     cos 9.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
     cosh 9.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
     cosh 9.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
     exp forward 9.3.1.1: Exponential Function Forward Taylor Polynomial Theory
     exp reverse 9.3.2.1: Exponential Function Reverse Mode Theory
     log forward 9.3.1.2: Logarithm Function Forward Taylor Polynomial Theory
     log reverse 9.3.2.2: Logarithm Function Reverse Mode Theory
     sin 9.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
     sin 9.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
     sinh 9.3.2.4: Trigonometric and Hyperbolic Sine and Cosine Reverse Theory
     sinh 9.3.1.4: Trigonometric and Hyperbolic Sine and Cosine Forward Theory
     sqrt forward 9.3.1.3: Square Root Function Forward Taylor Polynomial Theory
     sqrt reverse 9.3.2.3: Square Root Function Reverse Mode Theory
thread
     multiple 5.9: OpenMP Maximum Thread Number
times
     AD example 4.4.1.3.3: AD Binary Multiplication: Example and Test
     binary operator 4.4.1.3: AD Binary Arithmetic Operators
     computed assignment 4.4.1.4: AD Computed Assignment Operators
track
     new and delete 6.24: Routines That Track Use of New and Delete
type 9.4.e: Glossary: Base Type
     Base require 4.7: AD<Base> Requirements for Base Type
     numeric 6.5: Definition of a Numeric Type
U
unary
     AD bool 4.5.3: AD Boolean Functions
     AD math 4.4.2: AD Standard Math Unary Functions
     AD minus operator 4.4.1.2: AD Unary Minus Operator
     AD plus operator 4.4.1.1: AD Unary Plus Operator
     math 6.22: Float and Double Standard Math Unary Functions
unary minus
     example 4.4.1.2.1: AD Unary Minus Operator: Example and Test
unary plus
     example 4.4.1.1.1: AD Unary Plus Operator: Example and Test
uniform
     random vector 9.2.2.1: Simulate a [0,1] Uniform Random Variate
uniform_01 9.2.2.1: Simulate a [0,1] Uniform Random Variate
           9.2.1.e: Speed Testing Main Program: seed
     source 9.2.2.1.1: Source: uniform_01
unix
     CppAD install 2.1: Unix Download, Test and Installation
     download 2.1.c: Unix Download, Test and Installation: Download
     exp_apx 2.1.h.b: Unix Download, Test and Installation: --with-Introduction.exp_apx
     get_started 2.1.h.a: Unix Download, Test and Installation: --with-Introduction.get_started
     introduction 2.1.h: Unix Download, Test and Installation: --with-Introduction
     speed test 2.1.k: Unix Download, Test and Installation: --with-Speed
     test more 2.1.j: Unix Download, Test and Installation: --with-TestMore
use_VecAD
     ADFun 5.5: ADFun Sequence Properties
using
     namespace 9.1.j.b: Frequently Asked Questions and Answers: Namespace.Using
utility
     speed 9.2.2: Speed Testing Utilities
     speed example 2.2.j: Windows Download and Test: Speed Utility Example
     speed example 2.1.k.d: Unix Download, Test and Installation: --with-Speed.example
V
Value 4.3.1.1: Convert From AD to its Base Type: Example and Test
      4.3.1: Convert From an AD Type to its Base Type
     during taping 4.3.5.1: Convert an AD Variable to a Parameter: Example and Test
Var2Par 4.3.5.1: Convert an AD Variable to a Parameter: Example and Test
        4.3.5: Convert an AD Variable to a Parameter
Variable 4.5.4: Is an AD Object a Parameter or Variable
     example 4.5.4.1: AD Parameter and Variable Functions: Example and Test
VERSION
     preprocessor symbol 7: Preprocessor Definitions Used by CppAD
VecAD 4.6.1: AD Vectors that Record Index Operations: Example and Test
      4.6: AD Vectors that Record Index Operations
     convert to AD 4.2: AD Copy Constructor and Assignment Operator
VecAD<Base>::reference 4.6.d: AD Vectors that Record Index Operations: VecAD<Base>::reference
value
     AD absolute 4.4.3.1: AD Absolute Value Function
     obtain during taping 4.3.5: Convert an AD Variable to a Parameter
value_type
     vector 6.7.j: Definition of a Simple Vector: Value Type
variable 9.4.l: Glossary: Variable
         9.4.j.c: Glossary: Tape.Independent Variable
     convert to parameter 4.3.5: Convert an AD Variable to a Parameter
     independent 5.1: Declare Independent Variables and Start Recording
variables 9.4.j.d: Glossary: Tape.Dependent Variables
vector 9.4.f: Glossary: Elementary Vector
     [] CppAD 6.23.e: The CppAD::vector Template Class: Element Access
     AD index 4.6: AD Vectors that Record Index Operations
     CppAD 6.23.1: CppAD::vector Template Class: Example and Test
     CppAD push 6.23.g: The CppAD::vector Template Class: push_vector
     CppAD push_back 6.23.f: The CppAD::vector Template Class: push_back
     CppAD template class 6.23: The CppAD::vector Template Class
     simple 6.7.1: Simple Vector Template Class: Example and Test
     simple 6.7: Definition of a Simple Vector
     simple check 6.8: Check Simple Vector Concept
     test 8.4: Choosing The Vector Testing Template Class
     uniform random 9.2.2.1: Simulate a [0,1] Uniform Random Variate
vectorBool 6.23.j: The CppAD::vector Template Class: vectorBool
     CppAD 6.23.2: CppAD::vectorBool Class: Example and Test
version
     CppAD : cppad-20090131.0: A Package for Differentiation of C++ Algorithms
W
windows
     CppAD install 2.2: Windows Download and Test
     download 2.2.b: Windows Download and Test: Download
     speed test 9.2.b: Speed Test Routines: Windows
wish list 9.7: The CppAD Wish List
write
     AD 4.3.3: AD Output Stream Operator
Z
zero
     order exp_2 3.2.3.1: exp_2: Verify Zero Order Forward Sweep
     order exp_eps 3.3.3.1: exp_eps: Verify Zero Order Forward Sweep
     order expansion 3.2.3.b: exp_2: Operation Sequence and Zero Order Forward Mode: Zero Order Expansion
     order Forward 5.6.1.1: Zero Order Forward Mode: Function Values
     order forward 5.6.1.5: Comparison Changes During Zero Order Forward Mode
     order forward 3.3.3: exp_eps: Operation Sequence and Zero Order Forward Sweep
     order forward 3.2.3: exp_2: Operation Sequence and Zero Order Forward Mode
zip
     CppAD file 2.2.b: Windows Download and Test: Download

12: External Internet References
Reference    Location
../UWCopy040507.html    9.8.5.be: whats_new_05#06-25
_printable.htm    CppAD
_printable.xml    CppAD
cppad-20090131.0.cpl.tgz    2.1.c.c: InstallUnix#Download.Unix Tar Files
cppad-20090131.0.cpl.tgz    2.2.b.c: InstallWindows#Download.Unix Tar Files
cppad-20090131.0.gpl.tgz    2.1.c.c: InstallUnix#Download.Unix Tar Files
cppad-20090131.0.gpl.tgz    2.2.b.c: InstallWindows#Download.Unix Tar Files
cppad.htm    CppAD
cppad.xml    CppAD
http://cygwin.com/setup.html#naming    9.8.4.t: whats_new_06#11-30
http://en.wikipedia.org/wiki/Automatic_differentiation    b: CppAD#Introduction
http://list.coin-or.org/mailman/listinfo/cppad    9.1.b: Faq#Bugs
http://list.coin-or.org/mailman/listinfo/cppad    9.1.g: Faq#Math Functions
http://list.coin-or.org/pipermail/cppad/2006-February/000020.html    9.8.4.cs: whats_new_06#02-21
http://list.coin-or.org/pipermail/cppad/2006q4/000076.html    9.8.4.o: whats_new_06#12-07
http://projects.coin-or.org/CppAD/browser    9.8.5.m: whats_new_05#12-05
http://subversion.tigris.org/    2.1.1.b: subversion#Subversion
http://trilinos.sandia.gov/packages/sacado/    b: CppAD#Introduction
http://trilinos.sandia.gov/packages/sacado/    2.1.q: InstallUnix#SacadoDir
http://trilinos.sandia.gov/packages/sacado/    9.2.a: speed#Purpose
http://trilinos.sandia.gov/packages/sacado/    9.8.3.v: whats_new_07#10-22
http://valgrind.org/    9.8.3.bj: whats_new_07#02-03
http://valgrind.org/    9.8.4.ap: whats_new_06#08-17
http://www.7-zip.org    9.8.6.ae: whats_new_04#09-02
http://www.apl.washington.edu/people/professional_staff/goddard_r.html    9.8.7.y: whats_new_03#10-05
http://www.autodiff.org    b: CppAD#Introduction
http://www.boost.org    2.1.r: InstallUnix#BoostDir
http://www.boost.org/libs/numeric/ublas/doc/index.htm    b: CppAD#Introduction
http://www.boost.org/more/lib_guide.htm#Guidelines    9.7.l.a: WishList#Software Guidelines.Boost
http://www.coin-or.org/CppAD/    b: CppAD#Introduction
http://www.coin-or.org/CppAD/    8.1.1.b: ipopt_cppad_nlp#Purpose
http://www.coin-or.org/CppAD/    9.2.a: speed#Purpose
http://www.coin-or.org/CppAD/    9.8.5.l: whats_new_05#12-06
http://www.coin-or.org/CppAD/Doc/installunix.htm    2.1.c.b: InstallUnix#Download.Web Link
http://www.coin-or.org/CppAD/Doc/installwindows.htm    2.2.b.b: InstallWindows#Download.Web Link
http://www.coin-or.org/download/binary/CoinAll/CoinAll-1.2-VisualStudio.zip    8.1.1.1.a: ipopt_cppad_windows#Purpose
http://www.coin-or.org/foundation.html    b: CppAD#Introduction
http://www.coin-or.org/projects/Ipopt.xml    2.1.s: InstallUnix#IpoptDir
http://www.cygwin.com    9.2.b: speed#Windows
http://www.cygwin.com/ml/cygwin-apps/2005-06/msg00159.html    9.6.a.b: Bugs#gcc 3.4.4 -O2.Adolc
http://www.fadbad.com/    2.1.p: InstallUnix#FadbadDir
http://www.imm.dtu.dk/fadbad.html/    b: CppAD#Introduction
http://www.imm.dtu.dk/fadbad.html/    9.2.a: speed#Purpose
http://www.math.tu-dresden.de/~adol-c/    b: CppAD#Introduction
http://www.math.tu-dresden.de/~adol-c/    2.1.o: InstallUnix#AdolcDir
http://www.math.tu-dresden.de/~adol-c/    9.2.a: speed#Purpose
http://www.math.tu-dresden.de/~adol-c/    9.8.1.c: whats_new_09#01-18
http://www.math.tu-dresden.de/~adol-c/    9.8.6.cs: whats_new_04#01-22
http://www.mingw.org    9.2.b: speed#Windows
http://www.mingw.org    9.8.7.c: whats_new_03#12-22
http://www.opensource.org/licenses/cpl1.0.php    b: CppAD#Introduction
http://www.opensource.org/licenses/gpl-license.php    b: CppAD#Introduction
http://www.rpm.org/max-rpm/ch-rpm-file-format.html    9.8.4.t: whats_new_06#11-30
http://www.seanet.com/~bradbell/omhelp/    2.1.1.c: subversion#OMhelp
http://www.swig.org/    9.7.k: WishList#Scripting Languages
http://www.winzip.com    9.8.6.ae: whats_new_04#09-02
http://www.winzip.com/index.htm    2.2.b.c: InstallWindows#Download.Unix Tar Files
https://projects.coin-or.org/CppAD/browser/trunk/cppad.spec    2.1.b: InstallUnix#RPM
https://www.coin-or.org/projects/Ipopt    8.1.1.b: ipopt_cppad_nlp#Purpose
mailto:Jean-Pierre.Dussault@Usherbrooke.ca    9.8.5.bz: whats_new_05#02-24
mailto:magister@u.washington.edu    9.8.6.bl: whats_new_04#04-19
mailto:magister@u.washington.edu    9.8.7.y: whats_new_03#10-05