cppad-20171217: A Package for Differentiation of C++ Algorithms

a: Syntax
      # include <cppad/cppad.hpp>

b: Introduction
We refer to the step-by-step conversion of an algorithm that computes function values into an algorithm that computes derivative values as Algorithmic Differentiation (often referred to as Automatic Differentiation). Given a C++ algorithm that computes function values, CppAD generates an algorithm that computes the corresponding derivative values. A brief introduction to Algorithmic Differentiation can be found in wikipedia (http://en.wikipedia.org/wiki/Automatic_differentiation) . The web site autodiff.org (http://www.autodiff.org) is dedicated to research about, and promoting the use of, AD.
1. CppAD (http://www.coin-or.org/CppAD/) uses operator overloading to compute derivatives of algorithms defined in C++. It is distributed by the COIN-OR Foundation (http://www.coin-or.org/foundation.html) with the Eclipse Public License EPL-1.0 (http://www.opensource.org/licenses/EPL-1.0) or the GNU General Public License GPL-3.0 (http://www.opensource.org/licenses/GPL-3.0) . Testing and installation are supported for Unix, Microsoft, and Apple operating systems. Extensive user and developer documentation is included.
2. An AD of Base 12.4.g.b: operation sequence is stored as a 5: AD function object which can evaluate function values and derivatives. Arbitrary order 5.3: forward and 5.4: reverse mode derivative calculations can be performed on the operation sequence. Logical comparisons can be included in an operation sequence using AD 4.4.4: conditional expressions . Evaluation of user defined unary 4.4.5: discrete functions can also be included in the sequence of operations; i.e., functions that depend on the 12.4.k.c: independent variables but which have identically zero derivatives (e.g., a step function).
3. Derivatives of functions that are defined in terms of other derivatives can be computed using multiple levels of AD; see 10.2.10.1: mul_level.cpp for a simple example and 10.2.12: mul_level_ode.cpp for a more realistic example. To this end, CppAD can also be used with other AD types; for example see 10.2.13: mul_level_adolc_ode.cpp .
5. Includes a set of C++ 8: utilities that are useful for general operator overloaded numerical methods. The 10.5: testvector template vector class, which is used for extensive testing, can be replaced; for example, you can do your testing with the uBlas (http://www.boost.org/libs/numeric/ublas/doc/index.htm) template vector class.
6. See 12.7: whats_new for a list of recent extensions and bug fixes.
You can find out about other algorithmic differentiation tools and about algorithmic differentiation in general at the following web sites: wikipedia (http://en.wikipedia.org/wiki/Automatic_differentiation) , autodiff.org (http://www.autodiff.org) .

c: Example
The file 10.1: get_started.cpp contains an example and test of using CppAD to compute the derivative of a polynomial. There are many other 10: examples .

d: Include File
The following include directive
      # include <cppad/cppad.hpp>
includes the CppAD package for the rest of the current compilation unit.

e: Preprocessor Symbols
All the 6: preprocessor symbols used by CppAD begin with either CppAD or CPPAD_.

f: Namespace
All of the functions and objects defined by CppAD are in the CppAD namespace; for example, you can access the 4: AD types as
      size_t n = 2;
      CppAD::vector< CppAD::AD<Base> > x(n);
You can abbreviate access to one object or function using a command of the form
      using CppAD::AD;
      CppAD::vector< AD<Base> > x(n);
You can abbreviate access to all CppAD objects and functions with a command of the form
      using namespace CppAD;
      vector< AD<Base> > x(n);
If you include other namespaces in a similar manner, this can cause naming conflicts.

g: Contents

Input File: doc.omh
cppad-20171217: A Package for Differentiation of C++ Algorithms: CppAD
Using CMake to Configure CppAD: 2.2: cmake
Including the ColPack Sparsity Calculations: 2.2.2: colpack_prefix
ColPack: Sparse Jacobian Example and Test: 2.2.2.1: colpack_jac.cpp
ColPack: Sparse Jacobian Example and Test: 2.2.2.2: colpack_jacobian.cpp
ColPack: Sparse Hessian Example and Test: 2.2.2.3: colpack_hes.cpp
ColPack: Sparse Hessian Example and Test: 2.2.2.4: colpack_hessian.cpp
Including the Eigen Examples and Tests: 2.2.3: eigen_prefix
Including the cppad_ipopt Library and Tests: 2.2.5: ipopt_prefix
Checking the CppAD Examples and Tests: 2.3: cmake_check
An Introduction by Example to Algorithmic Differentiation: 3: Introduction
Second Order Exponential Approximation: 3.1: exp_2
exp_2: Implementation: 3.1.1: exp_2.hpp
exp_2: Test: 3.1.2: exp_2.cpp
exp_2: Operation Sequence and Zero Order Forward Mode: 3.1.3: exp_2_for0
exp_2: Verify Zero Order Forward Sweep: 3.1.3.1: exp_2_for0.cpp
exp_2: First Order Forward Mode: 3.1.4: exp_2_for1
exp_2: Verify First Order Forward Sweep: 3.1.4.1: exp_2_for1.cpp
exp_2: First Order Reverse Mode: 3.1.5: exp_2_rev1
exp_2: Verify First Order Reverse Sweep: 3.1.5.1: exp_2_rev1.cpp
exp_2: Second Order Forward Mode: 3.1.6: exp_2_for2
exp_2: Verify Second Order Forward Sweep: 3.1.6.1: exp_2_for2.cpp
exp_2: Second Order Reverse Mode: 3.1.7: exp_2_rev2
exp_2: Verify Second Order Reverse Sweep: 3.1.7.1: exp_2_rev2.cpp
An Epsilon Accurate Exponential Approximation: 3.2: exp_eps
exp_eps: Implementation: 3.2.1: exp_eps.hpp
exp_eps: Test of exp_eps: 3.2.2: exp_eps.cpp
exp_eps: Operation Sequence and Zero Order Forward Sweep: 3.2.3: exp_eps_for0
exp_eps: Verify Zero Order Forward Sweep: 3.2.3.1: exp_eps_for0.cpp
exp_eps: First Order Forward Sweep: 3.2.4: exp_eps_for1
exp_eps: Verify First Order Forward Sweep: 3.2.4.1: exp_eps_for1.cpp
exp_eps: First Order Reverse Sweep: 3.2.5: exp_eps_rev1
exp_eps: Verify First Order Reverse Sweep: 3.2.5.1: exp_eps_rev1.cpp
exp_eps: Second Order Forward Mode: 3.2.6: exp_eps_for2
exp_eps: Verify Second Order Forward Sweep: 3.2.6.1: exp_eps_for2.cpp
exp_eps: Second Order Reverse Sweep: 3.2.7: exp_eps_rev2
exp_eps: Verify Second Order Reverse Sweep: 3.2.7.1: exp_eps_rev2.cpp
Correctness Tests For Exponential Approximation in Introduction: 3.3: exp_apx.cpp
Conversion and I/O of AD Objects: 4.3: Convert
Convert From an AD Type to its Base Type: 4.3.1: Value
Convert From AD to its Base Type: Example and Test: 4.3.1.1: value.cpp
Convert From AD to Integer: 4.3.2: Integer
Convert From AD to Integer: Example and Test: 4.3.2.1: integer.cpp
Printing AD Values During Forward Mode: 4.3.6: PrintFor
Printing During Forward Mode: Example and Test: 4.3.6.1: print_for_cout.cpp
Print During Zero Order Forward Mode: Example and Test: 4.3.6.2: print_for_string.cpp
Convert an AD Variable to a Parameter: 4.3.7: Var2Par
Convert an AD Variable to a Parameter: Example and Test: 4.3.7.1: var2par.cpp
AD Arithmetic Operators and Compound Assignments: 4.4.1: Arithmetic
AD Unary Plus Operator: 4.4.1.1: UnaryPlus
AD Unary Plus Operator: Example and Test: 4.4.1.1.1: unary_plus.cpp
AD Unary Minus Operator: 4.4.1.2: UnaryMinus
AD Unary Minus Operator: Example and Test: 4.4.1.2.1: unary_minus.cpp
AD Binary Subtraction: Example and Test: 4.4.1.3.2: sub.cpp
AD Binary Multiplication: Example and Test: 4.4.1.3.3: mul.cpp
AD Binary Division: Example and Test: 4.4.1.3.4: div.cpp
AD Compound Assignment Operators: 4.4.1.4: compound_assign
AD Compound Assignment Subtraction: Example and Test: 4.4.1.4.2: sub_eq.cpp
AD Compound Assignment Multiplication: Example and Test: 4.4.1.4.3: mul_eq.cpp
AD Compound Assignment Division: Example and Test: 4.4.1.4.4: div_eq.cpp
The Unary Standard Math Functions: 4.4.2: unary_standard_math
Inverse Cosine Function: acos: 4.4.2.1: acos
The AD acos Function: Example and Test: 4.4.2.1.1: acos.cpp
Inverse Sine Function: asin: 4.4.2.2: asin
The AD asin Function: Example and Test: 4.4.2.2.1: asin.cpp
Inverse Tangent Function: atan: 4.4.2.3: atan
The AD atan Function: Example and Test: 4.4.2.3.1: atan.cpp
The Cosine Function: cos: 4.4.2.4: cos
The AD cos Function: Example and Test: 4.4.2.4.1: cos.cpp
The Hyperbolic Cosine Function: cosh: 4.4.2.5: cosh
The AD cosh Function: Example and Test: 4.4.2.5.1: cosh.cpp
The Exponential Function: exp: 4.4.2.6: exp
The AD exp Function: Example and Test: 4.4.2.6.1: exp.cpp
The Natural Logarithm Function: log: 4.4.2.7: log
The AD log Function: Example and Test: 4.4.2.7.1: log.cpp
The Base 10 Logarithm Function: log10: 4.4.2.8: log10
The AD log10 Function: Example and Test: 4.4.2.8.1: log10.cpp
The Sine Function: sin: 4.4.2.9: sin
The AD sin Function: Example and Test: 4.4.2.9.1: sin.cpp
The Hyperbolic Sine Function: sinh: 4.4.2.10: sinh
The AD sinh Function: Example and Test: 4.4.2.10.1: sinh.cpp
The Square Root Function: sqrt: 4.4.2.11: sqrt
The AD sqrt Function: Example and Test: 4.4.2.11.1: sqrt.cpp
The Tangent Function: tan: 4.4.2.12: tan
The AD tan Function: Example and Test: 4.4.2.12.1: tan.cpp
The Hyperbolic Tangent Function: tanh: 4.4.2.13: tanh
The AD tanh Function: Example and Test: 4.4.2.13.1: tanh.cpp
AD Absolute Value Functions: abs, fabs: 4.4.2.14: abs
AD Absolute Value Function: Example and Test: 4.4.2.14.1: fabs.cpp
The Inverse Hyperbolic Cosine Function: acosh: 4.4.2.15: acosh
The AD acosh Function: Example and Test: 4.4.2.15.1: acosh.cpp
The Inverse Hyperbolic Sine Function: asinh: 4.4.2.16: asinh
The AD asinh Function: Example and Test: 4.4.2.16.1: asinh.cpp
The Inverse Hyperbolic Tangent Function: atanh: 4.4.2.17: atanh
The AD atanh Function: Example and Test: 4.4.2.17.1: atanh.cpp
The Error Function: 4.4.2.18: erf
The AD erf Function: Example and Test: 4.4.2.18.1: erf.cpp
The Exponential Function Minus One: expm1: 4.4.2.19: expm1
The AD expm1 Function: Example and Test: 4.4.2.19.1: expm1.cpp
The Logarithm of One Plus Argument: log1p: 4.4.2.20: log1p
The AD log1p Function: Example and Test: 4.4.2.20.1: log1p.cpp
The Sign: sign: 4.4.2.21: sign
Sign Function: Example and Test: 4.4.2.21.1: sign.cpp
The Binary Math Functions: 4.4.3: binary_math
AD Two Argument Inverse Tangent Function: 4.4.3.1: atan2
The AD atan2 Function: Example and Test: 4.4.3.1.1: atan2.cpp
The AD Power Function: 4.4.3.2: pow
The AD Power Function: Example and Test: 4.4.3.2.1: pow.cpp
Absolute Zero Multiplication: 4.4.3.3: azmul
AD Absolute Zero Multiplication: Example and Test: 4.4.3.3.1: azmul.cpp
Conditional Expressions: Example and Test: 4.4.4.1: cond_exp.cpp
Taping Array Index Operation: Example and Test: 4.4.5.1: tape_index.cpp
Interpolation Without Retaping: Example and Test: 4.4.5.2: interp_onetape.cpp
Interpolation With Retaping: Example and Test: 4.4.5.3: interp_retape.cpp
Numeric Limits For an AD and Base Types: 4.4.6: numeric_limits
Numeric Limits: Example and Test: 4.4.6.1: num_limits.cpp
Checkpointing Functions: 4.4.7.1: checkpoint
Simple Checkpointing: Example and Test: 4.4.7.1.1: checkpoint.cpp
Atomic Operations and Multiple-Levels of AD: Example and Test: 4.4.7.1.2: atomic_mul_level.cpp
Checkpointing an ODE Solver: Example and Test: 4.4.7.1.3: checkpoint_ode.cpp
Checkpointing an Extended ODE Solver: Example and Test: 4.4.7.1.4: checkpoint_extended_ode.cpp
User Defined Atomic AD Functions: 4.4.7.2: atomic_base
Atomic Function Constructor: 4.4.7.2.1: atomic_ctor
Set Atomic Function Options: 4.4.7.2.2: atomic_option
Using AD Version of Atomic Function: 4.4.7.2.3: atomic_afun
Atomic Forward Mode: 4.4.7.2.4: atomic_forward
Atomic Forward: Example and Test: 4.4.7.2.4.1: atomic_forward.cpp
Atomic Reverse Mode: 4.4.7.2.5: atomic_reverse
Atomic Reverse: Example and Test: 4.4.7.2.5.1: atomic_reverse.cpp
Atomic Forward Jacobian Sparsity Patterns: 4.4.7.2.6: atomic_for_sparse_jac
Atomic Forward Jacobian Sparsity: Example and Test: 4.4.7.2.6.1: atomic_for_sparse_jac.cpp
Atomic Reverse Jacobian Sparsity Patterns: 4.4.7.2.7: atomic_rev_sparse_jac
Atomic Reverse Jacobian Sparsity: Example and Test: 4.4.7.2.7.1: atomic_rev_sparse_jac.cpp
Atomic Forward Hessian Sparsity Patterns: 4.4.7.2.8: atomic_for_sparse_hes
Atomic Forward Hessian Sparsity: Example and Test: 4.4.7.2.8.1: atomic_for_sparse_hes.cpp
Atomic Reverse Hessian Sparsity Patterns: 4.4.7.2.9: atomic_rev_sparse_hes
Atomic Reverse Hessian Sparsity: Example and Test: 4.4.7.2.9.1: atomic_rev_sparse_hes.cpp
Free Static Variables: 4.4.7.2.10: atomic_base_clear
Getting Started with Atomic Operations: Example and Test: 4.4.7.2.11: atomic_get_started.cpp
Atomic Euclidean Norm Squared: Example and Test: 4.4.7.2.12: atomic_norm_sq.cpp
Reciprocal as an Atomic Operation: Example and Test: 4.4.7.2.13: atomic_reciprocal.cpp
Atomic Sparsity with Set Patterns: Example and Test: 4.4.7.2.14: atomic_set_sparsity.cpp
Tan and Tanh as User Atomic Operations: Example and Test: 4.4.7.2.15: atomic_tangent.cpp
Atomic Eigen Matrix Multiply: Example and Test: 4.4.7.2.16: atomic_eigen_mat_mul.cpp
Atomic Eigen Matrix Multiply Class: 4.4.7.2.16.1: atomic_eigen_mat_mul.hpp
Atomic Eigen Matrix Inverse: Example and Test: 4.4.7.2.17: atomic_eigen_mat_inv.cpp
Atomic Eigen Matrix Inversion Class: 4.4.7.2.17.1: atomic_eigen_mat_inv.hpp
Atomic Eigen Cholesky Factorization: Example and Test: 4.4.7.2.18: atomic_eigen_cholesky.cpp
AD Theory for Cholesky Factorization: 4.4.7.2.18.1: cholesky_theory
Atomic Eigen Cholesky Factorization Class: 4.4.7.2.18.2: atomic_eigen_cholesky.hpp
User Atomic Matrix Multiply: Example and Test: 4.4.7.2.19: atomic_mat_mul.cpp
Matrix Multiply as an Atomic Operation: 4.4.7.2.19.1: atomic_mat_mul.hpp
Bool Valued Operations and Functions with AD Arguments: 4.5: BoolValued
AD Binary Comparison Operators: 4.5.1: Compare
AD Binary Comparison Operators: Example and Test: 4.5.1.1: compare.cpp
Compare AD and Base Objects for Nearly Equal: 4.5.2: NearEqualExt
Compare AD with Base Objects: Example and Test: 4.5.2.1: near_equal_ext.cpp
AD Boolean Functions: Example and Test: 4.5.3.1: bool_fun.cpp
Is an AD Object a Parameter or Variable: 4.5.4: ParVar
AD Parameter and Variable Functions: Example and Test: 4.5.4.1: par_var.cpp
Check if Two Values are Identically Equal: 4.5.5: EqualOpSeq
EqualOpSeq: Example and Test: 4.5.5.1: equal_op_seq.cpp
Required Base Class Member Functions: 4.7.1: base_member
Base Type Requirements for Conditional Expressions: 4.7.2: base_cond_exp
Base Type Requirements for Identically Equal Comparisons: 4.7.3: base_identical
Base Type Requirements for Ordered Comparisons: 4.7.4: base_ordered
Base Type Requirements for Standard Math Functions: 4.7.5: base_std_math
Base Type Requirements for Numeric Limits: 4.7.6: base_limits
Extending to_string To Another Floating Point Type: 4.7.7: base_to_string
Base Type Requirements for Hash Coding Values: 4.7.8: base_hash
Example AD<Base> Where Base Constructor Allocates Memory: 4.7.9.1: base_alloc.hpp
Using a User Defined AD Base Type: Example and Test: 4.7.9.2: base_require.cpp
Using Adolc with Multiple Levels of Taping: Example and Test: 4.7.9.3.1: mul_level_adolc.cpp
Enable use of AD<Base> where Base is float: 4.7.9.4: base_float.hpp
Enable use of AD<Base> where Base is double: 4.7.9.5: base_double.hpp
Enable use of AD<Base> where Base is std::complex<double>: 4.7.9.6: base_complex.hpp
Complex Polynomial: Example and Test: 4.7.9.6.1: complex_poly.cpp
Declare Independent Variables and Start Recording: 5.1.1: Independent
Independent and ADFun Constructor: Example and Test: 5.1.1.1: independent.cpp
Construct an ADFun Object and Stop Recording: 5.1.2: FunConstruct
ADFun Assignment: Example and Test: 5.1.2.1: fun_assign.cpp
Stop Recording and Store Operation Sequence: 5.1.3: Dependent
Abort Recording of an Operation Sequence: 5.1.4: abort_recording
Abort Current Recording: Example and Test: 5.1.4.1: abort_recording.cpp
ADFun Sequence Properties: Example and Test: 5.1.5.1: seq_property.cpp
First and Second Order Derivatives: Easy Drivers: 5.2: drivers
Jacobian: Driver Routine: 5.2.1: Jacobian
Jacobian: Example and Test: 5.2.1.1: jacobian.cpp
Hessian: Easy Driver: 5.2.2: Hessian
Hessian: Example and Test: 5.2.2.1: hessian.cpp
Hessian of Lagrangian and ADFun Default Constructor: Example and Test: 5.2.2.2: hes_lagrangian.cpp
First Order Partial Derivative: Driver Routine: 5.2.3: ForOne
First Order Partial Driver: Example and Test: 5.2.3.1: for_one.cpp
First Order Derivative: Driver Routine: 5.2.4: RevOne
First Order Derivative Driver: Example and Test: 5.2.4.1: rev_one.cpp
Forward Mode Second Partial Derivative Driver: 5.2.5: ForTwo
Subset of Second Order Partials: Example and Test: 5.2.5.1: for_two.cpp
Reverse Mode Second Partial Derivative Driver: 5.2.6: RevTwo
Second Partials Reverse Driver: Example and Test: 5.2.6.1: rev_two.cpp
Forward Mode: 5.3: Forward
Zero Order Forward Mode: Function Values: 5.3.1: forward_zero
First Order Forward Mode: Derivative Values: 5.3.2: forward_one
Second Order Forward Mode: Derivative Values: 5.3.3: forward_two
Multiple Order Forward Mode: 5.3.4: forward_order
Forward Mode: Example and Test: 5.3.4.1: forward.cpp
Forward Mode: Example and Test of Multiple Orders: 5.3.4.2: forward_order.cpp
Multiple Directions Forward Mode: 5.3.5: forward_dir
Forward Mode: Example and Test of Multiple Directions: 5.3.5.1: forward_dir.cpp
Number Taylor Coefficient Orders Currently Stored: 5.3.6: size_order
Comparison Changes Between Taping and Zero Order Forward: 5.3.7: compare_change
CompareChange and Re-Tape: Example and Test: 5.3.7.1: compare_change.cpp
Controlling Taylor Coefficients Memory Allocation: 5.3.8: capacity_order
Controlling Taylor Coefficient Memory Allocation: Example and Test: 5.3.8.1: capacity_order.cpp
Number of Variables that Can be Skipped: 5.3.9: number_skip
Number of Variables That Can be Skipped: Example and Test: 5.3.9.1: number_skip.cpp
Reverse Mode: 5.4: Reverse
First Order Reverse Mode: 5.4.1: reverse_one
First Order Reverse Mode: Example and Test: 5.4.1.1: reverse_one.cpp
Second Order Reverse Mode: 5.4.2: reverse_two
Second Order Reverse Mode: Example and Test: 5.4.2.1: reverse_two.cpp
Hessian Times Direction: Example and Test: 5.4.2.2: hes_times_dir.cpp
Any Order Reverse Mode: 5.4.3: reverse_any
Third Order Reverse Mode: Example and Test: 5.4.3.1: reverse_three.cpp
Reverse Mode General Case (Checkpointing): Example and Test: 5.4.3.2: reverse_checkpoint.cpp
Reverse Mode Using Subgraphs: 5.4.4: subgraph_reverse
Computing Reverse Mode on Subgraphs: Example and Test: 5.4.4.1: subgraph_reverse.cpp
Calculating Sparsity Patterns: 5.5: sparsity_pattern
Forward Mode Jacobian Sparsity Patterns: 5.5.1: for_jac_sparsity
Forward Mode Jacobian Sparsity: Example and Test: 5.5.1.1: for_jac_sparsity.cpp
Jacobian Sparsity Pattern: Forward Mode: 5.5.2: ForSparseJac
Forward Mode Jacobian Sparsity: Example and Test: 5.5.2.1: for_sparse_jac.cpp
Reverse Mode Jacobian Sparsity Patterns: 5.5.3: rev_jac_sparsity
Reverse Mode Jacobian Sparsity: Example and Test: 5.5.3.1: rev_jac_sparsity.cpp
Jacobian Sparsity Pattern: Reverse Mode: 5.5.4: RevSparseJac
Reverse Mode Jacobian Sparsity: Example and Test: 5.5.4.1: rev_sparse_jac.cpp
Reverse Mode Hessian Sparsity Patterns: 5.5.5: rev_hes_sparsity
Reverse Mode Hessian Sparsity: Example and Test: 5.5.5.1: rev_hes_sparsity.cpp
Hessian Sparsity Pattern: Reverse Mode: 5.5.6: RevSparseHes
Reverse Mode Hessian Sparsity: Example and Test: 5.5.6.1: rev_sparse_hes.cpp
Sparsity Patterns For a Subset of Variables: Example and Test: 5.5.6.2: sparsity_sub.cpp
Forward Mode Hessian Sparsity Patterns: 5.5.7: for_hes_sparsity
Forward Mode Hessian Sparsity: Example and Test: 5.5.7.1: for_hes_sparsity.cpp
Hessian Sparsity Pattern: Forward Mode: 5.5.8: ForSparseHes
Forward Mode Hessian Sparsity: Example and Test: 5.5.8.1: for_sparse_hes.cpp
Computing Dependency: Example and Test: 5.5.9: dependency.cpp
Preferred Sparsity Patterns: Row and Column Indices: Example and Test: 5.5.10: rc_sparsity.cpp
Subgraph Dependency Sparsity Patterns: 5.5.11: subgraph_sparsity
Subgraph Dependency Sparsity Patterns: Example and Test: 5.5.11.1: subgraph_sparsity.cpp
Calculating Sparse Derivatives: 5.6: sparse_derivative
Computing Sparse Jacobians: 5.6.1: sparse_jac
Computing Sparse Jacobian Using Forward Mode: Example and Test: 5.6.1.1: sparse_jac_for.cpp
Computing Sparse Jacobian Using Reverse Mode: Example and Test: 5.6.1.2: sparse_jac_rev.cpp
Sparse Jacobian: 5.6.2: sparse_jacobian
Sparse Jacobian: Example and Test: 5.6.2.1: sparse_jacobian.cpp
Computing Sparse Hessians: 5.6.3: sparse_hes
Computing Sparse Hessian: Example and Test: 5.6.3.1: sparse_hes.cpp
Sparse Hessian: 5.6.4: sparse_hessian
Sparse Hessian: Example and Test: 5.6.4.1: sparse_hessian.cpp
Computing Sparse Hessian for a Subset of Variables: 5.6.4.2: sub_sparse_hes.cpp
Subset of a Sparse Hessian: Example and Test: 5.6.4.3: sparse_sub_hes.cpp
Compute Sparse Jacobians Using Subgraphs: 5.6.5: subgraph_jac_rev
Computing Sparse Jacobian Using Reverse Mode: Example and Test: 5.6.5.1: subgraph_jac_rev.cpp
Sparse Hessian Using Subgraphs and Jacobian: Example and Test: 5.6.5.2: subgraph_hes2jac.cpp
Optimize an ADFun Object Tape: 5.7: optimize
Example Optimization and Forward Activity Analysis: 5.7.1: optimize_forward_active.cpp
Example Optimization and Reverse Activity Analysis: 5.7.2: optimize_reverse_active.cpp
Example Optimization and Comparison Operators: 5.7.3: optimize_compare_op.cpp
Example Optimization and Print Forward Operators: 5.7.4: optimize_print_for.cpp
Example Optimization and Conditional Expressions: 5.7.5: optimize_conditional_skip.cpp
Example Optimization and Nested Conditional Expressions: 5.7.6: optimize_nest_conditional.cpp
Example Optimization and Cumulative Sum Operations: 5.7.7: optimize_cumulative_sum.cpp
Abs-normal Representation of Non-Smooth Functions: 5.8: abs_normal
Create An Abs-normal Representation of a Function: 5.8.1: abs_normal_fun
abs_normal Getting Started: Example and Test: 5.8.1.1: abs_get_started.cpp
abs_normal: Print a Vector or Matrix: 5.8.2: abs_print_mat
abs_normal: Evaluate First Order Approximation: 5.8.3: abs_eval
abs_eval: Example and Test: 5.8.3.1: abs_eval.cpp
abs_eval Source Code: 5.8.3.2: abs_eval.hpp
abs_normal: Solve a Linear Program Using Simplex Method: 5.8.4: simplex_method
abs_normal simplex_method: Example and Test: 5.8.4.1: simplex_method.cpp
simplex_method Source Code: 5.8.4.2: simplex_method.hpp
abs_normal: Solve a Linear Program With Box Constraints: 5.8.5: lp_box
abs_normal lp_box: Example and Test: 5.8.5.1: lp_box.cpp
lp_box Source Code: 5.8.5.2: lp_box.hpp
abs_normal: Minimize a Linear Abs-normal Approximation: 5.8.6: abs_min_linear
abs_min_linear: Example and Test: 5.8.6.1: abs_min_linear.cpp
abs_min_linear Source Code: 5.8.6.2: abs_min_linear.hpp
Non-Smooth Optimization Using Abs-normal Linear Approximations: 5.8.7: min_nso_linear
abs_normal min_nso_linear: Example and Test: 5.8.7.1: min_nso_linear.cpp
min_nso_linear Source Code: 5.8.7.2: min_nso_linear.hpp
Solve a Quadratic Program Using Interior Point Method: 5.8.8: qp_interior
abs_normal qp_interior: Example and Test: 5.8.8.1: qp_interior.cpp
qp_interior Source Code: 5.8.8.2: qp_interior.hpp
abs_normal: Solve a Quadratic Program With Box Constraints: 5.8.9: qp_box
abs_normal qp_box: Example and Test: 5.8.9.1: qp_box.cpp
qp_box Source Code: 5.8.9.2: qp_box.hpp
abs_normal: Minimize a Quadratic Abs-normal Approximation: 5.8.10: abs_min_quad
Check an ADFun Sequence of Operations: 5.9: FunCheck
ADFun Check and Re-Tape: Example and Test: 5.9.1: fun_check.cpp
Check an ADFun Object For Nan Results: 5.10: check_for_nan
ADFun Checking For Nan: Example and Test: 5.10.1: check_for_nan.cpp
CppAD API Preprocessor Symbols: 6: preprocessor
A Simple OpenMP Example and Test: 7.2.1: a11c_openmp.cpp
Multi-Threading Harmonic Summation Example / Test: 7.2.8: harmonic.cpp
Common Variables Used by Multi-threading Sum of 1/i: 7.2.8.1: harmonic_common
Set Up Multi-threading Sum of 1/i: 7.2.8.2: harmonic_setup
Do One Thread's Work for Sum of 1/i: 7.2.8.3: harmonic_worker
Take Down Multi-threading Sum of 1/i: 7.2.8.4: harmonic_takedown
Multi-Threaded Implementation of Summation of 1/i: 7.2.8.5: harmonic_sum
Timing Test of Multi-Threaded Summation of 1/i: 7.2.8.6: harmonic_time
Multi-Threading User Atomic Example / Test: 7.2.9: multi_atomic.cpp
Defines a User Atomic Operation that Computes Square Root: 7.2.9.1: multi_atomic_user
Multi-Threaded User Atomic Common Information: 7.2.9.2: multi_atomic_common
Multi-Threaded User Atomic Set Up: 7.2.9.3: multi_atomic_setup
Multi-Threaded User Atomic Worker: 7.2.9.4: multi_atomic_worker
Multi-Threaded User Atomic Take Down: 7.2.9.5: multi_atomic_takedown
Run Multi-Threaded User Atomic Calculation: 7.2.9.6: multi_atomic_run
Timing Test for Multi-Threaded User Atomic Calculation: 7.2.9.7: multi_atomic_time
Multi-Threaded Newton Method Example / Test: 7.2.10: multi_newton.cpp
Common Variables used by Multi-Threaded Newton Method: 7.2.10.1: multi_newton_common
Set Up Multi-Threaded Newton Method: 7.2.10.2: multi_newton_setup
Take Down Multi-threaded Newton Method: 7.2.10.4: multi_newton_takedown
A Multi-Threaded Newton's Method: 7.2.10.5: multi_newton_run
Timing Test of Multi-Threaded Newton Method: 7.2.10.6: multi_newton_time
Some General Purpose Utilities: 8: utility
Replacing the CppAD Error Handler: 8.1: ErrorHandler
Replacing The CppAD Error Handler: Example and Test: 8.1.1: error_handler.cpp
Determine if Two Values Are Nearly Equal: 8.2: NearEqual
NearEqual Function: Example and Test: 8.2.1: near_equal.cpp
Run One Speed Test and Return Results: 8.3: speed_test
speed_test: Example and test: 8.3.1: speed_test.cpp
Run One Speed Test and Print Results: 8.4: SpeedTest
Example Use of SpeedTest: 8.4.1: speed_program.cpp
Determine Amount of Time to Execute a Test: 8.5: time_test
Returns Elapsed Number of Seconds: 8.5.1: elapsed_seconds
Elapsed Seconds: Example and Test: 8.5.1.1: elapsed_seconds.cpp
time_test: Example and test: 8.5.2: time_test.cpp
Object that Runs a Group of Tests: 8.6: test_boolofvoid
Definition of a Numeric Type: 8.7: NumericType
The NumericType: Example and Test: 8.7.1: numeric_type.cpp
Check NumericType Class Concept: 8.8: CheckNumericType
The CheckNumericType Function: Example and Test: 8.8.1: check_numeric_type.cpp
Definition of a Simple Vector: 8.9: SimpleVector
Simple Vector Template Class: Example and Test: 8.9.1: simple_vector.cpp
Check Simple Vector Concept: 8.10: CheckSimpleVector
The CheckSimpleVector Function: Example and Test: 8.10.1: check_simple_vector.cpp
Obtain Nan or Determine if a Value is Nan: 8.11: nan
nan: Example and Test: 8.11.1: nan.cpp
The Integer Power Function: 8.12: pow_int
The Pow Integer Exponent: Example and Test: 8.12.1: pow_int.cpp
Evaluate a Polynomial or its Derivative: 8.13: Poly
Polynomial Evaluation: Example and Test: 8.13.1: poly.cpp
Source: Poly: 8.13.2: poly.hpp
Compute Determinants and Solve Equations by LU Factorization: 8.14: LuDetAndSolve
Compute Determinant and Solve Linear Equations: 8.14.1: LuSolve
LuSolve With Complex Arguments: Example and Test: 8.14.1.1: lu_solve.cpp
Source: LuSolve: 8.14.1.2: lu_solve.hpp
LU Factorization of A Square Matrix: 8.14.2: LuFactor
LuFactor: Example and Test: 8.14.2.1: lu_factor.cpp
Source: LuFactor: 8.14.2.2: lu_factor.hpp
Invert an LU Factored Equation: 8.14.3: LuInvert
LuInvert: Example and Test: 8.14.3.1: lu_invert.cpp
Source: LuInvert: 8.14.3.2: lu_invert.hpp
One Dimensional Romberg Integration: 8.15: RombergOne
One Dimensional Romberg Integration: Example and Test: 8.15.1: romberg_one.cpp
Multi-dimensional Romberg Integration: 8.16: RombergMul
Multi-dimensional Romberg Integration: Example and Test: 8.16.1: Rombergmul.cpp
An Embedded 4th and 5th Order Runge-Kutta ODE Solver: 8.17: Runge45
Runge45: Example and Test: 8.17.1: runge45_1.cpp
Runge45: Example and Test: 8.17.2: runge45_2.cpp
A 3rd and 4th Order Rosenbrock ODE Solver: 8.18: Rosen34
Rosen34: Example and Test: 8.18.1: rosen_34.cpp
An Error Controller for ODE Solvers: 8.19: OdeErrControl
OdeErrControl: Example and Test: 8.19.1: ode_err_control.cpp
OdeErrControl: Example and Test Using Maxabs Argument: 8.19.2: ode_err_maxabs.cpp
An Arbitrary Order Gear Method: 8.20: OdeGear
OdeGear: Example and Test: 8.20.1: ode_gear.cpp
An Error Controller for Gear's Ode Solvers: 8.21: OdeGearControl
OdeGearControl: Example and Test: 8.21.1: ode_gear_control.cpp
CppAD::vectorBool Class: Example and Test: 8.22.2: vector_bool.cpp
Is The Current Execution in Parallel Mode: 8.23.4: ta_in_parallel
Get At Least A Specified Amount of Memory: 8.23.6: ta_get_memory
Return Memory to thread_alloc: 8.23.7: ta_return_memory
Free Memory Currently Available for Quick Use by a Thread: 8.23.8: ta_free_available
Control When Thread Alloc Retains Memory For Future Use: 8.23.9: ta_hold_memory
Amount of Memory a Thread is Currently Using: 8.23.10: ta_inuse
Amount of Memory Available for Quick Use by a Thread: 8.23.11: ta_available
Allocate An Array and Call Default Constructor for its Elements: 8.23.12: ta_create_array
Deallocate An Array and Call Destructor for its Elements: 8.23.13: ta_delete_array
Free All Memory That Was Allocated for Use by thread_alloc: 8.23.14: ta_free_all
Returns Indices that Sort a Vector: 8.24: index_sort
Index Sort: Example and Test: 8.24.1: index_sort.cpp
Convert Certain Types to a String: 8.25: to_string
to_string: Example and Test: 8.25.1: to_string.cpp
Union of Standard Sets: 8.26: set_union
Set Union: Example and Test: 8.26.1: set_union.cpp
Row and Column Index Sparsity Patterns: 8.27: sparse_rc
sparse_rc: Example and Test: 8.27.1: sparse_rc.cpp
Sparse Matrix Row, Column, Value Representation: 8.28: sparse_rcv
sparse_rcv: Example and Test: 8.28.1: sparse_rcv.cpp
Use Ipopt to Solve a Nonlinear Programming Problem: 9: ipopt_solve
Nonlinear Programming Using CppAD and Ipopt: Example and Test: 9.1: ipopt_solve_get_started.cpp
Nonlinear Programming Retaping: Example and Test: 9.2: ipopt_solve_retape.cpp
ODE Inverse Problem Definitions: Source Code: 9.3: ipopt_solve_ode_inverse.cpp
Examples: 10: Example
Getting Started Using CppAD to Compute Derivatives: 10.1: get_started.cpp
General Examples: 10.2: General
Source Code for eigen_plugin.hpp: 10.2.4.1: eigen_plugin.hpp
Using Eigen Arrays: Example and Test: 10.2.4.2: eigen_array.cpp
Using Eigen To Compute Determinant: Example and Test: 10.2.4.3: eigen_det.cpp
Hessian of Determinant Using Expansion by Minors: Example and Test: 10.2.5: hes_minor_det.cpp
Hessian of Determinant Using LU Factorization: Example and Test: 10.2.6: hes_lu_det.cpp
Interfacing to C: Example and Test: 10.2.7: interface2c.cpp
Gradient of Determinant Using Expansion by Minors: Example and Test: 10.2.8: jac_minor_det.cpp
Gradient of Determinant Using Lu Factorization: Example and Test: 10.2.9: jac_lu_det.cpp
Using Multiple Levels of AD: 10.2.10: mul_level
Multiple Level of AD: Example and Test: 10.2.10.1: mul_level.cpp
Computing a Jacobian With Constants that Change: 10.2.10.2: change_param.cpp
A Stiff Ode: Example and Test: 10.2.11: ode_stiff.cpp
Taylor's Ode Solver: A Multi-Level AD Example and Test: 10.2.12: mul_level_ode.cpp
Taylor's Ode Solver: An Example and Test: 10.2.14: ode_taylor.cpp
Example Differentiating a Stack Machine Interpreter: 10.2.15: stack_machine.cpp
Utility Routines used by CppAD Examples: 10.3: ExampleUtility
CppAD Examples and Tests: 10.3.1: general.cpp
Run the Speed Examples: 10.3.2: speed_example.cpp
Lu Factor and Solve with Recorded Pivoting: 10.3.3: lu_vec_ad.cpp
Lu Factor and Solve With Recorded Pivoting: Example and Test: 10.3.3.1: lu_vec_ad_ok.cpp
List All (Except Deprecated) CppAD Examples: 10.4: ListAllExamples
Using The CppAD Test Vector Template Class: 10.5: testvector
Suppress Suspect Implicit Conversion Warnings: 10.6: wno_conversion
Running the Speed Test Program: 11.1: speed_main
Speed Testing Derivative of Matrix Multiply: 11.1.3: link_mat_mul
Speed Testing the Jacobian of Ode Solution: 11.1.4: link_ode
Speed Testing Second Derivative of a Polynomial: 11.1.5: link_poly
Speed Testing Sparse Hessian: 11.1.6: link_sparse_hessian
Speed Testing Sparse Jacobian: 11.1.7: link_sparse_jacobian
Microsoft Version of Elapsed Number of Seconds: 11.1.8: microsoft_timer
Speed Testing Utilities: 11.2: speed_utility
Determinant Using Expansion by Lu Factorization: 11.2.1: det_by_lu
Determinant Using Lu Factorization: Example and Test: 11.2.1.1: det_by_lu.cpp
Source: det_by_lu: 11.2.1.2: det_by_lu.hpp
Determinant of a Minor: 11.2.2: det_of_minor
Determinant of a Minor: Example and Test: 11.2.2.1: det_of_minor.cpp
Source: det_of_minor: 11.2.2.2: det_of_minor.hpp
Determinant Using Expansion by Minors: 11.2.3: det_by_minor
Determinant Using Expansion by Minors: Example and Test: 11.2.3.1: det_by_minor.cpp
Source: det_by_minor: 11.2.3.2: det_by_minor.hpp
Check Determinant of 3 by 3 matrix: 11.2.4: det_33
Source: det_33: 11.2.4.1: det_33.hpp
Sum Elements of a Matrix Times Itself: 11.2.6: mat_sum_sq
Sum of the Elements of the Square of a Matrix: Example and Test: 11.2.6.1: mat_sum_sq.cpp
Source: mat_sum_sq: 11.2.6.2: mat_sum_sq.hpp
Evaluate a Function Defined in Terms of an ODE: 11.2.7: ode_evaluate
ode_evaluate: Example and test: 11.2.7.1: ode_evaluate.cpp
Source: ode_evaluate: 11.2.7.2: ode_evaluate.hpp
Evaluate a Function That Has a Sparse Jacobian: 11.2.8: sparse_jac_fun
sparse_jac_fun: Example and test: 11.2.8.1: sparse_jac_fun.cpp
Source: sparse_jac_fun: 11.2.8.2: sparse_jac_fun.hpp
Evaluate a Function That Has a Sparse Hessian: 11.2.9: sparse_hes_fun
sparse_hes_fun: Example and test: 11.2.9.1: sparse_hes_fun.cpp
Source: sparse_hes_fun: 11.2.9.2: sparse_hes_fun.hpp
Simulate a [0,1] Uniform Random Variate: 11.2.10: uniform_01
Source: uniform_01: 11.2.10.1: uniform_01.hpp
Speed Test of Functions in Double: 11.3: speed_double
Double Speed: Determinant by Minor Expansion: 11.3.1: double_det_minor.cpp
Double Speed: Determinant Using Lu Factorization: 11.3.2: double_det_lu.cpp
CppAD Speed: Matrix Multiplication (Double Version): 11.3.3: double_mat_mul.cpp
Double Speed: Ode Solution: 11.3.4: double_ode.cpp
Double Speed: Evaluate a Polynomial: 11.3.5: double_poly.cpp
Double Speed: Sparse Hessian: 11.3.6: double_sparse_hessian.cpp
Double Speed: Sparse Jacobian: 11.3.7: double_sparse_jacobian.cpp
Adolc Test Utility: Allocate and Free Memory For a Matrix: 11.4.8: adolc_alloc_mat
Appendix: 12: Appendix
Directory Structure: 12.2: directory
The Theory of Derivative Calculations: 12.3: Theory
The Theory of Forward Mode: 12.3.1: ForwardTheory
Exponential Function Forward Mode Theory: 12.3.1.1: exp_forward
Logarithm Function Forward Mode Theory: 12.3.1.2: log_forward
Square Root Function Forward Mode Theory: 12.3.1.3: sqrt_forward
Trigonometric and Hyperbolic Sine and Cosine Forward Theory: 12.3.1.4: sin_cos_forward
Inverse Tangent and Hyperbolic Tangent Forward Mode Theory: 12.3.1.5: atan_forward
Inverse Sine and Hyperbolic Sine Forward Mode Theory: 12.3.1.6: asin_forward
Inverse Cosine and Hyperbolic Cosine Forward Mode Theory: 12.3.1.7: acos_forward
Tangent and Hyperbolic Tangent Forward Taylor Polynomial Theory: 12.3.1.8: tan_forward
Error Function Forward Taylor Polynomial Theory: 12.3.1.9: erf_forward
The Theory of Reverse Mode: 12.3.2: ReverseTheory
Exponential Function Reverse Mode Theory: 12.3.2.1: exp_reverse
Logarithm Function Reverse Mode Theory: 12.3.2.2: log_reverse
Square Root Function Reverse Mode Theory: 12.3.2.3: sqrt_reverse
Trigonometric and Hyperbolic Sine and Cosine Reverse Theory: 12.3.2.4: sin_cos_reverse
Inverse Tangent and Hyperbolic Tangent Reverse Mode Theory: 12.3.2.5: atan_reverse
Inverse Sine and Hyperbolic Sine Reverse Mode Theory: 12.3.2.6: asin_reverse
Inverse Cosine and Hyperbolic Cosine Reverse Mode Theory: 12.3.2.7: acos_reverse
Tangent and Hyperbolic Tangent Reverse Mode Theory: 12.3.2.8: tan_reverse
Error Function Reverse Mode Theory: 12.3.2.9: erf_reverse
An Important Reverse Mode Identity: 12.3.3: reverse_identity
Glossary: 12.4: glossary
Bibliography: 12.5: Bib
The CppAD Wish List: 12.6: wish_list
CppAD Deprecated API Features: 12.8: deprecated
Deprecated Include Files: 12.8.1: include_deprecated
ADFun Object Deprecated Member Functions: 12.8.2: FunDeprecated
Comparison Changes During Zero Order Forward Mode: 12.8.3: CompareChange
Routines That Track Use of New and Delete: 12.8.5: TrackNewDel
Tracking Use of New and Delete: Example and Test: 12.8.5.1: TrackNewDel.cpp
A Quick OpenMP Memory Allocator Used by CppAD: 12.8.6: omp_alloc
Set and Get Maximum Number of Threads for omp_alloc Allocator: 12.8.6.1: omp_max_num_threads
Is The Current Execution in OpenMP Parallel Mode: 12.8.6.2: omp_in_parallel
Get At Least A Specified Amount of Memory: 12.8.6.4: omp_get_memory
Return Memory to omp_alloc: 12.8.6.5: omp_return_memory
Free Memory Currently Available for Quick Use by a Thread: 12.8.6.6: omp_free_available
Amount of Memory a Thread is Currently Using: 12.8.6.7: omp_inuse
Amount of Memory Available for Quick Use by a Thread: 12.8.6.8: omp_available
Allocate Memory and Create A Raw Array: 12.8.6.9: omp_create_array
Return A Raw Array to The Available Memory for a Thread: 12.8.6.10: omp_delete_array
Check If A Memory Allocation is Efficient for Another Use: 12.8.6.11: omp_efficient
OpenMP Memory Allocator: Example and Test: 12.8.6.13: omp_alloc.cpp
Memory Leak Detection: 12.8.7: memory_leak
Machine Epsilon For AD Types: 12.8.8: epsilon
Choosing The Vector Testing Template Class: 12.8.9: test_vector
Nonlinear Programming Using CppAD and Ipopt: Example and Test: 12.8.10.1: ipopt_nlp_get_started.cpp
Example Simultaneous Solution of Forward and Inverse Problem: 12.8.10.2: ipopt_nlp_ode
An ODE Inverse Problem Example: 12.8.10.2.1: ipopt_nlp_ode_problem
ODE Inverse Problem Definitions: Source Code: 12.8.10.2.1.1: ipopt_nlp_ode_problem.hpp
ODE Fitting Using Simple Representation: 12.8.10.2.2: ipopt_nlp_ode_simple
ODE Fitting Using Simple Representation: 12.8.10.2.2.1: ipopt_nlp_ode_simple.hpp
ODE Fitting Using Fast Representation: 12.8.10.2.3: ipopt_nlp_ode_fast
ODE Fitting Using Fast Representation: 12.8.10.2.3.1: ipopt_nlp_ode_fast.hpp
Driver for Running the Ipopt ODE Example: 12.8.10.2.4: ipopt_nlp_ode_run.hpp
Correctness Check for Both Simple and Fast Representations: 12.8.10.2.5: ipopt_nlp_ode_check.cpp
Speed Test for Both Simple and Fast Representations: 12.8.10.3: ipopt_ode_speed.cpp
User Defined Atomic AD Functions: 12.8.11: old_atomic
Old Atomic Operation Reciprocal: Example and Test: 12.8.11.1: old_reciprocal.cpp
Old Tan and Tanh as User Atomic Operations: Example and Test: 12.8.11.4: old_tan.cpp
Old Matrix Multiply as a User Atomic Operation: Example and Test: 12.8.11.5: old_mat_mul.cpp
Define Matrix Multiply as a User Atomic Operation: 12.8.11.5.1: old_mat_mul.hpp
zdouble: An AD Base Type With Absolute Zero: 12.8.12: zdouble
zdouble: Example and Test: 12.8.12.1: zdouble.cpp
Autotools Unix Test and Installation: 12.8.13: autotools
Compare Speed of C and C++: 12.9: compare_c
Determinant of a Minor: 12.9.1: det_of_minor_c
Compute Determinant using Expansion by Minors: 12.9.2: det_by_minor_c
Simulate a [0,1] Uniform Random Variate: 12.9.3: uniform_01_c
Correctness Test of det_by_minor Routine: 12.9.4: correct_det_by_minor_c
Repeat det_by_minor Routine A Specified Number of Times: 12.9.5: repeat_det_by_minor_c
Returns Elapsed Number of Seconds: 12.9.6: elapsed_seconds_c
Determine Amount of Time to Execute det_by_minor: 12.9.7: time_det_by_minor_c
Main Program For Comparing C and C++ Speed: 12.9.8: main_compare_c
Computing Jacobian and Hessian of Bender's Reduced Objective: 12.10.1: BenderQuad
Jacobian and Hessian of Optimal Values: 12.10.2: opt_val_hes
opt_val_hes: Example and Test: 12.10.2.1: opt_val_hes.cpp
LU Factorization of A Square Matrix and Stability Calculation: 12.10.3: LuRatio
LuRatio: Example and Test: 12.10.3.1: lu_ratio.cpp
Alphabetic Listing of Cross Reference Tags: 13: _reference
Keyword Index: 14: _index
External Internet References: 15: _external



2.a: Instructions

2.a.b: Step 2: Cmake
Use the 2.2: cmake instructions to configure CppAD.

2.a.c: Step 3: Check
Use the 2.3: cmake_check instructions to check the CppAD examples and tests.

2.a.d: Step 4: Installation
Use the command  make install  to install CppAD. If you created nmake makefiles, you will have to use  nmake install ; see the 2.2.e: generator option for the cmake command.

2.b: Contents

2.c: Deprecated
12.8.13: autotools
Input File: omh/install/install.omh

2.1.a: Purpose
CppAD is an include file library and you therefore need the source code to use it. This section discusses how to download the different versions of CppAD.

2.1.b: Distribution Directory
We refer to the CppAD source directory created by the download instructions below as the distribution directory. To be specific, the distribution directory contains the file cppad/cppad.hpp.

2.1.c: Version
A CppAD version number has the following fields: yyyy is four decimal digits denoting a year, mm is two decimal digits denoting a month, and dd is two decimal digits denoting a day. For example version = 20160101 corresponds to January 1, 2016.

2.1.d: Release
Special versions corresponding to the beginning of each year have mm and dd equal to zero. These version numbers are combined with release numbers denoted by rel . Higher release numbers correspond to more bug fixes. For example version.rel = 20160000.0 corresponds to the first release of the version for 2016, 20160000.1 corresponds to the first bug fix for 2016.

2.1.e: License
We use lic to denote the license corresponding to an archived version of CppAD. The GNU General Public License is denoted by lic = gpl and the Eclipse Public License is denoted by lic = epl .

2.1.f: Compressed Archives
The Coin compressed archives have the documentation built into them. If you download an old version using another method, see 2.1.k: building documentation .

2.1.f.a: Coin
The compressed archive names on the Coin download page (http://www.coin-or.org/download/source/CppAD/) have one of the following formats:

      cppad-version.rel.lic.tgz
      cppad-version.lic.tgz

In Unix, you can extract these compressed archives using tar. For example,

      tar -xzf cppad-version.rel.lic.tgz

No matter what the format of the name, the corresponding distribution directory is cppad-version . To see that the extraction has been done correctly, check for the following file:

      cppad-version/cppad/cppad.hpp
2.1.f.b: Github
The compressed archive names on the Github download page (https://github.com/coin-or/CppAD/releases) have the format       cppad-version.rel.tgz  These archives correspond to the Eclipse Public License.

2.1.g: Source Code Control

2.1.g.a: Git
CppAD source code development is currently done using git. You can obtain a git clone of the current version using the command      git clone https://github.com/coin-or/CppAD.git cppad.git  This procedure requires that git (https://en.wikipedia.org/wiki/Git_%28software%29) is installed on your system.

2.1.g.b: Subversion
A subversion copy of the source code is kept on the Coin web site. You can obtain this subversion copy using the command       svn checkout https://projects.coin-or.org/svn/CppAD/trunk cppad.svn/trunk  This procedure requires that the subversion (http://subversion.tigris.org/) program is installed on your system.

2.1.h: Monthly Versions
Monthly versions of the compressed tar files are available on the Coin download page (http://www.coin-or.org/download/source/CppAD/) . These are kept until the end of the current year, when the next release is created. The monthly versions have the form       cppad-yyyy0101.lic.tgz 
2.1.i: Windows File Extraction and Testing
If you know how to extract the distribution directory from the tar file, just do so. Otherwise, below is one way you can do it. (Note that if 7z.exe, cmake.exe, and nmake.exe are in your execution path, you will not need to specify their paths below.)
3. In a command window, execute the following commands:

      set PATH=path_to_7_zip;%PATH%
      set PATH=path_to_cmake;%PATH%
      set VCDIR=path_to_vcdir
      call "%VCDIR%\vcvarsall.bat" x86

For example, on one machine these paths had the following values:

      path_to_7_zip=C:\Program Files\7-zip
      path_to_cmake=C:\Program Files (x86)\CMake\bin
      path_to_vcdir=C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC
4. Use the following commands to extract the distribution from the compressed archive:

      7z x cppad-version.lic.tgz
      7z x cppad-version.lic.tar
5. To see if this has been done correctly, check for the following file:       cppad-version\cppad\cppad.hpp 
6. The commands below are optional. They run the CppAD tests using the default 2.2: cmake settings (except for the 2.2.e: generator option):

      mkdir build
      cd build
      cmake -G "NMake Makefiles" ..
      nmake check

2.1.j: Install Instructions
The 2: install instructions on this web site correspond to the current version of CppAD. If you are using an old version of CppAD these instructions may work. If you have trouble (or just to be careful), you should follow the instructions in the doc subdirectory of the distribution directory. If there is no such documentation, you can build it; see 2.1.k: building documentation .

2.1.k: Building Documentation
If you are using one of these download methods, you can build the documentation to get the corresponding install instructions. The documentation for CppAD is built from the source code files using OMhelp (http://www.seanet.com/~bradbell/omhelp/) . You will need to install the omhelp command so that  which omhelp  shows it is in your path. Once you have done this, in the distribution directory execute the following command:       bin/run_omhelp.sh htm  You will then be able to follow the install instructions in the doc subdirectory of the distribution directory.
2.2: Using CMake to Configure CppAD

2.2.a: The CMake Program
The cmake (http://www.cmake.org/cmake/help/install.html) program enables one to create a single set of scripts, called CMakeLists.txt, that can be used to test and install a program on Unix, Microsoft, or Apple operating systems. For example, one can use it to automatically generate Microsoft project files.

2.2.b: CMake Command
The command below assumes that cmake is in your execution path with version greater than or equal to 2.8. If not, you can put the path to the version of cmake in front of the command. Only the cmake command and the path to the distribution directory (.. at the end of the command below) are required. In other words, the first and last lines below are required and all of the other lines are optional.

2.2.b.a: Build Directory
Create a build subdirectory of the 2.1.b: distribution directory , change into the build directory, and execute the following command:

 cmake                                                                      \
    -D CMAKE_VERBOSE_MAKEFILE=cmake_verbose_makefile                        \
    -G generator                                                            \
    \
    -D cppad_prefix=cppad_prefix                                            \
    -D cppad_postfix=cppad_postfix                                          \
    \
    -D cmake_install_includedirs=cmake_install_includedirs                  \
    -D cmake_install_libdirs=cmake_install_libdirs                          \
    \
    -D cmake_install_datadir=cmake_install_datadir                          \
    -D cmake_install_docdir=cmake_install_docdir                            \
    \
    -D adolc_prefix=adolc_prefix                                            \
    -D colpack_prefix=colpack_prefix                                        \
    -D eigen_prefix=eigen_prefix                                            \
    -D fadbad_prefix=fadbad_prefix                                          \
    -D ipopt_prefix=ipopt_prefix                                            \
    -D sacado_prefix=sacado_prefix                                          \
    \
    -D cppad_cxx_flags=cppad_cxx_flags                                      \
    -D cppad_profile_flag=cppad_profile_flag                                \
    \
    -D cppad_testvector=cppad_testvector                                    \
    -D cppad_max_num_threads=cppad_max_num_threads                          \
    -D cppad_tape_id_type=cppad_tape_id_type                                \
    -D cppad_tape_addr_type=cppad_tape_addr_type                            \
    -D cppad_debug_which=cppad_debug_which                                  \
    -D cppad_deprecated=cppad_deprecated                                    \
    \
    ..
2.2.c: make check
Important information about the CppAD configuration is output by this command. If you have the grep program, and store the output in cmake.log, you can get a list of all the test options with the command:  grep 'make check' cmake.log 
2.2.d: cmake_verbose_makefile
This value should be either YES or NO. The default value, when it is not present, is NO. If it is YES, then the output of the make commands will include all of the files and flags used to run the compiler and linker. This can be useful for seeing how to compile and link your own applications.

2.2.e: generator
The CMake program is capable of generating different kinds of build files. Below is a table with a few of the possible generators:

 generator            Description
 "Unix Makefiles"     make files for Unix operating systems
 "NMake Makefiles"    make files for Visual Studio
Other generator choices are available; see the cmake generators (http://www.cmake.org/cmake/help/cmake2.6docs.html#section_Generators) documentation.

2.2.f: cppad_prefix
This is the top level absolute path below which all of the CppAD files are installed by the command       make install  For example, if cppad_prefix is /usr, cmake_install_includedirs is include, and cppad_postfix is not specified, the file cppad.hpp is installed in the location       /usr/include/cppad/cppad.hpp  The default value for cppad_prefix is /usr.

2.2.g: cppad_postfix
This is the bottom level relative path below which all of the CppAD files are installed. For example, if cppad_prefix is /usr, cmake_install_includedirs is include, and cppad_postfix is coin, the file cppad.hpp is installed in the location       /usr/include/coin/cppad/cppad.hpp  The default value for cppad_postfix is empty; i.e., there is no bottom level relative directory for the installed files.

2.2.h: cmake_install_includedirs
This is one directory, or a list of directories separated by spaces or by semi-colons. The first entry in the list is the middle level relative path below which the CppAD include files are installed. The entire list is used for searching for include files. For example, if cppad_prefix is /usr, cmake_install_includedirs is include, and cppad_postfix is not specified, the file cppad.hpp is installed in the location       /usr/include/cppad/cppad.hpp  The default value for this directory list is include.

2.2.i: cmake_install_libdirs
This is one directory, or a list of directories separated by spaces or by semi-colons. The first entry in the list is the middle level relative path below which the CppAD library files are installed. The entire list is used for searching for library files. For example, if cppad_prefix is /usr, cmake_install_libdirs is lib, cppad_postfix is not specified, and ipopt_prefix is specified, the file libcppad_ipopt.a is installed in the location       /usr/lib/libcppad_ipopt.a  The default value for this directory list is lib.

2.2.j: cmake_install_datadir
This is the middle level relative path below which the CppAD data files are installed. For example, if cppad_prefix is /usr, cmake_install_datadir is share, and cppad_postfix is not specified, the 2.4: pkgconfig file cppad.pc is installed in the location       /usr/share/pkgconfig/cppad.pc  The default value for cmake_install_datadir is share.

2.2.k: cmake_install_docdir
This is the middle level relative path below which the CppAD documentation files are installed. For example, if cppad_prefix is /usr, cmake_install_docdir is share/doc, and cppad_postfix is not specified, the file cppad.xml is installed in the location       /usr/share/doc/cppad/cppad.xml  There is no default value for cmake_install_docdir . If it is not specified, the documentation files are not installed.

2.2.l: package_prefix
Each of these packages corresponds to optional CppAD examples that can be compiled and tested if the corresponding prefix is provided.

2.2.m: cppad_cxx_flags
This specifies the additional compiler flags that are used when compiling the CppAD examples and tests. The default value for these flags is the empty string "". These flags must be valid for the C++ compiler on your system. For example, if you are using g++ you could specify  -D cppad_cxx_flags="-Wall -ansi -pedantic-errors -std=c++11 -Wshadow" 
2.2.m.a: C++11
In order for the compiler to take advantage of features that are new in C++11, the cppad_cxx_flags must enable these features. The compiler may still be used with a flag that disables the new features (unless it is a Microsoft compiler; i.e., _MSC_VER is defined).

2.2.m.b: debug and release
The CppAD examples and tests decide which files to compile for debugging and which to compile for release. Hence debug and release flags should not be included in cppad_cxx_flags . See also the 6.b.a: CPPAD_DEBUG_AND_RELEASE compiler flag (which should not be included in cppad_cxx_flags ).

2.2.n: cppad_profile_flag
This specifies an additional compiler and link flag that is used for 11.1.c.c: profiling the speed tests. A profile version of the speed test is only built when this argument is present.

The packages 2.2.3: eigen and 2.2.4: fadbad currently generate a lot of shadowed variable warnings. If the -Wshadow flag is present, it is automatically removed when compiling examples and tests that use these packages.

2.2.o: cppad_testvector
See 2.2.7: Choosing the CppAD Test Vector Template Class.

2.2.p: cppad_max_num_threads
The value cppad_max_num_threads must be greater than or equal to four; i.e., max_num_threads >= 4 . The current default value for cppad_max_num_threads is 48, but it may change in future versions of CppAD. The value cppad_max_num_threads in turn specifies the default value for the preprocessor symbol 7.b: CPPAD_MAX_NUM_THREADS .

2.2.q: cppad_tape_id_type
The type cppad_tape_id_type is used for identifying different tapes. The valid values for this type are unsigned char, unsigned short int, unsigned int, and size_t. The smaller the value of sizeof(cppad_tape_id_type) , the less memory is used. On the other hand, the value       std::numeric_limits<cppad_tape_id_type>::max()  must be larger than the maximum number of tapes used by one thread times 7.b: CPPAD_MAX_NUM_THREADS .

2.2.q.a: cstdint
If all of the following cstdint types are defined, they can also be used as the value of cppad_tape_id_type : uint8_t, uint16_t, uint32_t, uint64_t.

2.2.r: cppad_tape_addr_type
The type cppad_tape_addr_type is used for addresses in the AD recordings (tapes). The valid values for this argument are unsigned char, unsigned short int, unsigned int, size_t. The smaller the value of sizeof(cppad_tape_addr_type) , the less memory is used. On the other hand, the value       std::numeric_limits<cppad_tape_addr_type>::max()  must be larger than any of the following: 5.1.5.i: size_op , 5.1.5.j: size_op_arg , 5.1.5.h: size_par , 5.1.5.k: size_text , 5.1.5.l: size_VecAD .

2.2.r.a: cstdint
If all of the following cstdint types are defined, they can also be used as the value of cppad_tape_addr_type : uint8_t, uint16_t, uint32_t, uint64_t.

2.2.s: cppad_debug_which
All of the CppAD examples and tests can optionally be compiled in debug or release mode (see the exception below). This option controls which mode is chosen for the corresponding files. The value cppad_debug_which must be one of the following: debug_even, debug_odd, debug_all, debug_none. If it is debug_even (debug_odd), files with an even (odd) index in a list for each case will be compiled in debug mode. The remaining files will be compiled in release mode. If it is debug_all (debug_none), all the files will be compiled in debug (release) mode. If cppad_debug_which does not appear on the command line, the default value debug_all is used.

2.2.s.a: Exception
The test corresponding to make cppad_ipopt_speed always gets compiled in release mode (to avoid the extra time it would take to run in debug mode). Note that this test corresponds to a deprecated interface; see 12.8.10: cppad_ipopt_nlp .

2.2.t: cppad_deprecated
The default value for cppad_deprecated is NO (the value YES is not currently being used).
Input File: omh/install/cmake.omh
2.2.1: Including the ADOL-C Examples and Tests

2.2.1.a: Purpose

2.2.1.b: adolc_prefix
If ADOL-C is installed on your system, you can specify a value for its install prefix, adolc_prefix , on the 2.2: cmake command line. The value of adolc_prefix must be such that, for one of the directories dir in 2.2.h: cmake_install_includedirs ,       adolc_prefix/dir/adolc/adouble.h  is a valid way to reference the include file adouble.h. Note that CppAD assumes ADOL-C has been configured with its sparse matrix computations enabled; i.e., using       --with-colpack=adolc_prefix  In other words, ColPack is installed with the same prefix as ADOL-C; see 2.2.2.5: get_colpack.sh .

2.2.1.c: Examples
If you include adolc_prefix on the 2.2: cmake command line, you will be able to run the ADOL-C examples listed above by executing the following commands starting in the 2.1.b: distribution directory :       cd build/example      make check_example  If you do this, you will see an indication that the examples mul_level_adolc and mul_level_adolc_ode have passed their correctness check.

2.2.1.d: Speed Tests
If you include adolc_prefix on the 2.2: cmake command line, you will be able to run the ADOL-C speed correctness tests by executing the following commands starting in the 2.1.b: distribution directory :       cd build/speed/adolc      make check_speed_adolc  After executing make check_speed_adolc, you can run a specific ADOL-C speed test by executing the command ./speed_adolc; see 11.1: speed_main for the meaning of the command line options to this program.

2.2.1.e: Unix
If you are using Unix, you may have to add adolc_prefix to LD_LIBRARY_PATH. For example, if you use the bash shell to run your programs, you could include       LD_LIBRARY_PATH=adolc_prefix/lib:${LD_LIBRARY_PATH} export LD_LIBRARY_PATH  in your $HOME/.bashrc file.

2.2.1.f: Cygwin
If you are using Cygwin, you may have to add the following lines to the file .bashrc in your home directory:       PATH=adolc_prefix/bin:\${PATH}      export PATH  in order for ADOL-C to run properly. If adolc_prefix begins with a disk specification, you must use the Cygwin format for the disk specification. For example, if d:/adolc_base is the proper directory, /cygdrive/d/adolc_base should be used for adolc_prefix .

2.2.1.g: get_adolc
If you are using Unix, you can download and install a copy of ADOL-C using 2.2.1.1: get_adolc.sh . The corresponding adolc_prefix would be build/prefix.

2.2.1.1.a: Syntax  bin/get_adolc.sh

2.2.1.1.b: Purpose
If you are using Unix, this command will download and install ADOL-C (https://projects.coin-or.org/ADOL-C) in the CppAD build directory.

2.2.1.1.c: Requirements
You must first use 2.2.2.5: get_colpack.sh to download and install ColPack (coloring algorithms used for sparse matrix derivatives).

2.2.1.1.d: Distribution Directory
This command must be executed in the 2.1.b: distribution directory .

2.2.1.1.e: External Directory
The Adolc source code is downloaded into the sub-directory build/external below the distribution directory.

2.2.1.1.f: Prefix Directory
The Adolc include files are installed in the sub-directory build/prefix/include/adolc below the distribution directory.

2.2.1.1.g: Reuse
The files build/external/ADOL-C-version.tgz and the directory build/external/ADOL-C-version will be reused if they exist. Delete this file and directory to get a complete rebuild.
2.2.2: Including the ColPack Sparsity Calculations

2.2.2.a: Purpose
If you specify a colpack_prefix on the 2.2.b: cmake command line, the CppAD 5.6.2: sparse_jacobian calculations use the ColPack (http://cscapes.cs.purdue.edu/dox/ColPack/html) package.

2.2.2.b: colpack_prefix
If ColPack is installed on your system, you can specify a value for its install prefix, colpack_prefix , on the 2.2: cmake command line. The value of colpack_prefix must be such that, for one of the directories dir in 2.2.h: cmake_install_includedirs ,       colpack_prefix/dir/ColPack/ColPackHeaders.h  is a valid way to reference the include file ColPackHeaders.h.

2.2.2.c: cppad_lib
The ColPack header files have a       using namespace std  at the global level. For this reason, CppAD does not include these files. It is therefore necessary to link the object library cppad_lib when using ColPack.

2.2.2.d: Example
The file 2.2.2.1: colpack_jac.cpp (2.2.2.3: colpack_hes.cpp ) contains an example and test of using ColPack to compute the coloring for sparse Jacobians (Hessians). It returns true if it succeeds and false otherwise.

2.2.2.e: get_colpack
If you are using Unix, you can download and install a copy of ColPack using 2.2.2.5: get_colpack.sh . The corresponding colpack_prefix would be build/prefix.
Input File: omh/install/colpack_prefix.omh
2.2.2.1: ColPack: Sparse Jacobian Example and Test
 # include <cppad/cppad.hpp>
 bool colpack_jac(void)
 {   bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     typedef CPPAD_TESTVECTOR(AD<double>) a_vector;
     typedef CPPAD_TESTVECTOR(double)     d_vector;
     typedef CppAD::vector<size_t>        i_vector;
     typedef CppAD::sparse_rc<i_vector>   sparsity;
     typedef CppAD::sparse_rcv<i_vector, d_vector> sparse_matrix;

     // domain space vector
     size_t n = 4;
     a_vector a_x(n);
     for(size_t j = 0; j < n; j++)
         a_x[j] = AD<double> (0);

     // declare independent variables and starting recording
     CppAD::Independent(a_x);

     size_t m = 3;
     a_vector a_y(m);
     a_y[0] = a_x[0] + a_x[1];
     a_y[1] = a_x[2] + a_x[3];
     a_y[2] = a_x[0] + a_x[1] + a_x[2] + a_x[3] * a_x[3] / 2.;

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(a_x, a_y);

     // new value for the independent variable vector
     d_vector x(n);
     for(size_t j = 0; j < n; j++)
         x[j] = double(j);

     /*
           [ 1 1 0 0   ]
     jac = [ 0 0 1 1   ]
           [ 1 1 1 x_3 ]
     */
     // Normally one would use CppAD to compute sparsity pattern, but for this
     // example we set it directly
     size_t nr  = m;
     size_t nc  = n;
     size_t nnz = 8;
     sparsity pattern(nr, nc, nnz);
     d_vector check(nnz);
     for(size_t k = 0; k < nnz; k++)
     {   size_t r, c;
         if( k < 2 )
         {   r = 0; c = k; }
         else if( k < 4 )
         {   r = 1; c = k; }
         else
         {   r = 2; c = k - 4; }
         pattern.set(k, r, c);
         if( k == nnz - 1 )
             check[k] = x[3];
         else
             check[k] = 1.0;
     }

     // using row and column indices to compute non-zero in rows 1 and 2
     sparse_matrix subset( pattern );

     // check results for both CppAD and Colpack
     for(size_t i_method = 0; i_method < 4; i_method++)
     {   // coloring method
         std::string coloring;
         if( i_method % 2 == 0 )
             coloring = "cppad";
         else
             coloring = "colpack";
         //
         CppAD::sparse_jac_work work;
         size_t group_max = 1;
         if( i_method / 2 == 0 )
         {   size_t n_sweep = f.sparse_jac_for(
                 group_max, x, subset, pattern, coloring, work
             );
             ok &= n_sweep == 4;
         }
         else
         {   size_t n_sweep = f.sparse_jac_rev(
                 x, subset, pattern, coloring, work
             );
             ok &= n_sweep == 2;
         }
         const d_vector& hes( subset.val() );
         for(size_t k = 0; k < nnz; k++)
             ok &= check[k] == hes[k];
     }
     return ok;
 }
Input File: example/sparse/colpack_jac.cpp
2.2.2.2: ColPack: Sparse Jacobian Example and Test
 # include <cppad/cppad.hpp>
 bool colpack_jacobian(void)
 {   bool ok = true;
     using CppAD::AD;
     using CppAD::NearEqual;
     typedef CPPAD_TESTVECTOR(AD<double>) a_vector;
     typedef CPPAD_TESTVECTOR(double)     d_vector;
     typedef CppAD::vector<size_t>        i_vector;
     size_t i, j, k, ell;
     double eps = 10. * CppAD::numeric_limits<double>::epsilon();

     // domain space vector
     size_t n = 4;
     a_vector a_x(n);
     for(j = 0; j < n; j++)
         a_x[j] = AD<double> (0);

     // declare independent variables and starting recording
     CppAD::Independent(a_x);

     size_t m = 3;
     a_vector a_y(m);
     a_y[0] = a_x[0] + a_x[1];
     a_y[1] = a_x[2] + a_x[3];
     a_y[2] = a_x[0] + a_x[1] + a_x[2] + a_x[3] * a_x[3] / 2.;

     // create f: x -> y and stop tape recording
     CppAD::ADFun<double> f(a_x, a_y);

     // new value for the independent variable vector
     d_vector x(n);
     for(j = 0; j < n; j++)
         x[j] = double(j);

     /*
           [ 1 1 0 0   ]
     jac = [ 0 0 1 1   ]
           [ 1 1 1 x_3 ]
     */
     d_vector check(m * n);
     check[0]  = 1.; check[1]  = 1.; check[2]  = 0.; check[3]  = 0.;
     check[4]  = 0.; check[5]  = 0.; check[6]  = 1.; check[7]  = 1.;
     check[8]  = 1.; check[9]  = 1.; check[10] = 1.; check[11] = x[3];

     // Normally one would use f.ForSparseJac or f.RevSparseJac to compute
     // sparsity pattern, but for this example we extract it from check.
     std::vector< std::set<size_t> > p(m);

     // using row and column indices to compute non-zero in rows 1 and 2
     i_vector row, col;
     for(i = 0; i < m; i++)
     {   for(j = 0; j < n; j++)
         {   ell = i * n + j;
             if( check[ell] != 0. )
             {   row.push_back(i);
                 col.push_back(j);
                 p[i].insert(j);
             }
         }
     }
     size_t K = row.size();
     d_vector jac(K);

     // empty work structure
     CppAD::sparse_jacobian_work work;
     ok &= work.color_method == "cppad";
     // choose to use ColPack
     work.color_method = "colpack";

     // forward mode
     size_t n_sweep = f.SparseJacobianForward(x, p, row, col, jac, work);
     for(k = 0; k < K; k++)
     {   ell = row[k] * n + col[k];
         ok &= NearEqual(check[ell], jac[k], eps, eps);
     }
     ok &= n_sweep == 4;

     // reverse mode
     work.clear();
     work.color_method = "colpack";
     n_sweep = f.SparseJacobianReverse(x, p, row, col, jac, work);
     for(k = 0; k < K; k++)
     {   ell = row[k] * n + col[k];
         ok &= NearEqual(check[ell], jac[k], eps, eps);
     }
     ok &= n_sweep == 2;
     return ok;
 }
Input File: example/sparse/colpack_jacobian.cpp
2.2.2.3: ColPack: Sparse Hessian Example and Test
# include <cppad/cppad.hpp>
bool colpack_hes(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    typedef CPPAD_TESTVECTOR(AD<double>)          a_vector;
    typedef CPPAD_TESTVECTOR(double)              d_vector;
    typedef CppAD::vector<size_t>                 i_vector;
    typedef CppAD::sparse_rc<i_vector>            sparsity;
    typedef CppAD::sparse_rcv<i_vector, d_vector> sparse_matrix;
    double eps = 10. * CppAD::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n = 5;
    a_vector a_x(n);
    for(size_t j = 0; j < n; j++)
        a_x[j] = AD<double> (0);

    // declare independent variables and start recording
    CppAD::Independent(a_x);

    // colpack example case where the Hessian is a spearhead;
    // i.e., H(i, j) non-zero implies i = 0, j = 0, or i = j
    AD<double> sum = 0.0;

    // partial_0 partial_j = x[j]
    // partial_j partial_j = x[0]
    for(size_t j = 1; j < n; j++)
        sum += a_x[0] * a_x[j] * a_x[j] / 2.0;

    // partial_i partial_i = 2 * x[i]
    for(size_t i = 0; i < n; i++)
        sum += a_x[i] * a_x[i] * a_x[i] / 3.0;

    // declare dependent variables
    size_t m = 1;
    a_vector a_y(m);
    a_y[0] = sum;

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(a_x, a_y);

    // new value for the independent variable vector
    d_vector x(n);
    for(size_t j = 0; j < n; j++)
        x[j] = double(j + 1);

    /*
          [ 2 2 3 4 5  ]
    hes = [ 2 5 0 0 0  ]
          [ 3 0 7 0 0  ]
          [ 4 0 0 9 0  ]
          [ 5 0 0 0 11 ]
    */
    // Normally one would use CppAD to compute sparsity pattern, but for this
    // example we set it directly
    size_t nr  = n;
    size_t nc  = n;
    size_t nnz = n + 2 * (n - 1);
    sparsity pattern(nr, nc, nnz);
    for(size_t k = 0; k < n; k++)
    {   size_t r = k;
        size_t c = k;
        pattern.set(k, r, c);
    }
    for(size_t i = 1; i < n; i++)
    {   size_t k = n + 2 * (i - 1);
        size_t r = i;
        size_t c = 0;
        pattern.set(k,   r, c);
        pattern.set(k+1, c, r);
    }

    // subset of elements to compute (only compute lower triangle)
    nnz = n + (n - 1);
    sparsity lower_triangle(nr, nc, nnz);
    d_vector check(nnz);
    for(size_t k = 0; k < n; k++)
    {   size_t r = k;
        size_t c = k;
        lower_triangle.set(k, r, c);
        check[k] = 2.0 * x[k];
        if( k > 0 )
            check[k] += x[0];
    }
    for(size_t j = 1; j < n; j++)
    {   size_t k = n + (j - 1);
        size_t r = 0;
        size_t c = j;
        lower_triangle.set(k, r, c);
        check[k] = x[c];
    }
    sparse_matrix subset( lower_triangle );

    // check results for both CppAD and Colpack
    for(size_t i_method = 0; i_method < 4; i_method++)
    {   // coloring method
        std::string coloring;
        switch(i_method)
        {   case 0:
            coloring = "cppad.symmetric";
            break;

            case 1:
            coloring = "cppad.general";
            break;

            case 2:
            coloring = "colpack.symmetric";
            break;

            case 3:
            coloring = "colpack.general";
            break;
        }

        // compute Hessian
        CppAD::sparse_hes_work work;
        d_vector w(m);
        w[0] = 1.0;
        size_t n_sweep = f.sparse_hes(
            x, w, subset, pattern, coloring, work
        );

        // check result
        const d_vector& hes( subset.val() );
        for(size_t k = 0; k < nnz; k++)
            ok &= NearEqual(check[k], hes[k], eps, eps);
        if( coloring == "cppad.symmetric" || coloring == "colpack.symmetric" )
            ok &= n_sweep == 2;
        else
            ok &= n_sweep == 5;
    }
    return ok;
}
Input File: example/sparse/colpack_hes.cpp
2.2.2.4: ColPack: Sparse Hessian Example and Test
# include <cppad/cppad.hpp>
bool colpack_hessian(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    typedef CPPAD_TESTVECTOR(AD<double>) a_vector;
    typedef CPPAD_TESTVECTOR(double)     d_vector;
    typedef CppAD::vector<size_t>        i_vector;
    size_t i, j, k, ell;
    double eps = 10. * CppAD::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n = 5;
    a_vector a_x(n);
    for(j = 0; j < n; j++)
        a_x[j] = AD<double> (0);

    // declare independent variables and start recording
    CppAD::Independent(a_x);

    // colpack example case where the Hessian is a spearhead;
    // i.e., H(i, j) non-zero implies i = 0, j = 0, or i = j
    AD<double> sum = 0.0;

    // partial_0 partial_j = x[j]
    // partial_j partial_j = x[0]
    for(j = 1; j < n; j++)
        sum += a_x[0] * a_x[j] * a_x[j] / 2.0;

    // partial_i partial_i = 2 * x[i]
    for(i = 0; i < n; i++)
        sum += a_x[i] * a_x[i] * a_x[i] / 3.0;

    // declare dependent variables
    size_t m = 1;
    a_vector a_y(m);
    a_y[0] = sum;

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(a_x, a_y);

    // new value for the independent variable vector
    d_vector x(n);
    for(j = 0; j < n; j++)
        x[j] = double(j + 1);

    /*
          [ 2 2 3 4 5  ]
    hes = [ 2 5 0 0 0  ]
          [ 3 0 7 0 0  ]
          [ 4 0 0 9 0  ]
          [ 5 0 0 0 11 ]
    */
    d_vector check(n * n);
    for(i = 0; i < n; i++)
    {   for(j = 0; j < n; j++)
        {   size_t index = i * n + j;
            check[index] = 0.0;
            if( i == 0 && 1 <= j )
                check[index] += x[j];
            if( 1 <= i && j == 0 )
                check[index] += x[i];
            if( i == j )
            {   check[index] += 2.0 * x[i];
                if( i != 0 )
                    check[index] += x[0];
            }
        }
    }
    // Normally one would use f.RevSparseHes to compute
    // sparsity pattern, but for this example we extract it from check.
    std::vector< std::set<size_t> > p(n);
    i_vector row, col;
    for(i = 0; i < n; i++)
    {   for(j = 0; j < n; j++)
        {   ell = i * n + j;
            if( check[ell] != 0. )
            {   // insert this non-zero entry in sparsity pattern
                p[i].insert(j);

                // the Hessian is symmetric, so only lower triangle
                if( j <= i )
                {   row.push_back(i);
                    col.push_back(j);
                }
            }
        }
    }
    size_t K = row.size();
    d_vector hes(K);

    // default coloring method is cppad.symmetric
    CppAD::sparse_hessian_work work;
    ok &= work.color_method == "cppad.symmetric";

    // contrast and check results for both CppAD and Colpack
    for(size_t i_method = 0; i_method < 4; i_method++)
    {   // set the coloring method for this pass
        switch(i_method)
        {   case 0:
            work.color_method = "cppad.symmetric";
            break;

            case 1:
            work.color_method = "cppad.general";
            break;

            case 2:
            work.color_method = "colpack.symmetric";
            break;

            case 3:
            work.color_method = "colpack.general";
            break;
        }

        // compute Hessian
        d_vector w(m);
        w[0] = 1.0;
        size_t n_sweep = f.SparseHessian(x, w, p, row, col, hes, work);

        // check result
        for(k = 0; k < K; k++)
        {   ell = row[k] * n + col[k];
            ok &= NearEqual(check[ell], hes[k], eps, eps);
        }
        if( work.color_method == "cppad.symmetric"
        ||  work.color_method == "colpack.symmetric" )
            ok &= n_sweep == 2;
        else
            ok &= n_sweep == 5;

        // check that clear resets color_method to cppad.symmetric
        work.clear();
        ok &= work.color_method == "cppad.symmetric";
    }
    return ok;
}
Input File: example/sparse/colpack_hessian.cpp

2.2.2.5.a: Syntax  bin/get_colpack.sh

2.2.2.5.b: Purpose
If you are using Unix, this command will download and install ColPack (http://cscapes.cs.purdue.edu/dox/ColPack/html/) in the CppAD build directory.

2.2.2.5.c: Distribution Directory
This command must be executed in the 2.1.b: distribution directory .

2.2.2.5.d: External Directory
The ColPack source code is downloaded into the sub-directory build/external below the distribution directory.

2.2.2.5.e: Prefix Directory
The ColPack include files are installed in the sub-directory build/prefix/include/ColPack below the distribution directory.

2.2.2.5.f: Reuse
The file build/external/ColPack-version.tar.gz and the directory build/external/ColPack-version will be reused if they exist. Delete this file and directory to get a complete rebuild.
Input File: bin/get_colpack.sh
2.2.3: Including the Eigen Examples and Tests

2.2.3.a: Purpose
CppAD can include the following examples and tests that use the linear algebra package Eigen (http://eigen.tuxfamily.org) :
10.2.4: cppad_eigen.hpp                  Enable Use of Eigen Linear Algebra Package with CppAD
10.2.4.2: eigen_array.cpp                Using Eigen Arrays: Example and Test
10.2.4.3: eigen_det.cpp                  Using Eigen To Compute Determinant: Example and Test
4.4.7.2.16.1: atomic_eigen_mat_mul.hpp   Atomic Eigen Matrix Multiply Class
4.4.7.2.17.1: atomic_eigen_mat_inv.hpp   Atomic Eigen Matrix Inversion Class

2.2.3.b: eigen_prefix
If Eigen is installed on your system, you can specify a value for its install eigen_prefix on the 2.2: cmake command line. The value of eigen_prefix must be such that, for one of the directories dir in 2.2.h: cmake_install_includedirs ,       eigen_prefix/dir/Eigen/Core  is a valid way to reference the include file Core.

2.2.3.c: Examples
If you include eigen_prefix on the 2.2: cmake command line, you will be able to run the Eigen examples listed above by executing the following commands starting in the 2.1.b: distribution directory :       cd build/example      make check_example  If you do this, you will see an indication that the examples eigen_array and eigen_det have passed their correctness check.

2.2.3.d: Test Vector
If you have specified eigen_prefix you can choose       -D cppad_testvector=eigen  on the 2.2.b: cmake command line. This will set the CppAD 10.5: testvector to use Eigen vectors.

2.2.3.e: get_eigen
If you are using Unix, you can download and install a copy of Eigen using 2.2.3.1: get_eigen.sh . The corresponding eigen_prefix would be build/prefix.
Input File: omh/install/eigen_prefix.omh

2.2.3.1.a: Syntax  bin/get_eigen.sh

2.2.3.1.b: Purpose
If you are using Unix, this command will download and install Eigen (http://eigen.tuxfamily.org) in the CppAD build directory.

2.2.3.1.c: Distribution Directory
This command must be executed in the 2.1.b: distribution directory .

2.2.3.1.d: External Directory
The Eigen source code is downloaded into the sub-directory build/external below the distribution directory.

2.2.3.1.e: Prefix Directory
The Eigen include files are installed in the sub-directory build/prefix/include/Eigen below the distribution directory.

2.2.3.1.f: Reuse
The file build/external/eigen-version.tar.gz and the directory build/external/eigen-version will be reused if they exist. Delete this file and directory to get a complete rebuild.
Input File: bin/get_eigen.sh

2.2.4: Including the FADBAD Speed Tests

2.2.4.a: Purpose

2.2.4.b: fadbad_prefix
If FADBAD is installed on your system, you can specify a value for its install fadbad_prefix on the 2.2: cmake command line. The value of fadbad_prefix must be such that, for one of the directories dir in 2.2.h: cmake_install_includedirs ,       fadbad_prefix/dir/FADBAD++/badiff.h  is a valid way to reference the include file badiff.h.

2.2.4.c: Speed Tests
If you include fadbad_prefix on the 2.2: cmake command line, you will be able to run the FADBAD speed correctness tests by executing the following commands starting in the 2.1.b: distribution directory :       cd build/speed/fadbad      make check_speed_fadbad  After executing make check_speed_fadbad, you can run a specific FADBAD speed test by executing the command ./speed_fadbad; see 11.1: speed_main for the meaning of the command line options to this program.

2.2.4.d: get_fadbad
If you are using Unix, you can download and install a copy of Fadbad using 2.2.4.1: get_fadbad.sh . The corresponding fadbad_prefix would be build/prefix.

2.2.4.1.a: Syntax  bin/get_fadbad.sh

2.2.4.1.b: Purpose
If you are using Unix, this command will download and install Fadbad (http://www.fadbad.com) in the CppAD build directory.

2.2.4.1.c: Distribution Directory
This command must be executed in the 2.1.b: distribution directory .

2.2.4.1.d: External Directory
The Fadbad source code is downloaded into the sub-directory build/external below the distribution directory.

2.2.4.1.e: Prefix Directory
The Fadbad include files are installed in the sub-directory build/prefix/include/FADBAD++ below the distribution directory.
2.2.5: Including the cppad_ipopt Library and Tests

2.2.5.a: Purpose
Includes the 12.8.10: cppad_ipopt_nlp example and tests as well as installing the cppad_ipopt library during the make install step.

2.2.5.b: ipopt_prefix
If you have Ipopt (http://www.coin-or.org/projects/Ipopt.xml) installed on your system, you can specify the value for ipopt_prefix on the 2.2: cmake command line. The value of ipopt_prefix must be such that, for one of the directories dir in 2.2.h: cmake_install_includedirs ,       ipopt_prefix/dir/coin/IpIpoptApplication.hpp  is a valid way to reference the include file IpIpoptApplication.hpp.

2.2.5.c: Examples and Tests
If you include ipopt_prefix on the 2.2: cmake command line, you will be able to run the Ipopt examples and tests by executing the following commands starting in the 2.1.b: distribution directory :       cd cppad_ipopt      make check_ipopt 
2.2.5.d: get_ipopt
If you are using Unix, you can download and install a copy of Ipopt using 2.2.5.1: get_ipopt.sh . The corresponding ipopt_prefix would be build/prefix.
Input File: omh/install/ipopt_prefix.omh

2.2.5.1.a: Syntax  bin/get_ipopt.sh

2.2.5.1.b: Purpose
If you are using Unix, this command will download and install Ipopt (http://www.coin-or.org/projects/Ipopt.xml) in the CppAD build directory.

2.2.5.1.c: Distribution Directory
This command must be executed in the 2.1.b: distribution directory .

2.2.5.1.d: External Directory
The Ipopt source code is downloaded and compiled in the sub-directory build/external below the distribution directory.

2.2.5.1.e: Prefix Directory
The Ipopt libraries and include files are installed in the sub-directory build/prefix below the distribution directory.

2.2.5.1.f: Reuse
The file build/external/Ipopt-version.tgz and the directory build/external/Ipopt-version will be reused if they exist. Delete this file and directory to get a complete rebuild.
Input File: bin/get_ipopt.sh
2.2.6: Including the Sacado Speed Tests

2.2.6.a: Purpose

2.2.6.b: sacado_prefix
If Sacado is installed on your system, you can specify a value for its install sacado_prefix on the 2.2: cmake command line. The value of sacado_prefix must be such that, for one of the directories dir in 2.2.h: cmake_install_includedirs ,       sacado_prefix/dir/Sacado.hpp  is a valid way to reference the include file Sacado.hpp.

2.2.6.c: Speed Tests
If you include sacado_prefix on the 2.2: cmake command line, you will be able to run the Sacado speed correctness tests by executing the following commands starting in the 2.1.b: distribution directory :       cd build/speed/sacado      make check_speed_sacado  After executing make check_speed_sacado, you can run a specific Sacado speed test by executing the command ./speed_sacado; see 11.1: speed_main for the meaning of the command line options to this program.

2.2.6.d: get_sacado
If you are using Unix, you can download and install a copy of Sacado using 2.2.6.1: get_sacado.sh . The corresponding sacado_prefix would be build/prefix.

2.2.6.1.a: Syntax  bin/get_sacado.sh

2.2.6.1.b: Purpose
If you are using Unix, this command will download and install Sacado (http://trilinos.sandia.gov/packages/sacado) in the CppAD build directory.

2.2.6.1.c: Distribution Directory
This command must be executed in the 2.1.b: distribution directory .

2.2.6.1.d: External Directory
The Sacado source code is downloaded into the sub-directory build/external below the distribution directory.

2.2.6.1.e: Prefix Directory
The Sacado libraries and include files are installed in the sub-directory build/prefix below the distribution directory.

2.2.6.1.f: Reuse
The file build/external/trilinos-version-Source.tar.gz and the directory build/external/trilinos-version-Source will be reused if they exist. Delete this file and directory to get a complete rebuild.
2.2.7: Choosing the CppAD Test Vector Template Class

2.2.7.a: Purpose
The value cppad_testvector in the 2.2.b: cmake command must be one of the following: boost, cppad, eigen, or std. It specifies which type of vector corresponds to the template class 10.5: CPPAD_TESTVECTOR , which is used for many of the CppAD examples and tests.

2.2.7.b: std
If cppad_testvector is std , the std::vector template class is used to define CPPAD_TESTVECTOR.

2.2.7.c: cppad
If cppad_testvector is cppad , the 8.22: cppad_vector template class is used to define CPPAD_TESTVECTOR.

2.2.7.d: boost
If cppad_testvector is boost , boost ublas vector (http://www.boost.org/doc/libs/1_52_0/libs/numeric/ublas/doc/vector.htm) template class is used to define CPPAD_TESTVECTOR. In this case, the cmake FindBoost (http://www.cmake.org/cmake/help/cmake2.6docs.html#module:FindBoost) module must be able to automatically figure out where Boost is installed.

2.2.7.e: eigen
If cppad_testvector is eigen , one of the eigen template classes is used to define CPPAD_TESTVECTOR. In this case, 2.2.3: eigen_prefix must be specified on the cmake command line.
Input File: omh/install/testvector.omh
2.3: Checking the CppAD Examples and Tests

2.3.a: Purpose
After you configure your system with the 2.2.b: cmake command you can run the CppAD example and tests to make sure that CppAD functions properly on your system.

2.3.b: Check All
In the build subdirectory of the 2.1.b: distribution directory execute the command  make check  This will build and run all of the tests that are supported by your system and the 2.2: cmake command options.

2.3.b.a: Windows
If you created nmake makefiles, you will have to use nmake instead of make in the commands above and below; see 2.1.i: windows file extraction and testing .

2.3.c: Subsets of make check
In unix, you can determine which subsets of make check are available by putting the output of the 2.2.b: cmake command in a file (called cmake.out below) and executing:       grep 'make check.*available' cmake.out 
2.3.d: First Level
The first level of subsets of make check are described below:
Command                  Description
make check_introduction  the 3: Introduction functions
make check_example       the normal 10.4: example functions plus some deprecated examples
make check_test_more     correctness tests that are not examples
make check_speed         correctness for single thread 11: speed tests
make check_cppad_ipopt   the deprecated 12.8.10: cppad_ipopt_nlp speed and correctness tests
Note that make check_example_multi_thread is used for the 7: multi-threading speed tests.
Input File: omh/install/cmake_check.omh

2.4.a: Purpose
The pkg-config package helps with the use of installed libraries; see its guide (http://people.freedesktop.org/~dbn/pkg-config-guide.html) for more information.

2.4.b: Usage
The necessary flags for compiling code that includes CppAD can be obtained with the command  pkg-config --cflags cppad  Note that this command assumes 2.4: cppad.pc is in the search path PKG_CONFIG_PATH. If 2.2.5: ipopt_prefix is specified, the necessary flags for linking 12.8.10: cppad_ipopt can be obtained with the command  pkg-config --libs cppad  Note that this command assumes ipopt.pc is in the search path PKG_CONFIG_PATH.

2.4.c: Defined Fields
Name         A human-readable name for the CppAD package.
Description  A brief description of the CppAD package.
URL          A URL where people can get more information about the CppAD package.
Version      A string specifically defining the version of the CppAD package.
Cflags       The necessary flags for using any of the CppAD include files.
Libs         If 2.2.5: ipopt_prefix is specified, the necessary flags for using the 12.8.10: cppad_ipopt library.
Requires     If 2.2.5: ipopt_prefix is specified, the packages required to use the 12.8.10: cppad_ipopt library.

In the table below, builddir is the build directory; i.e., the directory where the CppAD 2.2.b: cmake command is executed. The directory prefix is the value of 2.2.f: cppad_prefix during configuration. The directory datadir is the value of 2.2.j: cmake_install_datadir . The following configuration files contain the information above:
File                                     Description
prefix/datadir/pkgconfig/cppad.pc        for use after 2.a.d: make install
builddir/pkgconfig/cppad-uninstalled.pc  for testing before make install

Input File: omh/install/pkgconfig.omh
3: An Introduction by Example to Algorithmic Differentiation

3.a: Purpose
This is an introduction by example to Algorithmic Differentiation. Its purpose is to aid in understanding what AD calculates, how the calculations are performed, and the amount of computation and memory required for a forward or reverse sweep.

3.b: Preface

3.b.a: Algorithmic Differentiation
Algorithmic Differentiation (often referred to as Automatic Differentiation or just AD) uses the software representation of a function to obtain an efficient method for calculating its derivatives. These derivatives can be of arbitrary order and are analytic in nature (do not have any truncation error).

3.b.b: Forward Mode
A forward mode sweep computes the partial derivative of all the dependent variables with respect to one independent variable (or independent variable direction).

3.b.c: Reverse Mode
A reverse mode sweep computes the derivative of one dependent variable (or one dependent variable direction) with respect to all the independent variables.
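In matrix terms (a notational summary added here for clarity, using the Jacobian notation $F^{(1)}$): if $f : \B{R}^n \rightarrow \B{R}^m$ has Jacobian $F^{(1)} (x) \in \B{R}^{m \times n}$, one forward sweep and one reverse sweep compute

```latex
$$
\begin{array}{lll}
\mbox{forward sweep:} & F^{(1)} ( x^{(0)} ) \, x^{(1)} \in \B{R}^m
    & \mbox{where $x^{(1)}$ is one independent variable direction} \\
\mbox{reverse sweep:} & w^{\rm T} F^{(1)} ( x^{(0)} ) \in \B{R}^n
    & \mbox{where $w$ is one dependent variable weight vector}
\end{array}
$$
```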

3.b.d: Operation Count
The number of floating point operations for either a forward or reverse mode sweep is a small multiple of the number required to evaluate the original function. Thus, using reverse mode, you can evaluate the derivative of a scalar valued function with respect to thousands of variables in a small multiple of the work to evaluate the original function.

3.b.e: Efficiency
AD automatically takes advantage of the speed of your algorithmic representation of a function. For example, if you calculate a determinant using LU factorization, AD will use the LU representation for the derivative of the determinant (which is faster than using the definition of the determinant).

3.c: Outline
1. Demonstrate the use of CppAD to calculate derivatives of a polynomial: 10.1: get_started.cpp .
2. Present two algorithms that approximate the exponential function. The first algorithm 3.1.1: exp_2.hpp is simpler and does not include any logical variables or loops. The second algorithm 3.2.1: exp_eps.hpp includes logical operations and a while loop. For each of these algorithms, do the following:
1. Define the mathematical function corresponding to the algorithm (3.1: exp_2 and 3.2: exp_eps ).
2. Write out the floating point operation sequence, and corresponding values, that correspond to executing the algorithm for a specific input (3.1.3: exp_2_for0 and 3.2.3: exp_eps_for0 ).
3. Compute a forward sweep derivative of the operation sequence (3.1.4: exp_2_for1 and 3.2.4: exp_eps_for1 ).
4. Compute a reverse sweep derivative of the operation sequence (3.1.5: exp_2_rev1 and 3.2.5: exp_eps_rev1 ).
5. Use CppAD to compute both a forward and reverse sweep of the operation sequence (3.1.8: exp_2_cppad and 3.2.8: exp_eps_cppad ).
3. The program 3.3: exp_apx.cpp runs all of the test routines that validate the calculations in the 3.1: exp_2 and 3.2: exp_eps presentation.

3.d: Reference
An in-depth review of AD theory and methods can be found in the book Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation  , Andreas Griewank, SIAM Frontiers in Applied Mathematics, 2000.

3.e: Contents
exp_2: 3.1        Second Order Exponential Approximation
exp_eps: 3.2      An Epsilon Accurate Exponential Approximation
exp_apx.cpp: 3.3  Correctness Tests For Exponential Approximation in Introduction

Input File: omh/introduction.omh
3.1: Second Order Exponential Approximation

3.1.a: Syntax
# include "exp_2.hpp"   y = exp_2(x)

3.1.b: Purpose
This is a simple example algorithm that is used to demonstrate Algorithmic Differentiation (see 3.2: exp_eps for a more complex example).

3.1.c: Mathematical Form
The exponential function can be defined by $$\exp (x) = 1 + x^1 / 1 ! + x^2 / 2 ! + \cdots$$ The second order approximation for the exponential function is $${\rm exp\_2} (x) = 1 + x + x^2 / 2$$

3.1.d: include
The include command in the syntax is relative to       cppad-yyyymmdd/introduction/exp_apx  where cppad-yyyymmdd is the distribution directory created during the beginning steps of the 2: installation of CppAD.

3.1.e: x
The argument x has prototype       const Type &x  (see Type below). It specifies the point at which to evaluate the approximation for the second order exponential approximation.

3.1.f: y
The result y has prototype       Type y  It is the value of the exponential function approximation defined above.

3.1.g: Type
If u and v are Type objects and i is an int:
Operation   Result Type  Description
Type(i)     Type         construct object with value equal to i
Type u = v  Type         construct u with value equal to v
u * v       Type         result is value of $u * v$
u / v       Type         result is value of $u / v$
u + v       Type         result is value of $u + v$

3.1.h: Contents
exp_2.hpp: 3.1.1    exp_2: Implementation
exp_2.cpp: 3.1.2    exp_2: Test
exp_2_for0: 3.1.3   exp_2: Operation Sequence and Zero Order Forward Mode
exp_2_for1: 3.1.4   exp_2: First Order Forward Mode
exp_2_rev1: 3.1.5   exp_2: First Order Reverse Mode
exp_2_for2: 3.1.6   exp_2: Second Order Forward Mode
exp_2_rev2: 3.1.7   exp_2: Second Order Reverse Mode
exp_2_cppad: 3.1.8  exp_2: CppAD Forward and Reverse Sweeps

3.1.i: Implementation
The file 3.1.1: exp_2.hpp contains a C++ implementation of this function.

3.1.j: Test
The file 3.1.2: exp_2.cpp contains a test of this implementation. It returns true for success and false for failure.

3.1.k: Exercises
1. Suppose that we make the call  double x = .1; double y = exp_2(x);  What is the value assigned to v1, v2, ... ,v5 in 3.1.1: exp_2.hpp ?
2. Extend the routine exp_2.hpp to a routine exp_3.hpp that computes $$1 + x + x^2 / 2 ! + x^3 / 3 !$$ Do this in a way that only assigns one value to each variable (as exp_2 does).
3. Suppose that we make the call  double x = .5; double y = exp_3(x);  using exp_3 created in the previous problem. What is the value assigned to the new variables in exp_3 (variables that are in exp_3 and not in exp_2) ?

Input File: introduction/exp_2.hpp
3.1.1: exp_2: Implementation
template <class Type>
Type exp_2(const Type &x)
{   Type v1 = x;                // v1 = x
    Type v2 = Type(1) + v1;     // v2 = 1 + x
    Type v3 = v1 * v1;          // v3 = x^2
    Type v4 = v3 / Type(2);     // v4 = x^2 / 2
    Type v5 = v2 + v4;          // v5 = 1 + x + x^2 / 2
    return v5;                  // exp_2(x) = 1 + x + x^2 / 2
}
Input File: introduction/exp_2.omh
3.1.2: exp_2: Test
# include <cmath>      // define fabs function
# include "exp_2.hpp"  // definition of exp_2 algorithm
bool exp_2(void)
{   double x     = .5;
    double check = 1 + x + x * x / 2.;
    bool   ok    = std::fabs( exp_2(x) - check ) <= 1e-10;
    return ok;
}
Input File: introduction/exp_2.omh
3.1.3: exp_2: Operation Sequence and Zero Order Forward Mode

3.1.3.a: Mathematical Form
The operation sequence (see below) corresponding to the algorithm 3.1.1: exp_2.hpp is the same for all values of x . The mathematical form for the corresponding function is $$f(x) = 1 + x + x^2 / 2$$ An algorithmic differentiation package does not operate on the mathematical function $f(x)$ but rather on the particular algorithm used to compute the function (in this case 3.1.1: exp_2.hpp ).

3.1.3.b: Zero Order Expansion
In general, a zero order forward sweep is given a vector $x^{(0)} \in \B{R}^n$ and it returns the corresponding vector $y^{(0)} \in \B{R}^m$ given by $$y^{(0)} = f( x^{(0)} )$$ The superscript $(0)$ denotes zero order derivative; i.e., it is equal to the value of the corresponding variable. For the example we are considering here, both $n$ and $m$ are equal to one.

3.1.3.c: Operation Sequence
An atomic Type operation is an operation that has a Type result and is not made up of other more basic operations. A sequence of atomic Type operations is called a Type operation sequence. Given a C++ algorithm and its inputs, there is a corresponding Type operation sequence for each type. If Type is clear from the context, we drop it and just refer to the operation sequence.  We consider the case where 3.1.1: exp_2.hpp is executed with $x^{(0)} = .5$. The table below contains the corresponding operation sequence and the results of a zero order sweep.

3.1.3.c.a: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation and variable. A Forward sweep starts with the first operation and ends with the last.

3.1.3.c.b: Code
The Code column contains the C++ source code corresponding to the corresponding atomic operation in the sequence.

3.1.3.c.c: Operation
The Operation column contains the mathematical function corresponding to each atomic operation in the sequence.

3.1.3.c.d: Zero Order
The Zero Order column contains the zero order derivative for the corresponding variable in the operation sequence. Forward mode refers to the fact that these coefficients are computed in the same order as the original algorithm; i.e., in order of increasing index in the operation sequence.

3.1.3.c.e: Sweep
Index  Code                     Operation          Zero Order
1      Type v1 = x;             $v_1 = x$          $v_1^{(0)} = 0.5$
2      Type v2 = Type(1) + v1;  $v_2 = 1 + v_1$    $v_2^{(0)} = 1.5$
3      Type v3 = v1 * v1;       $v_3 = v_1 * v_1$  $v_3^{(0)} = 0.25$
4      Type v4 = v3 / Type(2);  $v_4 = v_3 / 2$    $v_4^{(0)} = 0.125$
5      Type v5 = v2 + v4;       $v_5 = v_2 + v_4$  $v_5^{(0)} = 1.625$
3.1.3.d: Return Value
The return value for this case is $$1.625 = v_5^{(0)} = f( x^{(0)} )$$

3.1.3.e: Verification
The file 3.1.3.1: exp_2_for0.cpp contains a routine that verifies the values computed above. It returns true for success and false for failure.

3.1.3.f: Exercises
1. Suppose that $x^{(0)} = .2$, what is the result of a zero order forward sweep for the operation sequence above; i.e., what are the corresponding values for $$v_1^{(0)} , v_2^{(0)} , \cdots , v_5^{(0)}$$
2. Create a modified version of 3.1.3.1: exp_2_for0.cpp that verifies the values you obtained for the previous exercise.
3. Create and run a main program that reports the result of calling the modified version of 3.1.3.1: exp_2_for0.cpp in the previous exercise.

Input File: introduction/exp_2.omh
3.1.3.1: exp_2: Verify Zero Order Forward Sweep
# include <cmath>            // for fabs function
bool exp_2_for0(double *v0)  // double v0[6]
{     bool  ok = true;
double x = .5;

v0[1] = x;                                  // v1 = x
ok  &= std::fabs( v0[1] - 0.5) < 1e-10;

v0[2] = 1. + v0[1];                         // v2 = 1 + v1
ok  &= std::fabs( v0[2] - 1.5) < 1e-10;

v0[3] = v0[1] * v0[1];                      // v3 = v1 * v1
ok  &= std::fabs( v0[3] - 0.25) < 1e-10;

v0[4] = v0[3] / 2.;                         // v4 = v3 / 2
ok  &= std::fabs( v0[4] - 0.125) < 1e-10;

v0[5] = v0[2] + v0[4];                      // v5  = v2 + v4
ok  &= std::fabs( v0[5] - 1.625) < 1e-10;

return ok;
}
bool exp_2_for0(void)
{     double v0[6];
return exp_2_for0(v0);
}

Input File: introduction/exp_2_for0.cpp
3.1.4: exp_2: First Order Forward Mode

3.1.4.a: First Order Expansion
We define $x(t)$ near $t = 0$ by the first order expansion $$x(t) = x^{(0)} + x^{(1)} * t$$ It follows that $x^{(0)}$ is the zero order, and $x^{(1)}$ the first order, derivative of $x(t)$ at $t = 0$.

3.1.4.b: Purpose
In general, a first order forward sweep is given the 3.1.3.b: zero order derivative for all of the variables in an operation sequence, and the first order derivatives for the independent variables. It uses these to compute the first order derivatives, and thereby obtain the first order expansion, for all the other variables in the operation sequence.

3.1.4.c: Mathematical Form
Suppose that we use the algorithm 3.1.1: exp_2.hpp to compute $$f(x) = 1 + x + x^2 / 2$$ The corresponding derivative function is $$\partial_x f (x) = 1 + x$$ An algorithmic differentiation package does not operate on the mathematical form of the function, or its derivative, but rather on the 3.1.3.c: operation sequence for the algorithm that is used to evaluate the function.

3.1.4.d: Operation Sequence
We consider the case where 3.1.1: exp_2.hpp is executed with $x = .5$. The corresponding operation sequence and zero order forward mode values (see 3.1.3.c.e: zero order sweep ) are inputs and are used by a first order forward sweep.

3.1.4.d.a: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation. A Forward sweep starts with the first operation and ends with the last.

3.1.4.d.b: Operation
The Operation column contains the mathematical function corresponding to each atomic operation in the sequence.

3.1.4.d.c: Zero Order
The Zero Order column contains the zero order derivatives for the corresponding variable in the operation sequence (see 3.1.3.c.e: zero order sweep ).

3.1.4.d.d: Derivative
The Derivative column contains the mathematical function corresponding to the derivative with respect to $t$, at $t = 0$, for each variable in the sequence.

3.1.4.d.e: First Order
The First Order column contains the first order derivatives for the corresponding variable in the operation sequence; i.e., $$v_j (t) = v_j^{(0)} + v_j^{(1)} t$$ We use $x^{(1)} = 1$ so that differentiation with respect to $t$, at $t = 0$, is the same as partial differentiation with respect to $x$ at $x = x^{(0)}$.

3.1.4.d.f: Sweep
 Index   Operation           Zero Order   Derivative                                First Order
 1       $v_1 = x$           0.5          $v_1^{(1)} = x^{(1)}$                     $v_1^{(1)} = 1$
 2       $v_2 = 1 + v_1$     1.5          $v_2^{(1)} = v_1^{(1)}$                   $v_2^{(1)} = 1$
 3       $v_3 = v_1 * v_1$   0.25         $v_3^{(1)} = 2 * v_1^{(0)} * v_1^{(1)}$   $v_3^{(1)} = 1$
 4       $v_4 = v_3 / 2$     0.125        $v_4^{(1)} = v_3^{(1)} / 2$               $v_4^{(1)} = 0.5$
 5       $v_5 = v_2 + v_4$   1.625        $v_5^{(1)} = v_2^{(1)} + v_4^{(1)}$       $v_5^{(1)} = 1.5$
3.1.4.e: Return Value
The derivative of the return value for this case is $$\begin{array}{rcl} 1.5 & = & v_5^{(1)} = \left[ \D{v_5}{t} \right]_{t=0} = \left[ \D{}{t} f ( x^{(0)} + x^{(1)} t ) \right]_{t=0} \\ & = & f^{(1)} ( x^{(0)} ) * x^{(1)} = f^{(1)} ( x^{(0)} ) \end{array}$$ (We have used the fact that $x^{(1)} = 1$.)

3.1.4.f: Verification
The file 3.1.4.1: exp_2_for1.cpp contains a routine which verifies the values computed above. It returns true for success and false for failure.

3.1.4.g: Exercises
1. Which statement in the routine defined by 3.1.4.1: exp_2_for1.cpp uses the values that are calculated by the routine defined by 3.1.3.1: exp_2_for0.cpp ?
2. Suppose that $x = .1$, what are the results of a zero and first order forward sweep for the operation sequence above; i.e., what are the corresponding values for $v_1^{(0)}, v_2^{(0)}, \cdots , v_5^{(0)}$ and $v_1^{(1)}, v_2^{(1)}, \cdots , v_5^{(1)}$ ?
3. Create a modified version of 3.1.4.1: exp_2_for1.cpp that verifies the derivative values from the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.1.4.1: exp_2_for1.cpp .

Input File: introduction/exp_2.omh
3.1.4.1: exp_2: Verify First Order Forward Sweep
# include <cmath>                   // prototype for fabs
extern bool exp_2_for0(double *v0); // computes zero order forward sweep
bool exp_2_for1(double *v1)         // double v1[6]
{     bool ok = true;
double v0[6];

// set the value of v0[j] for j = 1 , ... , 5
ok &= exp_2_for0(v0);

v1[1] = 1.;                                     // v1 = x
ok    &= std::fabs( v1[1] - 1. ) <= 1e-10;

v1[2] = v1[1];                                  // v2 = 1 + v1
ok    &= std::fabs( v1[2] - 1. ) <= 1e-10;

v1[3] = v1[1] * v0[1] + v0[1] * v1[1];          // v3 = v1 * v1
ok    &= std::fabs( v1[3] - 1. ) <= 1e-10;

v1[4] = v1[3] / 2.;                             // v4 = v3 / 2
ok    &= std::fabs( v1[4] - 0.5) <= 1e-10;

v1[5] = v1[2] + v1[4];                          // v5 = v2 + v4
ok    &= std::fabs( v1[5] - 1.5) <= 1e-10;

return ok;
}
bool exp_2_for1(void)
{     double v1[6];
return exp_2_for1(v1);
}

Input File: introduction/exp_2_for1.cpp
3.1.5: exp_2: First Order Reverse Mode

3.1.5.a: Purpose
First order reverse mode uses the 3.1.3.c: operation sequence , and zero order forward sweep values, to compute the first order derivative of one dependent variable with respect to all the independent variables. The computations are done in reverse of the order of the computations in the original algorithm.

3.1.5.b: Mathematical Form
Suppose that we use the algorithm 3.1.1: exp_2.hpp to compute $$f(x) = 1 + x + x^2 / 2$$ The corresponding derivative function is $$\partial_x f (x) = 1 + x$$

3.1.5.c: f_5
For our example, we chose to compute the derivative of the value returned by 3.1.1: exp_2.hpp which is equal to the symbol $v_5$ in the 3.1.3.c: exp_2 operation sequence . We begin with the function $f_5$ where $v_5$ is both an argument and the value of the function; i.e., $$\begin{array}{rcl} f_5 ( v_1 , v_2 , v_3 , v_4 , v_5 ) & = & v_5 \\ \D{f_5}{v_5} & = & 1 \end{array}$$ All the other partial derivatives of $f_5$ are zero.

3.1.5.d: Index 5: f_4
Reverse mode starts with the last operation in the sequence. For the case in question, this is the operation with index 5, $$v_5 = v_2 + v_4$$ We define the function $f_4 ( v_1 , v_2 , v_3 , v_4 )$ as equal to $f_5$ except that $v_5$ is eliminated using this operation; i.e. $$f_4 = f_5 [ v_1 , v_2 , v_3 , v_4 , v_5 ( v_2 , v_4 ) ]$$ It follows that $$\begin{array}{rcll} \D{f_4}{v_2} & = & \D{f_5}{v_2} + \D{f_5}{v_5} * \D{v_5}{v_2} & = 1 \\ \D{f_4}{v_4} & = & \D{f_5}{v_4} + \D{f_5}{v_5} * \D{v_5}{v_4} & = 1 \end{array}$$ All the other partial derivatives of $f_4$ are zero.

3.1.5.e: Index 4: f_3
The next operation has index 4, $$v_4 = v_3 / 2$$ We define the function $f_3 ( v_1 , v_2 , v_3 )$ as equal to $f_4$ except that $v_4$ is eliminated using this operation; i.e., $$f_3 = f_4 [ v_1 , v_2 , v_3 , v_4 ( v_3 ) ]$$ It follows that $$\begin{array}{rcll} \D{f_3}{v_1} & = & \D{f_4}{v_1} & = 0 \\ \D{f_3}{v_2} & = & \D{f_4}{v_2} & = 1 \\ \D{f_3}{v_3} & = & \D{f_4}{v_3} + \D{f_4}{v_4} * \D{v_4}{v_3} & = 0.5 \end{array}$$

3.1.5.f: Index 3: f_2
The next operation has index 3, $$v_3 = v_1 * v_1$$ We define the function $f_2 ( v_1 , v_2 )$ as equal to $f_3$ except that $v_3$ is eliminated using this operation; i.e., $$f_2 = f_3 [ v_1 , v_2 , v_3 ( v_1 ) ]$$ Note that the value of $v_1$ is equal to $x$ which is .5 for this evaluation. It follows that $$\begin{array}{rcll} \D{f_2}{v_1} & = & \D{f_3}{v_1} + \D{f_3}{v_3} * \D{v_3}{v_1} & = 0.5 \\ \D{f_2}{v_2} & = & \D{f_3}{v_2} & = 1 \end{array}$$

3.1.5.g: Index 2: f_1
The next operation has index 2, $$v_2 = 1 + v_1$$ We define the function $f_1 ( v_1 )$ as equal to $f_2$ except that $v_2$ is eliminated using this operation; i.e., $$f_1 = f_2 [ v_1 , v_2 ( v_1 ) ]$$ It follows that $$\begin{array}{rcll} \D{f_1}{v_1} & = & \D{f_2}{v_1} + \D{f_2}{v_2} * \D{v_2}{v_1} & = 1.5 \end{array}$$ Note that $v_1$ is equal to $x$, so the derivative of this is the derivative of the function defined by 3.1.1: exp_2.hpp at $x = .5$.

3.1.5.h: Verification
The file 3.1.5.1: exp_2_rev1.cpp contains a routine which verifies the values computed above. It returns true for success and false for failure. It only tests the partial derivatives of $f_j$ that might not be equal to the corresponding partials of $f_{j+1}$; i.e., the other partials of $f_j$ must be equal to the corresponding partials of $f_{j+1}$.

3.1.5.i: Exercises
1. Which statement in the routine defined by 3.1.5.1: exp_2_rev1.cpp uses the values that are calculated by the routine defined by 3.1.3.1: exp_2_for0.cpp ?
2. Consider the case where $x = .1$ and we first perform a zero order forward sweep for the operation sequence used above. What are the results of a first order reverse sweep; i.e., what are the corresponding derivatives of $f_5 , f_4 , \ldots , f_1$?
3. Create a modified version of 3.1.5.1: exp_2_rev1.cpp that verifies the values you obtained for the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.1.5.1: exp_2_rev1.cpp .

Input File: introduction/exp_2.omh
3.1.5.1: exp_2: Verify First Order Reverse Sweep
# include <cstddef>                 // define size_t
# include <cmath>                   // prototype for fabs
extern bool exp_2_for0(double *v0); // computes zero order forward sweep
bool exp_2_rev1(void)
{     bool ok = true;

// set the value of v0[j] for j = 1 , ... , 5
double v0[6];
ok &= exp_2_for0(v0);

// initialize all partial derivatives as zero
double f_v[6];
size_t j;
for(j = 0; j < 6; j++)
f_v[j] = 0.;

// set partial derivative for f5
f_v[5] = 1.;
ok &= std::fabs( f_v[5] - 1. ) <= 1e-10; // f5_v5

// f4 = f5( v1 , v2 , v3 , v4 , v2 + v4 )
f_v[2] += f_v[5] * 1.;
f_v[4] += f_v[5] * 1.;
ok &= std::fabs( f_v[2] - 1. ) <= 1e-10; // f4_v2
ok &= std::fabs( f_v[4] - 1. ) <= 1e-10; // f4_v4

// f3 = f4( v1 , v2 , v3 , v3 / 2 )
f_v[3] += f_v[4] / 2.;
ok &= std::fabs( f_v[3] - 0.5) <= 1e-10; // f3_v3

// f2 = f3( v1 , v2 , v1 * v1 )
f_v[1] += f_v[3] * 2. * v0[1];
ok &= std::fabs( f_v[1] - 0.5) <= 1e-10; // f2_v1

// f1 = f2( v1 , 1 + v1 )
f_v[1] += f_v[2] * 1.;
ok &= std::fabs( f_v[1] - 1.5) <= 1e-10; // f1_v1

return ok;
}

Input File: introduction/exp_2_rev1.cpp
3.1.6: exp_2: Second Order Forward Mode

3.1.6.a: Second Order Expansion
We define $x(t)$ near $t = 0$ by the second order expansion $$x(t) = x^{(0)} + x^{(1)} * t + x^{(2)} * t^2 / 2$$ It follows that for $k = 0 , 1 , 2$, $$x^{(k)} = \dpow{k}{t} x (0)$$

3.1.6.b: Purpose
In general, a second order forward sweep is given the 3.1.4.a: first order expansion for all of the variables in an operation sequence, and the second order derivatives for the independent variables. It uses these to compute the second order derivative, and thereby obtain the second order expansion, for all the variables in the operation sequence.

3.1.6.c: Mathematical Form
Suppose that we use the algorithm 3.1.1: exp_2.hpp to compute $$f(x) = 1 + x + x^2 / 2$$ The corresponding second derivative function is $$\Dpow{2}{x} f (x) = 1$$

3.1.6.d: Operation Sequence
We consider the case where 3.1.1: exp_2.hpp is executed with $x = .5$. The corresponding operation sequence, zero order forward sweep values, and first order forward sweep values are inputs and are used by a second order forward sweep.

3.1.6.d.a: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation. A Forward sweep starts with the first operation and ends with the last.

3.1.6.d.b: Zero
The Zero column contains the zero order sweep results for the corresponding variable in the operation sequence (see 3.1.3.c.e: zero order sweep ).

3.1.6.d.c: Operation
The Operation column contains the first order sweep operation for this variable.

3.1.6.d.d: First
The First column contains the first order sweep results for the corresponding variable in the operation sequence (see 3.1.4.d.f: first order sweep ).

3.1.6.d.e: Derivative
The Derivative column contains the mathematical function corresponding to the second derivative with respect to $t$, at $t = 0$, for each variable in the sequence.

3.1.6.d.f: Second
The Second column contains the second order derivatives for the corresponding variable in the operation sequence; i.e., the second order expansion for the i-th variable is given by $$v_i (t) = v_i^{(0)} + v_i^{(1)} * t + v_i^{(2)} * t^2 / 2$$ We use $x^{(1)} = 1$, and $x^{(2)} = 0$ so that second order differentiation with respect to $t$, at $t = 0$, is the same as the second partial differentiation with respect to $x$ at $x = x^{(0)}$.

3.1.6.d.g: Sweep
 Index   Zero    Operation                                 First   Derivative                                                           Second
 1       0.5     $v_1^{(1)} = x^{(1)}$                     1       $v_1^{(2)} = x^{(2)}$                                                $v_1^{(2)} = 0$
 2       1.5     $v_2^{(1)} = v_1^{(1)}$                   1       $v_2^{(2)} = v_1^{(2)}$                                              $v_2^{(2)} = 0$
 3       0.25    $v_3^{(1)} = 2 * v_1^{(0)} * v_1^{(1)}$   1       $v_3^{(2)} = 2 * (v_1^{(1)} * v_1^{(1)} + v_1^{(0)} * v_1^{(2)} )$   $v_3^{(2)} = 2$
 4       0.125   $v_4^{(1)} = v_3^{(1)} / 2$               .5      $v_4^{(2)} = v_3^{(2)} / 2$                                          $v_4^{(2)} = 1$
 5       1.625   $v_5^{(1)} = v_2^{(1)} + v_4^{(1)}$       1.5     $v_5^{(2)} = v_2^{(2)} + v_4^{(2)}$                                  $v_5^{(2)} = 1$
3.1.6.e: Return Value
The second derivative of the return value for this case is $$\begin{array}{rcl} 1 & = & v_5^{(2)} = \left[ \Dpow{2}{t} v_5 \right]_{t=0} = \left[ \Dpow{2}{t} f( x^{(0)} + x^{(1)} * t ) \right]_{t=0} \\ & = & x^{(1)} * \Dpow{2}{x} f ( x^{(0)} ) * x^{(1)} = \Dpow{2}{x} f ( x^{(0)} ) \end{array}$$ (We have used the fact that $x^{(1)} = 1$ and $x^{(2)} = 0$.)

3.1.6.f: Verification
The file 3.1.6.1: exp_2_for2.cpp contains a routine which verifies the values computed above. It returns true for success and false for failure.

3.1.6.g: Exercises
1. Which statement in the routine defined by 3.1.6.1: exp_2_for2.cpp uses the values that are calculated by the routine defined by 3.1.4.1: exp_2_for1.cpp ?
2. Suppose that $x = .1$, what are the results of a zero, first, and second order forward sweep for the operation sequence above; i.e., what are the corresponding values for $v_i^{(k)}$ for $i = 1, \ldots , 5$ and $k = 0, 1, 2$.
3. Create a modified version of 3.1.6.1: exp_2_for2.cpp that verifies the derivative values from the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.1.6.1: exp_2_for2.cpp .

Input File: introduction/exp_2.omh
3.1.6.1: exp_2: Verify Second Order Forward Sweep
# include <cmath>                   // prototype for fabs
extern bool exp_2_for0(double *v0); // computes zero order forward sweep
extern bool exp_2_for1(double *v1); // computes first order forward sweep
bool exp_2_for2(void)
{     bool ok = true;
double v0[6], v1[6], v2[6];

// set the value of v0[j], v1[j], for j = 1 , ... , 5
ok &= exp_2_for0(v0);
ok &= exp_2_for1(v1);

v2[1] = 0.;                                     // v1 = x
ok    &= std::fabs( v2[1] - 0. ) <= 1e-10;

v2[2] = v2[1];                                  // v2 = 1 + v1
ok    &= std::fabs( v2[2] - 0. ) <= 1e-10;

v2[3] = 2.*(v0[1]*v2[1] + v1[1]*v1[1]);         // v3 = v1 * v1
ok    &= std::fabs( v2[3] - 2. ) <= 1e-10;

v2[4] = v2[3] / 2.;                             // v4 = v3 / 2
ok    &= std::fabs( v2[4] - 1. ) <= 1e-10;

v2[5] = v2[2] + v2[4];                          // v5 = v2 + v4
ok    &= std::fabs( v2[5] - 1. ) <= 1e-10;

return ok;
}

Input File: introduction/exp_2_for2.cpp
3.1.7: exp_2: Second Order Reverse Mode

3.1.7.a: Purpose
In general, a second order reverse sweep is given the 3.1.4.a: first order expansion for all of the variables in an operation sequence. Given a choice of a particular variable, it computes the derivative of that variable's first order expansion coefficient with respect to all of the independent variables.

3.1.7.b: Mathematical Form
Suppose that we use the algorithm 3.1.1: exp_2.hpp to compute $$f(x) = 1 + x + x^2 / 2$$ The corresponding second derivative is $$\Dpow{2}{x} f (x) = 1$$

3.1.7.c: f_5
For our example, we chose to compute the derivative of $v_5^{(1)}$ with respect to all the independent variables. For the case computed for the 3.1.4.d.f: first order sweep , $v_5^{(1)}$ is the derivative of the value returned by 3.1.1: exp_2.hpp . Thus the value computed will be the second derivative of the value returned by 3.1.1: exp_2.hpp . We begin with the function $f_5$ where $v_5^{(1)}$ is both an argument and the value of the function; i.e., $$\begin{array}{rcl} f_5 \left( v_1^{(0)}, v_1^{(1)} , \ldots , v_5^{(0)} , v_5^{(1)} \right) & = & v_5^{(1)} \\ \D{f_5}{v_5^{(1)}} & = & 1 \end{array}$$ All the other partial derivatives of $f_5$ are zero.

3.1.7.d: Index 5: f_4
Second order reverse mode starts with the last operation in the sequence. For the case in question, this is the operation with index 5. The zero and first order sweep representations of this operation are $$\begin{array}{rcl} v_5^{(0)} & = & v_2^{(0)} + v_4^{(0)} \\ v_5^{(1)} & = & v_2^{(1)} + v_4^{(1)} \end{array}$$ We define the function $f_4 \left( v_1^{(0)} , \ldots , v_4^{(1)} \right)$ as equal to $f_5$ except that $v_5^{(0)}$ and $v_5^{(1)}$ are eliminated using this operation; i.e. $$f_4 = f_5 \left[ v_1^{(0)} , \ldots , v_4^{(1)} , v_5^{(0)} \left( v_2^{(0)} , v_4^{(0)} \right) , v_5^{(1)} \left( v_2^{(1)} , v_4^{(1)} \right) \right]$$ It follows that $$\begin{array}{rcll} \D{f_4}{v_2^{(1)}} & = & \D{f_5}{v_2^{(1)}} + \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_2^{(1)}} & = 1 \\ \D{f_4}{v_4^{(1)}} & = & \D{f_5}{v_4^{(1)}} + \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_4^{(1)}} & = 1 \end{array}$$ All the other partial derivatives of $f_4$ are zero.

3.1.7.e: Index 4: f_3
The next operation has index 4, $$\begin{array}{rcl} v_4^{(0)} & = & v_3^{(0)} / 2 \\ v_4^{(1)} & = & v_3^{(1)} / 2 \end{array}$$ We define the function $f_3 \left( v_1^{(0)} , \ldots , v_3^{(1)} \right)$ as equal to $f_4$ except that $v_4^{(0)}$ and $v_4^{(1)}$ are eliminated using this operation; i.e., $$f_3 = f_4 \left[ v_1^{(0)} , \ldots , v_3^{(1)} , v_4^{(0)} \left( v_3^{(0)} \right) , v_4^{(1)} \left( v_3^{(1)} \right) \right]$$ It follows that $$\begin{array}{rcll} \D{f_3}{v_2^{(1)}} & = & \D{f_4}{v_2^{(1)}} & = 1 \\ \D{f_3}{v_3^{(1)}} & = & \D{f_4}{v_3^{(1)}} + \D{f_4}{v_4^{(1)}} * \D{v_4^{(1)}}{v_3^{(1)}} & = 0.5 \end{array}$$ All the other partial derivatives of $f_3$ are zero.

3.1.7.f: Index 3: f_2
The next operation has index 3, $$\begin{array}{rcl} v_3^{(0)} & = & v_1^{(0)} * v_1^{(0)} \\ v_3^{(1)} & = & 2 * v_1^{(0)} * v_1^{(1)} \end{array}$$ We define the function $f_2 \left( v_1^{(0)} , \ldots , v_2^{(1)} \right)$ as equal to $f_3$ except that $v_3^{(0)}$ and $v_3^{(1)}$ are eliminated using this operation; i.e., $$f_2 = f_3 \left[ v_1^{(0)} , \ldots , v_2^{(1)} , v_3^{(0)} ( v_1^{(0)} ) , v_3^{(1)} ( v_1^{(0)} , v_1^{(1)} ) \right]$$ Note that, from the 3.1.4.d.f: first order forward sweep , the value of $v_1^{(0)}$ is equal to $.5$ and $v_1^{(1)}$ is equal 1. It follows that $$\begin{array}{rcll} \D{f_2}{v_1^{(0)}} & = & \D{f_3}{v_1^{(0)}} + \D{f_3}{v_3^{(0)}} * \D{v_3^{(0)}}{v_1^{(0)}} + \D{f_3}{v_3^{(1)}} * \D{v_3^{(1)}}{v_1^{(0)}} & = 1 \\ \D{f_2}{v_1^{(1)}} & = & \D{f_3}{v_1^{(1)}} + \D{f_3}{v_3^{(1)}} * \D{v_3^{(1)}}{v_1^{(1)}} & = 0.5 \\ \D{f_2}{v_2^{(0)}} & = & \D{f_3}{v_2^{(0)}} & = 0 \\ \D{f_2}{v_2^{(1)}} & = & \D{f_3}{v_2^{(1)}} & = 1 \end{array}$$

3.1.7.g: Index 2: f_1
The next operation has index 2, $$\begin{array}{rcl} v_2^{(0)} & = & 1 + v_1^{(0)} \\ v_2^{(1)} & = & v_1^{(1)} \end{array}$$ We define the function $f_1 ( v_1^{(0)} , v_1^{(1)} )$ as equal to $f_2$ except that $v_2^{(0)}$ and $v_2^{(1)}$ are eliminated using this operation; i.e., $$f_1 = f_2 \left[ v_1^{(0)} , v_1^{(1)} , v_2^{(0)} ( v_1^{(0)} ) , v_2^{(1)} ( v_1^{(1)} ) \right]$$ It follows that $$\begin{array}{rcll} \D{f_1}{v_1^{(0)}} & = & \D{f_2}{v_1^{(0)}} + \D{f_2}{v_2^{(0)}} * \D{v_2^{(0)}}{v_1^{(0)}} & = 1 \\ \D{f_1}{v_1^{(1)}} & = & \D{f_2}{v_1^{(1)}} + \D{f_2}{v_2^{(1)}} * \D{v_2^{(1)}}{v_1^{(1)}} & = 1.5 \end{array}$$ Note that $v_1$ is equal to $x$, so the second derivative of the function defined by 3.1.1: exp_2.hpp at $x = .5$ is given by $$\Dpow{2}{x} v_5^{(0)} = \D{ v_5^{(1)} }{x} = \D{ v_5^{(1)} }{v_1^{(0)}} = \D{f_1}{v_1^{(0)}} = 1$$ There is a theorem about Algorithmic Differentiation that explains why the other partial of $f_1$ is equal to the first derivative of the function defined by 3.1.1: exp_2.hpp at $x = .5$.

3.1.7.h: Verification
The file 3.1.7.1: exp_2_rev2.cpp contains a routine which verifies the values computed above. It returns true for success and false for failure. It only tests the partial derivatives of $f_j$ that might not be equal to the corresponding partials of $f_{j+1}$; i.e., the other partials of $f_j$ must be equal to the corresponding partials of $f_{j+1}$.

3.1.7.i: Exercises
1. Which statement in the routine defined by 3.1.7.1: exp_2_rev2.cpp uses the values that are calculated by the routine defined by 3.1.3.1: exp_2_for0.cpp ? Which statements use values that are calculated by the routine defined in 3.1.4.1: exp_2_for1.cpp ?
2. Consider the case where $x = .1$ and we first perform a zero order forward sweep, then a first order sweep, for the operation sequence used above. What are the results of a second order reverse sweep; i.e., what are the corresponding derivatives of $f_5 , f_4 , \ldots , f_1$?
3. Create a modified version of 3.1.7.1: exp_2_rev2.cpp that verifies the values you obtained for the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.1.7.1: exp_2_rev2.cpp .

Input File: introduction/exp_2.omh
3.1.7.1: exp_2: Verify Second Order Reverse Sweep
# include <cstddef>                 // define size_t
# include <cmath>                   // prototype for fabs
extern bool exp_2_for0(double *v0); // computes zero order forward sweep
extern bool exp_2_for1(double *v1); // computes first order forward sweep
bool exp_2_rev2(void)
{     bool ok = true;

// set the value of v0[j], v1[j] for j = 1 , ... , 5
double v0[6], v1[6];
ok &= exp_2_for0(v0);
ok &= exp_2_for1(v1);

// initialize all partial derivatives as zero
double f_v0[6], f_v1[6];
size_t j;
for(j = 0; j < 6; j++)
{     f_v0[j] = 0.;
f_v1[j] = 0.;
}

// set partial derivative for f_5
f_v1[5] = 1.;
ok &= std::fabs( f_v1[5] - 1. ) <= 1e-10; // partial f_5 w.r.t v_5^1

// f_4 = f_5( v_1^0 , ... , v_4^1 , v_2^0 + v_4^0 , v_2^1 + v_4^1 )
f_v0[2] += f_v0[5] * 1.;
f_v0[4] += f_v0[5] * 1.;
f_v1[2] += f_v1[5] * 1.;
f_v1[4] += f_v1[5] * 1.;
ok &= std::fabs( f_v0[2] - 0. ) <= 1e-10; // partial f_4 w.r.t. v_2^0
ok &= std::fabs( f_v0[4] - 0. ) <= 1e-10; // partial f_4 w.r.t. v_4^0
ok &= std::fabs( f_v1[2] - 1. ) <= 1e-10; // partial f_4 w.r.t. v_2^1
ok &= std::fabs( f_v1[4] - 1. ) <= 1e-10; // partial f_4 w.r.t. v_4^1

// f_3 = f_4( v_1^0 , ... , v_3^1, v_3^0 / 2 , v_3^1 / 2 )
f_v0[3] += f_v0[4] / 2.;
f_v1[3] += f_v1[4] / 2.;
ok &= std::fabs( f_v0[3] - 0.  ) <= 1e-10; // partial f_3 w.r.t. v_3^0
ok &= std::fabs( f_v1[3] - 0.5 ) <= 1e-10; // partial f_3 w.r.t. v_3^1

// f_2 = f_3(  v_1^0 , ... , v_2^1, v_1^0 * v_1^0 , 2 * v_1^0 * v_1^1 )
f_v0[1] += f_v0[3] * 2. * v0[1];
f_v0[1] += f_v1[3] * 2. * v1[1];
f_v1[1] += f_v1[3] * 2. * v0[1];
ok &= std::fabs( f_v0[1] - 1.  ) <= 1e-10; // partial f_2 w.r.t. v_1^0
ok &= std::fabs( f_v1[1] - 0.5 ) <= 1e-10; // partial f_2 w.r.t. v_1^1

// f_1 = f_2( v_1^0 , v_1^1 , 1 + v_1^0 , v_1^1 )
f_v0[1] += f_v0[2] * 1.;
f_v1[1] += f_v1[2] * 1.;
ok &= std::fabs( f_v0[1] - 1. ) <= 1e-10; // partial f_1 w.r.t. v_1^0
ok &= std::fabs( f_v1[1] - 1.5) <= 1e-10; // partial f_1 w.r.t. v_1^1

return ok;
}

Input File: introduction/exp_2_rev2.cpp
3.1.8: exp_2: CppAD Forward and Reverse Sweeps

3.1.8.a: Purpose
Use CppAD forward and reverse modes to compute the partial derivative with respect to $x$, at the point $x = .5$, of the function       exp_2(x)  as defined by the 3.1.1: exp_2.hpp include file.

3.1.8.b: Exercises
1. Create and test a modified version of the routine below that computes the same order derivatives with respect to $x$, at the point $x = .1$ of the function       exp_2(x) 
2. Create a routine called       exp_3(x)  that evaluates the function $$f(x) = 1 + x^2 / 2 + x^3 / 6$$ Test a modified version of the routine below that computes the derivative of $f(x)$ at the point $x = .5$.

# include <cppad/cppad.hpp>  // definitions for the CppAD package
# include "exp_2.hpp"        // second order exponential approximation
bool exp_2_cppad(void)
{     bool ok = true;
using CppAD::AD;
using CppAD::vector;    // can use any simple vector template class
using CppAD::NearEqual; // checks if values are nearly equal

// domain space vector
size_t n = 1; // dimension of the domain space
vector< AD<double> > X(n);
X[0] = .5;    // value of x for this operation sequence

// declare independent variables and start recording operation sequence
CppAD::Independent(X);

// evaluate our exponential approximation
AD<double> apx = exp_2( X[0] );

// range space vector
size_t m = 1;  // dimension of the range space
vector< AD<double> > Y(m);
Y[0] = apx;    // variable that represents only range space component

// Create f: X -> Y corresponding to this operation sequence
// and stop recording. This also executes a zero order forward
// sweep using values in X for x.
CppAD::ADFun<double> f(X, Y);

// first order forward sweep that computes
// partial of exp_2(x) with respect to x
vector<double> dx(n);  // differential in domain space
vector<double> dy(m);  // differential in range space
dx[0] = 1.;            // direction for partial derivative
dy    = f.Forward(1, dx);
double check = 1.5;
ok   &= NearEqual(dy[0], check, 1e-10, 1e-10);

// first order reverse sweep that computes the derivative
vector<double>  w(m);   // weights for components of the range
vector<double> dw(n);   // derivative of the weighted function
w[0] = 1.;              // there is only one weight
dw   = f.Reverse(1, w); // derivative of w[0] * exp_2(x)
check = 1.5;            // partial of exp_2(x) with respect to x
ok   &= NearEqual(dw[0], check, 1e-10, 1e-10);

// second order forward sweep that computes
// second partial of exp_2(x) with respect to x
vector<double> x2(n);     // second order Taylor coefficients
vector<double> y2(m);
x2[0] = 0.;               // evaluate second partial .w.r.t. x
y2    = f.Forward(2, x2);
check = 0.5 * 1.;         // Taylor coef is 1/2 second derivative
ok   &= NearEqual(y2[0], check, 1e-10, 1e-10);

// second order reverse sweep that computes
// derivative of partial of exp_2(x) w.r.t. x
dw.resize(2 * n);         // space for first and second derivatives
dw    = f.Reverse(2, w);
check = 1.;               // result should be second derivative
ok   &= NearEqual(dw[0*2+1], check, 1e-10, 1e-10);

return ok;
}


3.2: An Epsilon Accurate Exponential Approximation

3.2.a: Syntax
# include "exp_eps.hpp"   y = exp_eps(x, epsilon)

3.2.b: Purpose
This is an example algorithm that is used to demonstrate how Algorithmic Differentiation works with loops and boolean decision variables (see 3.1: exp_2 for a simpler example).

3.2.c: Mathematical Function
The exponential function can be defined by $$\exp (x) = 1 + x^1 / 1 ! + x^2 / 2 ! + \cdots$$ We define $k ( x, \varepsilon )$ as the smallest non-negative integer such that $\varepsilon \geq x^k / k !$; i.e., $$k( x, \varepsilon ) = \min \{ k \in {\rm Z}_+ \; | \; \varepsilon \geq x^k / k ! \}$$ The mathematical form for our approximation of the exponential function is $$\begin{array}{rcl} {\rm exp\_eps} (x , \varepsilon ) & = & \left\{ \begin{array}{ll} \frac{1}{ {\rm exp\_eps} (-x , \varepsilon ) } & {\rm if} \; x < 0 \\ 1 + x^1 / 1 ! + \cdots + x^{k( x, \varepsilon)} / k( x, \varepsilon ) ! & {\rm otherwise} \end{array} \right. \end{array}$$

3.2.d: include
The include command in the syntax is relative to       cppad-yyyymmdd/introduction/exp_apx  where cppad-yyyymmdd is the distribution directory created during the beginning steps of the 2: installation of CppAD.

3.2.e: x
The argument x has prototype       const Type &x  (see Type below). It specifies the point at which to evaluate the approximation for the exponential function.

3.2.f: epsilon
The argument epsilon has prototype       const Type &epsilon  It specifies the accuracy with which to approximate the exponential function value; i.e., it is the value of $\varepsilon$ in the exponential function approximation defined above.

3.2.g: y
The result y has prototype       Type y  It is the value of the exponential function approximation defined above.

3.2.h: Type
If u and v are Type objects and i is an int:
 Operation    Result Type   Description
 Type(i)      Type          object with value equal to i
 Type u = v   Type          construct u with value equal to v
 u > v        bool          true, if u greater than v , and false otherwise
 u = v        Type          new u (and result) is value of v
 u * v        Type          result is value of $u * v$
 u / v        Type          result is value of $u / v$
 u + v        Type          result is value of $u + v$
 -u           Type          result is value of $- u$

3.2.i: Implementation
The file 3.2.1: exp_eps.hpp contains a C++ implementation of this function.

3.2.j: Test
The file 3.2.2: exp_eps.cpp contains a test of this implementation. It returns true for success and false for failure.

3.2.k: Exercises
1. Using the definition of $k( x, \varepsilon )$ above, what is the value of $k(.5, 1)$, $k(.5, .1)$, and $k(.5, .01)$ ?
2. Suppose that we make the following call to exp_eps:  double x = 1.; double epsilon = .01; double y = exp_eps(x, epsilon);  What is the value assigned to k, temp, term, and sum the first time through the while loop in 3.2.1: exp_eps.hpp ?
3. Continuing the previous exercise, what is the value assigned to k, temp, term, and sum the second time through the while loop in 3.2.1: exp_eps.hpp ?

Input File: introduction/exp_eps.hpp
3.2.1: exp_eps: Implementation
template <class Type>
Type exp_eps(const Type &x, const Type &epsilon)
{     // abs_x = |x|
Type abs_x = x;
if( Type(0) > x )
abs_x = - x;

// initialize
int  k    = 0;     // initial order
Type term = 1.;    // term = |x|^k / k !
Type sum  = term;  // initial sum

while(term > epsilon)
{     k = k + 1;                   // order for next term
Type temp = term * abs_x;          // term = |x|^k / (k-1) !
term      = temp / Type(k);        // term = |x|^k / k !
sum       = sum + term;            // sum = 1 + ... + |x|^k / k !
}

// In the case where x is negative, use exp(x) = 1 / exp(-|x|)
if( Type(0) > x )
sum = Type(1) / sum;
return sum;
}
Input File: introduction/exp_eps.omh
3.2.2: exp_eps: Test of exp_eps
# include <cmath>        // for fabs function
# include "exp_eps.hpp"  // definition of exp_eps algorithm
bool exp_eps(void)
{     double x       = .5;
double epsilon = .2;
double check   = 1 + .5 + .125; // include 1 term less than epsilon
bool ok = std::fabs( exp_eps(x, epsilon) - check ) <= 1e-10;
return ok;
}
Input File: introduction/exp_eps.omh
3.2.3: exp_eps: Operation Sequence and Zero Order Forward Sweep

3.2.3.a: Mathematical Form
Suppose that we use the algorithm 3.2.1: exp_eps.hpp to compute exp_eps(x, epsilon) with x equal to .5 and epsilon equal to .2. For this case, the mathematical form for the operation sequence corresponding to exp_eps is $$f( x , \varepsilon ) = 1 + x + x^2 / 2$$ Note that, for these particular values of x and epsilon , this is the same as the mathematical form for 3.1.3.a: exp_2 .

3.2.3.b: Operation Sequence
We consider the 12.4.g.b: operation sequence corresponding to the algorithm 3.2.1: exp_eps.hpp with the argument x equal to .5 and epsilon equal to .2.

3.2.3.b.a: Variable
We refer to values that depend on the input variables x and epsilon as variables.

3.2.3.b.b: Parameter
We refer to values that do not depend on the input variables x or epsilon as parameters. Operations where the result is a parameter are not included in the zero order sweep below.

3.2.3.b.c: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation and variable. A Forward sweep starts with the first operation and ends with the last.

3.2.3.b.d: Code
The Code column contains the C++ source code for the corresponding atomic operation in the sequence.

3.2.3.b.e: Operation
The Operation column contains the mathematical function corresponding to each atomic operation in the sequence.

3.2.3.b.f: Zero Order
The Zero Order column contains the 3.1.3.b: zero order derivative for the corresponding variable in the operation sequence. Forward mode refers to the fact that these coefficients are computed in the same order as the original algorithm; i.e., in order of increasing index.

3.2.3.b.g: Sweep
 Index   Code                     Operation           Zero Order
 1       abs_x = x;               $v_1 = x$           $v_1^{(0)} = 0.5$
 2       temp = term * abs_x;     $v_2 = 1 * v_1$     $v_2^{(0)} = 0.5$
 3       term = temp / Type(k);   $v_3 = v_2 / 1$     $v_3^{(0)} = 0.5$
 4       sum = sum + term;        $v_4 = 1 + v_3$     $v_4^{(0)} = 1.5$
 5       temp = term * abs_x;     $v_5 = v_3 * v_1$   $v_5^{(0)} = 0.25$
 6       term = temp / Type(k);   $v_6 = v_5 / 2$     $v_6^{(0)} = 0.125$
 7       sum = sum + term;        $v_7 = v_4 + v_6$   $v_7^{(0)} = 1.625$
3.2.3.c: Return Value
The return value for this case is $$1.625 = v_7^{(0)} = f ( x^{(0)} , \varepsilon^{(0)} )$$

3.2.3.d: Comparisons
If x were negative, or if epsilon were a much smaller or much larger value, the results of the following comparisons could be different:  if( Type(0) > x ) while(term > epsilon)  This in turn would result in a different operation sequence. Thus the operation sequence above only corresponds to 3.2.1: exp_eps.hpp for values of x and epsilon within a certain range. Note that there is a neighborhood of $x = 0.5$ for which the comparisons would have the same result and hence the operation sequence would be the same.

3.2.3.e: Verification
The file 3.2.3.1: exp_eps_for0.cpp contains a routine that verifies the values computed above. It returns true for success and false for failure.

3.2.3.f: Exercises
1. Suppose that $x^{(0)} = .1$; what is the result of a zero order forward sweep for the operation sequence above; i.e., what are the corresponding values for $v_1^{(0)} , v_2^{(0)} , \ldots , v_7^{(0)}$ ?
2. Create a modified version of 3.2.3.1: exp_eps_for0.cpp that verifies the values you obtained for the previous exercise.
3. Create and run a main program that reports the result of calling the modified version of 3.2.3.1: exp_eps_for0.cpp in the previous exercise.

Input File: introduction/exp_eps.omh
3.2.3.1: exp_eps: Verify Zero Order Forward Sweep
# include <cmath>                // for fabs function
bool exp_eps_for0(double *v0)    // double v0[8]
{     bool  ok = true;
double x = .5;

v0[1] = x;                                  // abs_x = x;
ok  &= std::fabs( v0[1] - 0.5) < 1e-10;

v0[2] = 1. * v0[1];                         // temp = term * abs_x;
ok  &= std::fabs( v0[2] - 0.5) < 1e-10;

v0[3] = v0[2] / 1.;                         // term = temp / Type(k);
ok  &= std::fabs( v0[3] - 0.5) < 1e-10;

v0[4] = 1. + v0[3];                         // sum = sum + term;
ok  &= std::fabs( v0[4] - 1.5) < 1e-10;

v0[5] = v0[3] * v0[1];                      // temp = term * abs_x;
ok  &= std::fabs( v0[5] - 0.25) < 1e-10;

v0[6] = v0[5] / 2.;                         // term = temp / Type(k);
ok  &= std::fabs( v0[6] - 0.125) < 1e-10;

v0[7] = v0[4] + v0[6];                      // sum = sum + term;
ok  &= std::fabs( v0[7] - 1.625) < 1e-10;

return ok;
}
bool exp_eps_for0(void)
{     double v0[8];
return exp_eps_for0(v0);
}

Input File: introduction/exp_eps_for0.cpp
3.2.4: exp_eps: First Order Forward Sweep

3.2.4.a: First Order Expansion
We define $x(t)$ and $\varepsilon(t)$ near $t = 0$ by the first order expansions $$\begin{array}{rcl} x(t) & = & x^{(0)} + x^{(1)} * t \\ \varepsilon(t) & = & \varepsilon^{(0)} + \varepsilon^{(1)} * t \end{array}$$ It follows that $x^{(0)}$ and $x^{(1)}$ are the zero and first order derivatives of $x(t)$ at $t = 0$, and likewise $\varepsilon^{(0)}$ and $\varepsilon^{(1)}$ are the zero and first order derivatives of $\varepsilon(t)$ at $t = 0$.

3.2.4.b: Mathematical Form
Suppose that we use the algorithm 3.2.1: exp_eps.hpp to compute exp_eps(x, epsilon) with x equal to .5 and epsilon equal to .2. For this case, the mathematical function for the operation sequence corresponding to exp_eps is $$f ( x , \varepsilon ) = 1 + x + x^2 / 2$$ The corresponding partial derivative with respect to $x$, and the value of the derivative, are $$\partial_x f ( x , \varepsilon ) = 1 + x = 1.5$$

3.2.4.c: Operation Sequence

3.2.4.c.a: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation. A Forward sweep starts with the first operation and ends with the last.

3.2.4.c.b: Operation
The Operation column contains the mathematical function corresponding to each atomic operation in the sequence.

3.2.4.c.c: Zero Order
The Zero Order column contains the zero order derivatives for the corresponding variable in the operation sequence (see 3.1.4.d.f: zero order sweep ).

3.2.4.c.d: Derivative
The Derivative column contains the mathematical function corresponding to the derivative with respect to $t$, at $t = 0$, for each variable in the sequence.

3.2.4.c.e: First Order
The First Order column contains the first order derivatives for the corresponding variable in the operation sequence; i.e., $$v_j (t) = v_j^{(0)} + v_j^{(1)} t$$ We use $x^{(1)} = 1$ and $\varepsilon^{(1)} = 0$, so that differentiation with respect to $t$, at $t = 0$, is the same as partial differentiation with respect to $x$ at $x = x^{(0)}$.

3.2.4.c.f: Sweep
 Index    Operation            Zero Order    Derivative                                                     First Order
 1        $v_1 = x$            0.5           $v_1^{(1)} = x^{(1)}$                                          $v_1^{(1)} = 1$
 2        $v_2 = 1 * v_1$      0.5           $v_2^{(1)} = 1 * v_1^{(1)}$                                    $v_2^{(1)} = 1$
 3        $v_3 = v_2 / 1$      0.5           $v_3^{(1)} = v_2^{(1)} / 1$                                    $v_3^{(1)} = 1$
 4        $v_4 = 1 + v_3$      1.5           $v_4^{(1)} = v_3^{(1)}$                                        $v_4^{(1)} = 1$
 5        $v_5 = v_3 * v_1$    0.25          $v_5^{(1)} = v_3^{(1)} * v_1^{(0)} + v_3^{(0)} * v_1^{(1)}$    $v_5^{(1)} = 1$
 6        $v_6 = v_5 / 2$      0.125         $v_6^{(1)} = v_5^{(1)} / 2$                                    $v_6^{(1)} = 0.5$
 7        $v_7 = v_4 + v_6$    1.625         $v_7^{(1)} = v_4^{(1)} + v_6^{(1)}$                            $v_7^{(1)} = 1.5$
3.2.4.d: Return Value
The derivative of the return value for this case is $$\begin{array}{rcl} 1.5 & = & v_7^{(1)} = \left[ \D{v_7}{t} \right]_{t=0} = \left[ \D{}{t} f( x^{(0)} + x^{(1)} * t , \varepsilon^{(0)} ) \right]_{t=0} \\ & = & \partial_x f ( x^{(0)} , \varepsilon^{(0)} ) * x^{(1)} = \partial_x f ( x^{(0)} , \varepsilon^{(0)} ) \end{array}$$ (We have used the fact that $x^{(1)} = 1$ and $\varepsilon^{(1)} = 0$.)

3.2.4.e: Verification
The file 3.2.4.1: exp_eps_for1.cpp contains a routine that verifies the values computed above. It returns true for success and false for failure.

3.2.4.f: Exercises
1. Suppose that $x = .1$, what are the results of a zero and first order forward mode sweep for the operation sequence above; i.e., what are the corresponding values for $v_1^{(0)}, v_2^{(0)}, \cdots , v_7^{(0)}$ and $v_1^{(1)}, v_2^{(1)}, \cdots , v_7^{(1)}$ ?
2. Create a modified version of 3.2.4.1: exp_eps_for1.cpp that verifies the derivative values from the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.2.4.1: exp_eps_for1.cpp .
3. Suppose that $x = .1$ and $\varepsilon = .2$; what is the operation sequence corresponding to       exp_eps(x, epsilon)  ?

Input File: introduction/exp_eps.omh
3.2.4.1: exp_eps: Verify First Order Forward Sweep
# include <cmath>                     // for fabs function
extern bool exp_eps_for0(double *v0); // computes zero order forward sweep
bool exp_eps_for1(double *v1)         // double v1[8]
{     bool ok = true;
double v0[8];

// set the value of v0[j] for j = 1 , ... , 7
ok &= exp_eps_for0(v0);

v1[1] = 1.;                                      // v1 = x
ok    &= std::fabs( v1[1] - 1. ) <= 1e-10;

v1[2] = 1. * v1[1];                              // v2 = 1 * v1
ok    &= std::fabs( v1[2] - 1. ) <= 1e-10;

v1[3] = v1[2] / 1.;                              // v3 = v2 / 1
ok    &= std::fabs( v1[3] - 1. ) <= 1e-10;

v1[4] = v1[3];                                   // v4 = 1 + v3
ok    &= std::fabs( v1[4] - 1. ) <= 1e-10;

v1[5] = v1[3] * v0[1] + v0[3] * v1[1];           // v5 = v3 * v1
ok    &= std::fabs( v1[5] - 1. ) <= 1e-10;

v1[6] = v1[5] / 2.;                              // v6 = v5 / 2
ok    &= std::fabs( v1[6] - 0.5 ) <= 1e-10;

v1[7] = v1[4] + v1[6];                           // v7 = v4 + v6
ok    &= std::fabs( v1[7] - 1.5 ) <= 1e-10;

return ok;
}
bool exp_eps_for1(void)
{     double v1[8];
return exp_eps_for1(v1);
}

Input File: introduction/exp_eps_for1.cpp
3.2.5: exp_eps: First Order Reverse Sweep

3.2.5.a: Purpose
First order reverse mode uses the 3.2.3.b: operation sequence , and zero order forward sweep values, to compute the first order derivative of one dependent variable with respect to all the independent variables. The computations are done in reverse of the order of the computations in the original algorithm.

3.2.5.b: Mathematical Form
Suppose that we use the algorithm 3.2.1: exp_eps.hpp to compute exp_eps(x, epsilon) with x equal to .5 and epsilon equal to .2. For this case, the mathematical function for the operation sequence corresponding to exp_eps is $$f ( x , \varepsilon ) = 1 + x + x^2 / 2$$ The corresponding partial derivatives, and the values of the derivatives, are $$\begin{array}{rcl} \partial_x f ( x , \varepsilon ) & = & 1 + x = 1.5 \\ \partial_\varepsilon f ( x , \varepsilon ) & = & 0 \end{array}$$

3.2.5.c: epsilon
Since $\varepsilon$ is an independent variable, it could be included as an argument to all of the $f_j$ functions below. The result would be that all the partials with respect to $\varepsilon$ are zero; hence we drop it to simplify the presentation.

3.2.5.d: f_7
In reverse mode we choose one dependent variable and compute its derivative with respect to all the independent variables. For our example, we chose the value returned by 3.2.1: exp_eps.hpp which is $v_7$. We begin with the function $f_7$ where $v_7$ is both an argument and the value of the function; i.e., $$\begin{array}{rcl} f_7 ( v_1 , v_2 , v_3 , v_4 , v_5 , v_6 , v_7 ) & = & v_7 \\ \D{f_7}{v_7} & = & 1 \end{array}$$ All the other partial derivatives of $f_7$ are zero.

3.2.5.e: Index 7: f_6
The last operation has index 7, $$v_7 = v_4 + v_6$$ We define the function $f_6 ( v_1 , v_2 , v_3 , v_4 , v_5 , v_6 )$ as equal to $f_7$ except that $v_7$ is eliminated using this operation; i.e. $$f_6 = f_7 [ v_1 , v_2 , v_3 , v_4 , v_5 , v_6 , v_7 ( v_4 , v_6 ) ]$$ It follows that $$\begin{array}{rcll} \D{f_6}{v_4} & = & \D{f_7}{v_4} + \D{f_7}{v_7} * \D{v_7}{v_4} & = 1 \\ \D{f_6}{v_6} & = & \D{f_7}{v_6} + \D{f_7}{v_7} * \D{v_7}{v_6} & = 1 \end{array}$$ All the other partial derivatives of $f_6$ are zero.

3.2.5.f: Index 6: f_5
The previous operation has index 6, $$v_6 = v_5 / 2$$ We define the function $f_5 ( v_1 , v_2 , v_3 , v_4 , v_5 )$ as equal to $f_6$ except that $v_6$ is eliminated using this operation; i.e., $$f_5 = f_6 [ v_1 , v_2 , v_3 , v_4 , v_5 , v_6 ( v_5 ) ]$$ It follows that $$\begin{array}{rcll} \D{f_5}{v_4} & = & \D{f_6}{v_4} & = 1 \\ \D{f_5}{v_5} & = & \D{f_6}{v_5} + \D{f_6}{v_6} * \D{v_6}{v_5} & = 0.5 \end{array}$$ All the other partial derivatives of $f_5$ are zero.

3.2.5.g: Index 5: f_4
The previous operation has index 5, $$v_5 = v_3 * v_1$$ We define the function $f_4 ( v_1 , v_2 , v_3 , v_4 )$ as equal to $f_5$ except that $v_5$ is eliminated using this operation; i.e., $$f_4 = f_5 [ v_1 , v_2 , v_3 , v_4 , v_5 ( v_3 , v_1 ) ]$$ Given the information from the forward sweep, we have $v_3 = 0.5$ and $v_1 = 0.5$. It follows that $$\begin{array}{rcll} \D{f_4}{v_1} & = & \D{f_5}{v_1} + \D{f_5}{v_5} * \D{v_5}{v_1} & = 0.25 \\ \D{f_4}{v_2} & = & \D{f_5}{v_2} & = 0 \\ \D{f_4}{v_3} & = & \D{f_5}{v_3} + \D{f_5}{v_5} * \D{v_5}{v_3} & = 0.25 \\ \D{f_4}{v_4} & = & \D{f_5}{v_4} & = 1 \end{array}$$

3.2.5.h: Index 4: f_3
The previous operation has index 4, $$v_4 = 1 + v_3$$ We define the function $f_3 ( v_1 , v_2 , v_3 )$ as equal to $f_4$ except that $v_4$ is eliminated using this operation; i.e., $$f_3 = f_4 [ v_1 , v_2 , v_3 , v_4 ( v_3 ) ]$$ It follows that $$\begin{array}{rcll} \D{f_3}{v_1} & = & \D{f_4}{v_1} & = 0.25 \\ \D{f_3}{v_2} & = & \D{f_4}{v_2} & = 0 \\ \D{f_3}{v_3} & = & \D{f_4}{v_3} + \D{f_4}{v_4} * \D{v_4}{v_3} & = 1.25 \end{array}$$

3.2.5.i: Index 3: f_2
The previous operation has index 3, $$v_3 = v_2 / 1$$ We define the function $f_2 ( v_1 , v_2 )$ as equal to $f_3$ except that $v_3$ is eliminated using this operation; i.e., $$f_2 = f_3 [ v_1 , v_2 , v_3 ( v_2 ) ]$$ It follows that $$\begin{array}{rcll} \D{f_2}{v_1} & = & \D{f_3}{v_1} & = 0.25 \\ \D{f_2}{v_2} & = & \D{f_3}{v_2} + \D{f_3}{v_3} * \D{v_3}{v_2} & = 1.25 \end{array}$$

3.2.5.j: Index 2: f_1
The previous operation has index 2, $$v_2 = 1 * v_1$$ We define the function $f_1 ( v_1 )$ as equal to $f_2$ except that $v_2$ is eliminated using this operation; i.e., $$f_1 = f_2 [ v_1 , v_2 ( v_1 ) ]$$ It follows that $$\begin{array}{rcll} \D{f_1}{v_1} & = & \D{f_2}{v_1} + \D{f_2}{v_2} * \D{v_2}{v_1} & = 1.5 \end{array}$$ Note that $v_1$ is equal to $x$, so the derivative of exp_eps(x, epsilon) at x equal to .5 and epsilon equal to .2 is 1.5 in the x direction and zero in the epsilon direction. We also note that 3.2.4: forward mode gave the same result for the partial in the x direction.

3.2.5.k: Verification
The file 3.2.5.1: exp_eps_rev1.cpp contains a routine that verifies the values computed above. It returns true for success and false for failure. It only tests the partial derivatives of $f_j$ that might not be equal to the corresponding partials of $f_{j+1}$; i.e., the other partials of $f_j$ must be equal to the corresponding partials of $f_{j+1}$.

3.2.5.l: Exercises
1. Consider the case where $x = .1$ and we first perform a zero order forward mode sweep for the operation sequence used above. What are the results of a first order reverse mode sweep (which steps through the operations in reverse order); i.e., what are the corresponding values of $\D{f_j}{v_k}$ for all $j, k$ such that $\D{f_j}{v_k} \neq 0$ ?
2. Create a modified version of 3.2.5.1: exp_eps_rev1.cpp that verifies the values you obtained for the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.2.5.1: exp_eps_rev1.cpp .

Input File: introduction/exp_eps.omh
3.2.5.1: exp_eps: Verify First Order Reverse Sweep
# include <cstddef>                     // define size_t
# include <cmath>                       // for fabs function
extern bool exp_eps_for0(double *v0);   // computes zero order forward sweep
bool exp_eps_rev1(void)
{     bool ok = true;

// set the value of v0[j] for j = 1 , ... , 7
double v0[8];
ok &= exp_eps_for0(v0);

// initialize all partial derivatives to zero
double f_v[8];
size_t j;
for(j = 0; j < 8; j++)
f_v[j] = 0.;

// set partial derivative for f7
f_v[7] = 1.;
ok    &= std::fabs( f_v[7] - 1. ) <= 1e-10;     // f7_v7

// f6( v1 , v2 , v3 , v4 , v5 , v6 )
f_v[4] += f_v[7] * 1.;
f_v[6] += f_v[7] * 1.;
ok     &= std::fabs( f_v[4] - 1.  ) <= 1e-10;   // f6_v4
ok     &= std::fabs( f_v[6] - 1.  ) <= 1e-10;   // f6_v6

// f5( v1 , v2 , v3 , v4 , v5 )
f_v[5] += f_v[6] / 2.;
ok     &= std::fabs( f_v[5] - 0.5 ) <= 1e-10;   // f5_v5

// f4( v1 , v2 , v3 , v4 )
f_v[1] += f_v[5] * v0[3];
f_v[3] += f_v[5] * v0[1];
ok     &= std::fabs( f_v[1] - 0.25) <= 1e-10;   // f4_v1
ok     &= std::fabs( f_v[3] - 0.25) <= 1e-10;   // f4_v3

// f3( v1 , v2 , v3 )
f_v[3] += f_v[4] * 1.;
ok     &= std::fabs( f_v[3] - 1.25) <= 1e-10;   // f3_v3

// f2( v1 , v2 )
f_v[2] += f_v[3] / 1.;
ok     &= std::fabs( f_v[2] - 1.25) <= 1e-10;   // f2_v2

// f1( v1 )
f_v[1] += f_v[2] * 1.;
ok     &= std::fabs( f_v[1] - 1.5 ) <= 1e-10;   // f1_v1

return ok;
}

Input File: introduction/exp_eps_rev1.cpp
3.2.6: exp_eps: Second Order Forward Mode

3.2.6.a: Second Order Expansion
We define $x(t)$ and $\varepsilon(t)$ near $t = 0$ by the second order expansions $$\begin{array}{rcl} x(t) & = & x^{(0)} + x^{(1)} * t + x^{(2)} * t^2 / 2 \\ \varepsilon(t) & = & \varepsilon^{(0)} + \varepsilon^{(1)} * t + \varepsilon^{(2)} * t^2 / 2 \end{array}$$ It follows that for $k = 0 , 1 , 2$, $$\begin{array}{rcl} x^{(k)} & = & \dpow{k}{t} x (0) \\ \varepsilon^{(k)} & = & \dpow{k}{t} \varepsilon (0) \end{array}$$

3.2.6.b: Purpose
In general, a second order forward sweep is given the 3.1.4.a: first order expansion for all of the variables in an operation sequence, and the second order derivatives for the independent variables. It uses these to compute the second order derivative, and thereby obtain the second order expansion, for all the variables in the operation sequence.

3.2.6.c: Mathematical Form
Suppose that we use the algorithm 3.2.1: exp_eps.hpp to compute exp_eps(x, epsilon) with x equal to .5 and epsilon equal to .2. For this case, the mathematical function for the operation sequence corresponding to exp_eps is $$f ( x , \varepsilon ) = 1 + x + x^2 / 2$$ The corresponding second partial derivative with respect to $x$, and the value of the derivative, are $$\Dpow{2}{x} f ( x , \varepsilon ) = 1$$

3.2.6.d: Operation Sequence

3.2.6.d.a: Index
The Index column contains the index in the operation sequence of the corresponding atomic operation. A Forward sweep starts with the first operation and ends with the last.

3.2.6.d.b: Zero
The Zero column contains the zero order sweep results for the corresponding variable in the operation sequence (see 3.1.3.c.e: zero order sweep ).

3.2.6.d.c: Operation
The Operation column contains the first order sweep operation for this variable.

3.2.6.d.d: First
The First column contains the first order sweep results for the corresponding variable in the operation sequence (see 3.1.4.d.f: first order sweep ).

3.2.6.d.e: Derivative
The Derivative column contains the mathematical function corresponding to the second derivative with respect to $t$, at $t = 0$, for each variable in the sequence.

3.2.6.d.f: Second
The Second column contains the second order derivatives for the corresponding variable in the operation sequence; i.e., the second order expansion for the i-th variable is given by $$v_i (t) = v_i^{(0)} + v_i^{(1)} * t + v_i^{(2)} * t^2 / 2$$ We use $x^{(1)} = 1$, $x^{(2)} = 0$, $\varepsilon^{(1)} = 0$, and $\varepsilon^{(2)} = 0$, so that second order differentiation with respect to $t$, at $t = 0$, is the same as the second partial differentiation with respect to $x$ at $x = x^{(0)}$.

3.2.6.d.g: Sweep
 Index    Zero     Operation                                                      First    Derivative                                                                                   Second
 1        0.5      $v_1^{(1)} = x^{(1)}$                                          1        $v_1^{(2)} = x^{(2)}$                                                                        0
 2        0.5      $v_2^{(1)} = 1 * v_1^{(1)}$                                    1        $v_2^{(2)} = 1 * v_1^{(2)}$                                                                  0
 3        0.5      $v_3^{(1)} = v_2^{(1)} / 1$                                    1        $v_3^{(2)} = v_2^{(2)} / 1$                                                                  0
 4        1.5      $v_4^{(1)} = v_3^{(1)}$                                        1        $v_4^{(2)} = v_3^{(2)}$                                                                      0
 5        0.25     $v_5^{(1)} = v_3^{(1)} * v_1^{(0)} + v_3^{(0)} * v_1^{(1)}$    1        $v_5^{(2)} = v_3^{(2)} * v_1^{(0)} + 2 * v_3^{(1)} * v_1^{(1)} + v_3^{(0)} * v_1^{(2)}$    2
 6        0.125    $v_6^{(1)} = v_5^{(1)} / 2$                                    0.5      $v_6^{(2)} = v_5^{(2)} / 2$                                                                  1
 7        1.625    $v_7^{(1)} = v_4^{(1)} + v_6^{(1)}$                            1.5      $v_7^{(2)} = v_4^{(2)} + v_6^{(2)}$                                                          1
3.2.6.e: Return Value
The second derivative of the return value for this case is $$\begin{array}{rcl} 1 & = & v_7^{(2)} = \left[ \Dpow{2}{t} v_7 \right]_{t=0} = \left[ \Dpow{2}{t} f( x^{(0)} + x^{(1)} * t , \varepsilon^{(0)} ) \right]_{t=0} \\ & = & x^{(1)} * \Dpow{2}{x} f ( x^{(0)} , \varepsilon^{(0)} ) * x^{(1)} = \Dpow{2}{x} f ( x^{(0)} , \varepsilon^{(0)} ) \end{array}$$ (We have used the fact that $x^{(1)} = 1$, $x^{(2)} = 0$, $\varepsilon^{(1)} = 0$, and $\varepsilon^{(2)} = 0$.)

3.2.6.f: Verification
The file 3.2.6.1: exp_eps_for2.cpp contains a routine which verifies the values computed above. It returns true for success and false for failure.

3.2.6.g: Exercises
1. Which statement in the routine defined by 3.2.6.1: exp_eps_for2.cpp uses the values that are calculated by the routine defined by 3.2.4.1: exp_eps_for1.cpp ?
2. Suppose that $x = .1$; what are the results of a zero, first, and second order forward sweep for the operation sequence above; i.e., what are the corresponding values of $v_i^{(k)}$ for $i = 1, \ldots , 7$ and $k = 0, 1, 2$ ?
3. Create a modified version of 3.2.6.1: exp_eps_for2.cpp that verifies the derivative values from the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.2.6.1: exp_eps_for2.cpp .

Input File: introduction/exp_eps.omh
3.2.6.1: exp_eps: Verify Second Order Forward Sweep
# include <cmath>                     // for fabs function
extern bool exp_eps_for0(double *v0); // computes zero order forward sweep
extern bool exp_eps_for1(double *v1); // computes first order forward sweep
bool exp_eps_for2(void)
{     bool ok = true;
double v0[8], v1[8], v2[8];

// set the value of v0[j], v1[j] for j = 1 , ... , 7
ok &= exp_eps_for0(v0);
ok &= exp_eps_for1(v1);

v2[1] = 0.;                                      // v1 = x
ok    &= std::fabs( v2[1] - 0. ) <= 1e-10;

v2[2] = 1. * v2[1];                              // v2 = 1 * v1
ok    &= std::fabs( v2[2] - 0. ) <= 1e-10;

v2[3] = v2[2] / 1.;                              // v3 = v2 / 1
ok    &= std::fabs( v2[3] - 0. ) <= 1e-10;

v2[4] = v2[3];                                   // v4 = 1 + v3
ok    &= std::fabs( v2[4] - 0. ) <= 1e-10;

v2[5] = v2[3] * v0[1] + 2. * v1[3] * v1[1]       // v5 = v3 * v1
+ v0[3] * v2[1];
ok    &= std::fabs( v2[5] - 2. ) <= 1e-10;

v2[6] = v2[5] / 2.;                              // v6 = v5 / 2
ok    &= std::fabs( v2[6] - 1. ) <= 1e-10;

v2[7] = v2[4] + v2[6];                           // v7 = v4 + v6
ok    &= std::fabs( v2[7] - 1. ) <= 1e-10;

return ok;
}

Input File: introduction/exp_eps_for2.cpp
3.2.7: exp_eps: Second Order Reverse Sweep

3.2.7.a: Purpose
In general, a second order reverse sweep is given the 3.2.4.a: first order expansion for all of the variables in an operation sequence. Given a choice of a particular variable, it computes the derivative of that variable's first order expansion coefficient with respect to all of the independent variables.

3.2.7.b: Mathematical Form
Suppose that we use the algorithm 3.2.1: exp_eps.hpp to compute exp_eps(x, epsilon) with x equal to .5 and epsilon equal to .2. For this case, the mathematical function for the operation sequence corresponding to exp_eps is $$f ( x , \varepsilon ) = 1 + x + x^2 / 2$$ The corresponding derivatives of the partial derivative with respect to $x$, and their values, are $$\begin{array}{rcl} \Dpow{2}{x} f ( x , \varepsilon ) & = & 1 \\ \partial_\varepsilon \partial_x f ( x , \varepsilon ) & = & 0 \end{array}$$

3.2.7.c: epsilon
Since $\varepsilon$ is an independent variable, it could be included as an argument to all of the $f_j$ functions below. The result would be that all the partials with respect to $\varepsilon$ are zero; hence we drop it to simplify the presentation.

3.2.7.d: f_7
In reverse mode we choose one dependent variable and compute its derivative with respect to all the independent variables. For our example, we chose the value returned by 3.2.1: exp_eps.hpp which is $v_7$. We begin with the function $f_7$ where $v_7$ is both an argument and the value of the function; i.e., $$\begin{array}{rcl} f_7 \left( v_1^{(0)} , v_1^{(1)} , \ldots , v_7^{(0)} , v_7^{(1)} \right) & = & v_7^{(1)} \\ \D{f_7}{v_7^{(1)}} & = & 1 \end{array}$$ All the other partial derivatives of $f_7$ are zero.

3.2.7.e: Index 7: f_6
The last operation has index 7, $$\begin{array}{rcl} v_7^{(0)} & = & v_4^{(0)} + v_6^{(0)} \\ v_7^{(1)} & = & v_4^{(1)} + v_6^{(1)} \end{array}$$ We define the function $f_6 \left( v_1^{(0)} , \ldots , v_6^{(1)} \right)$ as equal to $f_7$ except that $v_7^{(0)}$ and $v_7^{(1)}$ are eliminated using this operation; i.e. $$f_6 = f_7 \left[ v_1^{(0)} , \ldots , v_6^{(1)} , v_7^{(0)} \left( v_4^{(0)} , v_6^{(0)} \right) , v_7^{(1)} \left( v_4^{(1)} , v_6^{(1)} \right) \right]$$ It follows that $$\begin{array}{rcll} \D{f_6}{v_4^{(1)}} & = & \D{f_7}{v_4^{(1)}} + \D{f_7}{v_7^{(1)}} * \D{v_7^{(1)}}{v_4^{(1)}} & = 1 \\ \D{f_6}{v_6^{(1)}} & = & \D{f_7}{v_6^{(1)}} + \D{f_7}{v_7^{(1)}} * \D{v_7^{(1)}}{v_6^{(1)}} & = 1 \end{array}$$ All the other partial derivatives of $f_6$ are zero.

3.2.7.f: Index 6: f_5
The previous operation has index 6, $$\begin{array}{rcl} v_6^{(0)} & = & v_5^{(0)} / 2 \\ v_6^{(1)} & = & v_5^{(1)} / 2 \end{array}$$ We define the function $f_5 \left( v_1^{(0)} , \ldots , v_5^{(1)} \right)$ as equal to $f_6$ except that $v_6^{(0)}$ and $v_6^{(1)}$ are eliminated using this operation; i.e. $$f_5 = f_6 \left[ v_1^{(0)} , \ldots , v_5^{(1)} , v_6^{(0)} \left( v_5^{(0)} \right) , v_6^{(1)} \left( v_5^{(1)} \right) \right]$$ It follows that $$\begin{array}{rcll} \D{f_5}{v_4^{(1)}} & = & \D{f_6}{v_4^{(1)}} & = 1 \\ \D{f_5}{v_5^{(1)}} & = & \D{f_6}{v_5^{(1)}} + \D{f_6}{v_6^{(1)}} * \D{v_6^{(1)}}{v_5^{(1)}} & = 0.5 \end{array}$$ All the other partial derivatives of $f_5$ are zero.

3.2.7.g: Index 5: f_4
The previous operation has index 5, $$\begin{array}{rcl} v_5^{(0)} & = & v_3^{(0)} * v_1^{(0)} \\ v_5^{(1)} & = & v_3^{(1)} * v_1^{(0)} + v_3^{(0)} * v_1^{(1)} \end{array}$$ We define the function $f_4 \left( v_1^{(0)} , \ldots , v_4^{(1)} \right)$ as equal to $f_5$ except that $v_5^{(0)}$ and $v_5^{(1)}$ are eliminated using this operation; i.e. $$f_4 = f_5 \left[ v_1^{(0)} , \ldots , v_4^{(1)} , v_5^{(0)} \left( v_1^{(0)}, v_3^{(0)} \right) , v_5^{(1)} \left( v_1^{(0)}, v_1^{(1)}, v_3^{(0)} , v_3^{(1)} \right) \right]$$ From the forward sweep we have $v_1^{(0)} = 0.5$, $v_3^{(0)} = 0.5$, $v_1^{(1)} = 1$, and $v_3^{(1)} = 1$. Using these values, together with the fact that the partial of $f_5$ with respect to $v_5^{(0)}$ is zero, we obtain $$\begin{array}{rcll} \D{f_4}{v_1^{(0)}} & = & \D{f_5}{v_1^{(0)}} + \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_1^{(0)}} & = 0.5 \\ \D{f_4}{v_1^{(1)}} & = & \D{f_5}{v_1^{(1)}} + \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_1^{(1)}} & = 0.25 \\ \D{f_4}{v_3^{(0)}} & = & \D{f_5}{v_3^{(0)}} + \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_3^{(0)}} & = 0.5 \\ \D{f_4}{v_3^{(1)}} & = & \D{f_5}{v_3^{(1)}} + \D{f_5}{v_5^{(1)}} * \D{v_5^{(1)}}{v_3^{(1)}} & = 0.25 \\ \D{f_4}{v_4^{(1)}} & = & \D{f_5}{v_4^{(1)}} & = 1 \end{array}$$ All the other partial derivatives of $f_4$ are zero.

3.2.7.h: Index 4: f_3
The previous operation has index 4, $$\begin{array}{rcl} v_4^{(0)} & = & 1 + v_3^{(0)} \\ v_4^{(1)} & = & v_3^{(1)} \end{array}$$ We define the function $f_3 \left( v_1^{(0)} , \ldots , v_3^{(1)} \right)$ as equal to $f_4$ except that $v_4^{(0)}$ and $v_4^{(1)}$ are eliminated using this operation; i.e. $$f_3 = f_4 \left[ v_1^{(0)} , \ldots , v_3^{(1)} , v_4^{(0)} \left( v_3^{(0)} \right) , v_4^{(1)} \left( v_3^{(1)} \right) \right]$$ It follows that $$\begin{array}{rcll} \D{f_3}{v_1^{(0)}} & = & \D{f_4}{v_1^{(0)}} & = 0.5 \\ \D{f_3}{v_1^{(1)}} & = & \D{f_4}{v_1^{(1)}} & = 0.25 \\ \D{f_3}{v_2^{(0)}} & = & \D{f_4}{v_2^{(0)}} & = 0 \\ \D{f_3}{v_2^{(1)}} & = & \D{f_4}{v_2^{(1)}} & = 0 \\ \D{f_3}{v_3^{(0)}} & = & \D{f_4}{v_3^{(0)}} + \D{f_4}{v_4^{(0)}} * \D{v_4^{(0)}}{v_3^{(0)}} & = 0.5 \\ \D{f_3}{v_3^{(1)}} & = & \D{f_4}{v_3^{(1)}} + \D{f_4}{v_4^{(1)}} * \D{v_4^{(1)}}{v_3^{(1)}} & = 1.25 \end{array}$$

3.2.7.i: Index 3: f_2
The previous operation has index 3, $$\begin{array}{rcl} v_3^{(0)} & = & v_2^{(0)} / 1 \\ v_3^{(1)} & = & v_2^{(1)} / 1 \end{array}$$ We define the function $f_2 \left( v_1^{(0)} , \ldots , v_2^{(1)} \right)$ as equal to $f_3$ except that $v_3^{(0)}$ and $v_3^{(1)}$ are eliminated using this operation; i.e. $$f_2 = f_3 \left[ v_1^{(0)} , \ldots , v_2^{(1)} , v_3^{(0)} \left( v_2^{(0)} \right) , v_3^{(1)} \left( v_2^{(1)} \right) \right]$$ It follows that $$\begin{array}{rcll} \D{f_2}{v_1^{(0)}} & = & \D{f_3}{v_1^{(0)}} & = 0.5 \\ \D{f_2}{v_1^{(1)}} & = & \D{f_3}{v_1^{(1)}} & = 0.25 \\ \D{f_2}{v_2^{(0)}} & = & \D{f_3}{v_2^{(0)}} + \D{f_3}{v_3^{(0)}} * \D{v_3^{(0)}}{v_2^{(0)}} & = 0.5 \\ \D{f_2}{v_2^{(1)}} & = & \D{f_3}{v_2^{(1)}} + \D{f_3}{v_3^{(1)}} * \D{v_3^{(1)}}{v_2^{(1)}} & = 1.25 \end{array}$$

3.2.7.j: Index 2: f_1
The previous operation has index 2, $$\begin{array}{rcl} v_2^{(0)} & = & 1 * v_1^{(0)} \\ v_2^{(1)} & = & 1 * v_1^{(1)} \end{array}$$ We define the function $f_1 \left( v_1^{(0)} , v_1^{(1)} \right)$ as equal to $f_2$ except that $v_2^{(0)}$ and $v_2^{(1)}$ are eliminated using this operation; i.e. $$f_1 = f_2 \left[ v_1^{(0)} , v_1^{(1)} , v_2^{(0)} \left( v_1^{(0)} \right) , v_2^{(1)} \left( v_1^{(1)} \right) \right]$$ It follows that $$\begin{array}{rcll} \D{f_1}{v_1^{(0)}} & = & \D{f_2}{v_1^{(0)}} + \D{f_2}{v_2^{(0)}} * \D{v_2^{(0)}}{v_1^{(0)}} & = 1 \\ \D{f_1}{v_1^{(1)}} & = & \D{f_2}{v_1^{(1)}} + \D{f_2}{v_2^{(1)}} * \D{v_2^{(1)}}{v_1^{(1)}} & = 1.5 \end{array}$$ Note that $v_1$ is equal to $x$, so the second partial derivative of exp_eps(x, epsilon) at x equal to .5 and epsilon equal to .2 is $$\Dpow{2}{x} v_7^{(0)} = \D{v_7^{(1)}}{x} = \D{f_1}{v_1^{(0)}} = 1$$ There is a theorem about algorithmic differentiation that explains why the other partial of $f_1$ is equal to the first partial of exp_eps(x, epsilon) with respect to $x$.

3.2.7.k: Verification
The file 3.2.7.1: exp_eps_rev2.cpp contains a routine that verifies the values computed above. It returns true for success and false for failure. It only tests the partial derivatives of $f_j$ that might not be equal to the corresponding partials of $f_{j+1}$; i.e., the other partials of $f_j$ must be equal to the corresponding partials of $f_{j+1}$.

3.2.7.l: Exercises
1. Consider the case where $x = .1$ and we first perform zero and first order forward sweeps for the operation sequence used above. What are the results of a second order reverse sweep (which steps through the operations in reverse order); i.e., what are the corresponding values of $\D{f_j}{v_k^{(0)}}$ and $\D{f_j}{v_k^{(1)}}$ that are not zero?
2. Create a modified version of 3.2.7.1: exp_eps_rev2.cpp that verifies the values you obtained for the previous exercise. Also create and run a main program that reports the result of calling the modified version of 3.2.7.1: exp_eps_rev2.cpp .

Input File: introduction/exp_eps.omh
3.2.7.1: exp_eps: Verify Second Order Reverse Sweep
# include <cstddef>                     // define size_t
# include <cmath>                       // for fabs function
extern bool exp_eps_for0(double *v0);   // computes zero order forward sweep
extern bool exp_eps_for1(double *v1);   // computes first order forward sweep
bool exp_eps_rev2(void)
{     bool ok = true;

// set the value of v0[j], v1[j] for j = 1 , ... , 7
double v0[8], v1[8];
ok &= exp_eps_for0(v0);
ok &= exp_eps_for1(v1);

// initialize all partial derivatives to zero
double f_v0[8], f_v1[8];
size_t j;
for(j = 0; j < 8; j++)
{     f_v0[j] = 0.;
f_v1[j] = 0.;
}

// set partial derivative for f_7
f_v1[7] = 1.;
ok &= std::fabs( f_v1[7] - 1.  ) <= 1e-10; // partial f_7 w.r.t. v_7^1

// f_6 = f_7( v_1^0 , ... , v_6^1 , v_4^0 + v_6^0 , v_4^1 + v_6^1 )
f_v0[4] += f_v0[7];
f_v0[6] += f_v0[7];
f_v1[4] += f_v1[7];
f_v1[6] += f_v1[7];
ok &= std::fabs( f_v0[4] - 0.  ) <= 1e-10; // partial f_6 w.r.t. v_4^0
ok &= std::fabs( f_v0[6] - 0.  ) <= 1e-10; // partial f_6 w.r.t. v_6^0
ok &= std::fabs( f_v1[4] - 1.  ) <= 1e-10; // partial f_6 w.r.t. v_4^1
ok &= std::fabs( f_v1[6] - 1.  ) <= 1e-10; // partial f_6 w.r.t. v_6^1

// f_5 = f_6( v_1^0 , ... , v_5^1 , v_5^0 / 2 , v_5^1 / 2 )
f_v0[5] += f_v0[6] / 2.;
f_v1[5] += f_v1[6] / 2.;
ok &= std::fabs( f_v0[5] - 0.  ) <= 1e-10; // partial f_5 w.r.t. v_5^0
ok &= std::fabs( f_v1[5] - 0.5 ) <= 1e-10; // partial f_5 w.r.t. v_5^1

// f_4 = f_5( v_1^0 , ... , v_4^1 , v_3^0 * v_1^0 ,
//            v_3^1 * v_1^0 + v_3^0 * v_1^1 )
f_v0[1] += f_v0[5] * v0[3] + f_v1[5] * v1[3];
f_v0[3] += f_v0[5] * v0[1] + f_v1[5] * v1[1];
f_v1[1] += f_v1[5] * v0[3];
f_v1[3] += f_v1[5] * v0[1];
ok &= std::fabs( f_v0[1] - 0.5  ) <= 1e-10; // partial f_4 w.r.t. v_1^0
ok &= std::fabs( f_v0[3] - 0.5  ) <= 1e-10; // partial f_4 w.r.t. v_3^0
ok &= std::fabs( f_v1[1] - 0.25 ) <= 1e-10; // partial f_4 w.r.t. v_1^1
ok &= std::fabs( f_v1[3] - 0.25 ) <= 1e-10; // partial f_4 w.r.t. v_3^1

// f_3 = f_4(  v_1^0 , ... , v_3^1 , 1 + v_3^0 , v_3^1 )
f_v0[3] += f_v0[4];
f_v1[3] += f_v1[4];
ok &= std::fabs( f_v0[3] - 0.5 ) <= 1e-10;  // partial f_3 w.r.t. v_3^0
ok &= std::fabs( f_v1[3] - 1.25) <= 1e-10;  // partial f_3 w.r.t. v_3^1

// f_2 = f_3( v_1^0 , ... , v_2^1 , v_2^0 , v_2^1 )
f_v0[2] += f_v0[3];
f_v1[2] += f_v1[3];
ok &= std::fabs( f_v0[2] - 0.5 ) <= 1e-10;  // partial f_2 w.r.t. v_2^0
ok &= std::fabs( f_v1[2] - 1.25) <= 1e-10;  // partial f_2 w.r.t. v_2^1

// f_1 = f_2( v_1^0 , v_1^1 , 1 * v_1^0 , 1 * v_1^1 )
f_v0[1] += f_v0[2];
f_v1[1] += f_v1[2];
ok &= std::fabs( f_v0[1] - 1.  ) <= 1e-10;  // partial f_1 w.r.t. v_1^0
ok &= std::fabs( f_v1[1] - 1.5 ) <= 1e-10;  // partial f_1 w.r.t. v_1^1

return ok;
}

Input File: introduction/exp_eps_rev2.cpp
3.2.8: exp_eps: CppAD Forward and Reverse Sweeps

3.2.8.a: Purpose
Use CppAD forward and reverse modes to compute the partial derivative with respect to $x$, at the point $x = .5$ and $\varepsilon = .2$, of the function       exp_eps(x, epsilon)  as defined by the 3.2.1: exp_eps.hpp include file.

3.2.8.b: Exercises
1. Create and test a modified version of the routine below that computes the same order derivatives with respect to $x$, at the point $x = .1$ and $\varepsilon = .2$, of the function       exp_eps(x, epsilon) 
2. Create and test a modified version of the routine below that computes the partial derivative with respect to $x$, at the point $x = .1$ and $\varepsilon = .2$, of the function corresponding to the operation sequence for $x = .5$ and $\varepsilon = .2$. Hint: you could define a vector u with two components and use       f.Forward(0, u)  to run zero order forward mode at a point different from the point where the operation sequence corresponding to f was recorded.
# include <cppad/cppad.hpp>  // http://www.coin-or.org/CppAD/
# include "exp_eps.hpp"      // our example exponential function approximation
bool exp_eps_cppad(void)
{     bool ok = true;
using CppAD::AD;
using CppAD::vector;    // can use any simple vector template class
using CppAD::NearEqual; // checks if values are nearly equal

// domain space vector
size_t n = 2;              // dimension of the domain space
vector< AD<double> > U(n); // vector of independent variables
U[0] = .5;    // value of x for this operation sequence
U[1] = .2;    // value of e for this operation sequence

// declare independent variables and start recording operation sequence
CppAD::Independent(U);

// evaluate our exponential approximation
AD<double> x       = U[0];
AD<double> epsilon = U[1];
AD<double> apx     = exp_eps(x, epsilon);

// range space vector
size_t m = 1;              // dimension of the range space
vector< AD<double> > Y(m);
Y[0] = apx;    // variable that represents only range space component

// Create f: U -> Y corresponding to this operation sequence
// and stop recording. This also executes a zero order forward
// mode sweep using values in U for x and e.
CppAD::ADFun<double> f(U, Y);

// first order forward mode sweep that computes partial w.r.t x
vector<double> du(n);      // differential in domain space
vector<double> dy(m);      // differential in range space
du[0] = 1.;                // x direction in domain space
du[1] = 0.;
dy    = f.Forward(1, du);  // partial w.r.t. x
double check = 1.5;
ok   &= NearEqual(dy[0], check, 1e-10, 1e-10);

// first order reverse mode sweep that computes the derivative
vector<double>  w(m);     // weights for components of the range
vector<double> dw(n);     // derivative of the weighted function
w[0] = 1.;                // there is only one weight
dw   = f.Reverse(1, w);   // derivative of w[0] * exp_eps(x, epsilon)
check = 1.5;              // partial w.r.t. x
ok   &= NearEqual(dw[0], check, 1e-10, 1e-10);
check = 0.;               // partial w.r.t. epsilon
ok   &= NearEqual(dw[1], check, 1e-10, 1e-10);

// second order forward sweep that computes
// second partial of exp_eps(x, epsilon) w.r.t. x
vector<double> x2(n);     // second order Taylor coefficients
vector<double> y2(m);
x2[0] = 0.;               // evaluate partial w.r.t x
x2[1] = 0.;
y2    = f.Forward(2, x2);
check = 0.5 * 1.;         // Taylor coef is 1/2 second derivative
ok   &= NearEqual(y2[0], check, 1e-10, 1e-10);

// second order reverse sweep that computes
// derivative of partial of exp_eps(x, epsilon) w.r.t. x
dw.resize(2 * n);         // space for first and second derivative
dw    = f.Reverse(2, w);
check = 1.;               // result should be second derivative
ok   &= NearEqual(dw[0*2+1], check, 1e-10, 1e-10);

return ok;
}

3.3: Correctness Tests For Exponential Approximation in Introduction

3.3.a: Running Tests
To build this program and run its correctness tests see 2.3: cmake_check .

3.3.b: Source
// system include files used for I/O
# include <iostream>
// memory allocation routine
# include <cppad/utility/thread_alloc.hpp>
// test runner
# include <cppad/utility/test_boolofvoid.hpp>

// external compiled tests
extern bool exp_2(void);
extern bool exp_2_cppad(void);
extern bool exp_2_for0(void);
extern bool exp_2_for1(void);
extern bool exp_2_for2(void);
extern bool exp_2_rev1(void);
extern bool exp_2_rev2(void);
extern bool exp_eps(void);
extern bool exp_eps_cppad(void);
extern bool exp_eps_for0(void);
extern bool exp_eps_for1(void);
extern bool exp_eps_for2(void);
extern bool exp_eps_rev1(void);
extern bool exp_eps_rev2(void);

// main program that runs all the tests
int main(void)
{     std::string group = "introduction";
size_t      width = 20;
CppAD::test_boolofvoid Run(group, width);

// This comment is used by OneTest

// external compiled tests
Run( exp_2,         "exp_2"         );
Run( exp_2_cppad,   "exp_2_cppad"   );
Run( exp_2_for0,    "exp_2_for0"    );
Run( exp_2_for1,    "exp_2_for1"    );
Run( exp_2_for2,    "exp_2_for2"    );
Run( exp_2_rev1,    "exp_2_rev1"    );
Run( exp_2_rev2,    "exp_2_rev2"    );
Run( exp_eps,       "exp_eps"       );
Run( exp_eps_cppad, "exp_eps_cppad" );
Run( exp_eps_for0,  "exp_eps_for0"  );
Run( exp_eps_for1,  "exp_eps_for1"  );
Run( exp_eps_for2,  "exp_eps_for2"  );
Run( exp_eps_rev1,  "exp_eps_rev1"  );
Run( exp_eps_rev2,  "exp_eps_rev2"  );
//
// check for memory leak
bool memory_ok = CppAD::thread_alloc::free_all();
// print summary at end
bool ok = Run.summary(memory_ok);
//
return static_cast<int>( ! ok );
}
Input File: introduction/introduction.cpp

4.a: Purpose
The sections listed below describe the operations that are available to 12.4.b: AD of Base objects. These objects are used to 12.4.k: tape an AD of Base 12.4.g.b: operation sequence . This operation sequence can be transferred to an 5: ADFun object where it can be used to evaluate the corresponding function and derivative values.

4.b: Base Type Requirements
The Base requirements are provided by the CppAD package for the following base types: float, double, std::complex<float>, std::complex<double>. Otherwise, see 4.7: base_require .

4.c: Contents


4.1.a: Syntax
AD<Base> y()  AD<Base> y(x) 
4.1.b: Purpose
creates a new AD<Base> object y and initializes its value as equal to x .

4.1.c: x

4.1.c.a: implicit
There is an implicit constructor where x has one of the following prototypes:       const Base&        x      const VecAD<Base>& x 
4.1.c.b: explicit
There is an explicit constructor where x has prototype       const Type&        x  for any type that has an explicit constructor of the form Base(x) .

4.1.d: y
The target y has prototype       AD<Base> y 
4.1.e: Example
The file 4.1.1: ad_ctor.cpp contains examples and tests of these operations. It returns true if it succeeds and false otherwise.
4.1.1: AD Constructors: Example and Test
# include <cppad/cppad.hpp>
bool ad_ctor(void)
{     bool ok = true;   // initialize test result flag
using CppAD::AD;        // so can use AD in place of CppAD::AD

// default constructor
AD<double> a;
a = 0.;
ok &= a == 0.;

// constructor from base type
AD<double> b(1.);
ok &= b == 1.;

// constructor from another type that converts to the base type
AD<double> c(2);
ok &= c == 2.;

// constructor from AD<Base>
AD<double> d(c);
ok &= d == 2.;

// constructor from a VecAD<Base> element
CppAD::VecAD<double> v(1);
v[0] = 3.;
AD<double> e( v[0] );
ok &= e == 3.;

return ok;
}

4.2.a: Syntax
y = x

4.2.b: Purpose
Assigns the value in x to the object y .

4.2.c: x
The argument x has prototype       const Type &x  where Type is VecAD<Base>::reference , AD<Base> , Base , or any type that has an implicit constructor of the form Base(x) .

4.2.d: y
The target y has prototype       AD<Base> y 
4.2.e: Example
The file 4.2.1: ad_assign.cpp contains examples and tests of these operations. It returns true if it succeeds and false otherwise.
4.2.1: AD Assignment: Example and Test
# include <cppad/cppad.hpp>
bool ad_assign(void)
{     bool ok = true;   // initialize test result flag
using CppAD::AD;        // so can use AD in place of CppAD::AD

// assignment to base value
AD<double> a;
a = 1.;
ok &= a == 1.;

// assignment to a value that converts to the base type
a = 2;
ok &= a == 2.;

// assignment to an AD<Base>
AD<double> b(3.);
a = b;
ok &= a == 3.;

// assignment to a VecAD<Base> element
CppAD::VecAD<double> v(1);
v[0] = 4.;
a = v[0];
ok &= a == 4.;

return ok;
}
4.3: Conversion and I/O of AD Objects
4.3.1: Value: Convert From an AD Type to its Base Type
4.3.2: Integer: Convert From AD to Integer
4.3.5: ad_output: AD Output Stream Operator
4.3.6: PrintFor: Printing AD Values During Forward Mode
4.3.7: Var2Par: Convert an AD Variable to a Parameter

4.3.1: Convert From an AD Type to its Base Type

4.3.1.a: Syntax
b = Value(x)

4.3.1.b: See Also
4.3.7: var2par

4.3.1.c: Purpose
Converts from an AD type to the corresponding 12.4.e: base type .

4.3.1.d: x
The argument x has prototype       const AD<Base> &x 
4.3.1.e: b
The return value b has prototype       Base b 
4.3.1.f: Operation Sequence
The result of this operation is not an 12.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 12.4.g.b: operation sequence .

4.3.1.g: Restriction
If the argument x is a 12.4.m: variable its dependency information would not be included in the Value result (see above). For this reason, the argument x must be a 12.4.h: parameter ; i.e., it cannot depend on the current 12.4.k.c: independent variables .

4.3.1.h: Example
The file 4.3.1.1: value.cpp contains an example and test of this operation.
4.3.1.1: Convert From AD to its Base Type: Example and Test
# include <cppad/cppad.hpp>
bool Value(void)
{     bool ok = true;
using CppAD::AD;
using CppAD::Value;

// domain space vector
size_t n = 2;
CPPAD_TESTVECTOR(AD<double>) x(n);
x[0] = 3.;
x[1] = 4.;

// check value before recording
ok &= (Value(x[0]) == 3.);
ok &= (Value(x[1]) == 4.);

// declare independent variables and start tape recording
CppAD::Independent(x);

// range space vector
size_t m = 1;
CPPAD_TESTVECTOR(AD<double>) y(m);
y[0] = - x[1];

// cannot call Value(x[j]) or Value(y[0]) here (currently variables)
AD<double> p = 5.; // p is a parameter (does not depend on x)
ok &= (Value(p) == 5.);

// create f: x -> y and stop tape recording
CppAD::ADFun<double> f(x, y);

// can call Value(x[j]) or Value(y[0]) here (currently parameters)
ok &= (Value(x[0]) ==  3.);
ok &= (Value(x[1]) ==  4.);
ok &= (Value(y[0]) == -4.);

return ok;
}
Input File: example/general/value.cpp
4.3.2: Convert From AD to Integer

4.3.2.a: Syntax
i = Integer(x)

4.3.2.b: Purpose
Converts from an AD type to the corresponding integer value.

4.3.2.c: i
The result i has prototype       int i 
4.3.2.d: x

4.3.2.d.a: Real Types
If the argument x has either of the following prototypes:       const float                  &x      const double                 &x  the fractional part is dropped to form the integer value. For example, if x is 1.5, i is 1. In general, if $x \geq 0$, i is the greatest integer less than or equal to x . If $x \leq 0$, i is the smallest integer greater than or equal to x .

4.3.2.d.b: Complex Types
If the argument x has either of the following prototypes:       const std::complex<float>    &x      const std::complex<double>   &x  The result i is given by       i = Integer(x.real()) 
4.3.2.d.c: AD Types
If the argument x has either of the following prototypes:       const AD<Base>               &x      const VecAD<Base>::reference &x  Base must support the Integer function and the conversion has the same meaning as for Base .

4.3.2.e: Operation Sequence
The result of this operation is not an 12.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 12.4.g.b: operation sequence .

4.3.2.f: Example
The file 4.3.2.1: integer.cpp contains an example and test of this operation.
4.3.2.1: Convert From AD to Integer: Example and Test
# include <cppad/cppad.hpp>
bool Integer(void)
{     bool ok = true;
using CppAD::AD;
using CppAD::Integer;

// domain space vector
size_t n = 2;
CPPAD_TESTVECTOR(AD<double>) x(n);
x[0] = 3.5;
x[1] = 4.5;

// check integer before recording
ok &= (Integer(x[0]) == 3);
ok &= (Integer(x[1]) == 4);

// declare independent variables and start tape recording
CppAD::Independent(x);

// check integer during recording
ok &= (Integer(x[0]) == 3);
ok &= (Integer(x[1]) == 4);

// check integer for VecAD element
CppAD::VecAD<double> v(1);
AD<double> zero(0);
v[zero] = 2;
ok &= (Integer(v[zero]) == 2);

// range space vector
size_t m = 1;
CPPAD_TESTVECTOR(AD<double>) y(m);
y[0] = - x[1];

// create f: x -> y and stop recording
CppAD::ADFun<double> f(x, y);

// check integer after recording
ok &= (Integer(x[0]) ==  3);
ok &= (Integer(x[1]) ==  4);
ok &= (Integer(y[0]) == -4);

return ok;
}
Input File: example/general/integer.cpp
4.3.3: Convert An AD or Base Type to String

4.3.3.a: Syntax
s = to_string(value) .

4.3.3.b: See Also
8.25: to_string , 4.7.7: base_to_string

4.3.3.c: value
The argument value has prototype       const AD<Base>& value      const Base&     value  where Base is a type that supports the 4.7.7: base_to_string type requirement.

4.3.3.d: s
The return value has prototype       std::string s  and contains a representation of the specified value . If value is an AD type, the result has the same precision as for the Base type.

4.3.3.e: Example
The file 8.25.1: to_string.cpp includes an example and test of to_string with AD types. It returns true if it succeeds and false otherwise.

4.3.4.a: Syntax
is >> x

4.3.4.b: Purpose
Sets x to a 12.4.h: parameter with value b corresponding to       is >> b  where b is a Base object. It is assumed that this Base input operation returns a reference to is .

4.3.4.c: is
The operand is has prototype       std::istream& is 
4.3.4.d: x
The operand x has one of the following prototypes       AD<Base>&               x 
4.3.4.e: Result
The result of this operation can be used as a reference to is . For example, if the operand y has prototype       AD<Base> y  then the syntax       is >> x >> y  will first read the Base value of x from is , and then read the Base value into y .

4.3.4.f: Operation Sequence
The result of this operation is not an 12.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 12.4.g.b: operation sequence .

4.3.4.g: Example
The file 4.3.4.1: ad_input.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
4.3.4.1: AD Input Operator: Example and Test
# include <cppad/cppad.hpp>
# include <sstream> // std::istringstream
# include <string>  // std::string

bool ad_input(void)
{     bool ok = true;

// create the input string stream is
std::string str ("123 456");
std::istringstream is(str);

// start an AD<double> recording
CPPAD_TESTVECTOR( CppAD::AD<double> ) x(1), y(1);
x[0] = 1.0;
CppAD::Independent(x);
CppAD::AD<double> z = x[0];
ok &= Variable(z);

// read first number into z and second into y[0]
is >> z >> y[0];
ok &= Parameter(z);
ok &= (z == 123.);
ok &= Parameter(y[0]);
ok &= (y[0] == 456.);
//
// terminate recording started by call to Independent
CppAD::ADFun<double> f(x, y);

return ok;
}

4.3.5.a: Syntax
os << x

4.3.5.b: Purpose
Writes the Base value, corresponding to x , to the output stream os .

4.3.5.c: Assumption
If b is a Base object,       os << b  returns a reference to os .

4.3.5.d: os
The operand os has prototype       std::ostream& os 
4.3.5.e: x
The operand x has one of the following prototypes       const AD<Base>&               x      const VecAD<Base>::reference& x 
4.3.5.f: Result
The result of this operation can be used as a reference to os . For example, if the operand y has prototype       AD<Base> y  then the syntax       os << x << y  will output the value corresponding to x followed by the value corresponding to y .

4.3.5.g: Operation Sequence
The result of this operation is not an 12.4.b: AD of Base object. Thus it will not be recorded as part of an AD of Base 12.4.g.b: operation sequence .

4.3.5.h: Example
The file 4.3.5.1: ad_output.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
4.3.5.1: AD Output Operator: Example and Test
# include <cppad/cppad.hpp>
# include <sstream> // std::ostringstream
# include <string>  // std::string
# include <iomanip> // std::setprecision, setw, setfill, right

namespace {
template <class S>
void set_ostream(S &os)
{     os << std::setprecision(4) // 4 digits of precision
         << std::setw(6)         // 6 characters per field
         << std::setfill(' ')    // fill with spaces
         << std::right;          // adjust value to the right
}
}

bool ad_output(void)
{     bool ok = true;

// This output stream is an ostringstream for testing purposes.
// You can use << with other types of streams; i.e., std::cout.
std::ostringstream stream;

// output an AD<double> object
CppAD::AD<double> pi = 4. * atan(1.); // 3.1415926536
set_ostream(stream);
stream << pi;

// output a VecAD<double>::reference object
CppAD::VecAD<double> v(1);
CppAD::AD<double> zero(0);
v[zero] = exp(1.); // 2.7182818285
set_ostream(stream);
stream << v[zero];

// convert output from stream to string
std::string str = stream.str();

// check the output
ok &= (str == " 3.142 2.718");

return ok;
}
4.3.6: Printing AD Values During Forward Mode

4.3.6.a: Syntax
f.Forward(0, x)  PrintFor(before, var)  PrintFor(pos, before, var, after) 
4.3.6.b: Purpose
The 5.3.1: zero order forward mode command       f.Forward(0, x)  assigns the 12.4.k.c: independent variable vector equal to x . It then computes a value for all of the dependent variables in the 12.4.g.b: operation sequence corresponding to f . Putting a PrintFor in the operation sequence will cause the value of var , corresponding to x , to be printed during zero order forward operations.

4.3.6.c: f.Forward(0, x)
The objects f , x , and the purpose for this operation, are documented in 5.3: Forward .

4.3.6.d: pos
If present, the argument pos has one of the following prototypes       const AD<Base>&               pos      const VecAD<Base>::reference& pos  In this case the text and var will be printed if and only if pos is less than or equal to zero and is a finite number.

4.3.6.e: before
The argument before has prototype       const char* before  This text is written to std::cout before var .

4.3.6.f: var
The argument var has one of the following prototypes       const AD<Base>&               var      const VecAD<Base>::reference& var  The value of var , that corresponds to x , is written to std::cout during the execution of       f.Forward(0, x)  Note that var may be a 12.4.m: variable or 12.4.h: parameter . (A parameter's value does not depend on the value of the independent variable vector x .)

4.3.6.g: after
The argument after has prototype       const char* after  This text is written to std::cout after var .

4.3.6.h: Redirecting Output
You can redirect this output to any standard output stream; see the argument 5.3.4.h: s in the forward mode documentation.

4.3.6.i: Discussion
This is helpful for understanding why tape evaluations have trouble. For example, if one of the operations in f is log(var) and var <= 0 , the corresponding result will be 8.11: nan .

4.3.6.j: Alternative
The 4.3.5: ad_output section describes the normal printing of values; i.e., printing when the corresponding code is executed.

4.3.6.k: Example
The program 4.3.6.1: print_for_cout.cpp is an example and test that prints to standard output. The output of this program states the conditions for passing and failing the test. The function 4.3.6.2: print_for_string.cpp is an example and test that prints to a standard string stream. This function automatically checks for correct output.
4.3.6.1: Printing During Forward Mode: Example and Test

4.3.6.1.a: Running
To build this program and run its correctness test see 2.3: cmake_check .

4.3.6.1.b: Source Code
# include <cppad/cppad.hpp>

namespace {
using std::cout;
using std::endl;
using CppAD::AD;
using CppAD::PrintFor;

// use of PrintFor to check for invalid function arguments
AD<double> check_log(const AD<double>& y)
{     // check during recording
if( y <= 0. )
cout << "check_log: y = " << y << " is <= 0" << endl;

// check during zero order forward calculation
PrintFor(y, "check_log: y == ", y , " which is <= 0\n");

return log(y);
}
}

void print_for(void)
{     using CppAD::AD;
using CppAD::PrintFor;
using CppAD::Independent;

// independent variable vector
size_t n = 1;
CPPAD_TESTVECTOR(AD<double>) ax(n);
ax[0] = 1.;
Independent(ax);

// print a VecAD<double>::reference object that is a parameter
CppAD::VecAD<double> av(1);
AD<double> Zero(0);
av[Zero] = 0.;
PrintFor("v[0] = ", av[Zero]);

// Print a newline to separate this from previous output,
// then print an AD<double> object that is a variable.
PrintFor("\nv[0] + x[0] = ", av[Zero] + ax[0]);

// A conditional print that will not generate output when x[0] = 2.
PrintFor(ax[0], "\n  2. + x[0] = ",   2. + ax[0], "\n");

// A conditional print that will generate output when x[0] = 2.
PrintFor(ax[0] - 2., "\n  3. + x[0] = ",   3. + ax[0], "\n");

// A log evaluation that will result in an error message when x[0] = 2.
AD<double> var     = 2. - ax[0];
AD<double> log_var = check_log(var);

// dependent variable vector
size_t m = 2;
CPPAD_TESTVECTOR(AD<double>) ay(m);
ay[0] = av[Zero] + ax[0];
ay[1] = log_var;

// define f: x -> y and stop tape recording
CppAD::ADFun<double> f(ax, ay);

// zero order forward with x[0] = 2
CPPAD_TESTVECTOR(double) x(n);
x[0] = 2.;

cout << "v[0] = 0" << endl;
cout << "v[0] + x[0] = 2" << endl;
cout << "  3. + x[0] = 5" << endl;
cout << "check_log: y == 0 which is <= 0" << endl;
// ./makefile.am expects "Test passes" at beginning of next output line
cout << "Test passes if four lines above repeat below:" << endl;
f.Forward(0, x);

return;
}
int main(void)
{     print_for();

return 0;
}

4.3.6.1.c: Output
Executing the program above generates the following output:
      v[0] = 0
      v[0] + x[0] = 2
        3. + x[0] = 5
      check_log: y == 0 which is <= 0
      Test passes if four lines above repeat below:
      v[0] = 0
      v[0] + x[0] = 2
        3. + x[0] = 5
      check_log: y == 0 which is <= 0
Input File: example/print_for/print_for.cpp
4.3.6.2: Print During Zero Order Forward Mode: Example and Test
# include <cppad/cppad.hpp>

namespace {
using std::endl;
using CppAD::AD;

// use of PrintFor to check for invalid function arguments
AD<double> check_log(const AD<double>& y, std::ostream& s_out)
{     // check AD<double> value during recording
if( y <= 0 )
s_out << "check_log: y == " << y << " which is <= 0\n";

// check double value during zero order forward calculation
PrintFor(y, "check_log: y == ", y , " which is <= 0\n");

return log(y);
}
}

bool print_for(void)
{     bool ok = true;
using CppAD::PrintFor;
std::stringstream stream_out;

// independent variable vector
size_t n = 1;
CPPAD_TESTVECTOR(AD<double>) ax(n);
ax[0] = 1.; // value of the independent variable during recording
Independent(ax);

// A log evaluation that is OK when x[0] = 1 but not when x[0] = 2.
AD<double> var     = 2. - ax[0];
AD<double> log_var = check_log(var, stream_out);
ok &= stream_out.str() == "";

// dependent variable vector
size_t m = 1;
CPPAD_TESTVECTOR(AD<double>) ay(m);
ay[0] = log_var;

// define f: x -> y and stop tape recording
CppAD::ADFun<double> f(ax, ay);

// zero order forward with x[0] = 2
CPPAD_TESTVECTOR(double) x(n);
x[0] = 2.;
f.Forward(0, x, stream_out);

std::string string_out = stream_out.str();
ok &= string_out == "check_log: y == 0 which is <= 0\n";

return ok;
}
Input File: example/general/print_for.cpp
4.3.7: Convert an AD Variable to a Parameter

4.3.7.a: Syntax
y = Var2Par(x)

4.3.7.b: See Also
4.3.1: value

4.3.7.c: Purpose
Returns a 12.4.h: parameter y with the same value as the 12.4.m: variable x .

4.3.7.d: x
The argument x has prototype       const AD<Base> &x  The argument x may be a variable or parameter.

4.3.7.e: y
The result y has prototype       AD<Base> &y  The return value y will be a parameter.

4.3.7.f: Example
The file 4.3.7.1: var2par.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
4.3.7.1: Convert an AD Variable to a Parameter: Example and Test
# include <cppad/cppad.hpp>
bool Var2Par(void)
{     bool ok = true;
using CppAD::AD;
using CppAD::Value;
using CppAD::Var2Par;

// domain space vector
size_t n = 2;
CPPAD_TESTVECTOR(AD<double>) x(n);
x[0] = 3.;
x[1] = 4.;

// declare independent variables and start tape recording
CppAD::Independent(x);

// range space vector
size_t m = 1;
CPPAD_TESTVECTOR(AD<double>) y(m);
y[0] = - x[1] * Var2Par(x[0]); // same as y[0] = -x[1] * 3.;

// cannot call Value(x[j]) or Value(y[0]) here (currently variables)
ok &= ( Value( Var2Par(x[0]) ) ==   3. );
ok &= ( Value( Var2Par(x[1]) ) ==   4. );
ok &= ( Value( Var2Par(y[0]) ) == -12. );

// create f: x -> y and stop tape recording
CppAD::ADFun<double> f(x, y);

// can call Value(x[j]) or Value(y[0]) here (currently parameters)
ok &= (Value(x[0]) ==   3.);
ok &= (Value(x[1]) ==   4.);
ok &= (Value(y[0]) == -12.);

// evaluate derivative of y w.r.t x
CPPAD_TESTVECTOR(double)  w(m);
CPPAD_TESTVECTOR(double) dw(n);
w[0] = 1.;
dw   = f.Reverse(1, w);
ok  &= (dw[0] ==  0.); // derivative of y[0] w.r.t x[0] is zero
ok  &= (dw[1] == -3.); // derivative of y[0] w.r.t x[1] is -3

return ok;
}
Input File: example/general/var2par.cpp
4.4: AD Valued Operations and Functions

4.4.a: Contents
4.4.1: AD Arithmetic Operators and Compound Assignments
4.4.2: The Unary Standard Math Functions
4.4.3: The Binary Math Functions
4.4.4: AD Conditional Expressions
4.4.5: Discrete AD Functions
4.4.6: Numeric Limits For an AD and Base Types
4.4.7: Atomic AD Functions

4.4.1: AD Arithmetic Operators and Compound Assignments

4.4.1.a: Contents


4.4.1.1.a: Syntax
y = + x

4.4.1.1.b: Purpose
Performs the unary plus operation (the result y is equal to the operand x ).

4.4.1.1.c: x
The operand x has one of the following prototypes       const AD<Base>               &x      const VecAD<Base>::reference &x 
4.4.1.1.d: y
The result y has type       AD<Base> y  It is equal to the operand x .

4.4.1.1.e: Operation Sequence
This is an AD of Base 12.4.g.a: atomic operation and hence is part of the current AD of Base 12.4.g.b: operation sequence .

4.4.1.1.f: Derivative
If $f$ is a 12.4.d: Base function , $$\D{[ + f(x) ]}{x} = \D{f(x)}{x}$$

4.4.1.1.g: Example
The file 4.4.1.1.1: unary_plus.cpp contains an example and test of this operation.
4.4.1.1.1: AD Unary Plus Operator: Example and Test
# include <cppad/cppad.hpp>
bool UnaryPlus(void)
{     bool ok = true;
using CppAD::AD;

// domain space vector
size_t n = 1;
CPPAD_TESTVECTOR(AD<double>) x(n);
x[0] = 3.;

// declare independent variables and start tape recording
CppAD::Independent(x);

// range space vector
size_t m = 1;
CPPAD_TESTVECTOR(AD<double>) y(m);
y[0] = + x[0];

// create f: x -> y and stop tape recording
CppAD::ADFun<double> f(x, y);

// check values
ok &= ( y[0] == 3. );

// forward computation of partials w.r.t. x[0]
CPPAD_TESTVECTOR(double) dx(n);
CPPAD_TESTVECTOR(double) dy(m);
size_t p = 1;
dx[0] = 1.;
dy    = f.Forward(p, dx);
ok   &= ( dy[0] == 1. ); // dy[0] / dx[0]

// reverse computation of derivative of y[0]
CPPAD_TESTVECTOR(double)  w(m);
CPPAD_TESTVECTOR(double) dw(n);
w[0] = 1.;
dw   = f.Reverse(p, w);
ok  &= ( dw[0] == 1. ); // dy[0] / dx[0]

// use a VecAD<Base>::reference object with unary plus
CppAD::VecAD<double> v(1);
AD<double> zero(0);
v[zero] = x[0];
AD<double> result = + v[zero];
ok &= (result == y[0]);

return ok;
}
Input File: example/general/unary_plus.cpp

4.4.1.2.a: Syntax
y = - x

4.4.1.2.b: Purpose
Computes the negative of x .

4.4.1.2.c: Base
The operation in the syntax above must be supported for the case where the operand is a const Base object.

4.4.1.2.d: x
The operand x has one of the following prototypes       const AD<Base>               &x      const VecAD<Base>::reference &x 
4.4.1.2.e: y
The result y has type       AD<Base> y  It is equal to the negative of the operand x .

4.4.1.2.f: Operation Sequence
This is an AD of Base 12.4.g.a: atomic operation and hence is part of the current AD of Base 12.4.g.b: operation sequence .

4.4.1.2.g: Derivative
If $f$ is a 12.4.d: Base function , $$\D{[ - f(x) ]}{x} = - \D{f(x)}{x}$$

4.4.1.2.h: Example
The file 4.4.1.2.1: unary_minus.cpp contains an example and test of this operation.
4.4.1.2.1: AD Unary Minus Operator: Example and Test
# include <cppad/cppad.hpp>
bool UnaryMinus(void)
{     bool ok = true;
using CppAD::AD;

// domain space vector
size_t n = 1;
CPPAD_TESTVECTOR(AD<double>) x(n);
x[0] = 3.;

// declare independent variables and start tape recording
CppAD::Independent(x);

// range space vector
size_t m = 1;
CPPAD_TESTVECTOR(AD<double>) y(m);
y[0] = - x[0];

// create f: x -> y and stop tape recording
CppAD::ADFun<double> f(x, y);

// check values
ok &= ( y[0] == -3. );

// forward computation of partials w.r.t. x[0]
CPPAD_TESTVECTOR(double) dx(n);
CPPAD_TESTVECTOR(double) dy(m);
size_t p = 1;
dx[0] = 1.;
dy    = f.Forward(p, dx);
ok   &= ( dy[0] == -1. ); // dy[0] / dx[0]

// reverse computation of derivative of y[0]
CPPAD_TESTVECTOR(double)  w(m);
CPPAD_TESTVECTOR(double) dw(n);
w[0] = 1.;
dw   = f.Reverse(p, w);
ok  &= ( dw[0] == -1. ); // dy[0] / dx[0]

// use a VecAD<Base>::reference object with unary minus
CppAD::VecAD<double> v(1);
AD<double> zero(0);
v[zero] = x[0];
AD<double> result = - v[zero];
ok &= (result == y[0]);

return ok;
}
Input File: example/general/unary_minus.cpp

4.4.1.3.a: Syntax
z = x Op y

4.4.1.3.b: Purpose
Performs arithmetic operations where either x or y has type AD<Base> or 4.6.d: VecAD<Base>::reference .

4.4.1.3.c: Op
The operator Op is one of the following
Op    Meaning
+     z is x plus y
-     z is x minus y
*     z is x times y
/     z is x divided by y

4.4.1.3.d: Base
The type Base is determined by the operand that has type AD<Base> or VecAD<Base>::reference .

4.4.1.3.e: x
The operand x has the following prototype       const Type &x  where Type is VecAD<Base>::reference , AD<Base> , Base , or double.

4.4.1.3.f: y
The operand y has the following prototype       const Type &y  where Type is VecAD<Base>::reference , AD<Base> , Base , or double.

4.4.1.3.g: z
The result z has the following prototype       Type z  where Type is AD<Base> .

4.4.1.3.h: Operation Sequence
This is an 12.4.g.a: atomic 12.4.b: AD of Base operation and hence it is part of the current AD of Base 12.4.g.b: operation sequence .

4.4.1.3.i: Example
The following files contain examples and tests of these functions. Each test returns true if it succeeds and false otherwise.

4.4.1.3.j: Derivative
If $f$ and $g$ are 12.4.d: Base functions

4.4.1.3.j.a: Addition
$$\D{[ f(x) + g(x) ]}{x} = \D{f(x)}{x} + \D{g(x)}{x}$$
4.4.1.3.j.b: Subtraction
$$\D{[ f(x) - g(x) ]}{x} = \D{f(x)}{x} - \D{g(x)}{x}$$
4.4.1.3.j.c: Multiplication
$$\D{[ f(x) * g(x) ]}{x} = g(x) * \D{f(x)}{x} + f(x) * \D{g(x)}{x}$$
4.4.1.3.j.d: Division
$$\D{[ f(x) / g(x) ]}{x} = [1/g(x)] * \D{f(x)}{x} - [f(x)/g(x)^2] * \D{g(x)}{x}$$
4.4.1.3.1: AD Binary Addition: Example and Test
# include <cppad/cppad.hpp>
bool Add(void)
{     bool ok = true;
using CppAD::AD;
using CppAD::NearEqual;
double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

// domain space vector
size_t n  = 1;
double x0 = 0.5;
CPPAD_TESTVECTOR(AD<double>) x(n);
x[0] = x0;

// declare independent variables and start tape recording
CppAD::Independent(x);

// some binary addition operations
AD<double> a = x[0] + 1.; // AD<double> + double
AD<double> b = a + 2;     // AD<double> + int
AD<double> c = 3. + b;    // double + AD<double>
AD<double> d = 4 + c;     // int + AD<double>

// range space vector
size_t m = 1;
CPPAD_TESTVECTOR(AD<double>) y(m);
y[0] = d + x[0];          // AD<double> + AD<double>

// create f: x -> y and stop tape recording
CppAD::ADFun<double> f(x, y);

// check value
ok &= NearEqual(y[0], 2. * x0 + 10, eps99, eps99);

// forward computation of partials w.r.t. x[0]
CPPAD_TESTVECTOR(double) dx(n);
CPPAD_TESTVECTOR(double) dy(m);
dx[0] = 1.;
dy    = f.Forward(1, dx);
ok   &= NearEqual(dy[0], 2., eps99, eps99);

// reverse computation of derivative of y[0]
CPPAD_TESTVECTOR(double)  w(m);
CPPAD_TESTVECTOR(double) dw(n);
w[0] = 1.;
dw   = f.Reverse(1, w);
ok  &= NearEqual(dw[0], 2., eps99, eps99);

// use a VecAD<Base>::reference object with addition
CppAD::VecAD<double> v(1);
AD<double> zero(0);
v[zero] = a;
AD<double> result = v[zero] + 2;
ok &= (result == b);

return ok;
}
4.4.1.3.2: AD Binary Subtraction: Example and Test
# include <cppad/cppad.hpp>
bool Sub(void)
{     bool ok = true;
using CppAD::AD;
using CppAD::NearEqual;
double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

// domain space vector
size_t n  = 1;
double x0 = .5;
CPPAD_TESTVECTOR(AD<double>) x(n);
x[0] = x0;

// declare independent variables and start tape recording
CppAD::Independent(x);

// some binary subtraction operations
AD<double> a = 2. * x[0] - 1.; // AD<double> - double
AD<double> b = a - 2;          // AD<double> - int
AD<double> c = 3. - b;         // double - AD<double>
AD<double> d = 4 - c;          // int - AD<double>

// range space vector
size_t m = 1;
CPPAD_TESTVECTOR(AD<double>) y(m);
y[0] = x[0] - d;               // AD<double> - AD<double>

// create f: x -> y and stop tape recording
CppAD::ADFun<double> f(x, y);

// check value
ok &= NearEqual(y[0], x0-4.+3.+2.-2.*x0+1., eps99, eps99);

// forward computation of partials w.r.t. x[0]
CPPAD_TESTVECTOR(double) dx(n);
CPPAD_TESTVECTOR(double) dy(m);
dx[0] = 1.;
dy    = f.Forward(1, dx);
ok   &= NearEqual(dy[0], -1., eps99, eps99);

// reverse computation of derivative of y[0]
CPPAD_TESTVECTOR(double)  w(m);
CPPAD_TESTVECTOR(double) dw(n);
w[0] = 1.;
dw   = f.Reverse(1, w);
ok  &= NearEqual(dw[0], -1., eps99, eps99);

// use a VecAD<Base>::reference object with subtraction
CppAD::VecAD<double> v(1);
AD<double> zero(0);
v[zero] = b;
AD<double> result = 3. - v[zero];
ok &= (result == c);

return ok;
}
Input File: example/general/sub.cpp
4.4.1.3.3: AD Binary Multiplication: Example and Test
# include <cppad/cppad.hpp>

bool Mul(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = .5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0]      = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // some binary multiplication operations
    AD<double> a = x[0] * 1.; // AD<double> * double
    AD<double> b = a * 2;     // AD<double> * int
    AD<double> c = 3. * b;    // double * AD<double>
    AD<double> d = 4 * c;     // int * AD<double>

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = x[0] * d;          // AD<double> * AD<double>

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    ok &= NearEqual(y[0], x0 * (4. * 3. * 2. * 1.) * x0, eps99, eps99);

    // forward computation of partials w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], (4. * 3. * 2. * 1.) * 2. * x0, eps99, eps99);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], (4. * 3. * 2. * 1.) * 2. * x0, eps99, eps99);

    // use a VecAD<Base>::reference object with multiplication
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = c;
    AD<double> result = 4 * v[zero];
    ok &= (result == d);

    return ok;
}
Input File: example/general/mul.cpp
4.4.1.3.4: AD Binary Division: Example and Test
# include <cppad/cppad.hpp>

bool Div(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0]      = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // some binary division operations
    AD<double> a = x[0] / 1.; // AD<double> / double
    AD<double> b = a / 2;     // AD<double> / int
    AD<double> c = 3. / b;    // double / AD<double>
    AD<double> d = 4 / c;     // int / AD<double>

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = (x[0] * x[0]) / d; // AD<double> / AD<double>

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    ok &= NearEqual(y[0], x0 * x0 * 3. * 2. * 1. / (4. * x0), eps99, eps99);

    // forward computation of partials w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], 3. * 2. * 1. / 4., eps99, eps99);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], 3. * 2. * 1. / 4., eps99, eps99);

    // use a VecAD<Base>::reference object with division
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = d;
    AD<double> result = (x[0] * x[0]) / v[zero];
    ok &= (result == y[0]);

    return ok;
}
Input File: example/general/div.cpp

4.4.1.4.a: Syntax
x Op y

4.4.1.4.b: Purpose
Performs compound assignment operations where x has type AD<Base> .

4.4.1.4.c: Op
The operator Op is one of the following
Op    Meaning
+=    x is assigned x plus y
-=    x is assigned x minus y
*=    x is assigned x times y
/=    x is assigned x divided by y

4.4.1.4.d: Base
The type Base is determined by the operand x .

4.4.1.4.e: x
The operand x has the following prototype
      AD<Base> &x
4.4.1.4.f: y
The operand y has the following prototype
      const Type &y
where Type is VecAD<Base>::reference , AD<Base> , Base , or double.

4.4.1.4.g: Result
The result of this assignment can be used as a reference to x . For example, if z has the following type
      AD<Base> z
then the syntax
      z = x += y
will compute x plus y and then assign this value to both x and z .

4.4.1.4.h: Operation Sequence
This is an 12.4.g.a: atomic 12.4.b: AD of Base operation and hence it is part of the current AD of Base 12.4.g.b: operation sequence .

4.4.1.4.i: Example
The following files contain examples and tests of these functions. Each test returns true if it succeeds and false otherwise.
 4.4.1.4.1: AddEq.cpp AD Compound Assignment Addition: Example and Test 4.4.1.4.2: sub_eq.cpp AD Compound Assignment Subtraction: Example and Test 4.4.1.4.3: mul_eq.cpp AD Compound Assignment Multiplication: Example and Test 4.4.1.4.4: div_eq.cpp AD Compound Assignment Division: Example and Test

4.4.1.4.j: Derivative
If $f$ and $g$ are 12.4.d: Base functions, then

4.4.1.4.j.a: Addition
$$\D{[ f(x) + g(x) ]}{x} = \D{f(x)}{x} + \D{g(x)}{x}$$
4.4.1.4.j.b: Subtraction
$$\D{[ f(x) - g(x) ]}{x} = \D{f(x)}{x} - \D{g(x)}{x}$$
4.4.1.4.j.c: Multiplication
$$\D{[ f(x) * g(x) ]}{x} = g(x) * \D{f(x)}{x} + f(x) * \D{g(x)}{x}$$
4.4.1.4.j.d: Division
$$\D{[ f(x) / g(x) ]}{x} = [1/g(x)] * \D{f(x)}{x} - [f(x)/g(x)^2] * \D{g(x)}{x}$$
4.4.1.4.1: AD Compound Assignment Addition: Example and Test
# include <cppad/cppad.hpp>

bool AddEq(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = .5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0]      = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // range space vector
    size_t m = 2;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0]  = x[0];         // initial value
    y[0] += 2;            // AD<double> += int
    y[0] += 4.;           // AD<double> += double
    y[1]  = y[0] += x[0]; // use the result of a compound assignment

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    ok &= NearEqual(y[0], x0 + 2. + 4. + x0, eps99, eps99);
    ok &= NearEqual(y[1], y[0], eps99, eps99);

    // forward computation of partials w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], 2., eps99, eps99);
    ok   &= NearEqual(dy[1], 2., eps99, eps99);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    w[1] = 0.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], 2., eps99, eps99);

    // use a VecAD<Base>::reference object with computed addition
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    AD<double> result = 1;
    v[zero] = 2;
    result += v[zero];
    ok &= (result == 3);

    return ok;
}
4.4.1.4.2: AD Compound Assignment Subtraction: Example and Test
# include <cppad/cppad.hpp>

bool SubEq(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = .5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0]      = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // range space vector
    size_t m = 2;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0]  = 3. * x[0];    // initial value
    y[0] -= 2;            // AD<double> -= int
    y[0] -= 4.;           // AD<double> -= double
    y[1]  = y[0] -= x[0]; // use the result of a compound assignment

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    ok &= NearEqual(y[0], 3. * x0 - (2. + 4. + x0), eps99, eps99);
    ok &= NearEqual(y[1], y[0], eps99, eps99);

    // forward computation of partials w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], 2., eps99, eps99);
    ok   &= NearEqual(dy[1], 2., eps99, eps99);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    w[1] = 0.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], 2., eps99, eps99);

    // use a VecAD<Base>::reference object with computed subtraction
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    AD<double> result = 1;
    v[zero] = 2;
    result -= v[zero];
    ok &= (result == -1);

    return ok;
}
Input File: example/general/sub_eq.cpp
4.4.1.4.3: AD Compound Assignment Multiplication: Example and Test
# include <cppad/cppad.hpp>

bool MulEq(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = .5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0]      = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // range space vector
    size_t m = 2;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0]  = x[0];         // initial value
    y[0] *= 2;            // AD<double> *= int
    y[0] *= 4.;           // AD<double> *= double
    y[1]  = y[0] *= x[0]; // use the result of a compound assignment

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    ok &= NearEqual(y[0], x0 * 2. * 4. * x0, eps99, eps99);
    ok &= NearEqual(y[1], y[0], eps99, eps99);

    // forward computation of partials w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], 8. * 2. * x0, eps99, eps99);
    ok   &= NearEqual(dy[1], 8. * 2. * x0, eps99, eps99);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    w[1] = 0.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], 8. * 2. * x0, eps99, eps99);

    // use a VecAD<Base>::reference object with computed multiplication
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    AD<double> result = 1;
    v[zero] = 2;
    result *= v[zero];
    ok &= (result == 2);

    return ok;
}
Input File: example/general/mul_eq.cpp
4.4.1.4.4: AD Compound Assignment Division: Example and Test
# include <cppad/cppad.hpp>

bool DivEq(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = .5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0]      = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // range space vector
    size_t m = 2;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0]  = x[0] * x[0];  // initial value
    y[0] /= 2;            // AD<double> /= int
    y[0] /= 4.;           // AD<double> /= double
    y[1]  = y[0] /= x[0]; // use the result of a compound assignment

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    ok &= NearEqual(y[0], x0 * x0 / (2. * 4. * x0), eps99, eps99);
    ok &= NearEqual(y[1], y[0], eps99, eps99);

    // forward computation of partials w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], 1. / 8., eps99, eps99);
    ok   &= NearEqual(dy[1], 1. / 8., eps99, eps99);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    w[1] = 0.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], 1. / 8., eps99, eps99);

    // use a VecAD<Base>::reference object with computed division
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    AD<double> result = 2;
    v[zero] = 1;
    result /= v[zero];
    ok &= (result == 2);

    return ok;
}
Input File: example/general/div_eq.cpp
4.4.2: The Unary Standard Math Functions

4.4.2.a: Syntax
y = fun(x)

4.4.2.b: Purpose
Evaluates the standard math function fun .

4.4.2.c: Possible Types

4.4.2.c.a: Base
If Base satisfies the 4.7: base type requirements and the argument x has prototype
      const Base& x
then the result y has prototype
      Base y
4.4.2.c.b: AD<Base>
If the argument x has prototype
      const AD<Base>& x
then the result y has prototype
      AD<Base> y
4.4.2.c.c: VecAD<Base>
If the argument x has prototype
      const VecAD<Base>::reference& x
then the result y has prototype
      AD<Base> y
4.4.2.d: fun
The possible values for fun are
fun               Description
4.4.2.14: abs     AD Absolute Value Functions: abs, fabs
4.4.2.1: acos     Inverse Cosine Function: acos
4.4.2.15: acosh   The Inverse Hyperbolic Cosine Function: acosh
4.4.2.2: asin     Inverse Sine Function: asin
4.4.2.16: asinh   The Inverse Hyperbolic Sine Function: asinh
4.4.2.3: atan     Inverse Tangent Function: atan
4.4.2.17: atanh   The Inverse Hyperbolic Tangent Function: atanh
4.4.2.4: cos      The Cosine Function: cos
4.4.2.5: cosh     The Hyperbolic Cosine Function: cosh
4.4.2.18: erf     The Error Function
4.4.2.6: exp      The Exponential Function: exp
4.4.2.19: expm1   The Exponential Function Minus One: expm1
4.4.2.14: fabs    AD Absolute Value Functions: abs, fabs
4.4.2.8: log10    The Base 10 Logarithm Function: log10
4.4.2.20: log1p   The Logarithm of One Plus Argument: log1p
4.4.2.7: log      The Logarithm Function: log
4.4.2.21: sign    The Sign: sign
4.4.2.9: sin      The Sine Function: sin
4.4.2.10: sinh    The Hyperbolic Sine Function: sinh
4.4.2.11: sqrt    The Square Root Function: sqrt
4.4.2.12: tan     The Tangent Function: tan
4.4.2.13: tanh    The Hyperbolic Tangent Function: tanh

4.4.2.1: Inverse Cosine Function: acos

4.4.2.1.a: Syntax
y = acos(x)

4.4.2.1.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.1.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.1.d: Derivative
$$\begin{array}{lcr} \R{acos}^{(1)} (x) & = & - (1 - x * x)^{-1/2} \end{array}$$
4.4.2.1.e: Example
The file 4.4.2.1.1: acos.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.1.1: The AD acos Function: Example and Test
# include <cppad/cppad.hpp>

bool acos(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;

    // 10 times machine epsilon
    double eps = 10. * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0]      = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // a temporary value
    AD<double> cos_of_x0 = CppAD::cos(x[0]);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::acos(cos_of_x0);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    ok &= NearEqual(y[0], x0, eps, eps);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], 1., eps, eps);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], 1., eps, eps);

    // use a VecAD<Base>::reference object with acos
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = cos_of_x0;
    AD<double> result = CppAD::acos(v[zero]);
    ok &= NearEqual(result, x0, eps, eps);

    return ok;
}
Input File: example/general/acos.cpp
4.4.2.2: Inverse Sine Function: asin

4.4.2.2.a: Syntax
y = asin(x)

4.4.2.2.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.2.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.2.d: Derivative
$$\begin{array}{lcr} \R{asin}^{(1)} (x) & = & (1 - x * x)^{-1/2} \end{array}$$
4.4.2.2.e: Example
The file 4.4.2.2.1: asin.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.2.1: The AD asin Function: Example and Test
# include <cppad/cppad.hpp>

bool asin(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;

    // 10 times machine epsilon
    double eps = 10. * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0]      = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // a temporary value
    AD<double> sin_of_x0 = CppAD::sin(x[0]);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::asin(sin_of_x0);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    ok &= NearEqual(y[0], x0, eps, eps);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], 1., eps, eps);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], 1., eps, eps);

    // use a VecAD<Base>::reference object with asin
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = sin_of_x0;
    AD<double> result = CppAD::asin(v[zero]);
    ok &= NearEqual(result, x0, eps, eps);

    return ok;
}
Input File: example/general/asin.cpp
4.4.2.3: Inverse Tangent Function: atan

4.4.2.3.a: Syntax
y = atan(x)

4.4.2.3.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.3.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.3.d: Derivative
$$\begin{array}{lcr} \R{atan}^{(1)} (x) & = & \frac{1}{1 + x^2} \end{array}$$
4.4.2.3.e: Example
The file 4.4.2.3.1: atan.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.3.1: The AD atan Function: Example and Test
# include <cppad/cppad.hpp>

bool atan(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0]      = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // a temporary value
    AD<double> tan_of_x0 = CppAD::tan(x[0]);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::atan(tan_of_x0);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    ok &= NearEqual(y[0], x0, eps99, eps99);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], 1., eps99, eps99);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], 1., eps99, eps99);

    // use a VecAD<Base>::reference object with atan
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = tan_of_x0;
    AD<double> result = CppAD::atan(v[zero]);
    ok &= NearEqual(result, x0, eps99, eps99);

    return ok;
}
Input File: example/general/atan.cpp
4.4.2.4: The Cosine Function: cos

4.4.2.4.a: Syntax
y = cos(x)

4.4.2.4.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.4.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.4.d: Derivative
$$\begin{array}{lcr} \R{cos}^{(1)} (x) & = & - \sin(x) \end{array}$$
4.4.2.4.e: Example
The file 4.4.2.4.1: cos.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.4.1: The AD cos Function: Example and Test
# include <cppad/cppad.hpp>
# include <cmath>

bool Cos(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0]      = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::cos(x[0]);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    double check = std::cos(x0);
    ok &= NearEqual(y[0], check, eps99, eps99);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    check = - std::sin(x0);
    ok   &= NearEqual(dy[0], check, eps99, eps99);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], check, eps99, eps99);

    // use a VecAD<Base>::reference object with cos
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = x0;
    AD<double> result = CppAD::cos(v[zero]);
    check = std::cos(x0);
    ok &= NearEqual(result, check, eps99, eps99);

    return ok;
}
Input File: example/general/cos.cpp
4.4.2.5: The Hyperbolic Cosine Function: cosh

4.4.2.5.a: Syntax
y = cosh(x)

4.4.2.5.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.5.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.5.d: Derivative
$$\begin{array}{lcr} \R{cosh}^{(1)} (x) & = & \sinh(x) \end{array}$$
4.4.2.5.e: Example
The file 4.4.2.5.1: cosh.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.5.1: The AD cosh Function: Example and Test
# include <cppad/cppad.hpp>
# include <cmath>

bool Cosh(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0]      = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::cosh(x[0]);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    double check = std::cosh(x0);
    ok &= NearEqual(y[0], check, eps99, eps99);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    check = std::sinh(x0);
    ok   &= NearEqual(dy[0], check, eps99, eps99);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], check, eps99, eps99);

    // use a VecAD<Base>::reference object with cosh
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = x0;
    AD<double> result = CppAD::cosh(v[zero]);
    check = std::cosh(x0);
    ok &= NearEqual(result, check, eps99, eps99);

    return ok;
}
Input File: example/general/cosh.cpp
4.4.2.6: The Exponential Function: exp

4.4.2.6.a: Syntax
y = exp(x)

4.4.2.6.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.6.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.6.d: Derivative
$$\begin{array}{lcr} \R{exp}^{(1)} (x) & = & \exp(x) \end{array}$$
4.4.2.6.e: Example
The file 4.4.2.6.1: exp.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.6.1: The AD exp Function: Example and Test
# include <cppad/cppad.hpp>
# include <cmath>

bool exp(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps = 10. * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) ax(n);
    ax[0]     = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(ax);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) ay(m);
    ay[0] = CppAD::exp(ax[0]);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(ax, ay);

    // check value
    double check = std::exp(x0);
    ok &= NearEqual(ay[0], check, eps, eps);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], check, eps, eps);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], check, eps, eps);

    // use a VecAD<Base>::reference object with exp
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = x0;
    AD<double> result = CppAD::exp(v[zero]);
    ok &= NearEqual(result, check, eps, eps);

    return ok;
}
Input File: example/general/exp.cpp
4.4.2.7: The Logarithm Function: log

4.4.2.7.a: Syntax
y = log(x)

4.4.2.7.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.7.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.7.d: Derivative
$$\begin{array}{lcr} \R{log}^{(1)} (x) & = & \frac{1}{x} \end{array}$$
4.4.2.7.e: Example
The file 4.4.2.7.1: log.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.7.1: The AD log Function: Example and Test
# include <cppad/cppad.hpp>

bool log(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0]      = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // a temporary value
    AD<double> exp_of_x0 = CppAD::exp(x[0]);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::log(exp_of_x0);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    ok &= NearEqual(y[0], x0, eps99, eps99);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], 1., eps99, eps99);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], 1., eps99, eps99);

    // use a VecAD<Base>::reference object with log
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = exp_of_x0;
    AD<double> result = CppAD::log(v[zero]);
    ok &= NearEqual(result, x0, eps99, eps99);

    return ok;
}
Input File: example/general/log.cpp
4.4.2.8: The Base 10 Logarithm Function: log10

4.4.2.8.a: Syntax
y = log10(x)

4.4.2.8.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.8.c: Method
CppAD uses the representation $$\begin{array}{lcr} {\rm log10} (x) & = & \log(x) / \log(10) \end{array}$$

4.4.2.8.d: Example
The file 4.4.2.8.1: log10.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.8.1: The AD log10 Function: Example and Test
# include <cppad/cppad.hpp>

bool log10(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0]      = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // ten raised to the x0 power
    AD<double> ten = 10.;
    AD<double> pow_10_x0 = CppAD::pow(ten, x[0]);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::log10(pow_10_x0);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    ok &= NearEqual(y[0], x0, eps99, eps99);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], 1., eps99, eps99);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], 1., eps99, eps99);

    // use a VecAD<Base>::reference object with log10
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = pow_10_x0;
    AD<double> result = CppAD::log10(v[zero]);
    ok &= NearEqual(result, x0, eps99, eps99);

    return ok;
}
Input File: example/general/log10.cpp
4.4.2.9: The Sine Function: sin

4.4.2.9.a: Syntax
y = sin(x)

4.4.2.9.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.9.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.9.d: Derivative
$$\begin{array}{lcr} \R{sin}^{(1)} (x) & = & \cos(x) \end{array}$$
4.4.2.9.e: Example
The file 4.4.2.9.1: sin.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.9.1: The AD sin Function: Example and Test
# include <cppad/cppad.hpp>
# include <cmath>

bool Sin(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0]      = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::sin(x[0]);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    double check = std::sin(x0);
    ok &= NearEqual(y[0], check, eps99, eps99);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    check = std::cos(x0);
    ok   &= NearEqual(dy[0], check, eps99, eps99);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], check, eps99, eps99);

    // use a VecAD<Base>::reference object with sin
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = x0;
    AD<double> result = CppAD::sin(v[zero]);
    check = std::sin(x0);
    ok &= NearEqual(result, check, eps99, eps99);

    return ok;
}
Input File: example/general/sin.cpp
4.4.2.10: The Hyperbolic Sine Function: sinh

4.4.2.10.a: Syntax
y = sinh(x)

4.4.2.10.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.10.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.10.d: Derivative
$$\begin{array}{lcr} \R{sinh}^{(1)} (x) & = & \cosh(x) \end{array}$$
4.4.2.10.e: Example
The file 4.4.2.10.1: sinh.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.10.1: The AD sinh Function: Example and Test
# include <cppad/cppad.hpp>
# include <cmath>

bool Sinh(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0]      = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::sinh(x[0]);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    double check = std::sinh(x0);
    ok &= NearEqual(y[0], check, eps99, eps99);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    check = std::cosh(x0);
    ok   &= NearEqual(dy[0], check, eps99, eps99);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], check, eps99, eps99);

    // use a VecAD<Base>::reference object with sinh
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = x0;
    AD<double> result = CppAD::sinh(v[zero]);
    check = std::sinh(x0);
    ok &= NearEqual(result, check, eps99, eps99);

    return ok;
}
Input File: example/general/sinh.cpp
4.4.2.11: The Square Root Function: sqrt

4.4.2.11.a: Syntax
y = sqrt(x)

4.4.2.11.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.11.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.11.d: Derivative
$$\begin{array}{lcr} \R{sqrt}^{(1)} (x) & = & \frac{1}{2 \R{sqrt} (x) } \end{array}$$
4.4.2.11.e: Example
The file 4.4.2.11.1: sqrt.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.11.1: The AD sqrt Function: Example and Test
# include <cppad/cppad.hpp>
# include <cmath>

bool Sqrt(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0]      = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::sqrt(x[0]);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    double check = std::sqrt(x0);
    ok &= NearEqual(y[0], check, eps99, eps99);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    check = 1. / (2. * std::sqrt(x0));
    ok   &= NearEqual(dy[0], check, eps99, eps99);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], check, eps99, eps99);

    // use a VecAD<Base>::reference object with sqrt
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = x0;
    AD<double> result = CppAD::sqrt(v[zero]);
    check = std::sqrt(x0);
    ok &= NearEqual(result, check, eps99, eps99);

    return ok;
}
Input File: example/general/sqrt.cpp
4.4.2.12: The Tangent Function: tan

4.4.2.12.a: Syntax
y = tan(x)

4.4.2.12.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.12.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.12.d: Derivative
$$\begin{array}{lcr} \R{tan}^{(1)} (x) & = & 1 + \tan (x)^2 \end{array}$$
4.4.2.12.e: Example
The file 4.4.2.12.1: tan.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.12.1: The AD tan Function: Example and Test
# include <cppad/cppad.hpp>
# include <cmath>
# include <limits>

bool Tan(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps = 10. * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0] = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::tan(x[0]);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    double check = std::tan(x0);
    ok &= NearEqual(y[0], check, eps, eps);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    check = 1. + std::tan(x0) * std::tan(x0);
    ok   &= NearEqual(dy[0], check, eps, eps);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], check, eps, eps);

    // use a VecAD<Base>::reference object with tan
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = x0;
    AD<double> result = CppAD::tan(v[zero]);
    check = std::tan(x0);
    ok &= NearEqual(result, check, eps, eps);

    return ok;
}
Input File: example/general/tan.cpp
4.4.2.13: The Hyperbolic Tangent Function: tanh

4.4.2.13.a: Syntax
y = tanh(x)

4.4.2.13.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.13.c: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.13.d: Derivative
$$\begin{array}{lcr} \R{tanh}^{(1)} (x) & = & 1 - \tanh (x)^2 \end{array}$$
4.4.2.13.e: Example
The file 4.4.2.13.1: tanh.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.13.1: The AD tanh Function: Example and Test
# include <cppad/cppad.hpp>
# include <cmath>
# include <limits>

bool Tanh(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps = 10. * CppAD::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0] = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::tanh(x[0]);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    double check = std::tanh(x0);
    ok &= NearEqual(y[0], check, eps, eps);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    check = 1. - std::tanh(x0) * std::tanh(x0);
    ok   &= NearEqual(dy[0], check, eps, eps);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], check, eps, eps);

    // use a VecAD<Base>::reference object with tanh
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = x0;
    AD<double> result = CppAD::tanh(v[zero]);
    check = std::tanh(x0);
    ok &= NearEqual(result, check, eps, eps);

    return ok;
}
Input File: example/general/tanh.cpp
4.4.2.14: AD Absolute Value Functions: abs, fabs

4.4.2.14.a: Syntax
y = abs(x)  y = fabs(x)

4.4.2.14.b: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.14.c: Atomic
In the case where x is an AD type, this is an 12.4.g.a: atomic operation .

4.4.2.14.d: Complex Types
The functions abs and fabs are not defined for the base types std::complex<float> or std::complex<double> because the complex abs function is not complex differentiable (see 12.1.d: complex types faq ).

4.4.2.14.e: Derivative
CppAD defines the derivative of the abs function as the 4.4.2.21: sign function; i.e., $${\rm abs}^{(1)} ( x ) = {\rm sign} (x ) = \left\{ \begin{array}{rl} +1 & {\rm if} \; x > 0 \\ 0 & {\rm if} \; x = 0 \\ -1 & {\rm if} \; x < 0 \end{array} \right.$$ The result for x == 0 used to be a directional derivative.

4.4.2.14.f: Example
The file 4.4.2.14.1: fabs.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.14.1: AD Absolute Value Function: Example and Test
# include <cppad/cppad.hpp>

bool fabs(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n = 1;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0] = 0.;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // range space vector
    size_t m = 6;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = fabs(x[0] - 1.);
    y[1] = fabs(x[0]);
    y[2] = fabs(x[0] + 1.);
    //
    y[3] = fabs(x[0] - 1.);
    y[4] = fabs(x[0]);
    y[5] = fabs(x[0] + 1.);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check values
    ok &= (y[0] == 1.);
    ok &= (y[1] == 0.);
    ok &= (y[2] == 1.);
    //
    ok &= (y[3] == 1.);
    ok &= (y[4] == 0.);
    ok &= (y[5] == 1.);

    // forward computation of partials w.r.t. a positive x[0] direction
    size_t p = 1;
    CPPAD_TESTVECTOR(double) dx(n), dy(m);
    dx[0] = 1.;
    dy    = f.Forward(p, dx);
    ok &= (dy[0] == - dx[0]);
    ok &= (dy[1] == 0.);       // used to be (dy[1] == + dx[0]);
    ok &= (dy[2] == + dx[0]);
    //
    ok &= (dy[3] == - dx[0]);
    ok &= (dy[4] == 0.);       // used to be (dy[4] == + dx[0]);
    ok &= (dy[5] == + dx[0]);

    // forward computation of partials w.r.t. a negative x[0] direction
    dx[0] = -1.;
    dy    = f.Forward(p, dx);
    ok &= (dy[0] == - dx[0]);
    ok &= (dy[1] == 0.);       // used to be (dy[1] == - dx[0]);
    ok &= (dy[2] == + dx[0]);
    //
    ok &= (dy[3] == - dx[0]);
    ok &= (dy[4] == 0.);       // used to be (dy[4] == - dx[0]);
    ok &= (dy[5] == + dx[0]);

    // reverse computation of derivative of y[0]
    p = 1;
    CPPAD_TESTVECTOR(double) w(m), dw(n);
    w[0] = 1.; w[1] = 0.; w[2] = 0.; w[3] = 0.; w[4] = 0.; w[5] = 0.;
    dw = f.Reverse(p, w);
    ok &= (dw[0] == -1.);

    // reverse computation of derivative of y[1]
    w[0] = 0.;
    w[1] = 1.;
    dw = f.Reverse(p, w);
    ok &= (dw[0] == 0.);

    // reverse computation of derivative of y[5]
    w[1] = 0.;
    w[5] = 1.;
    dw = f.Reverse(p, w);
    ok &= (dw[0] == 1.);

    // use a VecAD<Base>::reference object with abs and fabs
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = -1;
    AD<double> result = fabs(v[zero]);
    ok &= NearEqual(result, 1., eps99, eps99);
    result = fabs(v[zero]);
    ok &= NearEqual(result, 1., eps99, eps99);

    return ok;
}
Input File: example/general/fabs.cpp
4.4.2.15: The Inverse Hyperbolic Cosine Function: acosh

4.4.2.15.a: Syntax
y = acosh(x)

4.4.2.15.b: Description
The inverse hyperbolic cosine function is defined by x == cosh(y) .

4.4.2.15.c: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.15.d: CPPAD_USE_CPLUSPLUS_2011

4.4.2.15.d.a: true
If this preprocessor symbol is true (1), and x is an AD type, this is an 12.4.g.a: atomic operation .

4.4.2.15.d.b: false
If this preprocessor symbol is false (0), CppAD uses the representation $$\R{acosh} (x) = \log \left( x + \sqrt{ x^2 - 1 } \right)$$ to compute this function.

4.4.2.15.e: Example
The file 4.4.2.15.1: acosh.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.15.1: The AD acosh Function: Example and Test
# include <cppad/cppad.hpp>

bool acosh(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;

    // 10 times machine epsilon
    double eps = 10. * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0] = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // a temporary value
    AD<double> cosh_of_x0 = CppAD::cosh(x[0]);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::acosh(cosh_of_x0);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    ok &= NearEqual(y[0], x0, eps, eps);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], 1., eps, eps);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], 1., eps, eps);

    // use a VecAD<Base>::reference object with acosh
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = cosh_of_x0;
    AD<double> result = CppAD::acosh(v[zero]);
    ok &= NearEqual(result, x0, eps, eps);

    return ok;
}
Input File: example/general/acosh.cpp
4.4.2.16: The Inverse Hyperbolic Sine Function: asinh

4.4.2.16.a: Syntax
y = asinh(x)

4.4.2.16.b: Description
The inverse hyperbolic sine function is defined by x == sinh(y) .

4.4.2.16.c: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.16.d: CPPAD_USE_CPLUSPLUS_2011

4.4.2.16.d.a: true
If this preprocessor symbol is true (1), and x is an AD type, this is an 12.4.g.a: atomic operation .

4.4.2.16.d.b: false
If this preprocessor symbol is false (0), CppAD uses the representation $$\R{asinh} (x) = \log \left( x + \sqrt{ 1 + x^2 } \right)$$ to compute this function.

4.4.2.16.e: Example
The file 4.4.2.16.1: asinh.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.16.1: The AD asinh Function: Example and Test
# include <cppad/cppad.hpp>

bool asinh(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;

    // 10 times machine epsilon
    double eps = 10. * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0] = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // a temporary value
    AD<double> sinh_of_x0 = CppAD::sinh(x[0]);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::asinh(sinh_of_x0);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    ok &= NearEqual(y[0], x0, eps, eps);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], 1., eps, eps);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], 1., eps, eps);

    // use a VecAD<Base>::reference object with asinh
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = sinh_of_x0;
    AD<double> result = CppAD::asinh(v[zero]);
    ok &= NearEqual(result, x0, eps, eps);

    return ok;
}
Input File: example/general/asinh.cpp
4.4.2.17: The Inverse Hyperbolic Tangent Function: atanh

4.4.2.17.a: Syntax
y = atanh(x)

4.4.2.17.b: Description
The inverse hyperbolic tangent function is defined by x == tanh(y) .

4.4.2.17.c: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.17.d: CPPAD_USE_CPLUSPLUS_2011

4.4.2.17.d.a: true
If this preprocessor symbol is true (1), and x is an AD type, this is an 12.4.g.a: atomic operation .

4.4.2.17.d.b: false
If this preprocessor symbol is false (0), CppAD uses the representation $$\R{atanh} (x) = \frac{1}{2} \log \left( \frac{1 + x}{1 - x} \right)$$ to compute this function.

4.4.2.17.e: Example
The file 4.4.2.17.1: atanh.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.17.1: The AD atanh Function: Example and Test
# include <cppad/cppad.hpp>

bool atanh(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;

    // 10 times machine epsilon
    double eps = 10. * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0] = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // a temporary value
    AD<double> tanh_of_x0 = CppAD::tanh(x[0]);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::atanh(tanh_of_x0);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    ok &= NearEqual(y[0], x0, eps, eps);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], 1., eps, eps);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], 1., eps, eps);

    // use a VecAD<Base>::reference object with atanh
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = tanh_of_x0;
    AD<double> result = CppAD::atanh(v[zero]);
    ok &= NearEqual(result, x0, eps, eps);

    return ok;
}
Input File: example/general/atanh.cpp
4.4.2.18: The Error Function

4.4.2.18.a: Syntax
y = erf(x)

4.4.2.18.b: Description
Returns the value of the error function which is defined by $${\rm erf} (x) = \frac{2}{ \sqrt{\pi} } \int_0^x \exp( - t^2 ) \; {\bf d} t$$

4.4.2.18.c: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.18.d: CPPAD_USE_CPLUSPLUS_2011

4.4.2.18.d.a: true
If this preprocessor symbol is true (1), and x is an AD type, this is an 12.4.g.a: atomic operation .

4.4.2.18.d.b: false
If this preprocessor symbol is false (0), CppAD uses a fast approximation (few numerical operations) with relative error bound $4 \times 10^{-4}$; see Vedder, J.D., Simple approximations for the error function and its inverse , American Journal of Physics, v 55, n 8, 1987, p 762-3.

4.4.2.18.e: Example
The file 4.4.2.18.1: erf.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.18.1: The AD erf Function: Example and Test
# include <cppad/cppad.hpp>
# include <cmath>
# include <limits>

bool Erf(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps = 10. * CppAD::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) ax(n);
    ax[0] = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(ax);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) ay(m);
    ay[0] = CppAD::erf(ax[0]);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(ax, ay);

    // check relative error
    double erf_x0 = 0.5204998778130465;
    ok &= NearEqual(ay[0], erf_x0, 0., 4e-4);
# if CPPAD_USE_CPLUSPLUS_2011
    double tmp = std::max(1e-15, eps);
    ok &= NearEqual(ay[0], erf_x0, 0., tmp);
# endif

    // value of derivative of erf at x0
    double pi     = 4. * std::atan(1.);
    double factor = 2. / sqrt(pi);
    double check  = factor * std::exp(-x0 * x0);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], check, 0., 1e-3);
# if CPPAD_USE_CPLUSPLUS_2011
    ok &= NearEqual(dy[0], check, 0., eps);
# endif

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], check, 0., 1e-1);
# if CPPAD_USE_CPLUSPLUS_2011
    ok &= NearEqual(dw[0], check, 0., eps);
# endif

    // use a VecAD<Base>::reference object with erf
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = x0;
    AD<double> result = CppAD::erf(v[zero]);
    ok &= NearEqual(result, ay[0], eps, eps);

    // use a double with erf
    ok &= NearEqual(CppAD::erf(x0), ay[0], eps, eps);

    return ok;
}
Input File: example/general/erf.cpp
4.4.2.19: The Exponential Function Minus One: expm1

4.4.2.19.a: Syntax
y = expm1(x)

4.4.2.19.b: Description
Returns the value of the exponential function minus one which is defined by y == exp(x) - 1 .

4.4.2.19.c: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.19.d: CPPAD_USE_CPLUSPLUS_2011

4.4.2.19.d.a: true
If this preprocessor symbol is true (1), and x is an AD type, this is an 12.4.g.a: atomic operation .

4.4.2.19.d.b: false
If this preprocessor symbol is false (0), CppAD uses the representation $$\R{expm1} (x) = \exp(x) - 1$$ to compute this function.

4.4.2.19.e: Example
The file 4.4.2.19.1: expm1.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.19.1: The AD expm1 Function: Example and Test
# include <cppad/cppad.hpp>
# include <cmath>

bool expm1(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps = 10. * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) ax(n);
    ax[0] = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(ax);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) ay(m);
    ay[0] = CppAD::expm1(ax[0]);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(ax, ay);

    // expx0 value
    double expx0 = std::exp(x0);
    ok &= NearEqual(ay[0], expx0 - 1.0, eps, eps);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], expx0, eps, eps);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], expx0, eps, eps);

    // use a VecAD<Base>::reference object with expm1
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = x0;
    AD<double> result = CppAD::expm1(v[zero]);
    ok &= NearEqual(result, expx0 - 1.0, eps, eps);

    return ok;
}
Input File: example/general/expm1.cpp
4.4.2.20: The Logarithm of One Plus Argument: log1p

4.4.2.20.a: Syntax
y = log1p(x)

4.4.2.20.b: Description
Returns the value of the logarithm of one plus argument which is defined by y == log(1 + x) .

4.4.2.20.c: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.20.d: CPPAD_USE_CPLUSPLUS_2011

4.4.2.20.d.a: true
If this preprocessor symbol is true (1), and x is an AD type, this is an 12.4.g.a: atomic operation .

4.4.2.20.d.b: false
If this preprocessor symbol is false (0), CppAD uses the representation $$\R{log1p} (x) = \log(1 + x)$$ to compute this function.

4.4.2.20.e: Example
The file 4.4.2.20.1: log1p.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.20.1: The AD log1p Function: Example and Test
# include <cppad/cppad.hpp>

bool log1p(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;

    // 10 times machine epsilon
    double eps = 10. * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0] = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // a temporary value
    AD<double> expm1_of_x0 = CppAD::expm1(x[0]);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::log1p(expm1_of_x0);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    ok &= NearEqual(y[0], x0, eps, eps);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], 1., eps, eps);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], 1., eps, eps);

    // use a VecAD<Base>::reference object with log1p
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = expm1_of_x0;
    AD<double> result = CppAD::log1p(v[zero]);
    ok &= NearEqual(result, x0, eps, eps);

    return ok;
}
Input File: example/general/log1p.cpp
4.4.2.21: The Sign: sign

4.4.2.21.a: Syntax
y = sign(x)

4.4.2.21.b: Description
Evaluates the sign function which is defined by $${\rm sign} (x) = \left\{ \begin{array}{rl} +1 & {\rm if} \; x > 0 \\ 0 & {\rm if} \; x = 0 \\ -1 & {\rm if} \; x < 0 \end{array} \right.$$

4.4.2.21.c: x, y
See the 4.4.2.c: possible types for a unary standard math function.

4.4.2.21.d: Atomic
This is an 12.4.g.a: atomic operation .

4.4.2.21.e: Derivative
CppAD computes the derivative of the sign function as zero for all argument values x . The correct mathematical derivative is different and is given by $${\rm sign}^{(1)} (x) = 2 \delta (x)$$ where $\delta (x)$ is the Dirac Delta function.

4.4.2.21.f: Example
The file 4.4.2.21.1: sign.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
4.4.2.21.1: Sign Function: Example and Test
# include <cppad/cppad.hpp>

bool sign(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;

    // create f: x -> y where f(x) = sign(x)
    size_t n = 1;
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) ax(n), ay(m);
    ax[0] = 0.;
    CppAD::Independent(ax);
    ay[0] = sign(ax[0]);
    CppAD::ADFun<double> f(ax, ay);

    // check value during recording
    ok &= (ay[0] == 0.);

    // use f(x) to evaluate the sign function and its derivatives
    CPPAD_TESTVECTOR(double) x(n), y(m), dx(n), dy(m), w(m), dw(n);
    dx[0] = 1.;
    w[0]  = 1.;
    //
    x[0] = 2.;
    y    = f.Forward(0, x);
    ok  &= (y[0] == 1.);
    dy   = f.Forward(1, dx);
    ok  &= (dy[0] == 0.);
    dw   = f.Reverse(1, w);
    ok  &= (dw[0] == 0.);
    //
    x[0] = 0.;
    y    = f.Forward(0, x);
    ok  &= (y[0] == 0.);
    dy   = f.Forward(1, dx);
    ok  &= (dy[0] == 0.);
    dw   = f.Reverse(1, w);
    ok  &= (dw[0] == 0.);
    //
    x[0] = -2.;
    y    = f.Forward(0, x);
    ok  &= (y[0] == -1.);
    dy   = f.Forward(1, dx);
    ok  &= (dy[0] == 0.);
    dw   = f.Reverse(1, w);
    ok  &= (dw[0] == 0.);

    // use a VecAD<Base>::reference object with sign
    CppAD::VecAD<double> v(1);
    AD<double> zero(0);
    v[zero] = 2.;
    AD<double> result = sign(v[zero]);
    ok &= (result == 1.);

    return ok;
}
Input File: example/general/sign.cpp
4.4.3: The Binary Math Functions

4.4.3.a: Contents
 atan2: 4.4.3.1 AD Two Argument Inverse Tangent Function pow: 4.4.3.2 The AD Power Function azmul: 4.4.3.3 Absolute Zero Multiplication

4.4.3.1: AD Two Argument Inverse Tangent Function

4.4.3.1.a: Syntax
theta = atan2(y, x)

4.4.3.1.b: Purpose
Determines an angle $\theta \in [ - \pi , + \pi ]$ such that $$\begin{array}{rcl} \sin ( \theta ) & = & y / \sqrt{ x^2 + y^2 } \\ \cos ( \theta ) & = & x / \sqrt{ x^2 + y^2 } \end{array}$$

4.4.3.1.c: y
The argument y has one of the following prototypes       const AD<Base>               &y      const VecAD<Base>::reference &y 
4.4.3.1.d: x
The argument x has one of the following prototypes       const AD<Base>               &x      const VecAD<Base>::reference &x 
4.4.3.1.e: theta
The result theta has prototype       AD<Base> theta 
4.4.3.1.f: Operation Sequence
The AD of Base operation sequence used to calculate theta is 12.4.g.d: independent of x and y .

4.4.3.1.g: Example
The file 4.4.3.1.1: atan2.cpp contains an example and test of this function. It returns true if it succeeds and false otherwise.
# include <cppad/cppad.hpp>

bool atan2(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();

    // domain space vector
    size_t n  = 1;
    double x0 = 0.5;
    CPPAD_TESTVECTOR(AD<double>) x(n);
    x[0] = x0;

    // declare independent variables and start tape recording
    CppAD::Independent(x);

    // a temporary value
    AD<double> sin_of_x0 = CppAD::sin(x[0]);
    AD<double> cos_of_x0 = CppAD::cos(x[0]);

    // range space vector
    size_t m = 1;
    CPPAD_TESTVECTOR(AD<double>) y(m);
    y[0] = CppAD::atan2(sin_of_x0, cos_of_x0);

    // create f: x -> y and stop tape recording
    CppAD::ADFun<double> f(x, y);

    // check value
    ok &= NearEqual(y[0], x0, eps99, eps99);

    // forward computation of first partial w.r.t. x[0]
    CPPAD_TESTVECTOR(double) dx(n);
    CPPAD_TESTVECTOR(double) dy(m);
    dx[0] = 1.;
    dy    = f.Forward(1, dx);
    ok   &= NearEqual(dy[0], 1., eps99, eps99);

    // reverse computation of derivative of y[0]
    CPPAD_TESTVECTOR(double) w(m);
    CPPAD_TESTVECTOR(double) dw(n);
    w[0] = 1.;
    dw   = f.Reverse(1, w);
    ok  &= NearEqual(dw[0], 1., eps99, eps99);

    // use a VecAD<Base>::reference object with atan2
    CppAD::VecAD<double> v(2);
    AD<double> zero(0);
    AD<double> one(1);
    v[zero] = sin_of_x0;
    v[one]  = cos_of_x0;