

*f*.optimize()

*f*.optimize(*options*)

The operation sequence corresponding to an ADFun object can be very large and involve many operations; see the size functions in seq_property . The *f*.optimize procedure reduces the number of operations, and thereby the time and the memory, required to compute function and derivative values.
The object

*f*

has prototype

ADFun<*Base*> *f*

The argument *options* has prototype

const std::string& *options*

The default for *options* is the empty string. If it is present, it must consist of one or more of the options below, separated by a single space character.
The `optimize` function can create conditional skip operators to improve the speed of conditional expressions; see optimize .
If the sub-string `no_conditional_skip` appears in *options* , conditional skip operations are not generated. This may make the optimize routine use significantly less memory and take less time to optimize *f* .
If conditional skip operations are generated, they may save a significant amount of time when using *f* for forward or reverse mode calculations; see number_skip .
If the sub-string `no_compare_op` appears in *options* , comparison operators will be removed from the optimized function. These operators are necessary for the compare_change functions to be meaningful. On the other hand, they take extra time and are not necessary when the compare_change functions are not used.
If the sub-string `no_print_for_op` appears in *options* , PrintFor operations will be removed from the optimized function. These operations are useful for reporting problems evaluating derivatives at independent variable values different from those used to record a function.
forward_active.cpp | Example Optimization and Forward Activity Analysis |

reverse_active.cpp | Example Optimization and Reverse Activity Analysis |

compare_op.cpp | Example Optimization and Comparison Operators |

print_for_op.cpp | Example Optimization and Print Forward Operators |

conditional_skip.cpp | Example Optimization and Conditional Expressions |

nest_conditional.cpp | Example Optimization and Nested Conditional Expressions |

cumulative_sum.cpp | Example Optimization and Cumulative Sum Operations |

If a zero order forward calculation is done during the construction of *f* , it will require more memory and time than after the optimization procedure. In addition, it will need to be redone. For this reason, it is more efficient to use

ADFun<*Base*> *f*;

*f*.Dependent(*x*, *y*);

*f*.optimize();

instead of

ADFun<*Base*> *f*(*x*, *y*);

*f*.optimize();

See the discussion about sequence constructors .
You can run the CppAD speed tests and see the corresponding changes in the number of variables and execution time. Note that there is an interaction between using optimize and onetape : if *onetape* is true and *optimize* is true, the optimized tape will be reused many times; if *onetape* is false and *optimize* is true, the tape will be re-optimized for each test.
There are some subtle issues with optimized atomic functions @(@ v = g(u) @)@:

The atomic_rev_sparse_jac function is used to determine which components of *u* affect the dependent variables of *f* . For each atomic operation, the current atomic_sparsity setting determines whether `pack_sparsity_enum` , `bool_sparsity_enum` , or `set_sparsity_enum` is used to compute the dependency relations between argument and result variables.
If *u*[*i*] does not affect the value of the dependent variables for *f* , the value of *u*[*i*] is set to nan .
If NDEBUG is not defined and *f*.size_order() is greater than zero, a forward_zero calculation is done using the optimized version of *f* , and the results are checked to see that they are the same as before. If they are not the same, the ErrorHandler is called with a known error message related to *f*.optimize() .
Input File: cppad/core/optimize.hpp