CppAD: A C++ Algorithmic Differentiation Package  20171217
checkpoint.hpp
1 # ifndef CPPAD_CORE_CHECKPOINT_HPP
2 # define CPPAD_CORE_CHECKPOINT_HPP
3 
4 /* --------------------------------------------------------------------------
5 CppAD: C++ Algorithmic Differentiation: Copyright (C) 2003-17 Bradley M. Bell
6 
7 CppAD is distributed under multiple licenses. This distribution is under
8 the terms of the
9  Eclipse Public License Version 1.0.
10 
11 A copy of this license is included in the COPYING file of this distribution.
12 Please visit http://www.coin-or.org/CppAD/ for information on other licenses.
13 -------------------------------------------------------------------------- */
16 
17 namespace CppAD { // BEGIN_CPPAD_NAMESPACE
18 /*!
19 \file checkpoint.hpp
20 defining checkpoint functions.
21 */
22 
23 /*
24 $begin checkpoint$$
25 $spell
26  sv
27  var
28  cppad.hpp
29  CppAD
30  checkpoint
31  checkpointing
32  algo
33  atom_fun
34  const
35  enum
36  bool
37  recomputed
38 $$
39 
40 $section Checkpointing Functions$$
41 
42 $head Syntax$$
43 $codei%checkpoint<%Base%> %atom_fun%(
44  %name%, %algo%, %ax%, %ay%, %sparsity%, %optimize%
45 )
46 %sv% = %atom_fun%.size_var()
47 %atom_fun%.option(%option_value%)
48 %algo%(%ax%, %ay%)
49 %atom_fun%(%ax%, %ay%)
50 checkpoint<%Base%>::clear()%$$
51 
52 $head See Also$$
53 $cref reverse_checkpoint.cpp$$
54 
55 $head Purpose$$
56 
57 $subhead Reduce Memory$$
58 You can reduce the size of the tape and memory required for AD by
59 checkpointing functions of the form $latex y = f(x)$$ where
60 $latex f : B^n \rightarrow B^m$$.
61 
62 $subhead Faster Recording$$
63 It may also reduce the time required to record the same function
64 for different values of the independent variable.
65 Note that the operation sequence for a recording that uses $latex f(x)$$
66 may depend on its independent variables.
67 
68 $subhead Repeating Forward$$
69 Normally, CppAD stores $cref forward$$ mode results until they are freed
70 using $cref capacity_order$$ or the corresponding $cref ADFun$$ object is
71 deleted. This is not true for checkpoint functions because a checkpoint
72 function may be used repeatedly with different arguments in the same tape.
73 Thus, forward mode results are recomputed each time a checkpoint function
74 is used during a forward or reverse mode sweep.
75 
76 $subhead Restriction$$
77 The $cref/operation sequence/glossary/Operation/Sequence/$$
78 representing $latex f(x)$$ cannot depend on the value of $latex x$$.
79 The approach in the $cref reverse_checkpoint.cpp$$ example can be applied
80 when the operation sequence depends on $latex x$$.
81 
82 $subhead Multiple Level AD$$
83 If $icode Base$$ is an AD type, it is possible to record $icode Base$$
84 operations.
85 Note that $icode atom_fun$$ will treat $icode algo$$ as an atomic
86 operation while recording $codei%AD%<%Base%>%$$ operations, but not while
87 recording $icode Base$$ operations.
88 See the $cref atomic_mul_level.cpp$$ example.
89 
90 
91 $head Method$$
92 The $code checkpoint$$ class is derived from $code atomic_base$$
93 and makes this easy.
94 It implements all the $code atomic_base$$
95 $cref/virtual functions/atomic_base/Virtual Functions/$$
96 and hence its source code $code cppad/core/checkpoint.hpp$$
97 provides an example implementation of $cref atomic_base$$.
98 The difference is that $code checkpoint.hpp$$ uses AD
99 instead of user provided derivatives.
100 
101 $head constructor$$
102 The syntax for the checkpoint constructor is
103 $codei%
104  checkpoint<%Base%> %atom_fun%(%name%, %algo%, %ax%, %ay%)
105 %$$
106 $list number$$
107 This constructor cannot be called in $cref/parallel/ta_in_parallel/$$ mode.
108 $lnext
109 You cannot currently be recording
110 $codei%AD<%Base%>%$$ operations when the constructor is called.
111 $lnext
112 This object $icode atom_fun$$ must not be destructed for as long
113 as any $codei%ADFun<%Base%>%$$ object uses its atomic operation.
114 $lnext
115 This class is implemented as a derived class of
116 $cref/atomic_base/atomic_ctor/atomic_base/$$ and hence
117 some of its error messages will refer to $code atomic_base$$.
118 $lend
119 
120 $head Base$$
121 The type $icode Base$$ specifies the base type for AD operations.
122 
123 $head ADVector$$
124 The type $icode ADVector$$ must be a
125 $cref/simple vector class/SimpleVector/$$ with elements of type
126 $codei%AD<%Base%>%$$.
127 
128 $head name$$
129 This $icode checkpoint$$ constructor argument has prototype
130 $codei%
131  const char* %name%
132 %$$
133 It is the name used for error reporting.
134 The suggested value for $icode name$$ is $icode atom_fun$$; i.e.,
135 the same name as used for the object being constructed.
136 
137 $head ax$$
138 This argument has prototype
139 $codei%
140  const %ADVector%& %ax%
141 %$$
142 and size must be equal to $icode n$$.
143 It specifies vector $latex x \in B^n$$
144 at which an $codei%AD<%Base%>%$$ version of
145 $latex y = f(x)$$ is to be evaluated.
146 
147 $head ay$$
148 This argument has prototype
149 $codei%
150  %ADVector%& %ay%
151 %$$
152 Its input size must be equal to $icode m$$ and does not change.
153 The input values of its elements do not matter.
154 Upon return, it is an $codei%AD<%Base%>%$$ version of
155 $latex y = f(x)$$.
156 
157 $head sparsity$$
158 This argument has prototype
159 $codei%
160  atomic_base<%Base%>::option_enum %sparsity%
161 %$$
162 It specifies $cref/sparsity/atomic_ctor/atomic_base/sparsity/$$
163 in the $code atomic_base$$ constructor and must be either
164 $codei%atomic_base<%Base%>::pack_sparsity_enum%$$,
165 $codei%atomic_base<%Base%>::bool_sparsity_enum%$$, or
166 $codei%atomic_base<%Base%>::set_sparsity_enum%$$.
167 This argument is optional and its default value is unspecified.
168 
169 $head optimize$$
170 This argument has prototype
171 $codei%
172  bool %optimize%
173 %$$
174 It specifies if the recording corresponding to the atomic function
175 should be $cref/optimized/optimize/$$.
176 One expects to use a checkpoint function many times, so it should
177 be worth the time to optimize its operation sequence.
178 For debugging purposes, it may be useful to use the
179 original operation sequence (before optimization)
180 because it corresponds more closely to $icode algo$$.
181 This argument is optional and its default value is true.
182 
183 
184 $head size_var$$
185 This $code size_var$$ member function return value has prototype
186 $codei%
187  size_t %sv%
188 %$$
189 It is the $cref/size_var/seq_property/size_var/$$ for the
190 $codei%ADFun<%Base%>%$$ object used to store the operation sequence
191 corresponding to $icode algo$$.
192 
193 $head option$$
194 The $code option$$ syntax can be used to set the type of sparsity
195 pattern used by $icode atom_fun$$.
196 This is an $codei%atomic_base<%Base%>%$$ function and its documentation
197 can be found at $cref atomic_option$$.
198 
199 $head algo$$
200 The type of $icode algo$$ is arbitrary, except for the fact that
201 the syntax
202 $codei%
203  %algo%(%ax%, %ay%)
204 %$$
205 must evaluate the function $latex y = f(x)$$ using
206 $codei%AD<%Base%>%$$ operations.
207 In addition, we assume that the
208 $cref/operation sequence/glossary/Operation/Sequence/$$
209 does not depend on the value of $icode ax$$.
210 
211 $head atom_fun$$
212 Given $icode ax$$ it computes the corresponding value of $icode ay$$
213 using the operation sequence corresponding to $icode algo$$.
214 If $codei%AD<%Base%>%$$ operations are being recorded,
215 it enters the computation as a single operation in the recording;
216 see $cref/start recording/Independent/Start Recording/$$.
217 (Currently each use of $icode atom_fun$$ actually corresponds to
218 $icode%m%+%n%+2%$$ operations and creates $icode m$$ new variables,
219 but this is not part of the CppAD specifications and may change.)
220 
221 $head clear$$
222 The $code atomic_base$$ class holds onto static work space in order to
223 increase speed by avoiding system memory allocation calls.
224 This call makes the work space $cref/available/ta_available/$$
225 for other uses by the same thread.
226 This should be called when you are done using the
227 user atomic functions for a specific value of $icode Base$$.
228 
229 $subhead Restriction$$
230 The $code clear$$ routine cannot be called
231 while in $cref/parallel/ta_in_parallel/$$ execution mode.
232 
233 $children%example/atomic/checkpoint.cpp
234  %example/atomic/mul_level.cpp
235  %example/atomic/ode.cpp
236  %example/atomic/extended_ode.cpp
237 %$$
238 $head Example$$
239 The file $cref checkpoint.cpp$$ contains an example and test
240 of these operations.
241 It returns true if it succeeds and false if it fails.
242 
243 $end
244 */
245 
246 # define CPPAD_TMP_MULTI_THREAD 0
247 # ifdef _OPENMP
248 # ifdef CPPAD_FOR_TMB
249 # undef CPPAD_TMP_MULTI_THREAD
250 # define CPPAD_TMP_MULTI_THREAD 1
251 # endif
252 # endif
253 
254 // ============================================================================
255 # if ! CPPAD_TMP_MULTI_THREAD
256 // ============================================================================
257 
258 template <class Base>
259 class checkpoint : public atomic_base<Base> {
260 // ---------------------------------------------------------------------------
261 private:
262  /// same as option_enum in base class
263  typedef typename atomic_base<Base>::option_enum option_enum;
264  //
265  /// AD function corresponding to this checkpoint object
266  ADFun<Base> f_;
267  //
268  /// sparsity for entire Jacobian f(x)^{(1)} does not change so can cache it
269  local::sparse_list jac_sparse_set_;
270  vectorBool jac_sparse_bool_;
271  //
272  /// sparsity for sum_i f_i(x)^{(2)} does not change so can cache it
273  local::sparse_list hes_sparse_set_;
274  vectorBool hes_sparse_bool_;
275  // ------------------------------------------------------------------------
276  option_enum sparsity(void)
277  { return static_cast< atomic_base<Base>* >(this)->sparsity(); }
278  // ------------------------------------------------------------------------
279  /// set jac_sparse_set_
280  void set_jac_sparse_set(void)
281  { CPPAD_ASSERT_UNKNOWN( jac_sparse_set_.n_set() == 0 );
282  bool transpose = false;
283  bool dependency = true;
284  size_t n = f_.Domain();
285  size_t m = f_.Range();
286  // Use the choice for forward / reverse that results in smaller
287  // size for the sparsity pattern of all variables in the tape.
288  if( n <= m )
289  { local::sparse_list identity;
290  identity.resize(n, n);
291  for(size_t j = 0; j < n; j++)
292  { // use add_element because only adding one element per set
293  identity.add_element(j, j);
294  }
295  f_.ForSparseJacCheckpoint(
296  n, identity, transpose, dependency, jac_sparse_set_
297  );
298  f_.size_forward_set(0);
299  }
300  else
301  { local::sparse_list identity;
302  identity.resize(m, m);
303  for(size_t i = 0; i < m; i++)
304  { // use add_element because only adding one element per set
305  identity.add_element(i, i);
306  }
307  f_.RevSparseJacCheckpoint(
308  m, identity, transpose, dependency, jac_sparse_set_
309  );
310  }
311  CPPAD_ASSERT_UNKNOWN( f_.size_forward_set() == 0 );
312  CPPAD_ASSERT_UNKNOWN( f_.size_forward_bool() == 0 );
313  }
314  /// set jac_sparse_bool_
315  void set_jac_sparse_bool(void)
316  { CPPAD_ASSERT_UNKNOWN( jac_sparse_bool_.size() == 0 );
317  bool transpose = false;
318  bool dependency = true;
319  size_t n = f_.Domain();
320  size_t m = f_.Range();
321  // Use the choice for forward / reverse that results in smaller
322  // size for the sparsity pattern of all variables in the tape.
323  if( n <= m )
324  { vectorBool identity(n * n);
325  for(size_t j = 0; j < n; j++)
326  { for(size_t i = 0; i < n; i++)
327  identity[ i * n + j ] = (i == j);
328  }
329  jac_sparse_bool_ = f_.ForSparseJac(
330  n, identity, transpose, dependency
331  );
332  f_.size_forward_bool(0);
333  }
334  else
335  { vectorBool identity(m * m);
336  for(size_t j = 0; j < m; j++)
337  { for(size_t i = 0; i < m; i++)
338  identity[ i * m + j ] = (i == j);
339  }
340  jac_sparse_bool_ = f_.RevSparseJac(
341  m, identity, transpose, dependency
342  );
343  }
344  CPPAD_ASSERT_UNKNOWN( f_.size_forward_bool() == 0 );
345  CPPAD_ASSERT_UNKNOWN( f_.size_forward_set() == 0 );
346  }
347  // ------------------------------------------------------------------------
348  /// set hes_sparse_set_
349  void set_hes_sparse_set(void)
350  { CPPAD_ASSERT_UNKNOWN( hes_sparse_set_.n_set() == 0 );
351  size_t n = f_.Domain();
352  size_t m = f_.Range();
353  //
354  // set version of sparsity for vector of all ones
355  vector<bool> all_one(m);
356  for(size_t i = 0; i < m; i++)
357  all_one[i] = true;
358 
359  // set version of sparsity for n by n identity matrix
360  local::sparse_list identity;
361  identity.resize(n, n);
362  for(size_t j = 0; j < n; j++)
363  { // use add_element because only adding one element per set
364  identity.add_element(j, j);
365  }
366 
367  // compute sparsity pattern for H(x) = sum_i f_i(x)^{(2)}
368  bool transpose = false;
369  bool dependency = false;
370  f_.ForSparseJacCheckpoint(
371  n, identity, transpose, dependency, jac_sparse_set_
372  );
373  f_.RevSparseHesCheckpoint(
374  n, all_one, transpose, hes_sparse_set_
375  );
376  CPPAD_ASSERT_UNKNOWN( hes_sparse_set_.n_set() == n );
377  CPPAD_ASSERT_UNKNOWN( hes_sparse_set_.end() == n );
378  //
379  // drop the forward sparsity results from f_
380  f_.size_forward_set(0);
381  }
382  /// set hes_sparse_bool_
383  void set_hes_sparse_bool(void)
384  { CPPAD_ASSERT_UNKNOWN( hes_sparse_bool_.size() == 0 );
385  size_t n = f_.Domain();
386  size_t m = f_.Range();
387  //
388  // set version of sparsity for vector of all ones
389  vectorBool all_one(m);
390  for(size_t i = 0; i < m; i++)
391  all_one[i] = true;
392 
393  // set version of sparsity for n by n identity matrix
394  vectorBool identity(n * n);
395  for(size_t j = 0; j < n; j++)
396  { for(size_t i = 0; i < n; i++)
397  identity[ i * n + j ] = (i == j);
398  }
399 
400  // compute sparsity pattern for H(x) = sum_i f_i(x)^{(2)}
401  bool transpose = false;
402  bool dependency = false;
403  f_.ForSparseJac(n, identity, transpose, dependency);
404  hes_sparse_bool_ = f_.RevSparseHes(n, all_one, transpose);
405  CPPAD_ASSERT_UNKNOWN( hes_sparse_bool_.size() == n * n );
406  //
407  // drop the forward sparsity results from f_
408  f_.size_forward_bool(0);
409  CPPAD_ASSERT_UNKNOWN( f_.size_forward_bool() == 0 );
410  CPPAD_ASSERT_UNKNOWN( f_.size_forward_set() == 0 );
411  }
412  // ------------------------------------------------------------------------
413  /*!
414  Link from user_atomic to forward sparse Jacobian pack and bool
415 
416  \copydetails atomic_base::for_sparse_jac
417  */
418  template <class sparsity_type>
419  bool for_sparse_jac(
420  size_t q ,
421  const sparsity_type& r ,
422  sparsity_type& s ,
423  const vector<Base>& x )
424  { // during user sparsity calculations
425  size_t m = f_.Range();
426  size_t n = f_.Domain();
427  if( jac_sparse_bool_.size() == 0 )
428  set_jac_sparse_bool();
429  if( jac_sparse_set_.n_set() != 0 )
430  jac_sparse_set_.resize(0, 0);
431  CPPAD_ASSERT_UNKNOWN( jac_sparse_bool_.size() == m * n );
432  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_.n_set() == 0 );
433  CPPAD_ASSERT_UNKNOWN( r.size() == n * q );
434  CPPAD_ASSERT_UNKNOWN( s.size() == m * q );
435  //
436  bool ok = true;
437  for(size_t i = 0; i < m; i++)
438  { for(size_t k = 0; k < q; k++)
439  s[i * q + k] = false;
440  }
441  // sparsity for s = jac_sparse_bool_ * r
442  for(size_t i = 0; i < m; i++)
443  { for(size_t k = 0; k < q; k++)
444  { // initialize sparsity for S(i,k)
445  bool s_ik = false;
446  // S(i,k) = sum_j J(i,j) * R(j,k)
447  for(size_t j = 0; j < n; j++)
448  { bool J_ij = jac_sparse_bool_[ i * n + j];
449  bool R_jk = r[j * q + k ];
450  s_ik |= ( J_ij & R_jk );
451  }
452  s[i * q + k] = s_ik;
453  }
454  }
455  return ok;
456  }
457  // ------------------------------------------------------------------------
458  /*!
459  Link from user_atomic to reverse sparse Jacobian pack and bool
460 
461  \copydetails atomic_base::rev_sparse_jac
462  */
463  template <class sparsity_type>
464  bool rev_sparse_jac(
465  size_t q ,
466  const sparsity_type& rt ,
467  sparsity_type& st ,
468  const vector<Base>& x )
469  { // during user sparsity calculations
470  size_t m = f_.Range();
471  size_t n = f_.Domain();
472  if( jac_sparse_bool_.size() == 0 )
473  set_jac_sparse_bool();
474  if( jac_sparse_set_.n_set() != 0 )
475  jac_sparse_set_.resize(0, 0);
476  CPPAD_ASSERT_UNKNOWN( jac_sparse_bool_.size() == m * n );
477  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_.n_set() == 0 );
478  CPPAD_ASSERT_UNKNOWN( rt.size() == m * q );
479  CPPAD_ASSERT_UNKNOWN( st.size() == n * q );
480  bool ok = true;
481  //
482  // S = R * J where J is jacobian
483  for(size_t i = 0; i < q; i++)
484  { for(size_t j = 0; j < n; j++)
485  { // initialize sparsity for S(i,j)
486  bool s_ij = false;
487  // S(i,j) = sum_k R(i,k) * J(k,j)
488  for(size_t k = 0; k < m; k++)
489  { // sparsity for R(i, k)
490  bool R_ik = rt[ k * q + i ];
491  bool J_kj = jac_sparse_bool_[ k * n + j ];
492  s_ij |= (R_ik & J_kj);
493  }
494  // set sparsity for S^T
495  st[ j * q + i ] = s_ij;
496  }
497  }
498  return ok;
499  }
500  /*!
501  Link from user_atomic to reverse sparse Hessian bools
502 
503  \copydetails atomic_base::rev_sparse_hes
504  */
505  template <class sparsity_type>
506  bool rev_sparse_hes(
507  const vector<bool>& vx ,
508  const vector<bool>& s ,
509  vector<bool>& t ,
510  size_t q ,
511  const sparsity_type& r ,
512  const sparsity_type& u ,
513  sparsity_type& v ,
514  const vector<Base>& x )
515  { size_t n = f_.Domain();
516 # ifndef NDEBUG
517  size_t m = f_.Range();
518 # endif
519  CPPAD_ASSERT_UNKNOWN( vx.size() == n );
520  CPPAD_ASSERT_UNKNOWN( s.size() == m );
521  CPPAD_ASSERT_UNKNOWN( t.size() == n );
522  CPPAD_ASSERT_UNKNOWN( r.size() == n * q );
523  CPPAD_ASSERT_UNKNOWN( u.size() == m * q );
524  CPPAD_ASSERT_UNKNOWN( v.size() == n * q );
525  //
526  bool ok = true;
527 
528  // make sure hes_sparse_bool_ has been set
529  if( hes_sparse_bool_.size() == 0 )
530  set_hes_sparse_bool();
531  if( hes_sparse_set_.n_set() != 0 )
532  hes_sparse_set_.resize(0, 0);
533  CPPAD_ASSERT_UNKNOWN( hes_sparse_bool_.size() == n * n );
534  CPPAD_ASSERT_UNKNOWN( hes_sparse_set_.n_set() == 0 );
535 
536 
537  // compute sparsity pattern for T(x) = S(x) * f'(x)
538  t = f_.RevSparseJac(1, s);
539 # ifndef NDEBUG
540  for(size_t j = 0; j < n; j++)
541  CPPAD_ASSERT_UNKNOWN( vx[j] || ! t[j] )
542 # endif
543 
544  // V(x) = f'(x)^T * g''(y) * f'(x) * R + g'(y) * f''(x) * R
545  // U(x) = g''(y) * f'(x) * R
546  // S(x) = g'(y)
547 
548  // compute sparsity pattern for A(x) = f'(x)^T * U(x)
549  bool transpose = true;
550  sparsity_type a(n * q);
551  a = f_.RevSparseJac(q, u, transpose);
552 
553  // Need sparsity pattern for H(x) = (S(x) * f(x))''(x) * R,
554  // but use less efficient sparsity for f(x)''(x) * R so that
555  // hes_sparse_set_ can be used every time this is needed.
556  for(size_t i = 0; i < n; i++)
557  { for(size_t k = 0; k < q; k++)
558  { // initialize sparsity pattern for H(i,k)
559  bool h_ik = false;
560  // H(i,k) = sum_j f''(i,j) * R(j,k)
561  for(size_t j = 0; j < n; j++)
562  { bool f_ij = hes_sparse_bool_[i * n + j];
563  bool r_jk = r[j * q + k];
564  h_ik |= ( f_ij & r_jk );
565  }
566  // sparsity for H(i,k)
567  v[i * q + k] = h_ik;
568  }
569  }
570 
571  // compute sparsity pattern for V(x) = A(x) + H(x)
572  for(size_t i = 0; i < n; i++)
573  { for(size_t k = 0; k < q; k++)
574  // v[ i * q + k ] |= a[ i * q + k];
575  v[ i * q + k ] = bool(v[ i * q + k]) | bool(a[ i * q + k]);
576  }
577  return ok;
578  }
579 public:
580  // ------------------------------------------------------------------------
581  /*!
582  Constructor of a checkpoint object
583 
584  \param name [in]
585  is the user's name for the AD version of this atomic operation.
586 
587  \param algo [in/out]
588  user routine that computes AD function values
589  (not const because state may change during evaluation).
590 
591  \param ax [in]
592  argument value where algo operation sequence is taped.
593 
594  \param ay [out]
595  function value at specified argument value.
596 
597  \param sparsity [in]
598  what type of sparsity patterns are computed by this function,
599  pack_sparsity_enum, bool_sparsity_enum, or set_sparsity_enum.
600  The default value is unspecified.
601 
602  \param optimize [in]
603  should the operation sequence corresponding to the algo be optimized.
604  The default value is true, but it is
605  sometimes useful to use false for debugging purposes.
606  */
607  template <class Algo, class ADVector>
608  checkpoint(
609  const char* name ,
610  Algo& algo ,
611  const ADVector& ax ,
612  ADVector& ay ,
613  option_enum sparsity =
614  atomic_base<Base>::pack_sparsity_enum ,
615  bool optimize = true
616  ) : atomic_base<Base>(name, sparsity)
617  { CheckSimpleVector< CppAD::AD<Base> , ADVector>();
618 
619  // make a copy of ax because Independent modifies AD information
620  ADVector x_tmp(ax);
621  // declare x_tmp as the independent variables
622  Independent(x_tmp);
623  // record mapping from x_tmp to ay
624  algo(x_tmp, ay);
625  // create function f_ : x -> y
626  f_.Dependent(ay);
627  if( optimize )
628  { // suppress checking for nan in f_ results
629  // (see optimize documentation for atomic functions)
630  f_.check_for_nan(false);
631  //
632  // now optimize
633  f_.optimize();
634  }
635  // now disable checking of comparison operations
636  // 2DO: add a debugging mode that checks for changes and aborts
637  f_.compare_change_count(0);
638  }
639  // ------------------------------------------------------------------------
640  /*!
641  Implement the user call to <tt>atom_fun.size_var()</tt>.
642  */
643  size_t size_var(void)
644  { return f_.size_var(); }
645  // ------------------------------------------------------------------------
646  /*!
647  Implement the user call to <tt>atom_fun(ax, ay)</tt>.
648 
649  \tparam ADVector
650  A simple vector class with elements of type <code>AD<Base></code>.
651 
652  \param id
653  optional parameter which must be zero if present.
654 
655  \param ax
656  is the argument vector for this call,
657  <tt>ax.size()</tt> determines the number of arguments.
658 
659  \param ay
660  is the result vector for this call,
661  <tt>ay.size()</tt> determines the number of results.
662  */
663  template <class ADVector>
664  void operator()(const ADVector& ax, ADVector& ay, size_t id = 0)
665  { CPPAD_ASSERT_KNOWN(
666  id == 0,
667  "checkpoint: id is non-zero in atom_fun(ax, ay, id)"
668  );
669  this->atomic_base<Base>::operator()(ax, ay, id);
670  }
671  // ------------------------------------------------------------------------
672  /*!
673  Link from user_atomic to forward mode
674 
675  \copydetails atomic_base::forward
676  */
677  virtual bool forward(
678  size_t p ,
679  size_t q ,
680  const vector<bool>& vx ,
681  vector<bool>& vy ,
682  const vector<Base>& tx ,
683  vector<Base>& ty )
684  { size_t n = f_.Domain();
685  size_t m = f_.Range();
686  //
687  CPPAD_ASSERT_UNKNOWN( f_.size_var() > 0 );
688  CPPAD_ASSERT_UNKNOWN( tx.size() % (q+1) == 0 );
689  CPPAD_ASSERT_UNKNOWN( ty.size() % (q+1) == 0 );
690  CPPAD_ASSERT_UNKNOWN( n == tx.size() / (q+1) );
691  CPPAD_ASSERT_UNKNOWN( m == ty.size() / (q+1) );
692  bool ok = true;
693  //
694  if( vx.size() == 0 )
695  { // during user forward mode
696  if( jac_sparse_set_.n_set() != 0 )
697  jac_sparse_set_.resize(0,0);
698  if( jac_sparse_bool_.size() != 0 )
699  jac_sparse_bool_.clear();
700  //
701  if( hes_sparse_set_.n_set() != 0 )
702  hes_sparse_set_.resize(0,0);
703  if( hes_sparse_bool_.size() != 0 )
704  hes_sparse_bool_.clear();
705  }
706  if( vx.size() > 0 )
707  { // need Jacobian sparsity pattern to determine variable relation
708  // during user recording using checkpoint functions
709  if( sparsity() == atomic_base<Base>::set_sparsity_enum )
710  { if( jac_sparse_set_.n_set() == 0 )
711  set_jac_sparse_set();
712  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_.n_set() == m );
713  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_.end() == n );
714  //
715  for(size_t i = 0; i < m; i++)
716  { vy[i] = false;
717  local::sparse_list::const_iterator set_itr(
718  jac_sparse_set_, i
719  );
720  size_t j = *set_itr;
721  while(j < n )
722  { // y[i] depends on the value of x[j]
723  // cast avoids Microsoft warning (should not be needed)
724  vy[i] |= static_cast<bool>( vx[j] );
725  j = *(++set_itr);
726  }
727  }
728  }
729  else
730  { if( jac_sparse_set_.n_set() != 0 )
731  jac_sparse_set_.resize(0, 0);
732  if( jac_sparse_bool_.size() == 0 )
733  set_jac_sparse_bool();
734  CPPAD_ASSERT_UNKNOWN( jac_sparse_bool_.size() == m * n );
735  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_.n_set() == 0 );
736  //
737  for(size_t i = 0; i < m; i++)
738  { vy[i] = false;
739  for(size_t j = 0; j < n; j++)
740  { if( jac_sparse_bool_[ i * n + j ] )
741  { // y[i] depends on the value of x[j]
742  // cast avoids Microsoft warning
743  vy[i] |= static_cast<bool>( vx[j] );
744  }
745  }
746  }
747  }
748  }
749  // compute forward results for orders zero through q
750  ty = f_.Forward(q, tx);
751 
752  // no longer need the Taylor coefficients in f_
753  // (have to reconstruct them every time)
754  // Hold onto sparsity pattern because it is always good.
755  size_t c = 0;
756  size_t r = 0;
757  f_.capacity_order(c, r);
758  return ok;
759  }
760  // ------------------------------------------------------------------------
761  /*!
762  Link from user_atomic to reverse mode
763 
764  \copydetails atomic_base::reverse
765  */
766  virtual bool reverse(
767  size_t q ,
768  const vector<Base>& tx ,
769  const vector<Base>& ty ,
770  vector<Base>& px ,
771  const vector<Base>& py )
772  {
773 # ifndef NDEBUG
774  size_t n = f_.Domain();
775  size_t m = f_.Range();
776 # endif
777  CPPAD_ASSERT_UNKNOWN( n == tx.size() / (q+1) );
778  CPPAD_ASSERT_UNKNOWN( m == ty.size() / (q+1) );
779  CPPAD_ASSERT_UNKNOWN( f_.size_var() > 0 );
780  CPPAD_ASSERT_UNKNOWN( tx.size() % (q+1) == 0 );
781  CPPAD_ASSERT_UNKNOWN( ty.size() % (q+1) == 0 );
782  bool ok = true;
783 
784  // put proper forward mode coefficients in f_
785 # ifdef NDEBUG
786  // compute forward results for orders zero through q
787  f_.Forward(q, tx);
788 # else
789  CPPAD_ASSERT_UNKNOWN( px.size() == n * (q+1) );
790  CPPAD_ASSERT_UNKNOWN( py.size() == m * (q+1) );
791  size_t i, j, k;
792  //
793  // compute forward results for orders zero through q
794  vector<Base> check_ty = f_.Forward(q, tx);
795  for(i = 0; i < m; i++)
796  { for(k = 0; k <= q; k++)
797  { j = i * (q+1) + k;
798  CPPAD_ASSERT_UNKNOWN( check_ty[j] == ty[j] );
799  }
800  }
801 # endif
802  // now can run reverse mode
803  px = f_.Reverse(q+1, py);
804 
805  // no longer need the Taylor coefficients in f_
806  // (have to reconstruct them every time)
807  size_t c = 0;
808  size_t r = 0;
809  f_.capacity_order(c, r);
810  return ok;
811  }
812  // ------------------------------------------------------------------------
813  /*!
814  Link from user_atomic to forward sparse Jacobian pack
815 
816  \copydetails atomic_base::for_sparse_jac
817  */
818  virtual bool for_sparse_jac(
819  size_t q ,
820  const vectorBool& r ,
821  vectorBool& s ,
822  const vector<Base>& x )
823  { return for_sparse_jac< vectorBool >(q, r, s, x);
824  }
825  /*!
826  Link from user_atomic to forward sparse Jacobian bool
827 
828  \copydetails atomic_base::for_sparse_jac
829  */
830  virtual bool for_sparse_jac(
831  size_t q ,
832  const vector<bool>& r ,
833  vector<bool>& s ,
834  const vector<Base>& x )
835  { return for_sparse_jac< vector<bool> >(q, r, s, x);
836  }
837  /*!
838  Link from user_atomic to forward sparse Jacobian sets
839 
840  \copydetails atomic_base::for_sparse_jac
841  */
842  virtual bool for_sparse_jac(
843  size_t q ,
844  const vector< std::set<size_t> >& r ,
845  vector< std::set<size_t> >& s ,
846  const vector<Base>& x )
847  { // during user sparsity calculations
848  size_t m = f_.Range();
849  size_t n = f_.Domain();
850  if( jac_sparse_bool_.size() != 0 )
851  jac_sparse_bool_.clear();
852  if( jac_sparse_set_.n_set() == 0 )
853  set_jac_sparse_set();
854  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_.n_set() == m );
855  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_.end() == n );
856  //
857  CPPAD_ASSERT_UNKNOWN( r.size() == n );
858  CPPAD_ASSERT_UNKNOWN( s.size() == m );
859 
860  bool ok = true;
861  for(size_t i = 0; i < m; i++)
862  s[i].clear();
863 
864  // sparsity for s = jac_sparse_set_ * r
865  for(size_t i = 0; i < m; i++)
866  { // compute row i of the return pattern
867  local::sparse_list::const_iterator set_itr(
868  jac_sparse_set_, i
869  );
870  size_t j = *set_itr;
871  while(j < n )
872  { std::set<size_t>::const_iterator itr_j;
873  const std::set<size_t>& r_j( r[j] );
874  for(itr_j = r_j.begin(); itr_j != r_j.end(); itr_j++)
875  { size_t k = *itr_j;
876  CPPAD_ASSERT_UNKNOWN( k < q );
877  s[i].insert(k);
878  }
879  j = *(++set_itr);
880  }
881  }
882 
883  return ok;
884  }
885  // ------------------------------------------------------------------------
886  /*!
887  Link from user_atomic to reverse sparse Jacobian pack
888 
889  \copydetails atomic_base::rev_sparse_jac
890  */
891  virtual bool rev_sparse_jac(
892  size_t q ,
893  const vectorBool& rt ,
894  vectorBool& st ,
895  const vector<Base>& x )
896  { return rev_sparse_jac< vectorBool >(q, rt, st, x);
897  }
898  /*!
899  Link from user_atomic to reverse sparse Jacobian bool
900 
901  \copydetails atomic_base::rev_sparse_jac
902  */
903  virtual bool rev_sparse_jac(
904  size_t q ,
905  const vector<bool>& rt ,
906  vector<bool>& st ,
907  const vector<Base>& x )
908  { return rev_sparse_jac< vector<bool> >(q, rt, st, x);
909  }
910  /*!
911  Link from user_atomic to reverse Jacobian sets
912 
913  \copydetails atomic_base::rev_sparse_jac
914  */
915  virtual bool rev_sparse_jac(
916  size_t q ,
917  const vector< std::set<size_t> >& rt ,
918  vector< std::set<size_t> >& st ,
919  const vector<Base>& x )
920  { // during user sparsity calculations
921  size_t m = f_.Range();
922  size_t n = f_.Domain();
923  if( jac_sparse_bool_.size() != 0 )
924  jac_sparse_bool_.clear();
925  if( jac_sparse_set_.n_set() == 0 )
926  set_jac_sparse_set();
927  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_.n_set() == m );
928  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_.end() == n );
929  //
930  CPPAD_ASSERT_UNKNOWN( rt.size() == m );
931  CPPAD_ASSERT_UNKNOWN( st.size() == n );
932  //
933  bool ok = true;
934  //
935  for(size_t j = 0; j < n; j++)
936  st[j].clear();
937  //
938  // sparsity for s = r * jac_sparse_set_
939  // s^T = jac_sparse_set_^T * r^T
940  for(size_t i = 0; i < m; i++)
941  { // i is the row index in r^T
942  std::set<size_t>::const_iterator itr_i;
943  const std::set<size_t>& r_i( rt[i] );
944  for(itr_i = r_i.begin(); itr_i != r_i.end(); itr_i++)
945  { // k is the column index in r^T
946  size_t k = *itr_i;
947  CPPAD_ASSERT_UNKNOWN( k < q );
948  //
949  // i is column index in jac_sparse_set^T
950  local::sparse_list::const_iterator set_itr(
951  jac_sparse_set_, i
952  );
953  size_t j = *set_itr;
954  while( j < n )
955  { // j is row index in jac_sparse_set^T
956  st[j].insert(k);
957  j = *(++set_itr);
958  }
959  }
960  }
961 
962  return ok;
963  }
964  // ------------------------------------------------------------------------
965  /*!
966  Link from user_atomic to reverse sparse Hessian pack
967 
968  \copydetails atomic_base::rev_sparse_hes
969  */
970  virtual bool rev_sparse_hes(
971  const vector<bool>& vx ,
972  const vector<bool>& s ,
973  vector<bool>& t ,
974  size_t q ,
975  const vectorBool& r ,
976  const vectorBool& u ,
977  vectorBool& v ,
978  const vector<Base>& x )
979  { return rev_sparse_hes< vectorBool >(vx, s, t, q, r, u, v, x);
980  }
981  /*!
982  Link from user_atomic to reverse sparse Hessian bool
983 
984  \copydetails atomic_base::rev_sparse_hes
985  */
986  virtual bool rev_sparse_hes(
987  const vector<bool>& vx ,
988  const vector<bool>& s ,
989  vector<bool>& t ,
990  size_t q ,
991  const vector<bool>& r ,
992  const vector<bool>& u ,
993  vector<bool>& v ,
994  const vector<Base>& x )
995  { return rev_sparse_hes< vector<bool> >(vx, s, t, q, r, u, v, x);
996  }
997  /*!
998  Link from user_atomic to reverse sparse Hessian sets
999 
1000  \copydetails atomic_base::rev_sparse_hes
1001  */
1002  virtual bool rev_sparse_hes(
1003  const vector<bool>& vx ,
1004  const vector<bool>& s ,
1005  vector<bool>& t ,
1006  size_t q ,
1007  const vector< std::set<size_t> >& r ,
1008  const vector< std::set<size_t> >& u ,
1009  vector< std::set<size_t> >& v ,
1010  const vector<Base>& x )
1011  { size_t n = f_.Domain();
1012 # ifndef NDEBUG
1013  size_t m = f_.Range();
1014 # endif
1015  CPPAD_ASSERT_UNKNOWN( vx.size() == n );
1016  CPPAD_ASSERT_UNKNOWN( s.size() == m );
1017  CPPAD_ASSERT_UNKNOWN( t.size() == n );
1018  CPPAD_ASSERT_UNKNOWN( r.size() == n );
1019  CPPAD_ASSERT_UNKNOWN( u.size() == m );
1020  CPPAD_ASSERT_UNKNOWN( v.size() == n );
1021  //
1022  bool ok = true;
1023 
1024  // make sure hes_sparse_set_ has been set
1025  if( hes_sparse_bool_.size() != 0 )
1026  hes_sparse_bool_.clear();
1027  if( hes_sparse_set_.n_set() == 0 )
1028  set_hes_sparse_set();
1029  CPPAD_ASSERT_UNKNOWN( hes_sparse_set_.n_set() == n );
1030  CPPAD_ASSERT_UNKNOWN( hes_sparse_set_.end() == n );
1031  //
1032 
1033  // compute sparsity pattern for T(x) = S(x) * f'(x)
1034  t = f_.RevSparseJac(1, s);
1035 # ifndef NDEBUG
1036  for(size_t j = 0; j < n; j++)
1037  CPPAD_ASSERT_UNKNOWN( vx[j] || ! t[j] )
1038 # endif
1039 
1040  // V(x) = f'(x)^T * g''(y) * f'(x) * R + g'(y) * f''(x) * R
1041  // U(x) = g''(y) * f'(x) * R
1042  // S(x) = g'(y)
1043 
1044  // compute sparsity pattern for A(x) = f'(x)^T * U(x)
1045  // 2DO: change a to use INTERNAL_SPARSE_SET
1046  bool transpose = true;
1047  vector< std::set<size_t> > a(n);
1048  a = f_.RevSparseJac(q, u, transpose);
1049 
1050  // Need sparsity pattern for H(x) = (S(x) * f(x))''(x) * R,
1051  // but use less efficient sparsity for f(x)''(x) * R so that
1052  // hes_sparse_set_ can be used every time this is needed.
1053  for(size_t i = 0; i < n; i++)
1054  { v[i].clear();
1055  local::sparse_list::const_iterator set_itr(
1056  hes_sparse_set_, i
1057  );
1058  size_t j = *set_itr;
1059  while( j < n )
1060  { std::set<size_t>::const_iterator itr_j;
1061  const std::set<size_t>& r_j( r[j] );
1062  for(itr_j = r_j.begin(); itr_j != r_j.end(); itr_j++)
1063  { size_t k = *itr_j;
1064  v[i].insert(k);
1065  }
1066  j = *(++set_itr);
1067  }
1068  }
1069  // compute sparsity pattern for V(x) = A(x) + H(x)
1070  std::set<size_t>::const_iterator itr;
1071  for(size_t i = 0; i < n; i++)
1072  { for(itr = a[i].begin(); itr != a[i].end(); itr++)
1073  { size_t j = *itr;
1074  CPPAD_ASSERT_UNKNOWN( j < q );
1075  v[i].insert(j);
1076  }
1077  }
1078 
1079  return ok;
1080  }
1081 };
1082 // ============================================================================
1083 # else
1084 // ============================================================================
1085 # define THREAD omp_get_thread_num()
1086 template <class Base>
1087 class checkpoint : public atomic_base<Base> {
1088 // ---------------------------------------------------------------------------
1089 private:
1090  /// same as option_enum in base class
1091  typedef typename atomic_base<Base>::option_enum option_enum;
1092  //
1093  /// AD function corresponding to this checkpoint object
1094  vector< ADFun<Base> > f_;
1095  //
1096  /// sparsity for entire Jacobian f(x)^{(1)} does not change so can cache it
1097  vector<local::sparse_list> jac_sparse_set_;
1098  vector<vectorBool> jac_sparse_bool_;
1099  //
1100  /// sparsity for sum_i f_i(x)^{(2)} does not change so can cache it
1101  vector<local::sparse_list> hes_sparse_set_;
1102  vector<vectorBool> hes_sparse_bool_;
1103  // ------------------------------------------------------------------------
1104  option_enum sparsity(void)
1105  { return static_cast< atomic_base<Base>* >(this)->sparsity(); }
1106  // ------------------------------------------------------------------------
1107  /// set jac_sparse_set_
1108  void set_jac_sparse_set(void)
1109  { CPPAD_ASSERT_UNKNOWN( jac_sparse_set_[THREAD].n_set() == 0 );
1110  bool transpose = false;
1111  bool dependency = true;
1112  size_t n = f_[THREAD].Domain();
1113  size_t m = f_[THREAD].Range();
1114  // Use the choice for forward / reverse that results in smaller
1115  // size for the sparsity pattern of all variables in the tape.
1116  if( n <= m )
1117  { local::sparse_list identity;
1118  identity.resize(n, n);
1119  for(size_t j = 0; j < n; j++)
1120  { // use add_element because only adding one element per set
1121  identity.add_element(j, j);
1122  }
1123  f_[THREAD].ForSparseJacCheckpoint(
1124  n, identity, transpose, dependency, jac_sparse_set_[THREAD]
1125  );
1126  f_[THREAD].size_forward_set(0);
1127  }
1128  else
1129  { local::sparse_list identity;
1130  identity.resize(m, m);
1131  for(size_t i = 0; i < m; i++)
1132  { // use add_element because only adding one element per set
1133  identity.add_element(i, i);
1134  }
1135  f_[THREAD].RevSparseJacCheckpoint(
1136  m, identity, transpose, dependency, jac_sparse_set_[THREAD]
1137  );
1138  }
1139  CPPAD_ASSERT_UNKNOWN( f_[THREAD].size_forward_set() == 0 );
1140  CPPAD_ASSERT_UNKNOWN( f_[THREAD].size_forward_bool() == 0 );
1141  }
1142  /// set jac_sparse_bool_
1143  void set_jac_sparse_bool(void)
1144  { CPPAD_ASSERT_UNKNOWN( jac_sparse_bool_[THREAD].size() == 0 );
1145  bool transpose = false;
1146  bool dependency = true;
1147  size_t n = f_[THREAD].Domain();
1148  size_t m = f_[THREAD].Range();
1149  // Use the choice for forward / reverse that results in smaller
1150  // size for the sparsity pattern of all variables in the tape.
1151  if( n <= m )
1152  { vectorBool identity(n * n);
1153  for(size_t j = 0; j < n; j++)
1154  { for(size_t i = 0; i < n; i++)
1155  identity[ i * n + j ] = (i == j);
1156  }
1157  jac_sparse_bool_[THREAD] = f_[THREAD].ForSparseJac(
1158  n, identity, transpose, dependency
1159  );
1160  f_[THREAD].size_forward_bool(0);
1161  }
1162  else
1163  { vectorBool identity(m * m);
1164  for(size_t j = 0; j < m; j++)
1165  { for(size_t i = 0; i < m; i++)
1166  identity[ i * m + j ] = (i == j);
1167  }
1168  jac_sparse_bool_[THREAD] = f_[THREAD].RevSparseJac(
1169  m, identity, transpose, dependency
1170  );
1171  }
1172  CPPAD_ASSERT_UNKNOWN( f_[THREAD].size_forward_bool() == 0 );
1173  CPPAD_ASSERT_UNKNOWN( f_[THREAD].size_forward_set() == 0 );
1174  }
1175  // ------------------------------------------------------------------------
1176  /// set hes_sparse_set_
1177  void set_hes_sparse_set(void)
1178  { CPPAD_ASSERT_UNKNOWN( hes_sparse_set_[THREAD].n_set() == 0 );
1179  size_t n = f_[THREAD].Domain();
1180  size_t m = f_[THREAD].Range();
1181  //
1182  // set version of sparsity for vector of all ones
1183  vector<bool> all_one(m);
1184  for(size_t i = 0; i < m; i++)
1185  all_one[i] = true;
1186 
1187  // set version of sparsity for n by n identity matrix
1188  local::sparse_list identity;
1189  identity.resize(n, n);
1190  for(size_t j = 0; j < n; j++)
1191  { // use add_element because only adding one element per set
1192  identity.add_element(j, j);
1193  }
1194 
1195  // compute sparsity pattern for H(x) = sum_i f_i(x)^{(2)}
1196  bool transpose = false;
1197  bool dependency = false;
1198  f_[THREAD].ForSparseJacCheckpoint(
1199  n, identity, transpose, dependency, jac_sparse_set_[THREAD]
1200  );
1201  f_[THREAD].RevSparseHesCheckpoint(
1202  n, all_one, transpose, hes_sparse_set_[THREAD]
1203  );
1204  CPPAD_ASSERT_UNKNOWN( hes_sparse_set_[THREAD].n_set() == n );
1205  CPPAD_ASSERT_UNKNOWN( hes_sparse_set_[THREAD].end() == n );
1206  //
1207  // drop the forward sparsity results from f_
1208  f_[THREAD].size_forward_set(0);
1209  }
1210  /// set hes_sparse_bool_
1211  void set_hes_sparse_bool(void)
1212  { CPPAD_ASSERT_UNKNOWN( hes_sparse_bool_[THREAD].size() == 0 );
1213  size_t n = f_[THREAD].Domain();
1214  size_t m = f_[THREAD].Range();
1215  //
1216  // set version of sparsity for vector of all ones
1217  vectorBool all_one(m);
1218  for(size_t i = 0; i < m; i++)
1219  all_one[i] = true;
1220 
1221  // set version of sparsity for n by n identity matrix
1222  vectorBool identity(n * n);
1223  for(size_t j = 0; j < n; j++)
1224  { for(size_t i = 0; i < n; i++)
1225  identity[ i * n + j ] = (i == j);
1226  }
1227 
1228  // compute sparsity pattern for H(x) = sum_i f_i(x)^{(2)}
1229  bool transpose = false;
1230  bool dependency = false;
1231  f_[THREAD].ForSparseJac(n, identity, transpose, dependency);
1232  hes_sparse_bool_[THREAD] = f_[THREAD].RevSparseHes(n, all_one, transpose);
1233  CPPAD_ASSERT_UNKNOWN( hes_sparse_bool_[THREAD].size() == n * n );
1234  //
1235  // drop the forward sparsity results from f_
1236  f_[THREAD].size_forward_bool(0);
1237  CPPAD_ASSERT_UNKNOWN( f_[THREAD].size_forward_bool() == 0 );
1238  CPPAD_ASSERT_UNKNOWN( f_[THREAD].size_forward_set() == 0 );
1239  }
1240  // ------------------------------------------------------------------------
1241  /*!
1242  Link from user_atomic to forward sparse Jacobian pack and bool
1243 
1244  \copydetails atomic_base::for_sparse_jac
1245  */
1246  template <class sparsity_type>
1247  bool for_sparse_jac(
1248  size_t q ,
1249  const sparsity_type& r ,
1250  sparsity_type& s ,
1251  const vector<Base>& x )
1252  { // during user sparsity calculations
1253  size_t m = f_[THREAD].Range();
1254  size_t n = f_[THREAD].Domain();
1255  if( jac_sparse_bool_[THREAD].size() == 0 )
1256  set_jac_sparse_bool();
1257  if( jac_sparse_set_[THREAD].n_set() != 0 )
1258  jac_sparse_set_[THREAD].resize(0, 0);
1259  CPPAD_ASSERT_UNKNOWN( jac_sparse_bool_[THREAD].size() == m * n );
1260  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_[THREAD].n_set() == 0 );
1261  CPPAD_ASSERT_UNKNOWN( r.size() == n * q );
1262  CPPAD_ASSERT_UNKNOWN( s.size() == m * q );
1263  //
1264  bool ok = true;
1265  for(size_t i = 0; i < m; i++)
1266  { for(size_t k = 0; k < q; k++)
1267  s[i * q + k] = false;
1268  }
1269  // sparsity for s = jac_sparse_bool_ * r
1270  for(size_t i = 0; i < m; i++)
1271  { for(size_t k = 0; k < q; k++)
1272  { // initialize sparsity for S(i,k)
1273  bool s_ik = false;
1274  // S(i,k) = sum_j J(i,j) * R(j,k)
1275  for(size_t j = 0; j < n; j++)
1276  { bool J_ij = jac_sparse_bool_[THREAD][ i * n + j];
1277  bool R_jk = r[j * q + k ];
1278  s_ik |= ( J_ij & R_jk );
1279  }
1280  s[i * q + k] = s_ik;
1281  }
1282  }
1283  return ok;
1284  }
1285  // ------------------------------------------------------------------------
1286  /*!
1287  Link from user_atomic to reverse sparse Jacobian pack and bool
1288 
1289  \copydetails atomic_base::rev_sparse_jac
1290  */
1291  template <class sparsity_type>
1292  bool rev_sparse_jac(
1293  size_t q ,
1294  const sparsity_type& rt ,
1295  sparsity_type& st ,
1296  const vector<Base>& x )
1297  { // during user sparsity calculations
1298  size_t m = f_[THREAD].Range();
1299  size_t n = f_[THREAD].Domain();
1300  if( jac_sparse_bool_[THREAD].size() == 0 )
1301  set_jac_sparse_bool();
1302  if( jac_sparse_set_[THREAD].n_set() != 0 )
1303  jac_sparse_set_[THREAD].resize(0, 0);
1304  CPPAD_ASSERT_UNKNOWN( jac_sparse_bool_[THREAD].size() == m * n );
1305  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_[THREAD].n_set() == 0 );
1306  CPPAD_ASSERT_UNKNOWN( rt.size() == m * q );
1307  CPPAD_ASSERT_UNKNOWN( st.size() == n * q );
1308  bool ok = true;
1309  //
1310  // S = R * J where J is jacobian
1311  for(size_t i = 0; i < q; i++)
1312  { for(size_t j = 0; j < n; j++)
1313  { // initialize sparsity for S(i,j)
1314  bool s_ij = false;
1315  // S(i,j) = sum_k R(i,k) * J(k,j)
1316  for(size_t k = 0; k < m; k++)
1317  { // sparsity for R(i, k)
1318  bool R_ik = rt[ k * q + i ];
1319  bool J_kj = jac_sparse_bool_[THREAD][ k * n + j ];
1320  s_ij |= (R_ik & J_kj);
1321  }
1322  // set sparsity for S^T
1323  st[ j * q + i ] = s_ij;
1324  }
1325  }
1326  return ok;
1327  }
1328  /*!
1329  Link from user_atomic to reverse sparse Hessian bools
1330 
1331  \copydetails atomic_base::rev_sparse_hes
1332  */
1333  template <class sparsity_type>
1334  bool rev_sparse_hes(
1335  const vector<bool>& vx ,
1336  const vector<bool>& s ,
1337  vector<bool>& t ,
1338  size_t q ,
1339  const sparsity_type& r ,
1340  const sparsity_type& u ,
1341  sparsity_type& v ,
1342  const vector<Base>& x )
1343  { size_t n = f_[THREAD].Domain();
1344 # ifndef NDEBUG
1345  size_t m = f_[THREAD].Range();
1346 # endif
1347  CPPAD_ASSERT_UNKNOWN( vx.size() == n );
1348  CPPAD_ASSERT_UNKNOWN( s.size() == m );
1349  CPPAD_ASSERT_UNKNOWN( t.size() == n );
1350  CPPAD_ASSERT_UNKNOWN( r.size() == n * q );
1351  CPPAD_ASSERT_UNKNOWN( u.size() == m * q );
1352  CPPAD_ASSERT_UNKNOWN( v.size() == n * q );
1353  //
1354  bool ok = true;
1355 
1356  // make sure hes_sparse_bool_ has been set
1357  if( hes_sparse_bool_[THREAD].size() == 0 )
1358  set_hes_sparse_bool();
1359  if( hes_sparse_set_[THREAD].n_set() != 0 )
1360  hes_sparse_set_[THREAD].resize(0, 0);
1361  CPPAD_ASSERT_UNKNOWN( hes_sparse_bool_[THREAD].size() == n * n );
1362  CPPAD_ASSERT_UNKNOWN( hes_sparse_set_[THREAD].n_set() == 0 );
1363 
1364 
1365  // compute sparsity pattern for T(x) = S(x) * f'(x)
1366  t = f_[THREAD].RevSparseJac(1, s);
1367 # ifndef NDEBUG
1368  for(size_t j = 0; j < n; j++)
1369  CPPAD_ASSERT_UNKNOWN( vx[j] || ! t[j] )
1370 # endif
1371 
1372  // V(x) = f'(x)^T * g''(y) * f'(x) * R + g'(y) * f''(x) * R
1373  // U(x) = g''(y) * f'(x) * R
1374  // S(x) = g'(y)
1375 
1376  // compute sparsity pattern for A(x) = f'(x)^T * U(x)
1377  bool transpose = true;
1378  sparsity_type a(n * q);
1379  a = f_[THREAD].RevSparseJac(q, u, transpose);
1380 
1381  // Need sparsity pattern for H(x) = (S(x) * f(x))''(x) * R,
1382  // but use less efficient sparsity for f(x)''(x) * R so that
1383  // hes_sparse_set_ can be used every time this is needed.
1384  for(size_t i = 0; i < n; i++)
1385  { for(size_t k = 0; k < q; k++)
1386  { // initialize sparsity pattern for H(i,k)
1387  bool h_ik = false;
1388  // H(i,k) = sum_j f''(i,j) * R(j,k)
1389  for(size_t j = 0; j < n; j++)
1390  { bool f_ij = hes_sparse_bool_[THREAD][i * n + j];
1391  bool r_jk = r[j * q + k];
1392  h_ik |= ( f_ij & r_jk );
1393  }
1394  // sparsity for H(i,k)
1395  v[i * q + k] = h_ik;
1396  }
1397  }
1398 
1399  // compute sparsity pattern for V(x) = A(x) + H(x)
1400  for(size_t i = 0; i < n; i++)
1401  { for(size_t k = 0; k < q; k++)
1402  // v[ i * q + k ] |= a[ i * q + k];
1403  v[ i * q + k ] = bool(v[ i * q + k]) | bool(a[ i * q + k]);
1404  }
1405  return ok;
1406  }
1407 public:
1408  // ------------------------------------------------------------------------
1409  /*!
1410  Constructor of a checkpoint object
1411 
1412  \param name [in]
1413  is the user's name for the AD version of this atomic operation.
1414 
1415  \param algo [in/out]
1416  user routine that computes AD function values
1417  (not const because state may change during evaluation).
1418 
1419  \param ax [in]
1420  argument value where algo operation sequence is taped.
1421 
1422  \param ay [out]
1423  function value at specified argument value.
1424 
1425  \param sparsity [in]
1426  what type of sparsity patterns are computed by this function,
1427  pack_sparsity_enum, bool_sparsity_enum, or set_sparsity_enum.
1428  The default value is unspecified
1429  (the current signature uses pack_sparsity_enum).
1429 
1430  \param optimize [in]
1431  should the operation sequence corresponding to the algo be optimized.
1432  The default value is true, but it is
1433  sometimes useful to use false for debugging purposes.
1434  */
1435  template <class Algo, class ADVector>
1436  checkpoint(
1437  const char* name ,
1438  Algo& algo ,
1439  const ADVector& ax ,
1440  ADVector& ay ,
1441  option_enum sparsity =
1442  atomic_base<Base>::pack_sparsity_enum ,
1443  bool optimize = true
1444  ) :
1445  atomic_base<Base>(name , sparsity) ,
1446  f_( omp_get_max_threads() ) ,
1447  jac_sparse_set_( omp_get_max_threads() ) ,
1448  jac_sparse_bool_( omp_get_max_threads() ) ,
1449  hes_sparse_set_( omp_get_max_threads() ) ,
1450  hes_sparse_bool_( omp_get_max_threads() )
1451  {
1452  CheckSimpleVector< CppAD::AD<Base> , ADVector>();
1453 
1454  // make a copy of ax because Independent modifies AD information
1455  ADVector x_tmp(ax);
1456  // declare x_tmp as the independent variables
1457  Independent(x_tmp);
1458  // record mapping from x_tmp to ay
1459  algo(x_tmp, ay);
1460  // create function f_ : x -> y
1461  f_[0].Dependent(ay);
1462  if( optimize )
1463  { // suppress checking for nan in f_ results
1464  // (see optimize documentation for atomic functions)
1465  f_[0].check_for_nan(false);
1466  //
1467  // now optimize
1468  f_[0].optimize();
1469  }
1470  // now disable checking of comparison operations
1471  // 2DO: add a debugging mode that checks for changes and aborts
1472  f_[0].compare_change_count(0);
1473  //
1474  // copy for use during multi-threading
1475  for(int i = 1; i < omp_get_max_threads(); ++i)
1476  f_[i] = f_[0];
1477  }
1478  // ------------------------------------------------------------------------
1479  /*!
1480  Implement the user call to <tt>atom_fun.size_var()</tt>.
1481  */
1482  size_t size_var(void)
1483  { return f_[THREAD].size_var(); }
1484  // ------------------------------------------------------------------------
1485  /*!
1486  Implement the user call to <tt>atom_fun(ax, ay)</tt>.
1487 
1488  \tparam ADVector
1489  A simple vector class with elements of type <code>AD<Base></code>.
1490 
1491  \param id
1492  optional parameter which must be zero if present.
1493 
1494  \param ax
1495  is the argument vector for this call,
1496  <tt>ax.size()</tt> determines the number of arguments.
1497 
1498  \param ay
1499  is the result vector for this call,
1500  <tt>ay.size()</tt> determines the number of results.
1501  */
1502  template <class ADVector>
1503  void operator()(const ADVector& ax, ADVector& ay, size_t id = 0)
1504  { CPPAD_ASSERT_KNOWN(
1505  id == 0,
1506  "checkpoint: id is non-zero in atom_fun(ax, ay, id)"
1507  );
1508  this->atomic_base<Base>::operator()(ax, ay, id);
1509  }
1510  // ------------------------------------------------------------------------
1511  /*!
1512  Link from user_atomic to forward mode
1513 
1514  \copydetails atomic_base::forward
1515  */
1516  virtual bool forward(
1517  size_t p ,
1518  size_t q ,
1519  const vector<bool>& vx ,
1520  vector<bool>& vy ,
1521  const vector<Base>& tx ,
1522  vector<Base>& ty )
1523  { size_t n = f_[THREAD].Domain();
1524  size_t m = f_[THREAD].Range();
1525  //
1526  CPPAD_ASSERT_UNKNOWN( f_[THREAD].size_var() > 0 );
1527  CPPAD_ASSERT_UNKNOWN( tx.size() % (q+1) == 0 );
1528  CPPAD_ASSERT_UNKNOWN( ty.size() % (q+1) == 0 );
1529  CPPAD_ASSERT_UNKNOWN( n == tx.size() / (q+1) );
1530  CPPAD_ASSERT_UNKNOWN( m == ty.size() / (q+1) );
1531  bool ok = true;
1532  //
1533  if( vx.size() == 0 )
1534  { // during user forward mode
1535  if( jac_sparse_set_[THREAD].n_set() != 0 )
1536  jac_sparse_set_[THREAD].resize(0,0);
1537  if( jac_sparse_bool_[THREAD].size() != 0 )
1538  jac_sparse_bool_[THREAD].clear();
1539  //
1540  if( hes_sparse_set_[THREAD].n_set() != 0 )
1541  hes_sparse_set_[THREAD].resize(0,0);
1542  if( hes_sparse_bool_[THREAD].size() != 0 )
1543  hes_sparse_bool_[THREAD].clear();
1544  }
1545  if( vx.size() > 0 )
1546  { // need Jacobian sparsity pattern to determine variable relation
1547  // during user recording using checkpoint functions
1548  if( sparsity() == atomic_base<Base>::set_sparsity_enum )
1549  { if( jac_sparse_set_[THREAD].n_set() == 0 )
1550  set_jac_sparse_set();
1551  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_[THREAD].n_set() == m );
1552  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_[THREAD].end() == n );
1553  //
1554  for(size_t i = 0; i < m; i++)
1555  { vy[i] = false;
1556  local::sparse_list::const_iterator set_itr(
1557  jac_sparse_set_[THREAD], i
1558  );
1559  size_t j = *set_itr;
1560  while(j < n )
1561  { // y[i] depends on the value of x[j]
1562  // cast avoids a Microsoft warning (should not be needed)
1563  vy[i] |= static_cast<bool>( vx[j] );
1564  j = *(++set_itr);
1565  }
1566  }
1567  }
1568  else
1569  { if( jac_sparse_set_[THREAD].n_set() != 0 )
1570  jac_sparse_set_[THREAD].resize(0, 0);
1571  if( jac_sparse_bool_[THREAD].size() == 0 )
1572  set_jac_sparse_bool();
1573  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_[THREAD].n_set() == 0 );
1574  CPPAD_ASSERT_UNKNOWN( jac_sparse_bool_[THREAD].size() == m * n );
1575  //
1576  for(size_t i = 0; i < m; i++)
1577  { vy[i] = false;
1578  for(size_t j = 0; j < n; j++)
1579  { if( jac_sparse_bool_[THREAD][ i * n + j ] )
1580  { // y[i] depends on the value of x[j]
1581  // cast avoids a Microsoft warning
1582  vy[i] |= static_cast<bool>( vx[j] );
1583  }
1584  }
1585  }
1586  }
1587  }
1588  // compute forward results for orders zero through q
1589  ty = f_[THREAD].Forward(q, tx);
1590 
1591  // no longer need the Taylor coefficients in f_
1592  // (have to reconstruct them every time)
1593  // Hold onto sparsity pattern because it is always good.
1594  size_t c = 0;
1595  size_t r = 0;
1596  f_[THREAD].capacity_order(c, r);
1597  return ok;
1598  }
1599  // ------------------------------------------------------------------------
1600  /*!
1601  Link from user_atomic to reverse mode
1602 
1603  \copydetails atomic_base::reverse
1604  */
1605  virtual bool reverse(
1606  size_t q ,
1607  const vector<Base>& tx ,
1608  const vector<Base>& ty ,
1609  vector<Base>& px ,
1610  const vector<Base>& py )
1611  {
1612 # ifndef NDEBUG
1613  size_t n = f_[THREAD].Domain();
1614  size_t m = f_[THREAD].Range();
1615 # endif
1616  CPPAD_ASSERT_UNKNOWN( n == tx.size() / (q+1) );
1617  CPPAD_ASSERT_UNKNOWN( m == ty.size() / (q+1) );
1618  CPPAD_ASSERT_UNKNOWN( f_[THREAD].size_var() > 0 );
1619  CPPAD_ASSERT_UNKNOWN( tx.size() % (q+1) == 0 );
1620  CPPAD_ASSERT_UNKNOWN( ty.size() % (q+1) == 0 );
1621  bool ok = true;
1622 
1623  // put proper forward mode coefficients in f_
1624 # ifdef NDEBUG
1625  // compute forward results for orders zero through q
1626  f_[THREAD].Forward(q, tx);
1627 # else
1628  CPPAD_ASSERT_UNKNOWN( px.size() == n * (q+1) );
1629  CPPAD_ASSERT_UNKNOWN( py.size() == m * (q+1) );
1630  size_t i, j, k;
1631  //
1632  // compute forward results for orders zero through q
1633  vector<Base> check_ty = f_[THREAD].Forward(q, tx);
1634  for(i = 0; i < m; i++)
1635  { for(k = 0; k <= q; k++)
1636  { j = i * (q+1) + k;
1637  CPPAD_ASSERT_UNKNOWN( check_ty[j] == ty[j] );
1638  }
1639  }
1640 # endif
1641  // now can run reverse mode
1642  px = f_[THREAD].Reverse(q+1, py);
1643 
1644  // no longer need the Taylor coefficients in f_
1645  // (have to reconstruct them every time)
1646  size_t c = 0;
1647  size_t r = 0;
1648  f_[THREAD].capacity_order(c, r);
1649  return ok;
1650  }
1651  // ------------------------------------------------------------------------
1652  /*!
1653  Link from user_atomic to forward sparse Jacobian pack
1654 
1655  \copydetails atomic_base::for_sparse_jac
1656  */
1657  virtual bool for_sparse_jac(
1658  size_t q ,
1659  const vectorBool& r ,
1660  vectorBool& s ,
1661  const vector<Base>& x )
1662  { return for_sparse_jac< vectorBool >(q, r, s, x);
1663  }
1664  /*!
1665  Link from user_atomic to forward sparse Jacobian bool
1666 
1667  \copydetails atomic_base::for_sparse_jac
1668  */
1669  virtual bool for_sparse_jac(
1670  size_t q ,
1671  const vector<bool>& r ,
1672  vector<bool>& s ,
1673  const vector<Base>& x )
1674  { return for_sparse_jac< vector<bool> >(q, r, s, x);
1675  }
1676  /*!
1677  Link from user_atomic to forward sparse Jacobian sets
1678 
1679  \copydetails atomic_base::for_sparse_jac
1680  */
1681  virtual bool for_sparse_jac(
1682  size_t q ,
1683  const vector< std::set<size_t> >& r ,
1684  vector< std::set<size_t> >& s ,
1685  const vector<Base>& x )
1686  { // during user sparsity calculations
1687  size_t m = f_[THREAD].Range();
1688  size_t n = f_[THREAD].Domain();
1689  if( jac_sparse_bool_[THREAD].size() != 0 )
1690  jac_sparse_bool_[THREAD].clear();
1691  if( jac_sparse_set_[THREAD].n_set() == 0 )
1692  set_jac_sparse_set();
1693  CPPAD_ASSERT_UNKNOWN( jac_sparse_bool_[THREAD].size() == 0 );
1694  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_[THREAD].n_set() == m );
1695  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_[THREAD].end() == n );
1696  CPPAD_ASSERT_UNKNOWN( r.size() == n );
1697  CPPAD_ASSERT_UNKNOWN( s.size() == m );
1698 
1699  bool ok = true;
1700  for(size_t i = 0; i < m; i++)
1701  s[i].clear();
1702 
1703  // sparsity for s = jac_sparse_set_ * r
1704  for(size_t i = 0; i < m; i++)
1705  { // compute row i of the return pattern
1706  local::sparse_list::const_iterator set_itr(
1707  jac_sparse_set_[THREAD], i
1708  );
1709  size_t j = *set_itr;
1710  while(j < n )
1711  { std::set<size_t>::const_iterator itr_j;
1712  const std::set<size_t>& r_j( r[j] );
1713  for(itr_j = r_j.begin(); itr_j != r_j.end(); itr_j++)
1714  { size_t k = *itr_j;
1715  CPPAD_ASSERT_UNKNOWN( k < q );
1716  s[i].insert(k);
1717  }
1718  j = *(++set_itr);
1719  }
1720  }
1721 
1722  return ok;
1723  }
1724  // ------------------------------------------------------------------------
1725  /*!
1726  Link from user_atomic to reverse sparse Jacobian pack
1727 
1728  \copydetails atomic_base::rev_sparse_jac
1729  */
1730  virtual bool rev_sparse_jac(
1731  size_t q ,
1732  const vectorBool& rt ,
1733  vectorBool& st ,
1734  const vector<Base>& x )
1735  { return rev_sparse_jac< vectorBool >(q, rt, st, x);
1736  }
1737  /*!
1738  Link from user_atomic to reverse sparse Jacobian bool
1739 
1740  \copydetails atomic_base::rev_sparse_jac
1741  */
1742  virtual bool rev_sparse_jac(
1743  size_t q ,
1744  const vector<bool>& rt ,
1745  vector<bool>& st ,
1746  const vector<Base>& x )
1747  { return rev_sparse_jac< vector<bool> >(q, rt, st, x);
1748  }
1749  /*!
1750  Link from user_atomic to reverse Jacobian sets
1751 
1752  \copydetails atomic_base::rev_sparse_jac
1753  */
1754  virtual bool rev_sparse_jac(
1755  size_t q ,
1756  const vector< std::set<size_t> >& rt ,
1757  vector< std::set<size_t> >& st ,
1758  const vector<Base>& x )
1759  { // during user sparsity calculations
1760  size_t m = f_[THREAD].Range();
1761  size_t n = f_[THREAD].Domain();
1762  if( jac_sparse_bool_[THREAD].size() != 0 )
1763  jac_sparse_bool_[THREAD].clear();
1764  if( jac_sparse_set_[THREAD].n_set() == 0 )
1765  set_jac_sparse_set();
1766  CPPAD_ASSERT_UNKNOWN( jac_sparse_bool_[THREAD].size() == 0 );
1767  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_[THREAD].n_set() == m );
1768  CPPAD_ASSERT_UNKNOWN( jac_sparse_set_[THREAD].end() == n );
1769  CPPAD_ASSERT_UNKNOWN( rt.size() == m );
1770  CPPAD_ASSERT_UNKNOWN( st.size() == n );
1771  //
1772  bool ok = true;
1773  //
1774  for(size_t j = 0; j < n; j++)
1775  st[j].clear();
1776  //
1777  // sparsity for s = r * jac_sparse_set_
1778  // s^T = jac_sparse_set_^T * r^T
1779  for(size_t i = 0; i < m; i++)
1780  { // i is the row index in r^T
1781  std::set<size_t>::const_iterator itr_i;
1782  const std::set<size_t>& r_i( rt[i] );
1783  for(itr_i = r_i.begin(); itr_i != r_i.end(); itr_i++)
1784  { // k is the column index in r^T
1785  size_t k = *itr_i;
1786  CPPAD_ASSERT_UNKNOWN( k < q );
1787  //
1788  // i is column index in jac_sparse_set^T
1789  local::sparse_list::const_iterator set_itr(
1790  jac_sparse_set_[THREAD], i
1791  );
1792  size_t j = *set_itr;
1793  while( j < n )
1794  { // j is row index in jac_sparse_set^T
1795  st[j].insert(k);
1796  j = *(++set_itr);
1797  }
1798  }
1799  }
1800 
1801  return ok;
1802  }
1803  // ------------------------------------------------------------------------
1804  /*!
1805  Link from user_atomic to reverse sparse Hessian pack
1806 
1807  \copydetails atomic_base::rev_sparse_hes
1808  */
1809  virtual bool rev_sparse_hes(
1810  const vector<bool>& vx ,
1811  const vector<bool>& s ,
1812  vector<bool>& t ,
1813  size_t q ,
1814  const vectorBool& r ,
1815  const vectorBool& u ,
1816  vectorBool& v ,
1817  const vector<Base>& x )
1818  { return rev_sparse_hes< vectorBool >(vx, s, t, q, r, u, v, x);
1819  }
1820  /*!
1821  Link from user_atomic to reverse sparse Hessian bool
1822 
1823  \copydetails atomic_base::rev_sparse_hes
1824  */
1825  virtual bool rev_sparse_hes(
1826  const vector<bool>& vx ,
1827  const vector<bool>& s ,
1828  vector<bool>& t ,
1829  size_t q ,
1830  const vector<bool>& r ,
1831  const vector<bool>& u ,
1832  vector<bool>& v ,
1833  const vector<Base>& x )
1834  { return rev_sparse_hes< vector<bool> >(vx, s, t, q, r, u, v, x);
1835  }
1836  /*!
1837  Link from user_atomic to reverse sparse Hessian sets
1838 
1839  \copydetails atomic_base::rev_sparse_hes
1840  */
1841  virtual bool rev_sparse_hes(
1842  const vector<bool>& vx ,
1843  const vector<bool>& s ,
1844  vector<bool>& t ,
1845  size_t q ,
1846  const vector< std::set<size_t> >& r ,
1847  const vector< std::set<size_t> >& u ,
1848  vector< std::set<size_t> >& v ,
1849  const vector<Base>& x )
1850  { size_t n = f_[THREAD].Domain();
1851 # ifndef NDEBUG
1852  size_t m = f_[THREAD].Range();
1853 # endif
1854  CPPAD_ASSERT_UNKNOWN( vx.size() == n );
1855  CPPAD_ASSERT_UNKNOWN( s.size() == m );
1856  CPPAD_ASSERT_UNKNOWN( t.size() == n );
1857  CPPAD_ASSERT_UNKNOWN( r.size() == n );
1858  CPPAD_ASSERT_UNKNOWN( u.size() == m );
1859  CPPAD_ASSERT_UNKNOWN( v.size() == n );
1860  //
1861  bool ok = true;
1862 
1863  // make sure hes_sparse_set_ has been set
1864  if( hes_sparse_bool_[THREAD].size() != 0 )
1865  hes_sparse_bool_[THREAD].clear();
1866  if( hes_sparse_set_[THREAD].n_set() == 0 )
1867  set_hes_sparse_set();
1868  CPPAD_ASSERT_UNKNOWN( hes_sparse_bool_[THREAD].size() == 0 );
1869  CPPAD_ASSERT_UNKNOWN( hes_sparse_set_[THREAD].n_set() == n );
1870  CPPAD_ASSERT_UNKNOWN( hes_sparse_set_[THREAD].end() == n );
1871 
1872  // compute sparsity pattern for T(x) = S(x) * f'(x)
1873  t = f_[THREAD].RevSparseJac(1, s);
1874 # ifndef NDEBUG
1875  for(size_t j = 0; j < n; j++)
1876  CPPAD_ASSERT_UNKNOWN( vx[j] || ! t[j] )
1877 # endif
1878 
1879  // V(x) = f'(x)^T * g''(y) * f'(x) * R + g'(y) * f''(x) * R
1880  // U(x) = g''(y) * f'(x) * R
1881  // S(x) = g'(y)
1882 
1883  // compute sparsity pattern for A(x) = f'(x)^T * U(x)
1884  // 2DO: change a to use INTERNAL_SPARSE_SET
1885  bool transpose = true;
1886  vector< std::set<size_t> > a(n);
1887  a = f_[THREAD].RevSparseJac(q, u, transpose);
1888 
1889  // Need sparsity pattern for H(x) = (S(x) * f(x))''(x) * R,
1890  // but use less efficient sparsity for f(x)''(x) * R so that
1891  // hes_sparse_set_ can be used every time this is needed.
1892  for(size_t i = 0; i < n; i++)
1893  { v[i].clear();
1894  local::sparse_list::const_iterator set_itr(
1895  hes_sparse_set_[THREAD], i
1896  );
1897  size_t j = *set_itr;
1898  while( j < n )
1899  { std::set<size_t>::const_iterator itr_j;
1900  const std::set<size_t>& r_j( r[j] );
1901  for(itr_j = r_j.begin(); itr_j != r_j.end(); itr_j++)
1902  { size_t k = *itr_j;
1903  v[i].insert(k);
1904  }
1905  j = *(++set_itr);
1906  }
1907  }
1908  // compute sparsity pattern for V(x) = A(x) + H(x)
1909  std::set<size_t>::const_iterator itr;
1910  for(size_t i = 0; i < n; i++)
1911  { for(itr = a[i].begin(); itr != a[i].end(); itr++)
1912  { size_t j = *itr;
1913  CPPAD_ASSERT_UNKNOWN( j < q );
1914  v[i].insert(j);
1915  }
1916  }
1917 
1918  return ok;
1919  }
1920 };
1921 # undef THREAD
1922 // ============================================================================
1923 # endif
1924 // ============================================================================
1925 # undef CPPAD_TMP_MULTI_THREAD
1926 
1927 } // END_CPPAD_NAMESPACE
1928 # endif
atomic_base< Base >::option_enum option_enum
same as option_enum in base class
Definition: checkpoint.hpp:263
ADFun< Base > f_
AD function corresponding to this checkpoint object.
Definition: checkpoint.hpp:266
#define CPPAD_ASSERT_KNOWN(exp, msg)
Check that exp is true, if not print msg and terminate execution.
size_t end(void) const
Fetch end for this vector of sets object.
virtual bool rev_sparse_hes(const vector< bool > &vx, const vector< bool > &s, vector< bool > &t, size_t q, const vector< bool > &r, const vector< bool > &u, vector< bool > &v, const vector< Base > &x)
Link from user_atomic to reverse sparse Hessian bool.
Definition: checkpoint.hpp:986
Vector of sets of positive integers, each set stored as a singly linked list.
Definition: sparse_list.hpp:35
Class used to hold function objects.
Definition: ad_fun.hpp:69
void add_element(size_t i, size_t element)
Add one element to a set.
Vector of sets of positive integers stored as singly linked lists with the element values strictly in...
virtual bool rev_sparse_jac(size_t q, const vector< bool > &rt, vector< bool > &st, const vector< Base > &x)
Link from user_atomic to reverse sparse Jacobian bool.
Definition: checkpoint.hpp:903
vectorBool hes_sparse_bool_
Definition: checkpoint.hpp:274
atomic_base(void)
Base class for atomic_user functions.
virtual bool for_sparse_jac(size_t q, const vector< std::set< size_t > > &r, vector< std::set< size_t > > &s, const vector< Base > &x)
Link from user_atomic to forward sparse Jacobian sets.
Definition: checkpoint.hpp:842
virtual bool for_sparse_jac(size_t q, const vectorBool &r, vectorBool &s, const vector< Base > &x)
Link from user_atomic to forward sparse Jacobian pack.
Definition: checkpoint.hpp:818
Vector of sets of positive integers stored as a packed array of bools.
void clear(void)
free memory and set number of elements to zero
Definition: vector.hpp:738
void set_jac_sparse_set(void)
set jac_sparse_set_
Definition: checkpoint.hpp:280
void Independent(VectorAD &x, size_t abort_op_index)
Declaration of independent variables.
virtual bool for_sparse_jac(size_t q, const vector< bool > &r, vector< bool > &s, const vector< Base > &x)
Link from user_atomic to forward sparse Jacobian bool.
Definition: checkpoint.hpp:830
bool for_sparse_jac(size_t q, const sparsity_type &r, sparsity_type &s, const vector< Base > &x)
Link from user_atomic to forward sparse Jacobian pack and bool.
Definition: checkpoint.hpp:419
size_t size(void) const
number of elements currently in this vector.
Definition: vector.hpp:387
void set_jac_sparse_bool(void)
set jac_sparse_bool_
Definition: checkpoint.hpp:315
void operator()(const ADVector &ax, ADVector &ay, size_t id=0)
Implement the user call to afun(ax, ay) and old_atomic call to afun(ax, ay, id).
bool rev_sparse_hes(const vector< bool > &vx, const vector< bool > &s, vector< bool > &t, size_t q, const sparsity_type &r, const sparsity_type &u, sparsity_type &v, const vector< Base > &x)
Link from user_atomic to reverse sparse Hessian bools.
Definition: checkpoint.hpp:506
virtual bool reverse(size_t q, const vector< Base > &tx, const vector< Base > &ty, vector< Base > &px, const vector< Base > &py)
Link from user_atomic to reverse mode.
Definition: checkpoint.hpp:766
virtual bool rev_sparse_jac(size_t q, const vector< std::set< size_t > > &rt, vector< std::set< size_t > > &st, const vector< Base > &x)
Link from user_atomic to reverse sparse Jacobian sets.
Definition: checkpoint.hpp:915
vectorBool jac_sparse_bool_
Definition: checkpoint.hpp:270
void set_hes_sparse_bool(void)
set hes_sparse_bool_
Definition: checkpoint.hpp:383
#define CPPAD_ASSERT_UNKNOWN(exp)
Check that exp is true; if not, terminate execution.
virtual bool forward(size_t p, size_t q, const vector< bool > &vx, vector< bool > &vy, const vector< Base > &tx, vector< Base > &ty)
Link from user_atomic to forward mode.
Definition: checkpoint.hpp:677
size_t size_var(void)
Implement the user call to atom_fun.size_var().
Definition: checkpoint.hpp:643
checkpoint(const char *name, Algo &algo, const ADVector &ax, ADVector &ay, option_enum sparsity=atomic_base< Base >::pack_sparsity_enum, bool optimize=true)
Constructor of a checkpoint object.
Definition: checkpoint.hpp:608
void resize(size_t n_set, size_t end)
Start a new vector of sets.
local::sparse_list jac_sparse_set_
sparsity pattern for the entire Jacobian f(x)^{(1)}; it does not change, so it is cached
Definition: checkpoint.hpp:269
static void clear(void)
Free all thread_alloc static memory held by atomic_base.
virtual bool rev_sparse_hes(const vector< bool > &vx, const vector< bool > &s, vector< bool > &t, size_t q, const vectorBool &r, const vectorBool &u, vectorBool &v, const vector< Base > &x)
Link from user_atomic to reverse sparse Hessian pack.
Definition: checkpoint.hpp:970
void set_hes_sparse_set(void)
set hes_sparse_set_
Definition: checkpoint.hpp:349
size_t n_set(void) const
Fetch n_set for a vector of sets object.
bool rev_sparse_jac(size_t q, const sparsity_type &rt, sparsity_type &st, const vector< Base > &x)
Link from user_atomic to reverse sparse Jacobian pack and bool.
Definition: checkpoint.hpp:464
local::sparse_list hes_sparse_set_
sparsity pattern for the Hessian sum_i f_i(x)^{(2)}; it does not change, so it is cached
Definition: checkpoint.hpp:273
virtual bool rev_sparse_hes(const vector< bool > &vx, const vector< bool > &s, vector< bool > &t, size_t q, const vector< std::set< size_t > > &r, const vector< std::set< size_t > > &u, vector< std::set< size_t > > &v, const vector< Base > &x)
Link from user_atomic to reverse sparse Hessian sets.
sparse_list_const_iterator const_iterator
declare a const iterator
const_iterator for one set of positive integers in a sparse_list object.
void operator()(const ADVector &ax, ADVector &ay, size_t id=0)
Implement the user call to atom_fun(ax, ay).
Definition: checkpoint.hpp:664
virtual bool rev_sparse_jac(size_t q, const vectorBool &rt, vectorBool &st, const vector< Base > &x)
Link from user_atomic to reverse sparse Jacobian pack.
Definition: checkpoint.hpp:891
size_t size(void) const
number of elements in this vector
Definition: vector.hpp:713
option_enum sparsity(void)
Definition: checkpoint.hpp:276