\newcommand{\T}{{\rm T}}  \newcommand{\R}{{\bf R}}
ckbs-0.20100325.0: A Constrained Kalman-Bucy Smoother

a: Purpose
Minimize the Kalman smoother residual sum of squares objective, subject to inequality constraints, for an 4: affine or general 3: nonlinear model.

b: MathML
This documentation is in XHTML + MathML. If you have trouble viewing it in your browser, see Viewing MathML on the web page
     
http://www.seanet.com/~bradbell/packages.htm

c: System Requirements
This constrained Kalman-Bucy smoother runs under both Matlab® (http://www.mathworks.com/) and Octave (http://www.gnu.org/software/octave/). While it will work under MS Windows, these download instructions are for the Unix operating system.

d: Download

d.a: Release Tarballs
You can download a tarball release of ckbs from the directory
     
http://www.coin-or.org/download/source/CoinBazaar
These releases are named
     ckbs-0.yyyymmdd.r.ext
where yyyy is the year, mm is the month, dd is the day, r is the release number, and ext is tgz or zip.

If ext is tgz the command
     tar -xvzf ckbs-0.yyyymmdd.r.tgz
will create the ckbs-0.yyyymmdd.r directory.

If ext is zip the command
     unzip ckbs-0.yyyymmdd.r.zip
will create the ckbs-0.yyyymmdd.r directory.

d.b: Using Subversion
For each release listed in the download directory (http://www.coin-or.org/download/source/CoinBazaar), you can use Subversion to download the release with the commands
     web="https://projects.coin-or.org/svn/CoinBazaar/projects/ckbs"
     svn checkout $web/releases/0.yyyymmdd.r ckbs-0.yyyymmdd.r
This will create the ckbs-0.yyyymmdd.r directory.

d.c: Latest Version
You can download the most recent version of ckbs (called the trunk) using the commands
     web="https://projects.coin-or.org/svn/CoinBazaar/projects/ckbs"
     svn checkout $web/trunk ckbs-0.20100325.0
This will create the ckbs-0.20100325.0 directory.

e: Test Install
Using Matlab® or Octave, change into the directory
     ckbs-0.yyyymmdd.r/example
and run the program all_ok.m. It will print the test results for all the components of ckbs. You can run some more tests by changing into the directory
     ckbs-0.yyyymmdd.r/test
and running the program all_ok.m, which produces similar output.
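
For example, using Octave or Matlab® from within the distribution directory, a test session might look like the following sketch (the directory names are those created by the download step above):
 
	cd example     % or: cd test
	all_ok         % prints the test results for the components of ckbs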

f: Documentation
If you download a release, the documentation for that release can be viewed in your browser by opening the file
     ckbs-0.yyyymmdd.r/doc/ckbs.xml

If you download the latest version (the trunk), a copy of the documentation for this version can be built by executing the commands
 
	cd ckbs-0.20100325.0 
	./build_doc.sh
You can then view the documentation in your browser by opening the file
     ckbs-0.20100325.0/doc/ckbs.xml

g: Contents
_contents: 1: Table of Contents
license: 2: Your License to use the ckbs Software
ckbs_nonlinear: 3: The Nonlinear Constrained Kalman-Bucy Smoother
ckbs_affine: 4: Constrained Affine Kalman Bucy Smoother
utility: 5: ckbs Utility Functions
all_ok.m: 6: Run All Correctness Tests
whatsnew: 7: Changes and Additions to ckbs
_reference: 8: Alphabetic Listing of Cross Reference Tags
_index: 9: Keyword Index
_external: 10: External Internet References

Input File: ckbs.omh
1: Table of Contents
ckbs-0.20100325.0: A Constrained Kalman-Bucy Smoother: : ckbs
    Table of Contents: 1: _contents
    Your License to use the ckbs Software: 2: license
    The Nonlinear Constrained Kalman-Bucy Smoother: 3: ckbs_nonlinear
        Simple ckbs_nonlinear Example and Test: 3.1: nonlinear_ok_simple.m
            Example Constraint Function: nonlinear_ok_simple_f: 3.1.1: nonlinear_ok_simple_f.m
            Example Transition Function: nonlinear_ok_simple_g: 3.1.2: nonlinear_ok_simple_g.m
            Example Measurement Function: nonlinear_ok_simple_h: 3.1.3: nonlinear_ok_simple_h.m
        ckbs_nonlinear Box Constrained Tracking Example and Test: 3.2: nonlinear_ok_box.m
            Example Constraint Function: nonlinear_ok_box_f: 3.2.1: nonlinear_ok_box_f.m
            Example Transition Function: nonlinear_ok_box_g: 3.2.2: nonlinear_ok_box_g.m
            Example Measurement Function: nonlinear_ok_box_h: 3.2.3: nonlinear_ok_box_h.m
            Example no Constraint Function: nonlinear_ok_box_nof: 3.2.4: nonlinear_ok_box_nof.m
        ckbs_nonlinear Sine Wave Constrained Tracking Example and Test: 3.3: nonlinear_ok_sin.m
            Example Constraint Function: nonlinear_ok_sin_f: 3.3.1: nonlinear_ok_sin_f.m
    Constrained Affine Kalman Bucy Smoother: 4: ckbs_affine
        ckbs_affine Box Constrained Smoothing Spline Example and Test: 4.1: affine_ok_box.m
    ckbs Utility Functions: 5: utility
        Packed Block Diagonal Matrix Times a Vector: 5.1: ckbs_blkdiag_mul
            blkdiag_mul Example and Test: 5.1.1: blkdiag_mul_ok.m
        Transpose of Packed Block Diagonal Matrix Times a Vector: 5.2: ckbs_blkdiag_mul_t
            blkdiag_mul_t Example and Test: 5.2.1: blkdiag_mul_t_ok.m
        Affine Residual Sum of Squares Objective: 5.3: ckbs_sumsq_obj
            ckbs_sumsq_obj Example and Test: 5.3.1: sumsq_obj_ok.m
        Affine Residual Sum of Squares Gradient: 5.4: ckbs_sumsq_grad
            ckbs_sumsq_grad Example and Test: 5.4.1: sumsq_grad_ok.m
        Affine Residual Sum of Squares Hessian: 5.5: ckbs_sumsq_hes
            ckbs_sumsq_hes Example and Test: 5.5.1: sumsq_hes_ok.m
        Symmetric Block Tridiagonal Algorithm: 5.6: ckbs_tridiag_solve
            ckbs_tridiag_solve Example and Test: 5.6.1: tridiag_solve_ok.m
        Affine Constrained Kalman Bucy Smoother Newton Step: 5.7: ckbs_newton_step
            ckbs_newton_step Example and Test: 5.7.1: newton_step_ok.m
        Compute Residual in Kuhn-Tucker Conditions: 5.8: ckbs_kuhn_tucker
            ckbs_kuhn_tucker Example and Test: 5.8.1: kuhn_tucker_ok.m
    Run All Correctness Tests: 6: all_ok.m
        Ensure Path is Set for Testing: 6.1: test_path.m
    Changes and Additions to ckbs: 7: whatsnew
    Alphabetic Listing of Cross Reference Tags: 8: _reference
    Keyword Index: 9: _index
    External Internet References: 10: _external

2: Your License to use the ckbs Software

              GNU GENERAL PUBLIC LICENSE
                 Version 2, June 1991

 Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                   Preamble

  The licenses for most software are designed to take away your
freedom to share and change it.  By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users.  This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it.  (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.)  You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.

  To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have.  You must make sure that they, too, receive or can get the
source code.  And you must show them these terms so they know their
rights.

  We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.

  Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software.  If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.

  Finally, any free program is threatened constantly by software
patents.  We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary.  To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.

  The precise terms and conditions for copying, distribution and
modification follow.

              GNU GENERAL PUBLIC LICENSE
   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License.  The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language.  (Hereinafter, translation is included without limitation in
the term "modification".)  Each licensee is addressed as "you".

Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope.  The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.

  1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.

You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.

  2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:

    a) You must cause the modified files to carry prominent notices
    stating that you changed the files and the date of any change.

    b) You must cause any work that you distribute or publish, that in
    whole or in part contains or is derived from the Program or any
    part thereof, to be licensed as a whole at no charge to all third
    parties under the terms of this License.

    c) If the modified program normally reads commands interactively
    when run, you must cause it, when started running for such
    interactive use in the most ordinary way, to print or display an
    announcement including an appropriate copyright notice and a
    notice that there is no warranty (or else, saying that you provide
    a warranty) and that users may redistribute the program under
    these conditions, and telling the user how to view a copy of this
    License.  (Exception: if the Program itself is interactive but
    does not normally print such an announcement, your work based on
    the Program is not required to print an announcement.)

These requirements apply to the modified work as a whole.  If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works.  But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.

Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.

In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.

  3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:

    a) Accompany it with the complete corresponding machine-readable
    source code, which must be distributed under the terms of Sections
    1 and 2 above on a medium customarily used for software interchange; or,

    b) Accompany it with a written offer, valid for at least three
    years, to give any third party, for a charge no more than your
    cost of physically performing source distribution, a complete
    machine-readable copy of the corresponding source code, to be
    distributed under the terms of Sections 1 and 2 above on a medium
    customarily used for software interchange; or,

    c) Accompany it with the information you received as to the offer
    to distribute corresponding source code.  (This alternative is
    allowed only for noncommercial distribution and only if you
    received the program in object code or executable form with such
    an offer, in accord with Subsection b above.)

The source code for a work means the preferred form of the work for
making modifications to it.  For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable.  However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.

If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.

  4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License.  Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.

  5. You are not required to accept this License, since you have not
signed it.  However, nothing else grants you permission to modify or
distribute the Program or its derivative works.  These actions are
prohibited by law if you do not accept this License.  Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.

  6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions.  You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.

  7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all.  For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.

If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.

It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices.  Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.

This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.

  8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded.  In such case, this License incorporates
the limitation as if written in the body of this License.

  9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time.  Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

Each version is given a distinguishing version number.  If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation.  If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.

  10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission.  For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this.  Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.

                   NO WARRANTY

  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.

  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.

               END OF TERMS AND CONDITIONS

         How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation; either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License along
    with this program; if not, write to the Free Software Foundation, Inc.,
    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

Also add information on how to contact you by electronic and paper mail.

If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:

    Gnomovision version 69, Copyright (C) year name of author
    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License.  Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary.  Here is a sample; alter the names:

  Yoyodyne, Inc., hereby disclaims all copyright interest in the program
  `Gnomovision' (which makes passes at compilers) written by James Hacker.

  <signature of Ty Coon>, 1 April 1989
  Ty Coon, President of Vice

This General Public License does not permit incorporating your program into
proprietary programs.  If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library.  If this is what you want to do, use the GNU Lesser General
Public License instead of this License.

Input File: omh/license.omh
3: The Nonlinear Constrained Kalman-Bucy Smoother

3.a: Syntax
[x_out, u_out, info] = ckbs_nonlinear(f_fun, g_fun, h_fun, ...
     max_itr, epsilon, x_in, z, qinv, rinv, level)

3.b: Purpose
This routine minimizes the Kalman-Bucy smoother residual sum of squares objective for general nonlinear model functions (in the case where the model functions are affine, the routine 4: ckbs_affine is more efficient).

3.c: Notation
The Kalman-Bucy smoother residual sum of squares is defined by  \[
\begin{array}{rcl}
S ( x_1 , \ldots , x_N ) & = & \sum_{k=1}^N S_k ( x_k , x_{k-1} ) \\
S_k ( x_k , x_{k-1} )    & = & 
\frac{1}{2} 
[ z_k - h_k ( x_k ) ]^\T * R_k^{-1} * [ z_k - h_k ( x_k ) ]
\\
& + &
\frac{1}{2} 
[ x_k - g_k ( x_{k-1} ) ]^\T * Q_k^{-1} * [ x_k - g_k ( x_{k-1} ) ]
\\
\end{array}
\] 
where the matrices  R_k and  Q_k are symmetric positive definite and  x_0 is the constant zero. Note that  g_1 ( x_0 ) is the initial state estimate and  Q_1 is the corresponding covariance.

3.d: Problem
Our constrained Kalman-Bucy smoother problem is  \[
\begin{array}{rll}
{\rm minimize} 
     & S( x_1 , \ldots , x_N ) 
     & {\rm w.r.t.} \; x_1 \in \R^n , \ldots , x_N \in \R^n
\\
{\rm subject \; to} 
     & f_k(x_k) \leq 0 
     & {\rm for} \; k = 1 , \ldots , N
\end{array}
\] 


3.e: First Order Conditions
A state sequence  ( x_1 , \ldots , x_N ) is considered a solution if there is a Lagrange multiplier sequence  ( u_1 , \ldots , u_N ) such that the following conditions are satisfied for  i = 1 , \ldots , \ell and  k = 1 , \ldots , N :  \[
\begin{array}{rcl}
f_k ( x_k )                                & \leq & \varepsilon  \\
0                                          & \leq & u_k          \\
| u_k^\T * \partial_k f_k ( x_k ) + \partial_k  S( x_1 , \ldots , x_N ) |
                                           & \leq & \varepsilon \\  
| f_k ( x_k )_i | * ( u_k )_i              & \leq & \varepsilon
\end{array}
\] 
Here the notation  \partial_k is used for the partial derivative with respect to  x_k and the notation  (u_k)_i denotes the i-th component of  u_k .

3.f: f_fun
The ckbs_nonlinear argument f_fun is a function that supports both of the following syntaxes
     [fk] = feval(f_fun, k, xk)
     [fk, Fk] = feval(f_fun, k, xk)
In the case where there are no constraints, one can use the following function for f_fun:
 
	function [fk, Fk] = f_fun(k, xk)
	n  = size(xk, 1);
	fk = -1;
	Fk =  zeros(1, n); 
	return
	end


3.f.a: k
The f_fun argument k is an integer with  1 \leq k \leq N .

3.f.b: xk
The f_fun argument xk is a column vector with length  n . It specifies the state vector at time index k  \[
     xk = x_k
\] 
.

3.f.c: fk
The f_fun result fk is a column vector with length  \ell and  \[
     fk = f_k ( x_k ) 
\] 


3.f.d: Fk
If the f_fun result Fk is present in the syntax, it is the  \ell \times n matrix given by  \[
     Fk = \partial_k f_k ( x_k ) 
\] 
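
As an illustration, a box constraint lo \leq x_k \leq up can be written in the required form f_k ( x_k ) \leq 0. The following is a minimal sketch (the bounds lo and up are hypothetical values; the packaged example 3.2.1: nonlinear_ok_box_f.m uses the same pattern for a single state component):
 
	function [fk, Fk] = box_constraint_f(k, xk)
		% hypothetical bounds, for illustration only
		lo = 0;
		up = 1;
		n  = size(xk, 1);
		fk = [ xk - up ; lo - xk ];    % fk <= 0  <=>  lo <= xk <= up, so ell = 2*n
		Fk = [ eye(n) ; -eye(n) ];     % Fk = partial of fk with respect to xk
		return
	end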


3.g: g_fun
The ckbs_nonlinear argument g_fun is a function that supports both of the following syntaxes
     [gk] = feval(g_fun, k, xk1)
     [gk, Gk] = feval(g_fun, k, xk1)

3.g.a: k
The g_fun argument k is an integer with  1 \leq k \leq N . The case  k = 1 serves to specify the initial state estimate.

3.g.b: xk1
The g_fun argument xk1 is a column vector with length  n . It specifies the state vector at time index k-1  \[
     xk1 = x_{k-1}
\] 
In the case  k = 1 , the value of xk1 does not matter.

3.g.c: gk
The g_fun result gk is a column vector with length  n and  \[
     gk = g_k ( x_{k-1} ) 
\] 
In the case  k = 1 , the value of gk is the initial state estimate at time index k.

3.g.d: Gk
If the g_fun result Gk is present in the syntax, it is the  n \times n matrix given by  \[
     Gk = \partial_{k-1} g_k ( x_{k-1} ) 
\] 
In the case  k = 1 , the value of Gk is the zero matrix; i.e.,  g_k does not depend on the value of  x_{k-1} .
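
As an illustration, the following minimal sketch implements a random walk transition model (the zero initial state estimate is a hypothetical choice; the packaged example 3.1.2: nonlinear_ok_simple_g.m follows the same pattern):
 
	function [gk, Gk] = random_walk_g(k, xk1)
		n = size(xk1, 1);
		if k == 1
			gk = zeros(n, 1);    % g_1 is the initial state estimate
			Gk = zeros(n, n);    % g_1 does not depend on x_0
		else
			gk = xk1;            % g_k ( x_{k-1} ) = x_{k-1}
			Gk = eye(n);         % Gk = partial_{k-1} g_k ( x_{k-1} )
		end
		return
	end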

3.h: h_fun
The ckbs_nonlinear argument h_fun is a function that supports both of the following syntaxes
     [hk] = feval(h_fun, k, xk)
     [hk, Hk] = feval(h_fun, k, xk)

3.h.a: k
The h_fun argument k is an integer with  1 \leq k \leq N .

3.h.b: xk
The h_fun argument xk is a column vector with length  n . It specifies the state vector at time index k  \[
     xk = x_k
\] 
.

3.h.c: hk
The h_fun result hk is a column vector with length  m and  \[
     hk = h_k ( x_k ) 
\] 


3.h.d: Hk
If the h_fun result Hk is present in the syntax, it is the  m \times n matrix given by  \[
     Hk = \partial_k h_k ( x_k ) 
\] 
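
As an illustration, the following minimal sketch measures the first state component directly, so that m = 1 (a hypothetical choice; compare the packaged example 3.1.3: nonlinear_ok_simple_h.m):
 
	function [hk, Hk] = first_component_h(k, xk)
		n  = size(xk, 1);
		hk = xk(1);                     % h_k ( x_k ) = first component of x_k
		Hk = [ 1 , zeros(1, n-1) ];     % Hk = partial_k h_k ( x_k )
		return
	end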


3.i: max_itr
The integer scalar max_itr specifies the maximum number of iterations of the algorithm to execute. It must be greater than or equal to zero. Note that if it is zero, the first row of the 3.r: info return value will still be computed. This can be useful for deciding what is a good value for the argument 3.j: epsilon .

3.j: epsilon
The ckbs_nonlinear argument epsilon is a positive scalar. It specifies the convergence criteria value; i.e.,  \[
     \varepsilon = epsilon
\] 
In some instances, ckbs_nonlinear will return after printing a warning without convergence; see 3.r: info .

3.k: x_in
The ckbs_nonlinear argument x_in is a two dimensional array with size  n \times N . It specifies a sequence of state values; i.e., for  k = 1 , \ldots , N  \[
     x\_in (:, k) = x_k
\] 
The closer the initial state sequence is to the solution, the faster, and more likely it is, that ckbs_nonlinear will converge. The initial state sequence need not be feasible; i.e., it is not necessary that  \[
     f_k ( x_k ) \leq 0
\] 
for all  k .

3.l: z
The ckbs_nonlinear argument z is a two dimensional array with size  m \times N . It specifies the sequence of measurement vectors; i.e., for  k = 1 , \ldots , N  \[
     z(:, k) = z_k
\]


3.m: qinv
The ckbs_nonlinear argument qinv is a three dimensional array with size  n \times n \times N . It specifies the inverse of the variance of the transition noise; i.e., for  k = 1 , \ldots , N  \[
     qinv(:,:, k) = Q_k^{-1}
\]
In the case  k = 1 , the value of  Q_1 is the variance of the initial state estimate (see 3.g: g_fun ).

3.n: rinv
The ckbs_nonlinear argument rinv is a three dimensional array with size  m \times m \times N . It specifies the inverse of the variance of the measurement noise; i.e., for  k = 1 , \ldots , N  \[
     rinv(:,:, k) = R_k^{-1}
\]
It is ok to signify a missing data value by setting the corresponding row and column of rinv to zero. This also enables one to interpolate the state vector  x_k to points where there are no measurements.
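
The following minimal sketch builds qinv and rinv with constant covariances and marks one measurement as missing (the names n, m, N, sigma_q, sigma_r, and k_miss are hypothetical and assumed to be already defined):
 
	qinv = zeros(n, n, N);
	rinv = zeros(m, m, N);
	for k = 1 : N
		qinv(:,:, k) = eye(n) / (sigma_q * sigma_q);   % Q_k^{-1}
		rinv(:,:, k) = eye(m) / (sigma_r * sigma_r);   % R_k^{-1}
	end
	% qinv(:,:, 1) corresponds to the initial state estimate (see 3.g: g_fun)
	%
	% signify that the measurement at time index k_miss is missing
	rinv(:,:, k_miss) = zeros(m, m);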

3.o: level
The integer scalar level specifies the level of tracing to do. If level == 0, no tracing is done. If level >= 1, the row index q and the corresponding row of info are printed as soon as they are computed. If level >= 2, a check of the derivative calculations is printed before the calculations start. In this case, control will return to the keyboard so that you can print values, continue, or abort the calculation.

3.p: x_out
The result x_out is a two dimensional array with size  n \times N . It contains an approximation for the optimal state sequence  ( x_1 , \ldots , x_N ) ; i.e., for  k = 1 , \ldots , N  \[
     x\_out(:, k) = x_k
\]


3.q: u_out
The result u_out is a two dimensional array with size  \ell \times N . It contains the Lagrange multiplier sequence corresponding to x_out; i.e., for  k = 1 , \ldots , N  \[
     u\_out(:, k) = u_k
\]
The pair x_out, u_out satisfy the 3.e: first order conditions .

3.r: info
The result info is a matrix with each row corresponding to an iteration of the algorithm. Note that ckbs_nonlinear has satisfied the convergence condition if and only if
     all( info(end, 1:3) <= epsilon )


We use  \{ x_k^q \} to denote the state vector sequence at the end of iteration  q-1 and  \{ u_k^q \} for the dual vector sequence (  u_k^q \geq 0 ).

3.r.a: max_f
The value
     max_f(q) = info(q, 1)
is a bound on the constraint functions at the end of iteration q-1. To be specific  \[
      max\_f(q) = \max_{i, k} [ f_k ( x_k^q )_i ]
\] 
where the maximum is with respect to  i = 1 , \ldots , \ell and  k = 1 , \ldots , N . Convergence requires this value to be less than or equal to epsilon.

3.r.b: max_grad
The value
     max_grad(q) = info(q, 2)
is a bound on the partial of the Lagrangian with respect to  x at the end of iteration q-1. To be specific  \[
max\_grad(q) = \max_{j, k} \left[ \left|
(u_k^q)^\T * \partial_k f_k(x_k^q) + \partial_k  S(x_1^q, \ldots ,x_N^q) 
\right| \right]_j
\] 
where the maximum is with respect to  j = 1 , \ldots , n and  k = 1 , \ldots , N . This value should converge to zero.

3.r.c: max_fu
The value
     max_fu(q) = info(q, 3)
is a bound on the complementarity conditions at the end of iteration q-1. To be specific  \[
max\_fu(q) = \max_{i, k} [ | f_k ( x_k )_i | * ( u_k )_i ]
\] 
where the maximum is with respect to  i = 1 , \ldots , \ell and  k = 1 , \ldots , N . This value should converge to zero.

3.r.d: obj_cur
The value
     obj_cur(q) = info(q, 4)
is the value of the objective function at the end of iteration  q-1 ; i.e.,  \[
     obj\_cur(q) = S( x_1^q , \ldots , x_N^q )
\] 
(Note that  info(1, 4) is the value of the objective corresponding to x_in.)

3.r.e: lambda
The value
     lambda(q) = info(q, 5)
is the line search step size used during iteration  q-1 of ckbs_nonlinear. If the problem is nearly affine (i.e., the affine approximation is accurate), this will be one. Much smaller step sizes indicate a highly non-affine problem (with the exception that info(1, 5) is always zero).

3.r.f: lam_aff
The value
     lam_aff(q) = info(q, 6)
is the minimum line search step size while solving the affine sub-problem that corresponds to iteration  q-1 of ckbs_nonlinear. If the sub-problem solution is working well, this will be one. Much smaller step sizes indicate the sub-problem solution is not working well.

3.r.g: Penalty Parameter
The exact penalty function  \[
     S( x_1 , \ldots , x_N ) 
     + \alpha \sum_{k=1}^N \sum_{i=1}^\ell \max [ f_k ( x_k )_i , 0 ]
\] 
is used to determine the line search step size. The value info(q, 7) is the penalty parameter  \alpha during iteration  q-1 of ckbs_nonlinear.
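
The following minimal sketch shows how the info return value can be inspected after a call to ckbs_nonlinear (x_out, u_out, and info are the return values described above, and epsilon is the convergence criteria passed in):
 
	converged = all( info(end, 1:3) <= epsilon );   % convergence condition
	n_itr     = size(info, 1);                      % one row of info per iteration
	obj_final = info(end, 4);                       % objective at the final iterate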

3.s: Example

3.s.a: Simple
The file 3.1: nonlinear_ok_simple.m contains a simple example and test of ckbs_nonlinear. It returns true if ckbs_nonlinear passes the tests and false otherwise.

3.s.b: Box Constraints
The file 3.2: nonlinear_ok_box.m contains an example and test of ckbs_nonlinear with box constraints and a nonlinear measurement function. It returns true if ckbs_nonlinear passes the tests and false otherwise.

3.s.c: Nonlinear Constraints
The file 3.3: nonlinear_ok_sin.m contains an example and test of ckbs_nonlinear with a sine wave constraint and a nonlinear measurement function. It returns true if ckbs_nonlinear passes the tests and false otherwise.
Input File: src/ckbs_nonlinear.m
3.1: Simple ckbs_nonlinear Example and Test

3.1.a: Call Back Functions
3.1.1: nonlinear_ok_simple_f.m Example Constraint Function: nonlinear_ok_simple_f
3.1.2: nonlinear_ok_simple_g.m Example Transition Function: nonlinear_ok_simple_g
3.1.3: nonlinear_ok_simple_h.m Example Measurement Function: nonlinear_ok_simple_h

3.1.b: Source Code
 
function [ok] = nonlinear_ok_simple()
% --------------------------------------------------------------------
max_itr = 20;      % maximum number of iterations
epsilon = 1e-5;    % convergence criteria
N       = 3;       % number of time points
ell     = 2;       % number of constraints
m       = 1;       % number of measurements per time point
n       = 3;       % number of state vector components per time point
sigma   = 1;       % variance of measurement noise
level   = 0;       % level of tracing in ckbs_nonlinear
%
% simulated true trajectory 
x_true  = ones(n, 1) * (1 : N);
%
% simulate measurement noise
e_true  = zeros(m, N);
%
% initial x vector
x_in = zeros(n, N);
%
% --------------------------------------------------------------------
if m > n 
	error('nonlinear_ok_simple: m > n');
end
if ell > n 
	error('nonlinear_ok_simple: ell > n');
end
%
global nonlinear_ok_simple_m
global nonlinear_ok_simple_ell
global nonlinear_ok_simple_N
%
nonlinear_ok_simple_m   = m;
nonlinear_ok_simple_ell = ell;
nonlinear_ok_simple_N   = N;
%
% measurement variances
rinv    = zeros(m, m, N);
qinv    = zeros(n, n, N);
qinvk   = eye(n) / (sigma * sigma);
rinvk   = eye(m) / (sigma * sigma);
z       = zeros(m, N);
for k = 1 : N
	xk           = x_true(:, k);
	ek           = e_true(:, k);
	hk           = nonlinear_ok_simple_h(k, xk);
	z(:, k)      = hk + ek;
	qinv(:,:, k) = qinvk;
	rinv(:,:, k) = rinvk;
end
%
% pass the maximum state value to f
global nonlinear_ok_simple_f_max
nonlinear_ok_simple_f_max = (N - .5) * ones(m, 1);
%
% ----------------------------------------------------------------------
f_fun = 'nonlinear_ok_simple_f';
g_fun = 'nonlinear_ok_simple_g';
h_fun = 'nonlinear_ok_simple_h';
[x_out, u_out, info] = ckbs_nonlinear( ...
	f_fun,    ...
	g_fun,    ...
	h_fun,    ...
	max_itr,  ...
	epsilon,  ...
	x_in,     ...
	z,        ...
	qinv,     ...
	rinv,     ...
	level     ...
);
% ----------------------------------------------------------------------
ok     = size(info, 1) <= max_itr;
f_out  = zeros(ell, N);
g_out  = zeros(n,   N);
h_out  = zeros(m,   N);
df_out = zeros(ell, n, N);
dg_out = zeros(n, n,   N);
dh_out = zeros(m, n,   N);
xk1    = zeros(n, 1);
for k = 1 : N
	xk       = x_out(:, k);
	uk       = u_out(:, k);
	[fk, Fk] = feval(f_fun, k, xk);
	[gk, Gk] = feval(g_fun, k, xk1);
	[hk, Hk] = feval(h_fun, k, xk);
	%
	ok       = ok & all( fk <= epsilon);
	ok       = ok & all( abs(fk) .* uk <= epsilon );
	%
	f_out(:, k) = fk - Fk * xk;
	g_out(:, k) = gk - Gk * xk1;
	h_out(:, k) = hk - Hk * xk;
	df_out(:,:, k) = Fk;
	dg_out(:,:, k) = Gk;
	dh_out(:,:, k) = Hk;
	xk1 = xk;
end
ok   = ok & all( all( u_out >= 0 ) );
grad  = ckbs_sumsq_grad(x_out, z, g_out, h_out, dg_out, dh_out, qinv, rinv);
for k = 1 : N
	uk  = u_out(:, k);
	Fk  = df_out(:,:, k);
	dk  = grad(:, k);
	%
	ok  = ok & all( abs(Fk' * uk + dk) <= epsilon );
end
return
end

Input File: example/nonlinear_ok_simple.m
3.1.1: Example Constraint Function: nonlinear_ok_simple_f

3.1.1.a: Source Code
 
function [fk, Fk] = nonlinear_ok_simple_f(k, xk)
	global nonlinear_ok_simple_N
	global nonlinear_ok_simple_ell
	%
        N   = nonlinear_ok_simple_N;
        ell = nonlinear_ok_simple_ell;
	n   = size(xk, 1);
	% xk(1:ell)^2 <= N - .5
	fk = xk(1:ell).^2 - (N - .5);
	Fk = zeros(ell, n);
	for i = 1 : ell
		Fk(i, i) = 2 * xk(i);
	end
	return
end

Input File: example/nonlinear_ok_simple_f.m
3.1.2: Example Transition Function: nonlinear_ok_simple_g

3.1.2.a: Source Code
 
function [gk, Gk] = nonlinear_ok_simple_g(k, xk1)
	n  = size(xk1, 1);
	if k == 1 
		Gk = zeros(n, n);
		gk = Gk * xk1 + 1;
	else
		Gk = eye(n);
		gk = Gk * xk1 + 1;
	end
	return
end

Input File: example/nonlinear_ok_simple_g.m
3.1.3: Example Measurement Function: nonlinear_ok_simple_h

3.1.3.a: Source Code
 
function [hk, Hk] = nonlinear_ok_simple_h(k, xk)
	global nonlinear_ok_simple_m
	%
	n        = size(xk, 1);
	m        = nonlinear_ok_simple_m;
	hk       = xk(1:m);
	Hk       = [ eye(m) , zeros(m, n-m) ];
	return
end

Input File: example/nonlinear_ok_simple_h.m
3.2: ckbs_nonlinear Box Constrained Tracking Example and Test

3.2.a: Syntax
[ok] = nonlinear_ok_box(draw_plot)

3.2.b: draw_plot
If this argument is true, a plot is drawn showing the results and the nonlinear_ok_box.out file is written for use with the program nonlinear_ok_box.r.

3.2.c: ok
If the return value ok is true, the test passed, otherwise the test failed.

3.2.d: State Vector
 x1 (t) first component of velocity
 x2 (t) first component of position
 x3 (t) second component of velocity
 x4 (t) second component of position

3.2.e: Measurement Vector
 z1 (t) range to the station s1 plus noise
 z2 (t) range to the station s2 plus noise

3.2.f: Constraint
 2 + x4\_min \geq x_4 (t) \geq x4\_min  .

3.2.g: Call Back Functions
3.2.1: nonlinear_ok_box_f.m Example Constraint Function: nonlinear_ok_box_f
3.2.2: nonlinear_ok_box_g.m Example Transition Function: nonlinear_ok_box_g
3.2.3: nonlinear_ok_box_h.m Example Measurement Function: nonlinear_ok_box_h
3.2.4: nonlinear_ok_box_nof.m Example no Constraint Function: nonlinear_ok_box_nof

3.2.h: Source Code
 
function [ok] = nonlinear_ok_box(draw_plot)
if nargin < 1 
	draw_plot = false;
end
% --------------------------------------------------------
% You can change these parameters
N       = 50;            % number of measurement time points
dt      = 2 * pi / N;    % time between measurement points
sigma   = .25;           % standard deviation of measurement noise
gamma   = 1;             % multiplier for transition variance
max_itr = 50;            % maximum number of iterations
epsilon = 1e-4;          % convergence criteria
x4_min  = .25;           % minimum value for x4_true
h_min   = 0;             % minimum horizontal value in plots
h_max   = 7;             % maximum horizontal value in plots
v_min   = 0;             % minimum vertical value in plots
v_max   = 2.5;           % maximum vertical value in plots 
s1      = [ 0 , 0 ];     % station one
s2      = [ 2*pi , 0 ];  % station two
%
% level of tracing during the optimization
if draw_plot
	level   = 1;
else
	level   = 0;
end
% ---------------------------------------------------------
%
% global variables used by nonlinear_ok_box_h
global nonlinear_ok_box_s1
global nonlinear_ok_box_s2
nonlinear_ok_box_s1 = s1;
nonlinear_ok_box_s2 = s2;
%
ok = true;
%
%  Define the problem
rand('seed', 1234);
%
% number of constraints per time point
ell   = 2;
%
% number of measurements per time point
m     = 2;
%
% number of state vector components per time point
n     = 4;
%
% simulate the true trajectory and measurement noise
t        =  (1 : N) * dt;
x2_true  = t; 
x4_max   = x4_min + 2;
x4_true  = x4_min + 1 - sin(t);
x1_true  = ones(1, N);
x3_true  = - cos(t);
x_true   = [ x1_true ; x2_true ; x3_true ; x4_true ];
v1_true  = sigma * randn(1, N);
v2_true  = sigma * randn(1, N);
%
% corresponding measurement values
rinv    = zeros(m, m, N);
z       = zeros(m, N);
rinvk   = eye(m) / (sigma * sigma);
for k = 1 : N
	x_k          = x_true(:, k);
	h_k          = nonlinear_ok_box_h(k, x_k);
	z(:, k)      = h_k + [ v1_true(k) ; v2_true(k) ];
	rinv(:,:, k) = rinvk;
end
%
% covariance for the transition noise
qk       = diag( [ dt, dt^3 / 3 , dt, dt^3 / 3 ] );
qk(1, 2) = dt^2 / 2;
qk(2, 1) = dt^2 / 2;
qk(3, 4) = dt^2 / 2;
qk(4, 3) = dt^2 / 2;
qk       = qk * gamma;
qinvk    = inv(qk);
qinv     = zeros(n, n, N);
for k = 2 : N
	qinv(:,:, k) = qinvk;
end
%
% covariance for the initial estimate
qinv(:,:,1) = eye(n) * 100 * gamma; 
%
% initial x vector
x_in = zeros(n, N);
for k = 1 : N
	x_in(:, k) = [ 0 ; 1 ; 0 ; 1 ];
end
%
% global variables used by nonlinear_ok_box_f
global nonlinear_ok_box_x4_min
global nonlinear_ok_box_x4_max
nonlinear_ok_box_x4_min = x4_min;
nonlinear_ok_box_x4_max = x4_max;
%
% global variables used by nonlinear_ok_box_g
global nonlinear_ok_box_x1
global nonlinear_ok_box_dt
nonlinear_ok_box_x1 = x_true(:, 1);
nonlinear_ok_box_dt = dt;
%
% ----------------------------------------------------------------------
f_fun = 'nonlinear_ok_box_f';
g_fun = 'nonlinear_ok_box_g';
h_fun = 'nonlinear_ok_box_h';
[x_out, u_out, info] = ckbs_nonlinear( ...
	f_fun,    ...
	g_fun,    ...
	h_fun,    ...
	max_itr,  ...
	epsilon,  ...
	x_in,     ...
	z,        ...
	qinv,     ...
	rinv,     ...
	level     ...
);
% ----------------------------------------------------------------------
ok         = ok & (size(info,1) <= max_itr);
f_out      = zeros(ell, N);
g_out      = zeros(n, N);
h_out      = zeros(m, N);
df_out     = zeros(ell, n, N);
dg_out     = zeros(n, n, N);
dh_out     = zeros(m, n, N);
xk1        = zeros(n, 1);
for k = 1 : N
	xk    = x_out(:, k);
	uk    = u_out(:, k);
	[fk, Fk]   = nonlinear_ok_box_f(k, xk);
	[gk, Gk]   = nonlinear_ok_box_g(k, xk1);
	[hk, Hk]   = nonlinear_ok_box_h(k, xk);
	%
	ok   = ok & all( fk <= epsilon );
	ok   = ok & all( abs(fk) .* uk <= epsilon );
	%
	df_out(:,:, k) = Fk;
	dg_out(:,:, k) = Gk;
	dh_out(:,:, k) = Hk;
        f_out(:, k)    = fk - Fk * xk;
	g_out(:, k)    = gk - Gk * xk1;
	h_out(:, k)    = hk - Hk * xk;
	xk1 = xk;
end
ok   = ok & all( all( u_out >= 0 ) );
%
d_out = ckbs_sumsq_grad(x_out, z, g_out, h_out, dg_out, dh_out, qinv, rinv);
for k = 1 : N
	uk = u_out(:, k);
	Fk = df_out(:,:, k);
	dk = d_out(:, k);
	%
	ok = ok & (max ( abs( Fk' * uk + dk ) ) <= epsilon);
end
if ~ ok
	keyboard
end
if draw_plot
	figure(1);
	clf
	hold on
	plot(x_true(2,:)', x_true(4,:)', 'b-' );
	plot(x_out(2,:)', x_out(4,:)', 'g-' );
	plot(x_true(2,:)', x4_min * ones(N,1), 'r-');
	plot(x_true(2,:)', x4_max * ones(N,1), 'r-');
	axis([h_min, h_max, v_min, v_max]);
	title('Constrained: blue=truth, green=estimate, red=constraint');
	hold off
	%
	% constrained estimate
	x_con = x_out;
end
% ----------------------------------------------------------------------
% Unconstrained case
f_fun = 'nonlinear_ok_box_nof';
[x_out, u_out, info] = ckbs_nonlinear( ...
	f_fun,    ...
	g_fun,    ...
	h_fun,    ...
	max_itr,  ...
	epsilon,  ...
	x_in,     ...
	z,        ...
	qinv,     ...
	rinv,     ...
	level     ...
);
% ----------------------------------------------------------------------
ok   = ok & (size(info,1) <= max_itr);
xk1  = zeros(n, 1);
for k = 1 : N
	xk    = x_out(:, k);
	[gk, Gk]   = nonlinear_ok_box_g(k, xk1);
	[hk, Hk]   = nonlinear_ok_box_h(k, xk);
	%
	dg_out(:,:, k) = Gk;
	dh_out(:,:, k) = Hk;
	g_out(:, k)    = gk - Gk * xk1;
	h_out(:, k)    = hk - Hk * xk;
	xk1 = xk;
end
d_out  = ckbs_sumsq_grad(x_out, z, g_out, h_out, dg_out, dh_out, qinv, rinv);
ok     = ok & (max( max( abs(d_out) ) ) <= epsilon);
if draw_plot
	figure(2);
	clf
	hold on
	plot(x_true(2,:)', x_true(4,:)', 'b-' );
	plot(x_out(2,:)', x_out(4,:)', 'g-' );
	plot(x_true(2,:)', x4_min * ones(1,N), 'r-');
	plot(x_true(2,:)', x4_max * ones(1,N), 'r-');
	axis([h_min, h_max, v_min, v_max]);
	title('Unconstrained: blue=truth, green=estimate, red=constraint');
	hold off
	%
	% unconstrained estimate
	x_free = x_out;
	%
	% write out constrained and unconstrained results
	[fid, msg] = fopen('nonlinear_ok_box.out', 'wt');
	if size(msg, 2) > 0 
		disp(['nonlinear_ok: ', msg]);
	end
	%                      123456789012345678901234'
	heading =             ' x2_true  x2_con x2_free'  ;
	heading = [ heading , ' x4_true  x4_con x4_free' ];
	heading = [ heading , '      z1      z2\n'       ];
	fprintf(fid, heading);
	for k = 1 : N
		fprintf(fid,'%8.3f%8.3f%8.3f%8.3f%8.3f%8.3f%8.3f%8.3f\n', ...
			x_true(2,k), x_con(2,k), x_free(2,k), ...
			x_true(4,k), x_con(4,k), x_free(4,k), ...
			z(1,k), z(2,k) ...
		);
	end
	fclose(fid);
end
return
end

Input File: example/nonlinear_ok_box.m
3.2.1: Example Constraint Function: nonlinear_ok_box_f

3.2.1.a: Source Code
 
function [fk, Fk] = nonlinear_ok_box_f(k, xk)
	global nonlinear_ok_box_x4_min
	global nonlinear_ok_box_x4_max
	%
	% constrained so that x4_min <= x4(t) <= x4_max
	x4_min = nonlinear_ok_box_x4_min;
	x4_max = nonlinear_ok_box_x4_max;
        ell    = 2;
	n      = size(xk, 1);
	% 
	fk = [ ...
		xk(4) - x4_max;
		x4_min - xk(4) ...
	];
	Fk = zeros(ell, n);
	Fk(1, 4) = 1;
	Fk(2, 4) = -1;
	return
end

Input File: example/nonlinear_ok_box_f.m
3.2.2: Example Transition Function: nonlinear_ok_box_g

3.2.2.a: Source Code
 
function [gk, Gk] = nonlinear_ok_box_g(k, xk1)
	global nonlinear_ok_box_x1
	global nonlinear_ok_box_dt
	if k == 1 
		Gk = zeros(4, 4);
		gk = nonlinear_ok_box_x1;
		return;
	end 
	dt = nonlinear_ok_box_dt;
	Gk = [ ... 
		1  0  0  0 
		dt 1  0  0
		0  0  1  0
		0  0  dt 1 ...
	];
	gk = Gk * xk1;
	return
end

Input File: example/nonlinear_ok_box_g.m
3.2.3: Example Measurement Function: nonlinear_ok_box_h

3.2.3.a: Source Code
 
function [hk, Hk] = nonlinear_ok_box_h(k, xk)
	global nonlinear_ok_box_s1
	global nonlinear_ok_box_s2
	s1  = nonlinear_ok_box_s1;
	s2  = nonlinear_ok_box_s2;
	r1  = sqrt( (xk(2) - s1(1))^2 + (xk(4) - s1(2))^2 );
	r2  = sqrt( (xk(2) - s2(1))^2 + (xk(4) - s2(2))^2 );
	hk = [ r1 ; r2 ];
	dr1 = [ 0, xk(2) - s1(1), 0, xk(4) - s1(2) ] / r1;
	dr2 = [ 0, xk(2) - s2(1), 0, xk(4) - s2(2) ] / r2;
	Hk = [ dr1 ; dr2 ];
	return
end

Input File: example/nonlinear_ok_box_h.m
3.2.4: Example no Constraint Function: nonlinear_ok_box_nof

3.2.4.a: Source Code
 
function [fk, Fk] = nonlinear_ok_box_nof(k, xk)
	% no constraint case
	n  = size(xk, 1);
	fk = -1;
	Fk = zeros(1, n);
	return
end

Input File: example/nonlinear_ok_box_nof.m
3.3: ckbs_nonlinear Sine Wave Constrained Tracking Example and Test

3.3.a: Syntax
[ok] = nonlinear_ok_sin(draw_plot)

3.3.b: draw_plot
If this argument is true, a plot is drawn showing the results and the nonlinear_ok_sin.out file is written for use with the program test/nonlinear_ok_sin.r.

3.3.c: ok
If the return value ok is true, the test passed, otherwise the test failed.

3.3.d: State Vector
 x1 (t) first component of velocity
 x2 (t) first component of position
 x3 (t) second component of velocity
 x4 (t) second component of position

3.3.e: Measurement Vector
 z1 (t) range to station s1 plus noise
 z2 (t) range to station s2 plus noise

3.3.f: Constraint
  x4(t) \geq 1 - sin( x2(t) )

3.3.g: Call Back Functions
3.3.1: nonlinear_ok_sin_f.m Example Constraint Function: nonlinear_ok_sin_f
3.2.2: nonlinear_ok_box_g.m Example Transition Function: nonlinear_ok_box_g
3.2.3: nonlinear_ok_box_h.m Example Measurement Function: nonlinear_ok_box_h
3.2.4: nonlinear_ok_box_nof.m Example no Constraint Function: nonlinear_ok_box_nof

3.3.h: Source Code
 
function [ok] = nonlinear_ok_sin(draw_plot)
if nargin < 1 
	draw_plot = false;
end
% --------------------------------------------------------
% You can change these parameters
N       = 50;            % number of measurement time points
dt      = 2 * pi / N;    % time between measurement points
sigma   = .25;           % standard deviation of measurement noise
gamma   = 1.;            % multiplier for transition variance
max_itr = 50;            % maximum number of iterations
epsilon = 1e-4;          % convergence criteria
x4_min  = .25;           % minimum value for x4 constraint
h_min   = 0;             % minimum horizontal value in plots
h_max   = 7;             % maximum horizontal value in plots
v_min   = 0;             % minimum vertical value in plots
v_max   = 2.5;           % maximum vertical value in plots 
x4_dist = .05;           % distance true path is from the constraint
s1      = [ 0 , 0 ];     % station one
s2      = [ 2*pi , 0 ];  % station two
%
% level of tracing during the optimization
if draw_plot
	level   = 1;
else
	level   = 0;
end
% ---------------------------------------------------------
% global variables used by nonlinear_ok_box_h
global nonlinear_ok_box_s1
global nonlinear_ok_box_s2
nonlinear_ok_box_s1 = s1;
nonlinear_ok_box_s2 = s2;
%
% global variable used by nonlinear_ok_sin_f
global nonlinear_ok_sin_x4_min
nonlinear_ok_sin_x4_min = x4_min;
%
ok = true;
%
%  Define the problem
rand('seed', 234);
%
% number of constraints per time point
ell   = 1;
%
% number of measurements per time point
m     = 2;
%
% number of state vector components per time point
n     = 4;
%
% simulate the true trajectory and measurement noise
t        =  (1 : N) * dt;
x1_true  = ones(1, N);
x2_true  = t; 
x3_true  = - cos(t);
x4_true  = x4_dist + 1 - sin(t) + x4_min;
x_true   = [ x1_true ; x2_true ; x3_true ; x4_true ];
v1_true  = sigma * randn(1, N);
v2_true  = sigma * randn(1, N);
%
% corresponding measurement values
rinv    = zeros(m, m, N);
z       = zeros(m, N);
rinvk   = eye(m) / (sigma * sigma);
for k = 1 : N
	x_k          = x_true(:, k);
	h_k          = nonlinear_ok_box_h(k, x_k);
	z(:, k)      = h_k + [ v1_true(k) ; v2_true(k) ];
	rinv(:,:, k) = rinvk;
end
%
% covariance for the transition noise
qk       = diag( [ dt, dt^3 / 3 , dt, dt^3 / 3 ] );
qk(1, 2) = dt^2 / 2;
qk(2, 1) = dt^2 / 2;
qk(3, 4) = dt^2 / 2;
qk(4, 3) = dt^2 / 2;
qk       = qk * gamma;
qinvk    = inv(qk);
qinv     = zeros(n, n, N);
for k = 2 : N
	qinv(:,:, k) = qinvk;
end
%
% covariance for the initial estimate
qinv(:,:,1) = eye(n) * 100 * gamma; 
%
% initial x vector
x_in       = zeros(n, N);
x_in(4, :) = 1;
%
% nonlinear_ok_sin use same g function as nonlinear_box_ok
% global variables used by nonlinear_ok_box_g
global nonlinear_ok_box_x1
global nonlinear_ok_box_dt
nonlinear_ok_box_x1 = x_true(:, 1);
nonlinear_ok_box_dt = dt;
%
% ----------------------------------------------------------------------
f_fun = 'nonlinear_ok_sin_f';  % special nonlinear_ok_sin constraints
g_fun = 'nonlinear_ok_box_g';  % same transition function as nonlinear_ok_box
h_fun = 'nonlinear_ok_box_h';  % same measurement function as nonlinear_ok_box
[x_out, u_out, info] = ckbs_nonlinear( ...
	f_fun,    ...
	g_fun,    ...
	h_fun,    ...
	max_itr,  ...
	epsilon,  ...
	x_in,     ...
	z,        ...
	qinv,     ...
	rinv,     ...
	level     ...
);
% ----------------------------------------------------------------------
ok         = ok & (size(info,1) <= max_itr);
f_out      = zeros(ell, N);
g_out      = zeros(n, N);
h_out      = zeros(m, N);
df_out     = zeros(ell, n, N);
dg_out     = zeros(n, n, N);
dh_out     = zeros(m, n, N);
xk1        = zeros(n, 1);
for k = 1 : N
	xk    = x_out(:, k);
	uk    = u_out(:, k);
	[fk, Fk]   = nonlinear_ok_sin_f(k, xk);
	[gk, Gk]   = nonlinear_ok_box_g(k, xk1);
	[hk, Hk]   = nonlinear_ok_box_h(k, xk);
	%
	ok   = ok & all( fk <= epsilon );
	ok   = ok & all( abs(fk) .* uk <= epsilon );
	%
	df_out(:,:, k) = Fk;
	dg_out(:,:, k) = Gk;
	dh_out(:,:, k) = Hk;
        f_out(:, k)    = fk - Fk * xk;
	g_out(:, k)    = gk - Gk * xk1;
	h_out(:, k)    = hk - Hk * xk;
	xk1 = xk;
end
ok   = ok & all( all( u_out >= 0 ) );
%
d_out = ckbs_sumsq_grad(x_out, z, g_out, h_out, dg_out, dh_out, qinv, rinv);
for k = 1 : N
	uk = u_out(:, k);
	Fk = df_out(:,:, k);
	dk = d_out(:, k);
	%
	ok = ok & (max ( abs( Fk' * uk + dk ) ) <= epsilon);
end
if ~ ok
	keyboard
end
if draw_plot
	figure(1);
	clf
	hold on
	plot(x_true(2,:)', x_true(4,:)', 'b-' );
	plot(x_out(2,:)', x_out(4,:)', 'g-' );
	plot(x_true(2,:)', 1 - sin(x_true(2,:))' + x4_min, 'r-');
	axis([h_min, h_max, v_min, v_max]);
	title('Constrained: blue=truth, green=estimate, red=constraint');
	hold off
	%
	% constrained estimate
	x_con = x_out;
end
% ----------------------------------------------------------------------
% Unconstrained case
f_fun = 'nonlinear_ok_box_nof';  % same function as used by nonlinear_ok_box
[x_out, u_out, info] = ckbs_nonlinear( ...
	f_fun,    ...
	g_fun,    ...
	h_fun,    ...
	max_itr,  ...
	epsilon,  ...
	x_in,     ...
	z,        ...
	qinv,     ...
	rinv,     ...
	level     ...
);
% ----------------------------------------------------------------------
ok   = ok & (size(info,1) <= max_itr);
xk1  = zeros(n, 1);
for k = 1 : N
	xk    = x_out(:, k);
	[gk, Gk]   = nonlinear_ok_box_g(k, xk1);
	[hk, Hk]   = nonlinear_ok_box_h(k, xk);
	%
	dg_out(:,:, k) = Gk;
	dh_out(:,:, k) = Hk;
	g_out(:, k)    = gk - Gk * xk1;
	h_out(:, k)    = hk - Hk * xk;
	xk1 = xk;
end
d_out  = ckbs_sumsq_grad(x_out, z, g_out, h_out, dg_out, dh_out, qinv, rinv);
ok     = ok & (max( max( abs(d_out) ) ) <= epsilon);
if draw_plot
	figure(2);
	clf
	hold on
	plot(x_true(2,:)', x_true(4,:)', 'b-' );
	plot(x_out(2,:)', x_out(4,:)', 'g-' );
	plot(x_true(2,:)', 1 - sin(x_true(2,:))' + x4_min, 'r-');
	axis([h_min, h_max, v_min, v_max]);
	title('Unconstrained: blue=truth, green=estimate, red=constraint');
	hold off
	%
	% unconstrained estimate
	x_free = x_out;
	%
	% write out constrained and unconstrained results
	[fid, msg] = fopen('nonlinear_ok_sin.out', 'wt');
	if size(msg, 2) > 0 
		disp(['nonlinear_ok: ', msg]);
	end
	%                      123456789012345678901234'
	heading =             ' x2_true  x2_con x2_free'  ;
	heading = [ heading , ' x4_true  x4_con x4_free' ];
	heading = [ heading , '      z1      z2\n'       ];
	fprintf(fid, heading);
	for k = 1 : N
		fprintf(fid,'%8.3f%8.3f%8.3f%8.3f%8.3f%8.3f%8.3f%8.3f\n', ...
			x_true(2,k), x_con(2,k), x_free(2,k), ...
			x_true(4,k), x_con(4,k), x_free(4,k), ...
			z(1,k), z(2,k) ...
		);
	end
	fclose(fid);
end
return
end

Input File: example/nonlinear_ok_sin.m
3.3.1: Example Constraint Function: nonlinear_ok_sin_f

3.3.1.a: Source Code
 
function [fk, Fk] = nonlinear_ok_sin_f(k, xk)
	global nonlinear_ok_sin_x4_min
	x4_min = nonlinear_ok_sin_x4_min;
	%
	% constrained so that x(4) >= 1 - sin( x(1) ) + x4_min
        ell      = 1;
	n        = size(xk, 1);
	% 
	fk       = 1 - sin( xk(2) ) + x4_min - xk(4);
	Fk       = zeros(ell, n);
	Fk(1, 2) = - cos( xk(2) );
	Fk(1, 4) = -1;
	return
end

Input File: example/nonlinear_ok_sin_f.m
4: Constrained Affine Kalman Bucy Smoother

4.a: Syntax
[xOut, uOut, info] = ckbs_affine(max_itr, epsilon, z, b, g, h, db, dg, dh, qinv, rinv)

4.b: Purpose
This routine minimizes the affine Kalman-Bucy smoother residual sum of squares objective subject to an affine inequality constraint.

4.c: Notation
The affine Kalman-Bucy smoother residual sum of squares is defined by  \[
\begin{array}{rcl}
S ( x_1 , \ldots , x_N ) & = & \sum_{k=1}^N S_k ( x_k , x_{k-1} ) \\
S_k ( x_k , x_{k-1} )    & = & 
\frac{1}{2} 
( z_k - h_k - H_k * x_k )^\T * R_k^{-1} * ( z_k - h_k - H_k * x_k )
\\
& + &
\frac{1}{2} 
( x_k - g_k - G_k * x_{k-1} )^\T * Q_k^{-1} * ( x_k - g_k - G_k * x_{k-1} )
\\
\end{array}
\] 
where the matrices  R_k and  Q_k are symmetric positive definite and  x_0 is the constant zero. Note that  g_1 is the initial state estimate and  Q_1 is the corresponding covariance.
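The objective above is exactly what the utility routine 5.3: ckbs_sumsq_obj evaluates. A minimal sketch (not part of the packaged examples), assuming x, z, g, h, dg, dh, qinv, and rinv have the sizes described below:
 
% value of S( x_1 , ... , x_N ) at the state sequence stored in x
obj = ckbs_sumsq_obj(x, z, g, h, dg, dh, qinv, rinv);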

4.d: Problem
The affine constrained Kalman-Bucy smoother problem is  \[
\begin{array}{rll}
{\rm minimize} 
     & S( x_1 , \ldots , x_N ) 
     & {\rm w.r.t.} \; x_1 \in \R^n , \ldots , x_N \in \R^n
\\
{\rm subject \; to} 
     & b_k + B_k * x_k  \leq 0 
     & {\rm for} \; k = 1 , \ldots , N
\end{array}
\] 


4.e: First Order Conditions
A state sequence  ( x_1 , \ldots , x_N ) is considered a solution if there is a Lagrange multiplier sequence  ( u_1 , \ldots , u_N ) such that the following conditions are satisfied.  \[
\begin{array}{rcl}
     b_k + B_k * x_k                     & \leq & \varepsilon  \\
     0                                   & \leq & u_k          \\
     | ( B_k^\T * u_k + d_k )_j |        & \leq & \varepsilon  \\
     | (u_k)_i * ( b_k + B_k * x_k )_i | & \leq & \varepsilon
\end{array}
\] 
for  j = 1 , \ldots , n ,  i = 1 , \ldots , \ell , and  k = 1 , \ldots , N . Here  d_k is the partial derivative of  S ( x_1 , \ldots , x_N ) with respect to  x_k and  (u_k)_i denotes the i-th component of  u_k .
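These conditions can be checked numerically once a candidate solution is available; the example 4.1: affine_ok_box.m does exactly this. A minimal sketch, assuming xOut and uOut are return values of ckbs_affine and that N, epsilon, b, db, z, g, h, dg, dh, qinv, rinv are the corresponding arguments:
 
d_out = ckbs_sumsq_grad(xOut, z, g, h, dg, dh, qinv, rinv);
ok    = true;
for k = 1 : N
	xk = xOut(:, k);
	uk = uOut(:, k);
	bk = b(:, k);
	Bk = db(:,:, k);
	dk = d_out(:, k);
	ok = ok & (min(uk) >= 0);                               % u_k >= 0
	ok = ok & (max(bk + Bk * xk) <= epsilon);               % feasibility
	ok = ok & (max(abs(Bk' * uk + dk)) <= epsilon);         % stationarity
	ok = ok & (max(abs(uk .* (bk + Bk * xk))) <= epsilon);  % complementarity
end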

4.f: max_itr
The integer scalar max_itr specifies the maximum number of iterations of the algorithm to execute. It must be greater than or equal to zero. Note that if it is zero, the first row of the 4.s: info return value will still be computed. This can be useful for deciding what is a good value for the argument 4.g: epsilon .

4.g: epsilon
The positive scalar epsilon specifies the convergence criteria value; i.e.,  \[
     \varepsilon = epsilon
\] 


4.h: z
The argument z is a two dimensional array, for  k = 1 , \ldots , N  \[
     z_k = z(:, k)
\]
and z has size  m \times N .

4.i: b
The argument b is a two dimensional array, for  k = 1 , \ldots , N  \[
     b_k = b(:, k)
\]
and b has size  \ell \times N . If  \ell = 0 , the problem is not constrained; i.e., it is the affine Kalman-Bucy smoother problem.
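For example, the unconstrained fit in 4.1: affine_ok_box.m is obtained by passing empty constraint arrays while leaving the other arguments unchanged; a minimal sketch:
 
% no constraints; i.e., ell = 0
b  = zeros(0, N);
db = zeros(0, n, N);
[xOut, uOut, info] = ...
	ckbs_affine(max_itr, epsilon, z, b, g, h, db, dg, dh, qinv, rinv);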

4.j: g
The argument g is a two dimensional array, for  k = 1 , \ldots , N  \[
     g_k = g(:, k)
\]
and g has size  n \times N . The value  g_1 serves as the initial state estimate.

4.k: h
The argument h is a two dimensional array, for  k = 1 , \ldots , N  \[
     h_k = h(:, k)
\]
and h has size  m \times N .

4.l: db
The argument db is a three dimensional array, for  k = 1 , \ldots , N  \[
     B_k = db(:,:,k)
\]
and db has size  \ell \times n \times N .

4.m: dg
The argument dg is a three dimensional array, for  k = 1 , \ldots , N  \[
     G_k = dg(:,:,k)
\]
and dg has size  n \times n \times N . The initial state estimate  g_1 does not depend on the value of  x_0 , hence  G_1 should be zero.

4.n: dh
The argument dh is a three dimensional array, for  k = 1 , \ldots , N  \[
     H_k = dh(:,:,k)
\]
and dh has size  m \times n \times N .

4.o: qinv
The argument qinv is a three dimensional array, for  k = 1 , \ldots , N  \[
     Q_k^{-1} = qinv(:,:, k)
\]
and qinv has size  n \times n \times N . The value of  Q_1 is the variance of the initial state estimate  g_1 .
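For example, 4.1: affine_ok_box.m supplies an initial state estimate together with a large information matrix (small variance) for the first time point; a minimal sketch, with n the state dimension:
 
% initial state estimate g_1 and its information matrix Q_1^{-1}
g(:, 1)      = x_true(:, 1);
qinv(:,:, 1) = 100 * eye(n);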

4.p: rinv
The argument rinv is a three dimensional array, for  k = 1 , \ldots , N  \[
     R_k^{-1} = rinv(:,:, k)
\]
and rinv has size  m \times m \times N . It is ok to signify a missing data value by setting the corresponding row and column of rinv to zero. This also enables us to interpolate the state vector  x_k to points where there are no measurements.
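A minimal sketch of marking a missing measurement at a (hypothetical) time index k_miss by giving it zero information:
 
% no measurement available at time k_miss
rinv(:,:, k_miss) = zeros(m, m);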

4.q: xOut
The result xOut contains the optimal sequence  ( x_1 , \ldots , x_N ) . For  k = 1 , \ldots , N  \[
     x_k = xOut(:, k)
\]
and xOut is a two dimensional array with size  n \times N .

4.r: uOut
The result uOut contains the Lagrange multiplier sequence corresponding to xOut. For  k = 1 , \ldots , N  \[
     u_k = uOut(:, k)
\]
and uOut is a two dimensional array with size  \ell \times N . The pair xOut, uOut satisfy the 4.e: first order conditions .

4.s: info
The result info is a matrix with each row corresponding to an iteration of the algorithm. Note that ckbs_affine has satisfied the convergence condition if and only if
     all( info(end, 1:3) <= epsilon )


We use  x_k^q (  u_k^q ) to denote the state vector (dual vector) for time point  k at the end of iteration  q-1 . We use  d_k^q for the partial derivative of  S ( x_1^q , \ldots , x_N^q ) with respect to  x_k .

4.s.a: Constraint Bound
The value info(q, 1) is a bound on the constraint functions at the end of iteration  q-1 . To be specific  \[
info(q, 1) = \max_{i,k} ( b_k + B_k * x_k )_i
\] 


4.s.b: First Order Conditions
The value info(q, 2) is a bound on the partial derivative of the Lagrangian with respect to the state vector sequence at the end of iteration  q-1 :  \[
info(q, 2) = \max \left[ | ( B_k^T * u_k + d_k )_j | \right]
\] 
where the maximum is with respect to  j = 1 , \ldots , n and  k = 1 , \ldots , N .

4.s.c: Complementarity Conditions
The value info(q, 3) is a bound on the complementarity conditions at the end of iteration  q-1 :  \[
info(q, 3) = \max \left[ | (u_k)_i * ( b_k + B_k * x_k)_i | \right]
\] 
where the maximum is with respect to  i = 1 , \ldots , \ell and  k = 1 , \ldots , N .

4.s.d: Step Size
The value info(q, 4) is the line search step size used during iteration  q-1 . Small step sizes indicate problems with the interior point method used to solve the affine problem (with the exception that info(1, 4) is always zero).
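A minimal sketch of inspecting the info return value after a call to ckbs_affine, using the column meanings above:
 
converged  = all( info(end, 1:3) <= epsilon );  % convergence condition from 4.s
constraint = info(end, 1);  % bound on ( b_k + B_k * x_k )_i
gradient   = info(end, 2);  % bound on | Bk' * uk + dk |
complement = info(end, 3);  % bound on | uk .* ( bk + Bk * xk ) |
step_size  = info(end, 4);  % line search step size for the last iteration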

4.t: Example
The file 4.1: affine_ok_box.m contains an example and test of ckbs_affine. It returns true if ckbs_affine passes the test and false otherwise.
Input File: src/ckbs_affine.m
4.1: ckbs_affine Box Constrained Smoothing Spline Example and Test

4.1.a: Syntax
[ok] = affine_ok_box(draw_plot)

4.1.b: draw_plot
If this argument is true, a plot is drawn showing the results and the affine_ok_box.out file is written.

4.1.c: ok
If the return value ok is true, the test passed, otherwise the test failed.

4.1.d: State Vector
 x1 (t) derivative of function we are estimating
 x2 (t) value of function we are estimating

4.1.e: Measurement
 z1 (t) value of  x2 (t) plus noise

4.1.f: Constraint
 \[
     \begin{array}{c}
     -1 \leq x1 (t) \leq +1
     \\
     -1 \leq x2 (t) \leq +1
     \end{array}
\]
.

4.1.g: Source Code
 
function [ok] = affine_ok_box(draw_plot)
% --------------------------------------------------------
% You can change these parameters
N     = 50;        % number of measurement time points
dt    = 2*pi / N;  % time between measurement points
gamma =  1;        % transition covariance multiplier
sigma =  .5;       % standard deviation of measurement noise
max_itr = 30;      % maximum number of iterations
epsilon = 1e-5;    % convergence criteria
h_min   = 0;       % minimum horizontal value in plots
h_max   = 7;       % maximum horizontal value in plots
v_min   = -2.0;    % minimum vertical value in plots
v_max   = +2.0;    % maximum vertical value in plots
% ---------------------------------------------------------
ok = true;
if nargin < 1
	draw_plot = false;
end
%  Define the problem
rand('seed', 1234);
%
% number of constraints per time point
ell   = 4;
%
% number of measurements per time point
m     = 1;
%
% number of state vector components per time point
n     = 2;
%
% simulate the true trajectory
t       =  (1 : N) * dt;
x1_true = - cos(t);
x2_true = - sin(t);
x_true  = [ x1_true ; x2_true ];
%
% measurement values and model
v_true  = sigma * randn(1, N);
z       = x2_true + v_true;
rk      = sigma * sigma;
rinvk   = 1 / rk;
rinv    = zeros(m, m, N);
h       = zeros(m, N);
dh      = zeros(m, n, N);
for k = 1 : N
	rinv(:, :, k) = rinvk;
	h(:, k)       = 0;
	dh(:,:, k)    = [ 0 , 1 ];
end
%
% transition model
g       = zeros(n, N);
dg      = zeros(n, n, N);
qinv    = zeros(n, n, N);
qk      = gamma * [ dt , dt^2/2 ; dt^2/2 , dt^3/3 ];
qinvk   = inv(qk);
for k = 2 : N
	g(:, k)       = 0;
	dg(:,:, k)    = [ 1 , 0 ; dt , 1 ];
	qinv(:,:, k)  = qinvk;
end
%
% initial state estimate
g(:, 1)      = x_true(:, 1);
qinv(:,:, 1) = 100 * eye(2);
%
% constraints
b       = zeros(ell, N);
db      = zeros(ell, n, N);
for k = 1 : N
	%
	% b(:, k) + db(:,:, k) * x(:, k) <= 0
	b(:, k)    = [ -1 ; -1 ; -1 ; -1 ];
	db(:,:, k) = [ -1 , 0 ; 1 , 0 ; 0 , -1 ; 0 , 1 ];
end
%
% -------------------------------------------------------------------------
[xOut, uOut, info] = ...
	ckbs_affine(max_itr, epsilon, z, b, g, h, db, dg, dh, qinv, rinv);
% --------------------------------------------------------------------------
ok   = ok & all( info(end, 1:3) <= epsilon);
d    = ckbs_sumsq_grad(xOut, z, g, h, dg, dh, qinv, rinv);
for k = 1 : N
	xk = xOut(:, k);
	uk = uOut(:, k);
	bk = b(:, k);
	Bk = db(:,:, k);
	dk = d(:, k);
	sk = - bk - Bk * xk;
	%
	ok = ok & (min(uk) >= 0.);
	ok = ok & (max (bk + Bk * xk) <= epsilon);
	ok = ok & (max ( abs( Bk' * uk + dk ) ) <= epsilon);
	ok = ok & (max ( uk .* sk ) <= epsilon );
end
if draw_plot
	figure(1);
	clf
	hold on
	plot(t', x_true(2,:)', 'r-' );
	plot(t', z(1,:)', 'ko' );
	plot(t', xOut(2,:)', 'b-' );
	plot(t', - ones(N,1), 'b-');
	plot(t', ones(N,1), 'b-');
	axis([h_min, h_max, v_min, v_max]);
	title('Constrained');
	hold off
	%
	% constrained estimate
	x_con = xOut;
end
%
% Unconstrained Case
b           = zeros(0, N);
db          = zeros(0, n, N);
% -------------------------------------------------------------------------
[xOut, uOut, info] = ...
	ckbs_affine(max_itr, epsilon, z, b, g, h, db, dg, dh, qinv, rinv);
% --------------------------------------------------------------------------
ok   = ok & all( info(end, 1:3) <= epsilon);
d    = ckbs_sumsq_grad(xOut, z, g, h, dg, dh, qinv, rinv);
for k = 1 : N
	xk = xOut(:, k);
	dk = d(:, k);
	%
	% gradient must (nearly) vanish at an unconstrained minimum
	ok = ok & (max( abs(dk) ) <= epsilon);
end
if draw_plot
	figure(2);
	clf
	hold on
	plot(t', x_true(2,:)', 'r-' );
	plot(t', z(1,:)', 'ko' );
	plot(t', xOut(2,:)', 'b-' );
	plot(t', - ones(N,1), 'b-');
	plot(t', ones(N,1), 'b-');
	axis([h_min, h_max, v_min, v_max]);
	title('Unconstrained');
	hold off
	%
	% unconstrained estimate
	x_free = xOut;
	%
	% write out constrained and unconstrained results
	[fid, msg] = fopen('affine_ok_box.out', 'wt');
	if size(msg, 2) > 0 
		disp(['affine_ok: ', msg]);
	end
	%                      123456789012345678901234'
	heading =             '       t';
	heading = [ heading , ' x2_true  x2_con x2_free' ];
	heading = [ heading , '      z1\n'               ];
	fprintf(fid, heading);
	for k = 1 : N
		fprintf(fid,'%8.3f%8.3f%8.3f%8.3f%8.3f\n', ...
			t(k), ...
			x_true(2,k), x_con(2,k), x_free(2,k), ...
			z(1,k) ...
		);
	end
	fclose(fid);
end
return
end

Input File: example/affine_ok_box.m
5: ckbs Utility Functions

5.a: Contents
ckbs_blkdiag_mul: 5.1 Packed Block Diagonal Matrix Times a Vector
ckbs_blkdiag_mul_t: 5.2 Transpose of Packed Block Diagonal Matrix Times a Vector
ckbs_sumsq_obj: 5.3 Affine Residual Sum of Squares Objective
ckbs_sumsq_grad: 5.4 Affine Residual Sum of Squares Gradient
ckbs_sumsq_hes: 5.5 Affine Residual Sum of Squares Hessian
ckbs_tridiag_solve: 5.6 Symmetric Block Tridiagonal Algorithm
ckbs_newton_step: 5.7 Affine Constrained Kalman Bucy Smoother Newton Step
ckbs_kuhn_tucker: 5.8 Compute Residual in Kuhn-Tucker Conditions

Input File: omh/utility.omh
5.1: Packed Block Diagonal Matrix Times a Vector

5.1.a: Syntax
[w] = ckbs_blkdiag_mul(Bdia, v)

5.1.b: Purpose
This routine enables one to use the packed form of a block diagonal matrix and returns the matrix times a vector; i.e.,  \[
     w = B * v
\] 


5.1.c: Bdia
The argument Bdia is an  m \times n \times N array. For  k = 1 , \ldots , N we define  B_k \in \R^{m \times n} by  \[
     B_k = Bdia(:, :, k)
\] 


5.1.d: B
The matrix  B is defined by  \[
B
=
\left( \begin{array}{cccc}
     B_1 & 0      & 0      &           \\
     0   & B_2    & 0      & 0         \\
     0   & 0      & \ddots & 0         \\
         & 0      & 0      & B_N 
\end{array} \right)
\] 


5.1.e: v
The argument v is a column vector of length  n * N .

5.1.f: w
The result w is a column vector of length  m * N .

5.1.g: Example
The file 5.1.1: blkdiag_mul_ok.m contains an example and test of ckbs_blkdiag_mul. It returns true if ckbs_blkdiag_mul passes the test and false otherwise.
Input File: src/ckbs_blkdiag_mul.m
5.1.1: blkdiag_mul Example and Test

5.1.1.a: Source Code
 
function [ok] = blkdiag_mul_ok()
ok = true;
% -------------------------------------------------------------
% You can change these parameters
m    = 2; 
n    = 3;
N    = 2;
% -------------------------------------------------------------
% Define the problem
rand('seed', 123);
v     = rand(n * N, 1);
Bdiag = zeros(m, n, N);
B     = zeros(m * N , n * N);
blk_m = 1 : m;
blk_n = 1 : n;
for k = 1 : N
	Bdiag(:, :, k)  = rand(m, n);
	B(blk_m, blk_n) = Bdiag(:, :, k);
	blk_m           = blk_m + m;
	blk_n           = blk_n + n;
end
% -------------------------------------
w     = ckbs_blkdiag_mul(Bdiag, v);
% -------------------------------------
check = B * v;
ok    = ok & ( max(abs(w - check)) < 1e-10 );
return
end

Input File: example/blkdiag_mul_ok.m
5.2: Transpose of Packed Block Diagonal Matrix Times a Vector

5.2.a: Syntax
[w] = ckbs_blkdiag_mul_t(Bdia, v)

5.2.b: Purpose
This routine enables one to use the packed form of a block diagonal matrix and returns the transpose of the matrix times a vector; i.e.,  \[
     w = B^\T * v
\] 


5.2.c: Bdia
The argument Bdia is an  m \times n \times N array. For  k = 1 , \ldots , N we define  B_k \in \R^{m \times n} by  \[
     B_k = Bdia(:, :, k)
\] 


5.2.d: B
The matrix  B is defined by  \[
B
=
\left( \begin{array}{cccc}
     B_1 & 0      & 0      &           \\
     0   & B_2    & 0      & 0         \\
     0   & 0      & \ddots & 0         \\
         & 0      & 0      & B_N 
\end{array} \right)
\] 


5.2.e: v
The argument v is a column vector of length  m * N .

5.2.f: w
The result w is a column vector of length  n * N .

5.2.g: Example
The file 5.2.1: blkdiag_mul_t_ok.m contains an example and test of ckbs_blkdiag_mul_t. It returns true if ckbs_blkdiag_mul_t passes the test and false otherwise.
Input File: src/ckbs_blkdiag_mul_t.m
5.2.1: blkdiag_mul_t Example and Test

5.2.1.a: Source Code
 
function [ok] = blkdiag_mul_t_ok()
ok = true;
% -------------------------------------------------------------
% You can change these parameters
m    = 2; 
n    = 3;
N    = 2;
% -------------------------------------------------------------
% Define the problem
rand('seed', 123);
v     = rand(m * N, 1);
Bdiag = zeros(m, n, N);
B     = zeros(m * N , n * N);
blk_m = 1 : m;
blk_n = 1 : n;
for k = 1 : N
	Bdiag(:, :, k)  = rand(m, n);
	B(blk_m, blk_n) = Bdiag(:, :, k);
	blk_m           = blk_m + m;
	blk_n           = blk_n + n;
end
% -------------------------------------
w     = ckbs_blkdiag_mul_t(Bdiag, v);
% -------------------------------------
check = B' * v;
ok    = ok & ( max(abs(w - check)) < 1e-10 );
return
end

Input File: example/blkdiag_mul_t_ok.m
5.3: Affine Residual Sum of Squares Objective

5.3.a: Syntax
[obj] = ckbs_sumsq_obj(x, z, g, h, dg, dh, qinv, rinv)

5.3.b: Purpose
This routine computes the value of the affine Kalman-Bucy smoother residual sum of squares objective function.

5.3.c: Notation
The affine Kalman-Bucy smoother residual sum of squares is defined by  \[
\begin{array}{rcl}
S ( x_1 , \ldots , x_N ) & = & \sum_{k=1}^N S_k ( x_k , x_{k-1} ) \\
S_k ( x_k , x_{k-1} )    & = & 
\frac{1}{2} 
( z_k - h_k - H_k * x_k )^\T * R_k^{-1} * ( z_k - h_k - H_k * x_k )
\\
& + &
\frac{1}{2} 
( x_k - g_k - G_k * x_{k-1} )^\T * Q_k^{-1} * ( x_k - g_k - G_k * x_{k-1} )
\\
\end{array}
\] 
where the matrices  R_k and  Q_k are symmetric positive definite and  x_0 is the constant zero.

5.3.d: x
The argument x is a two dimensional array, for  k = 1 , \ldots , N  \[
     x_k = x(:, k)
\]
and x has size  n \times N .

5.3.e: z
The argument z is a two dimensional array, for  k = 1 , \ldots , N  \[
     z_k = z(:, k)
\]
and z has size  m \times N .

5.3.f: g
The argument g is a two dimensional array, for  k = 1 , \ldots , N  \[
     g_k = g(:, k)
\]
and g has size  n \times N .

5.3.g: h
The argument h is a two dimensional array, for  k = 1 , \ldots , N  \[
     h_k = h(:, k)
\]
and h has size  m \times N .

5.3.h: dg
The argument dg is a three dimensional array, for  k = 1 , \ldots , N  \[
     G_k = dg(:,:,k)
\]
and dg has size  n \times n \times N .

5.3.i: dh
The argument dh is a three dimensional array, for  k = 1 , \ldots , N  \[
     H_k = dh(:,:,k)
\]
and dh has size  m \times n \times N .

5.3.j: qinv
The argument qinv is a three dimensional array, for  k = 1 , \ldots , N  \[
     Q_k^{-1} = qinv(:,:, k)
\]
and qinv has size  n \times n \times N .

5.3.k: rinv
The argument rinv is a three dimensional array, for  k = 1 , \ldots , N  \[
     R_k^{-1} = rinv(:,:, k)
\]
and rinv has size  m \times m \times N .

5.3.l: obj
The result obj is a scalar given by  \[
     obj = S ( x_1 , \ldots , x_N )
\] 


5.3.m: Example
The file 5.3.1: sumsq_obj_ok.m contains an example and test of ckbs_sumsq_obj. It returns true if ckbs_sumsq_obj passes the test and false otherwise.
Input File: src/ckbs_sumsq_obj.m
5.3.1: ckbs_sumsq_obj Example and Test

5.3.1.a: Source Code
 
function [ok] = sumsq_obj_ok()
ok = true;
% --------------------------------------------------------
% You can change these parameters
m    = 1;   % number of measurements per time point
n    = 2;   % number of state vector components per time point
N    = 3;   % number of time points
% ---------------------------------------------------------
%  Define the problem
rand('seed', 123);
x    = rand(n, N);
z    = rand(m, N);
h    = rand(m, N);
g    = rand(n, N);
dh   = zeros(m, n, N);
dg   = zeros(n, n, N);
qinv = zeros(n, n, N);
rinv = zeros(m, m, N);
for k = 1 : N
	dh(:, :, k)   = rand(m, n);
	dg(:, :, k)   = rand(n, n);
	tmp           = rand(m, m);
	rinv(:, :, k) = (tmp + tmp') / 2 + 2 * eye(m);
	tmp           = rand(n, n);
	qinv(:, :, k) = (tmp + tmp') / 2 + 2 * eye(n);
end
% ---------------------------------------------------------
% Compute the Objective using ckbs_sumsq_obj
obj  = ckbs_sumsq_obj(x, z, g, h, dg, dh, qinv, rinv);
% ---------------------------------------------------------
sumsq = 0;
xk    = zeros(n, 1);
for k = 1 : N
	xkm   = xk;
	xk    = x(:, k);
	xres  = xk      - g(:, k) - dg(:,:, k) * xkm;
	zres  = z(:, k) - h(:, k) - dh(:,:, k) * xk;
	sumsq = sumsq + xres' * qinv(:,:, k) * xres;
	sumsq = sumsq + zres' * rinv(:,:, k) * zres;
end
ok = ok & ( abs(obj - sumsq/2) < 1e-10 );
return
end

Input File: example/sumsq_obj_ok.m
5.4: Affine Residual Sum of Squares Gradient

5.4.a: Syntax
[grad] = ckbs_sumsq_grad(x, z, g, h, dg, dh, qinv, rinv)

5.4.b: Purpose
This routine computes the gradient of the affine Kalman-Bucy smoother residual sum of squares.

5.4.c: Notation
The affine Kalman-Bucy smoother residual sum of squares is defined by  \[
\begin{array}{rcl}
S ( x_1 , \ldots , x_N ) & = & \sum_{k=1}^N S_k ( x_k , x_{k-1} ) \\
S_k ( x_k , x_{k-1} )    & = & 
\frac{1}{2} 
( z_k - h_k - H_k * x_k )^\T * R_k^{-1} * ( z_k - h_k - H_k * x_k )
\\
& + &
\frac{1}{2} 
( x_k - g_k - G_k * x_{k-1} )^\T * Q_k^{-1} * ( x_k - g_k - G_k * x_{k-1} )
\end{array}
\] 
where the matrices  R_k and  Q_k are symmetric positive definite and  x_0 is the constant zero.

5.4.d: Gradient
We define  Q_{N+1} to be the  n \times n identity matrix and  G_{N+1} to be zero,  \[
\begin{array}{rcl}
\nabla_k S_k^{(1)} ( x_k , x_{k-1} ) 
& = &  H_k^\T * R_k^{-1} * ( h_k + H_k * x_k - z_k ) 
  +    Q_k^{-1} * ( x_k - g_k - G_k * x_{k-1} )
\\
\nabla_k S_{k+1}^{(1)} ( x_{k+1} , x_k ) 
& = & G_{k+1}^\T * Q_{k+1}^{-1} * ( g_{k+1} + G_{k+1} * x_k  - x_{k+1} )
\end{array}
\] 
It follows that the gradient of the affine Kalman-Bucy smoother residual sum of squares is  \[
\begin{array}{rcl}
\nabla S ( x_1 , \ldots , x_N ) 
& = &
\left( \begin{array}{c}
     d_1 \\ \vdots \\ d_N
\end{array} \right)
\\
d_k & = & \nabla_k S_k^{(1)}     ( x_k , x_{k-1} ) 
      +   \nabla_k S_{k+1}^{(1)} ( x_{k+1} , x_k )
\end{array}
\] 
where  S_{N+1} ( x_{N+1} , x_N ) is defined as identically zero.
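The formula above can also be checked directly, instead of by finite differences as in 5.4.1: sumsq_grad_ok.m. A minimal sketch for an interior time index k (with 1 < k < N), assuming the arrays have the sizes described below:
 
xkm = x(:, k-1);
xk  = x(:, k);
xkp = x(:, k+1);
Hk  = dh(:,:, k);
Gk  = dg(:,:, k);
Gkp = dg(:,:, k+1);
dk  = Hk' * rinv(:,:, k) * ( h(:, k) + Hk * xk - z(:, k) ) ...
    + qinv(:,:, k) * ( xk - g(:, k) - Gk * xkm ) ...
    + Gkp' * qinv(:,:, k+1) * ( g(:, k+1) + Gkp * xk - xkp );
% dk should agree with column k of ckbs_sumsq_grad(x, z, g, h, dg, dh, qinv, rinv)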

5.4.e: x
The argument x is a two dimensional array, for  k = 1 , \ldots , N  \[
     x_k = x(:, k)
\]
and x has size  n \times N .

5.4.f: z
The argument z is a two dimensional array, for  k = 1 , \ldots , N  \[
     z_k = z(:, k)
\]
and z has size  m \times N .

5.4.g: g
The argument g is a two dimensional array, for  k = 1 , \ldots , N  \[
     g_k = g(:, k)
\]
and g has size  n \times N .

5.4.h: h
The argument h is a two dimensional array, for  k = 1 , \ldots , N  \[
     h_k = h(:, k)
\]
and h has size  m \times N .

5.4.i: dg
The argument dg is a three dimensional array, for  k = 1 , \ldots , N  \[
     G_k = dg(:,:,k)
\]
and dg has size  n \times n \times N .

5.4.j: dh
The argument dh is a three dimensional array, for  k = 1 , \ldots , N  \[
     H_k = dh(:,:,k)
\]
and dh has size  m \times n \times N .

5.4.k: qinv
The argument qinv is a three dimensional array, for  k = 1 , \ldots , N  \[
     Q_k^{-1} = qinv(:,:,k)
\]
and qinv has size  n \times n \times N .

5.4.l: rinv
The argument rinv is a three dimensional array, for  k = 1 , \ldots , N  \[
     R_k^{-1} = rinv(:,:,k)
\]
and rinv has size  m \times m \times N .

5.4.m: grad
The result grad is a two dimensional array, for  k = 1 , \ldots , N  \[
     d_k = grad(:, k)
\]
and grad has size  n \times N .

5.4.n: Example
The file 5.4.1: sumsq_grad_ok.m contains an example and test of ckbs_sumsq_grad. It returns true if ckbs_sumsq_grad passes the test and false otherwise.
Input File: src/ckbs_sumsq_grad.m
5.4.1: ckbs_sumsq_grad Example and Test

5.4.1.a: Source Code
 
function [ok] = sumsq_grad_ok()
ok = true;
% --------------------------------------------------------
% You can change these parameters
m    = 1;   % number of measurements per time point
n    = 2;   % number of state vector components per time point
N    = 3;   % number of time points
% ---------------------------------------------------------
%  Define the problem
rand('seed', 123);
x    = rand(n, N);
z    = rand(m, N);
h    = rand(m, N);
g    = rand(n, N);
dg   = zeros(n, n, N);
dh   = zeros(m, n, N);
qinv = zeros(n, n, N);
rinv = zeros(m, m, N);
for k = 1 : N
	dh(:, :, k)   = rand(m, n);
	dg(:, :, k)   = rand(n, n);
	tmp           = rand(m, m);
	rinv(:, :, k) = (tmp + tmp') / 2 + 2 * eye(m);
	tmp           = rand(n, n);
	qinv(:, :, k) = (tmp + tmp') / 2 + 2 * eye(n);
end
% ---------------------------------------------------------
% Compute the gradient using ckbs_sumsq_grad
grad = ckbs_sumsq_grad(x, z, g, h, dg, dh, qinv, rinv);
% ---------------------------------------------------------
% Use finite differences to check gradient
step   = 1;
for k  = 1 : N
for i  = 1 : n
	% Check second partial w.r.t x(i,k) 
	xm        = x;
	xm(i, k)  = xm(i, k) - step;
	Sm        = ckbs_sumsq_obj(xm, z, g, h, dg, dh, qinv, rinv);
	%
	xp        = x;
	xp(i, k)  = xp(i, k) + step;
	Sp        = ckbs_sumsq_obj(xp, z, g, h, dg, dh, qinv, rinv);
	%
	check  = (Sp - Sm) / ( 2 * step);
	diff   = grad(i, k) - check;
	ok     = ok & ( abs(diff) < 1e-10 );
end
end
return
end

Input File: example/sumsq_grad_ok.m
5.5: Affine Residual Sum of Squares Hessian

5.5.a: Syntax
[D, A] = ckbs_sumsq_hes(dg, dh, qinv, rinv)

5.5.b: Purpose
This routine returns the diagonal and off diagonal blocks corresponding to the Hessian of the affine Kalman-Bucy smoother residual sum of squares.

5.5.c: Notation
The affine Kalman-Bucy smoother residual sum of squares is defined by  \[
\begin{array}{rcl}
S ( x_1 , \ldots , x_N ) & = & \sum_{k=1}^N S_k ( x_k , x_{k-1} ) \\
S_k ( x_k , x_{k-1} )    & = & 
\frac{1}{2} 
( z_k - h_k - H_k * x_k )^\T * R_k^{-1} * ( z_k - h_k - H_k * x_k )
\\
& + &
\frac{1}{2} 
( x_k - g_k - G_k * x_{k-1} )^\T * Q_k^{-1} * ( x_k - g_k - G_k * x_{k-1} )
\end{array}
\] 
where the matrices  R_k and  Q_k are symmetric positive definite and  x_0 is the constant zero.

5.5.d: Hessian
If we define  Q_{N+1} to be the  n \times n identity matrix and  G_{N+1} to be zero, the Hessian of the affine Kalman-Bucy smoother residual sum of squares is  \[
\begin{array}{rcl}
S^{(2)} ( x_1 , \ldots , x_N ) & = &
\left( \begin{array}{cccc}
D_1 & A_2^\T & 0      &           \\
A_2 & D_2    & A_3^\T & 0         \\
0   & \ddots & \ddots & \ddots    \\
    & 0      & A_N    & D_N 
\end{array} \right)
\\
D_k & = & H_k^\T * R_k^{-1} * H_k + Q_k^{-1} 
      + G_{k+1}^\T * Q_{k+1}^{-1} * G_{k+1} 
\\
A_k & = & - Q_k^{-1} * G_k
\end{array}
\] 
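
The blocks above can be formed directly from the arguments; a minimal sketch for a time index k with k < N (for k = N the  G_{k+1} term is dropped, since  G_{N+1} is defined to be zero), assuming the arrays have the sizes described below:
 
Hk  = dh(:,:, k);
Gk  = dg(:,:, k);
Gkp = dg(:,:, k+1);
Dk  = Hk' * rinv(:,:, k) * Hk + qinv(:,:, k) + Gkp' * qinv(:,:, k+1) * Gkp;
Ak  = - qinv(:,:, k) * Gk;   % off diagonal block, defined for k = 2 , ... , N
% Dk and Ak should agree with D(:,:, k) and A(:,:, k) from ckbs_sumsq_hes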


5.5.e: dg
The argument dg is a three dimensional array, for  k = 1 , \ldots , N  \[
     G_k = dg(:,:,k)
\]
and dg has size  n \times n \times N .

5.5.f: dh
The argument dh is a three dimensional array, for  k = 1 , \ldots , N  \[
     H_k = dh(:,:,k)
\]
and dh has size  m \times n \times N .

5.5.g: qinv
The argument qinv is a three dimensional array, for  k = 1 , \ldots , N  \[
     Q_k^{-1} = qinv(:,:,k)
\]
and qinv has size  n \times n \times N .

5.5.h: rinv
The argument rinv is a three dimensional array, for  k = 1 , \ldots , N  \[
     R_k^{-1} = rinv(:,:,k)
\]
and rinv has size  m \times m \times N .

5.5.i: D
The result D is a three dimensional array, for  k = 1 , \ldots , N  \[
     D_k = D(:,:,k)
\]
and D has size  n \times n \times N .

5.5.j: A
The result A is a three dimensional array, for  k = 2 , \ldots , N  \[
     A_k = A(:,:,k)
\]
and A has size  n \times n \times N .

5.5.k: Example
The file 5.5.1: sumsq_hes_ok.m contains an example and test of ckbs_sumsq_hes. It returns true if ckbs_sumsq_hes passes the test and false otherwise.
Input File: src/ckbs_sumsq_hes.m
5.5.1: ckbs_sumsq_hes Example and Test

5.5.1.a: Source Code
 
function [ok] = sumsq_hes_ok()
ok = true;
% --------------------------------------------------------
% You can change these parameters
m    = 1;   % number of measurements per time point
n    = 2;   % number of state vector components per time point
N    = 3;   % number of time points
% ---------------------------------------------------------
%  Define the problem
rand('seed', 123);
dg   = zeros(n, n, N);
dh   = zeros(m, n, N);
qinv = zeros(n, n, N);
rinv = zeros(m, m, N);
for k = 1 : N
	dh(:, :, k)   = rand(m, n);
	dg(:, :, k)   = rand(n, n);
	tmp           = rand(m, m);
	rinv(:, :, k) = (tmp + tmp') / 2 + 2 * eye(m);
	tmp           = rand(n, n);
	qinv(:, :, k) = (tmp + tmp') / 2 + 2 * eye(n);
end
% ---------------------------------------------------------
% Compute the Hessian using ckbs_sumsq_hes
[D, A] = ckbs_sumsq_hes(dg, dh, qinv, rinv);
% ---------------------------------------------------------
H    = zeros(n * N , n * N );
for k = 1 : N
	index           = (k - 1) * n + (1 : n);
	H(index, index) = D(:, :, k);
	if k > 1 
		H(index - n, index) = A(:, :, k)';
		H(index, index - n) = A(:, :, k);
	end
end
%
% Use finite differences to check Hessian
x    = rand(n, N);
z    = rand(m, N);
h    = rand(m, N);
g    = rand(n, N);
%
step   = 1;
for k = 1 : N
	for i = 1 : n
		% Check second partial w.r.t x(i, k) 
		xm       = x;
		xm(i, k) = xm(i, k) - step;
		gradm = ckbs_sumsq_grad(xm, z, g, h, dg, dh, qinv, rinv);
		%
		xp       = x;
		xp(i, k) = xp(i, k) + step;
		gradp = ckbs_sumsq_grad(xp, z, g, h, dg, dh, qinv, rinv);
		%
		check  = (gradp - gradm) / ( 2 * step );
		for k1 = 1 : N
			for i1 = 1 : n
				value = H(i + (k-1)*n, i1 + (k1-1)*n);
				diff  = check(i1, k1) - value;
				ok     = ok & ( abs(diff) < 1e-10 );
			end
		end
	end
end
return
end

Input File: example/sumsq_hes_ok.m
5.6: Symmetric Block Tridiagonal Algorithm

5.6.a: Syntax
[e, lambda] = ckbs_tridiag_solve(b, c, r)

5.6.b: Purpose
This routine solves the following linear system of equations for  e :  \[
     A * e = r
\] 
where the symmetric block tridiagonal matrix  A is defined by  \[
A =
\left( \begin{array}{ccccc}
b_1    & c_2^\T & 0       & \cdots & 0      \\
c_2    & b_2    & c_3^\T  &        & \vdots \\
\vdots &        & \ddots  &        &        \\
0      & \cdots &         & c_N    & b_N
\end{array} \right)
\] 


5.6.c: b
The argument b is a three dimensional array, for  k = 1 , \ldots , N  \[
     b_k = b(:,:,k)
\]
and b has size  n \times n \times N .

5.6.d: c
The argument c is a three dimensional array, for  k = 2 , \ldots , N  \[
     c_k = c(:,:,k)
\]
and c has size  n \times n \times N .

5.6.e: r
The argument r is an  (n * N) \times m matrix.

5.6.f: e
The result e is an  (n * N) \times m matrix.

5.6.g: lambda
The result lambda is a scalar equal to the logarithm of the determinant of  A .
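The example 5.6.1: tridiag_solve_ok.m checks the solution e but not lambda; a minimal sketch of checking lambda as well, assuming A_full is the dense n*N by n*N matrix assembled from b and c as in the definition above (this dense check is only practical for small problems):
 
[e, lambda] = ckbs_tridiag_solve(b, c, r);
ok = abs( lambda - log( det( A_full ) ) ) < 1e-8;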

5.6.h: Reference
The marginal likelihood for parameters in a discrete Gauss Markov process
, Bradley M. Bell, IEEE Transactions on Signal Processing, Vol. 48, No. 3, March 2000.

5.6.i: Lemma
The following result is proven as Lemma 6 of the reference above: Suppose that for  k = 1 , \ldots , N  \[
\begin{array}{rcl}
     b_k & = & u_k + q_{k-1}^{-1} + a_k * q_k^{-1} * a_k^\T  \\
     c_k & = & q_{k-1}^{-1} * a_k^\T
\end{array}
\] 
where  u_k is symmetric positive semi-definite and  q_k is symmetric positive definite. It follows that the algorithm used by ckbs_tridiag_solve is well conditioned and will not try to invert singular matrices.

5.6.j: Example
The file 5.6.1: tridiag_solve_ok.m contains an example and test of ckbs_tridiag_solve. It returns true if ckbs_tridiag_solve passes the test and false otherwise.
Input File: src/ckbs_tridiag_solve.m
5.6.1: ckbs_tridiag_solve Example and Test

5.6.1.a: Source Code
 
function [ok] = tridiag_solve_ok()
ok = true;
m  = 2;
n  = 3;
N  = 4;
% case where uk = 0, qk = I, and ak is random
rand('seed', 123);
a = rand(n, n, N);
r = rand(n * N, m);
c = zeros(n, n, N);
for k = 1 : N
	ak         = a(:, :, k);
	b(:, :, k) = 2 * eye(n) + ak * ak';
	c(:, :, k) = ak';
end  
% ----------------------------------------
[e, lambda] = ckbs_tridiag_solve(b, c, r);
% ----------------------------------------
%
% check equation corresponding to first row
blk_n = 1 : n;
lhs   = b(:, :, 1) * e(blk_n, :) + c(:, :, 2)' * e(blk_n + n, :);
ok    = ok & ( max( max( abs( lhs - r(blk_n, :) ) ) ) < 1e-10 );
%
% check equation corresponding to last row
blk_n = (N-1) * n + (1 : n);
lhs   = c(:, :, N) * e(blk_n - n, :) + b(:, :, N) * e(blk_n, :);
ok    = ok & ( max( max( abs( lhs - r(blk_n, :) ) ) ) < 1e-10 );
%
% check other rows
blk_n = 1 : n;
for k = 2 : N-1
	blk_n = blk_n + n;
	lhs   = c(:, :, k)    * e(blk_n - n, :)  ...
	      + b(:, :, k)    * e(blk_n, :)    ...
	      + c(:, :, k+1)' * e(blk_n + n, :);
	ok    = ok & ( max( max( abs( lhs - r(blk_n, :) ) ) ) < 1e-10 );
end
return
end

Input File: example/tridiag_solve_ok.m
5.7: Affine Constrained Kalman Bucy Smoother Newton Step

5.7.a: Syntax
[ds, dy, du] = ckbs_newton_step(mu, s, y, u, b, d, Bdia, Hdia, Hlow)

5.7.b: Purpose
This routine computes one step of Newton's method for solving the non-linear Kuhn-Tucker conditions for the  \mu -relaxed affine constrained Kalman-Bucy smoother problem.

5.7.c: Problem
Given  \mu \in \R_+ ,  H \in \R^{p \times p} ,  d \in \R^p ,  b \in \R^r , and  B \in \R^{r \times p} , the  \mu -relaxed affine constrained Kalman-Bucy smoother problem is:  \[
\begin{array}{rl}
{\rm minimize} & \frac{1}{2} y^\T H y + d^\T y
- \mu \sum_{i=1}^r \log(s_i)
\; {\rm w.r.t} \; y \in \R^p \; , \; s \in \R_+^r
\\
{\rm subject \; to} & s + b + B y  = 0 
\end{array}
\] 
In addition,  H is symmetric block tri-diagonal with each block of size  n \times n and  B is block diagonal with each block of size  m \times n (there is an integer  N such that  p = n * N and  r = m * N ).

5.7.c.a: Lagrangian
We use  u \in \R^r to denote the Lagrange multipliers corresponding to the constraint equation. The corresponding Lagrangian is  \[
L(y, s, u)  =
\frac{1}{2} y^\T H y + d^\T y
- \mu \sum_{i=1}^r \log(s_i)
+ u^\T (s + b + B y)
\] 
The partial gradients of the Lagrangian are given by  \[
\begin{array}{rcl}
\nabla_y L(y, s, u ) & = & H y + B^\T u + d  \\
\nabla_s L(y, s, u ) & = & u - \mu / s \\
\nabla_u L(y, s, u ) & = & s + b + B y \\
\end{array}
\] 
where  \mu / s  is the component by component division of  \mu  by the components of  s . Note, from the second equation, that we need only consider  u \geq 0 because  s \geq 0 .

5.7.c.b: Kuhn-Tucker Conditions
We use  D(s) to denote the diagonal matrix with  s along its diagonal and  1_r to denote the vector, of length  r with all its components equal to one. We define  F : \R^{r + p + r} \rightarrow \R^{r + p + r} by  \[
F(s, y, u)
=
\left(
\begin{array}{c}
s + b + B y       \\
H y + B^\T u + d   \\
D(s) D(u) 1_r - \mu 1_r
\end{array}
\right)
\] 
The Kuhn-Tucker conditions for a solution of the  \mu -relaxed constrained affine Kalman-Bucy smoother problem is  F(s, y, u) = 0  .

5.7.c.c: Newton Step
Given a value for  (s, y, u) , the Newton step  ( \Delta s^\T , \Delta y^\T , \Delta u^\T )^\T solves the problem:  \[
F^{(1)} (s, y, u) 
\left( \begin{array}{c} \Delta s \\ \Delta y \\ \Delta u \end{array} \right)
=
- F(s, y, u)
\] 


5.7.d: mu
The argument mu is a positive scalar specifying the relaxation parameter  \mu .

5.7.e: s
The argument s is a column vector of length  r . All the elements of s are greater than zero.

5.7.f: y
The argument y is a column vector of length  p .

5.7.g: u
The argument u is a column vector of length  r . All the elements of u are greater than zero.

5.7.h: b
The argument b is a column vector of length  r .

5.7.i: d
The argument d is a column vector of length  p .

5.7.j: Bdia
The argument Bdia is an  m \times n \times N array. For  k = 1 , \ldots , N we define  B_k \in \R^{m \times n} by  \[
     B_k = Bdia(:, :, k)
\] 


5.7.k: B
The matrix  B is defined by  \[
B
=
\left( \begin{array}{cccc}
B_1 & 0      & 0      &           \\
0   & B_2    & 0      & 0         \\
0   & 0      & \ddots & 0         \\
    & 0      & 0      & B_N 
\end{array} \right)
\] 


5.7.l: Hdia
The argument Hdia is an  n \times n \times N array. For  k = 1 , \ldots , N we define  H_k \in \R^{n \times n} by  \[
     H_k = Hdia(:, :, k)
\] 


5.7.m: Hlow
The argument Hlow is an  n \times n \times N array. For  k = 1 , \ldots , N we define  L_k \in \R^{n \times n} by  \[
     L_k = Hlow(:, :, k)
\] 


5.7.n: H
The matrix  H is defined by  \[
H
=
\left( \begin{array}{cccc}
H_1 & L_2^\T & 0      &           \\
L_2 & H_2    & L_3^\T & 0         \\
0   & \ddots & \ddots & \ddots    \\
    & 0      & L_N    & H_N 
\end{array} \right)
\] 


5.7.o: ds
The result ds is a column vector of length  r equal to the  \Delta s components of the Newton step.

5.7.p: dy
The result dy is a column vector of length  p equal to the  \Delta y components of the Newton step.

5.7.q: du
The result du is a column vector of length  r equal to the  \Delta u components of the Newton step.

5.7.r: Example
The file 5.7.1: newton_step_ok.m contains an example and test of ckbs_newton_step. It returns true if ckbs_newton_step passes the test and false otherwise.

5.7.s: Method
The derivative of  F is given by  \[
F^{(1)} (s, y, u) =
\left(
\begin{array}{ccccc}
D( 1_r ) & B   & 0    \\
0        & H   & B^\T  \\
D( u )   & 0   & D(s)
\end{array}
\right)
\] 
It follows that  \[
\left( \begin{array}{ccc}
D( 1_r ) & B   & 0    \\
0        & H   & B^\T  \\
D( u )   & 0   & D(s)
\end{array} \right)
\left( \begin{array}{c} \Delta s \\ \Delta y \\ \Delta u \end{array} \right)
=
-
\left( \begin{array}{c}
s + b + B y       \\
H y + B^\T u + d   \\
D(s) D(u) 1_r - \mu 1_r
\end{array} \right)
\] 
Replacing the bottom row by the bottom row minus  D(u) times the top row, we obtain the reduced set of equations  \[
\left( \begin{array}{cc}
 H      & B^\T  \\
-D(u) B & D(s) 
\end{array} \right) 
\left( \begin{array}{c} \Delta y \\ \Delta u \end{array} \right)
=
\left( \begin{array}{c}
- H y - B^\T u - d   \\
\mu 1_r + D(u) ( b + B y )
\end{array} \right)
\] 
Replacing the top row by  H^{-1} times the top row we obtain  \[
\left( \begin{array}{cc}
 D( 1_p )      & H^{-1} B^\T  \\
-D(u) B        & D(s) 
\end{array} \right) 
\left( \begin{array}{c} \Delta y \\ \Delta u \end{array} \right)
=
\left( \begin{array}{c}
- y - H^{-1}( B^\T u + d )   \\
\mu 1_r + D(u) ( b + B y )
\end{array} \right)
\] 
Replacing the bottom row by  D(u)^{-1} times the bottom row plus  B times the top row, we obtain the reduced equation  \[
\begin{array}{rcl}
[ B H^{-1} B^\T + D(s / u) ] \Delta u 
& = &
\mu ( 1_r / u ) + b - B H^{-1}( B^\T u + d ) 
\end{array}
\] 
Thus we obtain the following solution for the Newton step:  \[
\begin{array}{rcl}
\Delta u & = & 
     [ B H^{-1} B^\T + D(s / u) ]^{-1} 
          [ \mu ( 1_r / u ) + b - B H^{-1}( B^\T u + d ) ]
\\
\Delta y & = &
- y - H^{-1} [ d + B^\T ( u + \Delta u ) ]   \\
\\
\Delta s & = &
- s - b - B ( y + \Delta y )
\end{array}
\] 
It follows from the matrix inversion lemma that  \[
[ B H^{-1} B^\T + D(s / u) ]^{-1} 
=
D(s/u)^{-1} - D(s/u)^{-1} B [ H + B^\T D(s/u)^{-1} B ]^{-1} B^\T D(s/u)^{-1}
\] 
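
For reference, the Newton step formulas above can be written out with dense matrices, ignoring the block tridiagonal structure of H and the block diagonal structure of B that ckbs_newton_step exploits. A minimal sketch, assuming H, B, b, d, s, y, u, and mu are dense quantities with the dimensions described above:
 
Su = diag(s ./ u);                        % D(s/u)
du = ( B * (H \ B') + Su ) \ ...
     ( mu * (1 ./ u) + b - B * (H \ (B' * u + d)) );
dy = - y - H \ ( d + B' * (u + du) );
ds = - s - b - B * (y + dy);
% for the structured computation see src/ckbs_newton_step.m and 5.7.1: newton_step_ok.m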

Input File: src/ckbs_newton_step.m
5.7.1: ckbs_newton_step Example and Test

5.7.1.a: Source Code
 
function [ok] = newton_step_ok()
ok = true;
% --------------------------------------------------------
% You can change these parameters
m    = 2;   % number of measurements per time point
n    = 3;   % number of state vector components per time point
N    = 4;   % number of time points
% ---------------------------------------------------------
%  Define the problem
rand('seed', 123);
mu    = .5;
r     = m * N;
p     = n * N;
s     = rand(r, 1) + 1;
y     = rand(p, 1);
u     = rand(r, 1) + 1;
b     = rand(r, 1);
d     = rand(p, 1);
H     = zeros(p, p);
B     = zeros(r, p);
Hdia  = zeros(n, n, N);
Hlow  = zeros(n, n, N);
Bdia  = zeros(m, n, N);
blk_m = m * N + (1 : m);
blk_n = n * N + (1 : n);
for k = N : -1 : 1
        blk_n = blk_n - n;
        blk_m = blk_m - m;
        %
        Bdia(:,:, k)    = rand(m, n);
        tmp             = rand(n, n);
        Hlow(:,:, k)    = tmp;
        Hdia(:,:, k)   = (tmp * tmp') + 4 * eye(n);
        %
        B(blk_m, blk_n) = Bdia(:,:, k);
        H(blk_n, blk_n) = Hdia(:,:, k);
        if k > 1
                H(blk_n, blk_n - n) = Hlow(:,:, k);
        end
        if k < N
                H(blk_n, blk_n + n) = Hlow(:,:, k + 1)';
        end
end
% -------------------------------------------------------------------
[ds, dy, du] = ckbs_newton_step(mu, s, y, u, b, d, Bdia, Hdia, Hlow);
% -------------------------------------------------------------------
F      = [ ...
        s + b + B * y
        H * y + B' * u + d
        s .* u - mu ...
];
dF     = [ ...
        eye(r)      , B           , zeros(r, r)
        zeros(p, r) , H           , B'
        diag(u)     , zeros(r, p) , diag(s) ...
];
delta        = [ ds ; dy ; du ];
ok           = ok & ( max( abs( F + dF * delta ) ) <= 1e-10 );
return
end

Input File: example/newton_step_ok.m
5.8: Compute Residual in Kuhn-Tucker Conditions

5.8.a: Syntax
[F] = ckbs_kuhn_tucker(mu, s, y, u, b, d, Bdia, Hdia, Hlow)

5.8.b: Purpose
This routine computes the residual in the Kuhn-Tucker conditions for the  \mu -relaxed affine constrained Kalman-Bucy smoother problem.

5.8.c: Problem
Given  \mu \in \R_+ ,  H \in \R^{p \times p} ,  d \in \R^p ,  b \in \R^r , and  B \in \R^{r \times p} , the  \mu -relaxed affine constrained Kalman-Bucy smoother problem is:  \[
\begin{array}{rl}
{\rm minimize} & \frac{1}{2} y^\T H y + d^\T y
- \mu \sum_{i=1}^r \log(s_i)
\; {\rm w.r.t} \; y \in \R^p \; , \; s \in \R_+^r
\\
{\rm subject \; to} & s + b + B y  = 0 
\end{array}
\] 
In addition,  H is symmetric block tri-diagonal with each block of size  n \times n and  B is block diagonal with each block of size  m \times n (there is an integer  N such that  p = n * N and  r = m * N ).

5.8.d: Lagrangian
We use  u \in \R^r to denote the Lagrange multipliers corresponding to the constraint equation. The corresponding Lagrangian is  \[
L(y, s, u)  =
\frac{1}{2} y^\T H y + d^\T y
- \mu \sum_{i=1}^r \log(s_i)
+ u^\T (s + b + B y)
\] 
The partial gradients of the Lagrangian are given by  \[
\begin{array}{rcl}
\nabla_y L(y, s, u ) & = & H y + B^\T u + d  \\
\nabla_s L(y, s, u ) & = & u - \mu / s \\
\nabla_u L(y, s, u ) & = & s + b + B y \\
\end{array}
\] 
where  \mu / s  is the component by component division of  \mu  by the components of  s . Note, from the second equation, that we need only consider  u \geq 0 because  s \geq 0 .

5.8.e: Kuhn-Tucker Residual
We use  D(s) to denote the diagonal matrix with  s along its diagonal and  1_r to denote the vector, of length  r with all its components equal to one. The Kuhn-Tucker Residual function  F : \R^{r + p + r} \rightarrow \R^{r + p + r} is defined by  \[
F(s, y, u)
=
\left(
\begin{array}{c}
s + b + B y       \\
H y + B^\T u + d   \\
D(s) D(u) 1_r - \mu 1_r
\end{array}
\right)
\] 
The Kuhn-Tucker conditions for a solution of the  \mu -relaxed constrained affine Kalman-Bucy smoother problem is  F(s, y, u) = 0  .

5.8.f: mu
The argument mu is a positive scalar specifying the relaxation parameter  \mu .

5.8.g: s
The argument s is a column vector of length  r . All the elements of s are greater than zero.

5.8.h: y
The argument y is a column vector of length  p .

5.8.i: u
The argument u is a column vector of length  r . All the elements of u are greater than zero.

5.8.j: b
The argument b is a column vector of length  r .

5.8.k: d
The argument d is a column vector of length  p .

5.8.l: Bdia
The argument Bdia is an  m \times n \times N array. For  k = 1 , \ldots , N we define  B_k \in \R^{m \times n} by  \[
     B_k = Bdia(:, :, k)
\] 


5.8.m: B
The matrix  B is defined by  \[
B
=
\left( \begin{array}{cccc}
B_1 & 0      & 0      &           \\
0   & B_2    & 0      & 0         \\
0   & 0      & \ddots & 0         \\
    & 0      & 0      & B_N 
\end{array} \right)
\] 


5.8.n: Hdia
The argument Hdia is an  n \times n \times N array. For  k = 1 , \ldots , N we define  H_k \in \R^{n \times n} by  \[
     H_k = Hdia(:, :, k)
\] 


5.8.o: Hlow
The argument Hlow is an  n \times n \times N array. For  k = 1 , \ldots , N we define  L_k \in \R^{n \times n} by  \[
     L_k = Hlow(:, :, k)
\] 


5.8.p: H
The matrix  H is defined by  \[
H
=
\left( \begin{array}{cccc}
H_1 & L_2^\T & 0      &           \\
L_2 & H_2    & L_3^\T & 0         \\
0   & \ddots & \ddots & \ddots    \\
    & 0      & L_N    & H_N 
\end{array} \right)
\] 


5.8.q: F
The result F is a column vector of length  r + p + r containing the value of the 5.8.e: Kuhn-Tucker residual ; i.e.,  F(s, y, u) .

5.8.r: Example
The file 5.8.1: kuhn_tucker_ok.m contains an example and test of ckbs_kuhn_tucker. It returns true if ckbs_kuhn_tucker passes the test and false otherwise.
Input File: src/ckbs_kuhn_tucker.m
5.8.1: ckbs_kuhn_tucker Example and Test

5.8.1.a: Source Code
 
function [ok] = kuhn_tucker_ok()
ok = true;
% -------------------------------------------------------------
% You can change these parameters
m    = 5; 
n    = 4;
N    = 3;
% -------------------------------------------------------------
rand('seed', 123);
p     = n * N;
r     = m * N;
mu    = 1.5;
s     = rand(r, 1);
y     = rand(p, 1);
u     = rand(r, 1);
b     = rand(r, 1);
d     = rand(p, 1);
Bdia  = rand(m, n, N);
Hdia  = rand(n, n, N);
Hlow  = rand(n, n, N);
B     = zeros(r, p);
H     = zeros(p, p);
blk_n = 1 : n;
blk_m = 1 : m;
for k = 1 : N
	B(blk_m, blk_n) = Bdia(:,:, k);
	H(blk_n, blk_n) = Hdia(:,:, k);
	blk_n           = blk_n + n;
	blk_m           = blk_m + m;
end
blk_n = 1 : n;
for k = 2 : N;
	blk_minus           = blk_n;
	blk_n               = blk_n + n;
	H(blk_n, blk_minus) = Hlow(:,:, k);
	H(blk_minus, blk_n) = Hlow(:,:, k)';
end 
% -------------------------------------
F = ckbs_kuhn_tucker(mu, s, y, u, b, d, Bdia, Hdia, Hlow);
% -------------------------------------
check = [ ...
	s + b + B * y; ...
	H * y + B' * u + d; ...
	s .* u - mu ...
];
ok    = ok & ( max(abs(F - check)) < 1e-10 );
return
end

Input File: example/kuhn_tucker_ok.m
6: Run All Correctness Tests

6.a: Syntax
all_ok
all_ok(quick)

6.b: quick
If quick is true (or not present), only the quick tests are run, otherwise all the tests are run.

6.c: Test Utility Functions
6.1: test_path.m Ensure Path is Set for Testing

6.d: Source Code
 
function [ok] = all_ok(quick)
if nargin < 1
	quick = true;
end
test_path;
ok = true;
t0 = cputime;
ok = ok & one_ok('affine_ok_box');
ok = ok & one_ok('blkdiag_mul_ok');
ok = ok & one_ok('blkdiag_mul_t_ok');
ok = ok & one_ok('kuhn_tucker_ok');
ok = ok & one_ok('newton_step_ok');
ok = ok & one_ok('nonlinear_ok_simple');
ok = ok & one_ok('sumsq_grad_ok');
ok = ok & one_ok('sumsq_hes_ok');
ok = ok & one_ok('sumsq_obj_ok');
ok = ok & one_ok('tridiag_solve_ok');
if ~ quick
	ok = ok & one_ok('nonlinear_ok_box');
	ok = ok & one_ok('nonlinear_ok_sin');
end
% 
t1 = cputime;
if ok 
	disp(['All tests passed: cputime (secs) = ', num2str(t1-t0)]);
else
	disp(['One or more tests failed: cputime (secs) = ', num2str(t1-t0)]);
end
return
end
function [ok] = one_ok(name)
ok = feval(name);
if ok 
	['Ok:    ', name ]
else
	['Error: ', name ]
end
return
end

Input File: example/all_ok.m
6.1: Ensure Path is Set for Testing

6.1.a: Syntax
test_path


6.1.b: Purpose
The directory ../src is added to the current path setting. This is needed when running the tests in the test subdirectory.

6.1.c: Source Code
 
function test_path()
addpath('../src')
end

Input File: example/test_path.m
7: Changes and Additions to ckbs

7.a: Introduction
This section contains a list of changes to ckbs (in reverse order by date). Its purpose is to assist you in learning about changes between versions.

7.b: 10-03-25
  1. Moved the test directory to example
  2. Created the file test/affine_line_search_ok.m which demonstrated a bug in 4: ckbs_affine .
  3. Fixed the bug in 4: ckbs_affine .


7.c: 10-02-28
Changed the d: download instructions to correspond to the COIN-OR (http://www.coin-or.org) distribution instead of Brad Bell's home page (http://www.seanet.com/~bradbell) .

7.d: 07-09-02
The 3: ckbs_nonlinear algorithm has been modified to conform with the global convergence theory of Burke and Han. This involved adding the exact penalty parameter  \alpha which is included in the trace and info return.

7.e: 07-08-31
In the case where the constraints were not active, the ckbs_affine return value 4.r: uOut was a three dimensional array instead of a matrix. This has been fixed.

7.f: 06-11-05
Fix plotting of the wrong variables in the constrained case for 3.2: nonlinear_ok_box.m .

Improve the documentation of the initial state estimate in 3.g: g_fun and 3.m: qinv for ckbs_nonlinear.

7.g: 06-11-03
The order of the variables in the state vector for 3.2: nonlinear_ok_box.m and for 3.3: nonlinear_ok_sin.m was changed to group each component of velocity with the corresponding position (instead of grouping position with position and velocity with velocity). This made the variance matrix  Q_k block diagonal (simpler).

7.h: 06-10-24
The fonts got misaligned in the ckbs_affine 4.a: syntax documentation. This has been fixed.

7.i: 06-10-20
The 3: ckbs_nonlinear implementation was greatly simplified using the fact that 4: ckbs_affine no longer requires a feasible starting point. This removed the need for an augmented problem and hence the 3.r: info return value from ckbs_nonlinear no longer has a penalty parameter; i.e., it has one less value per row.

6: all_ok.m was extended so that if no argument is provided, only the quick tests are run.

7.j: 06-10-19
The 4: ckbs_affine routine has been improved so that it no longer requires a feasible input point so the input argument xIn has been removed. Since the sequence of iterates is no longer necessarily feasible, the 4.s: info return value now has been changed so that it now includes a measure of feasibility. The input argument 4.f: max_itr has been added to ckbs_affine. The 4.1: affine_ok_box.m example has been changed to use the new meaning for the arguments and return values of ckbs_affine.

The itr_out return value has been removed from the 3: ckbs_nonlinear syntax. It was the same as size(info, 1). The following examples have been changed to remove the use of itr_out in the calls to ckbs_nonlinear: 3.1: nonlinear_ok_simple.m , 3.2: nonlinear_ok_box.m , 3.3: nonlinear_ok_sin.m .

The implementation of 3: ckbs_nonlinear has been changed to use the new calling sequence to 4: ckbs_affine (this is implementation detail and does not affect the user).

7.k: 06-10-13
The line search in 4: ckbs_affine had a bug where the sign of the condition it was checking was reversed. This has been changed. In addition, the minimum line search step size in the sub-problem was added to the 3.r: info return value for 3: ckbs_nonlinear .

There was a syntax error in 3.1: nonlinear_ok_simple.m that caused a problem in Matlab®, but not in octave. This has been fixed.

7.l: 06-10-05
  1. Matlab® does not combine character matrices with different numbers of columns. A debugging message assignment in ckbs_nonlinear.m was removed to avoid this.
  2. Documentation for missing data values was added to 3.n: rinv .
  3. Some minor Matlab compatibility problems in the files example/nonlinear_ok_box.m and example/nonlinear_ok_sin.m were fixed.
  4. The directory created when the d.a: download file was extracted was ckbs instead of ckbs-yy-mm-dd. This has been fixed.
  5. The 5: utility section, and all its subsections, were missing from the documentation.
  6. The number of input arguments and output values for 3: ckbs_nonlinear has been changed to allow for control of the number of iterations.
  7. Line search failure in 3: ckbs_nonlinear was converted from an error to a warning.


7.m: 06-10-01
  1. The 6.1: test_path.m function was added so that it is easier to run a single test at the command line (you need to run this function first).
  2. A tracing argument called level was added to 3: ckbs_nonlinear . This tracing was turned on in the 3.2: nonlinear_ok_box.m example.
  3. The nonlinear constraint example 3.3: nonlinear_ok_sin.m was added. It demonstrates significant improvement by the inclusion of the constraints.
  4. The draw_plot argument to the 3.2: nonlinear_ok_box.m example was documented.
  5. The quick argument was added to 6: all_ok.m .
  6. The files example/nonlinear_ok_sin.r and example/nonlinear_ok_box.r were added to the distribution. (These are R source code files that are used to make plots from the corresponding example output files.)


7.n: 06-09-30
The function ckbs has been changed to 3: ckbs_nonlinear . This enables the top-level ckbs section to refer to the documentation for the entire system.
Input File: omh/whatsnew.omh
8: Alphabetic Listing of Cross Reference Tags
A
4.1: affine_ok_box.m
ckbs_affine Box Constrained Smoothing Spline Example and Test
6: all_ok.m
Run All Correctness Tests
B
5.1.1: blkdiag_mul_ok.m
blkdiag_mul Example and Test
5.2.1: blkdiag_mul_t_ok.m
blkdiag_mul_t Example and Test
C
: ckbs
ckbs-0.20100325.0: A Constrained Kalman-Bucy Smoother
4: ckbs_affine
Constrained Affine Kalman Bucy Smoother
5.1: ckbs_blkdiag_mul
Packed Block Diagonal Matrix Times a Vector
5.2: ckbs_blkdiag_mul_t
Transpose of Packed Block Diagonal Matrix Times a Vector
5.8: ckbs_kuhn_tucker
Compute Residual in Kuhn-Tucker Conditions
5.7: ckbs_newton_step
Affine Constrained Kalman Bucy Smoother Newton Step
3: ckbs_nonlinear
The Nonlinear Constrained Kalman-Bucy Smoother
5.4: ckbs_sumsq_grad
Affine Residual Sum of Squares Gradient
5.5: ckbs_sumsq_hes
Affine Residual Sum of Squares Hessian
5.3: ckbs_sumsq_obj
Affine Residual Sum of Squares Objective
5.6: ckbs_tridiag_solve
Symmetric Block Tridiagonal Algorithm
K
5.8.1: kuhn_tucker_ok.m
ckbs_kuhn_tucker Example and Test
L
2: license
Your License to use the ckbs Software
N
5.7.1: newton_step_ok.m
ckbs_newton_step Example and Test
3.2: nonlinear_ok_box.m
ckbs_nonlinear Box Constrained Tracking Example and Test
3.2.1: nonlinear_ok_box_f.m
Example Constraint Function: nonlinear_ok_box_f
3.2.2: nonlinear_ok_box_g.m
Example Transition Function: nonlinear_ok_box_g
3.2.3: nonlinear_ok_box_h.m
Example Measurement Function: nonlinear_ok_box_h
3.2.4: nonlinear_ok_box_nof.m
Example no Constraint Function: nonlinear_ok_box_nof
3.1: nonlinear_ok_simple.m
Simple ckbs_nonlinear Example and Test
3.1.1: nonlinear_ok_simple_f.m
Example Constraint Function: nonlinear_ok_simple_f
3.1.2: nonlinear_ok_simple_g.m
Example Transition Function: nonlinear_ok_simple_g
3.1.3: nonlinear_ok_simple_h.m
Example Measurement Function: nonlinear_ok_simple_h
3.3: nonlinear_ok_sin.m
ckbs_nonlinear Sine Wave Constrained Tracking Example and Test
3.3.1: nonlinear_ok_sin_f.m
Example Constraint Function: nonlinear_ok_sin_f
S
5.4.1: sumsq_grad_ok.m
ckbs_sumsq_grad Example and Test
5.5.1: sumsq_hes_ok.m
ckbs_sumsq_hes Example and Test
5.3.1: sumsq_obj_ok.m
ckbs_sumsq_obj Example and Test
T
6.1: test_path.m
Ensure Path is Set for Testing
5.6.1: tridiag_solve_ok.m
ckbs_tridiag_solve Example and Test
U
5: utility
ckbs Utility Functions
W
7: whatsnew
Changes and Additions to ckbs

9: Keyword Index
A
affine
     constrained smoother 4: Constrained Affine Kalman Bucy Smoother
     example and test 4.1: ckbs_affine Box Constrained Smoothing Spline Example and Test
all_ok 6: Run All Correctness Tests
B
Bucy
     constrained smoother 3: The Nonlinear Constrained Kalman-Bucy Smoother
blkdiag_mul 5.1: Packed Block Diagonal Matrix Times a Vector
     example and test 5.1.1: blkdiag_mul Example and Test
blkdiag_mul_t 5.2: Transpose of Packed Block Diagonal Matrix Times a Vector
     example and test 5.2.1: blkdiag_mul_t Example and Test
block
     diagonal multiply 5.2: Transpose of Packed Block Diagonal Matrix Times a Vector
     diagonal multiply 5.1: Packed Block Diagonal Matrix Times a Vector
box
     constraint example 3.2: ckbs_nonlinear Box Constrained Tracking Example and Test
C
ckbs
     example and test 3.3: ckbs_nonlinear Sine Wave Constrained Tracking Example and Test
     example and test 3.2: ckbs_nonlinear Box Constrained Tracking Example and Test
     example and test 3.1: Simple ckbs_nonlinear Example and Test
ckbs_affine 4: Constrained Affine Kalman Bucy Smoother
     example and test 4.1: ckbs_affine Box Constrained Smoothing Spline Example and Test
ckbs_blkdiag_mul 5.1: Packed Block Diagonal Matrix Times a Vector
     example and test 5.1.1: blkdiag_mul Example and Test
ckbs_blkdiag_mul_t 5.2: Transpose of Packed Block Diagonal Matrix Times a Vector
     example and test 5.2.1: blkdiag_mul_t Example and Test
ckbs_kuhn_tucker 5.8: Compute Residual in Kuhn-Tucker Conditions
     example and test 5.8.1: ckbs_kuhn_tucker Example and Test
ckbs_newton_step 5.7: Affine Constrained Kalman Bucy Smoother Newton Step
     example and test 5.7.1: ckbs_newton_step Example and Test
ckbs_nonlinear 3: The Nonlinear Constrained Kalman-Bucy Smoother
ckbs_sumsq_grad 5.4: Affine Residual Sum of Squares Gradient
     example and test 5.4.1: ckbs_sumsq_grad Example and Test
ckbs_sumsq_hes 5.5: Affine Residual Sum of Squares Hessian
     example and test 5.5.1: ckbs_sumsq_hes Example and Test
ckbs_sumsq_obj 5.3: Affine Residual Sum of Squares Objective
     example and test 5.3.1: ckbs_sumsq_obj Example and Test
ckbs_tridiag_solve 5.6: Symmetric Block Tridiagonal Algorithm
     example and test 5.6.1: ckbs_tridiag_solve Example and Test
constrained
     affine smoother 4: Constrained Affine Kalman Bucy Smoother
     Kalman-Bucy Smoother 3: The Nonlinear Constrained Kalman-Bucy Smoother
correct
     test all 6: Run All Correctness Tests
D
diagonal
     block multiply 5.2: Transpose of Packed Block Diagonal Matrix Times a Vector
     block multiply 5.1: Packed Block Diagonal Matrix Times a Vector
E
example
     affine 4.1: ckbs_affine Box Constrained Smoothing Spline Example and Test
     blkdiag_mul 5.1.1: blkdiag_mul Example and Test
     blkdiag_mul_t 5.2.1: blkdiag_mul_t Example and Test
     ckbs_nonlinear 3.3: ckbs_nonlinear Sine Wave Constrained Tracking Example and Test
     ckbs_nonlinear 3.2: ckbs_nonlinear Box Constrained Tracking Example and Test
     ckbs_nonlinear 3.1: Simple ckbs_nonlinear Example and Test
     kuhn_tucker 5.8.1: ckbs_kuhn_tucker Example and Test
     newton_step 5.7.1: ckbs_newton_step Example and Test
     smoother spline 4.1: ckbs_affine Box Constrained Smoothing Spline Example and Test
     sumsq_grad 5.4.1: ckbs_sumsq_grad Example and Test
     sumsq_hes 5.5.1: ckbs_sumsq_hes Example and Test
     sumsq_obj 5.3.1: ckbs_sumsq_obj Example and Test
     tracking 3.3: ckbs_nonlinear Sine Wave Constrained Tracking Example and Test
     tracking 3.2: ckbs_nonlinear Box Constrained Tracking Example and Test
     tridiag_solve 5.6.1: ckbs_tridiag_solve Example and Test
G
gradient
     of objective 5.4: Affine Residual Sum of Squares Gradient
H
Hessian
     of objective 5.5: Affine Residual Sum of Squares Hessian
K
Kalman
     constrained smoother 3: The Nonlinear Constrained Kalman-Bucy Smoother
Kuhn-Tucker
     residual 5.8: Compute Residual in Kuhn-Tucker Conditions
kuhn_tucker 5.8: Compute Residual in Kuhn-Tucker Conditions
     example and test 5.8.1: ckbs_kuhn_tucker Example and Test
M
multiply
     block diagonal 5.2: Transpose of Packed Block Diagonal Matrix Times a Vector
     block diagonal 5.1: Packed Block Diagonal Matrix Times a Vector
N
newton_step 5.7: Affine Constrained Kalman Bucy Smoother Newton Step
     example and test 5.7.1: ckbs_newton_step Example and Test
O
objective
     gradient 5.4: Affine Residual Sum of Squares Gradient
     Hessian 5.5: Affine Residual Sum of Squares Hessian
     residual sum of squares 5.3: Affine Residual Sum of Squares Objective
P
path
     for testing 6.1: Ensure Path is Set for Testing
R
residual
     gradient 5.4: Affine Residual Sum of Squares Gradient
     Hessian 5.5: Affine Residual Sum of Squares Hessian
     Kuhn-Tucker 5.8: Compute Residual in Kuhn-Tucker Conditions
     sum of squares objective 5.3: Affine Residual Sum of Squares Objective
S
sine
     constraint example 3.3: ckbs_nonlinear Sine Wave Constrained Tracking Example and Test
smoother
     affine constrained 4: Constrained Affine Kalman Bucy Smoother
     constrained Kalman-Bucy 3: The Nonlinear Constrained Kalman-Bucy Smoother
smoothing
     spline example and test 4.1: ckbs_affine Box Constrained Smoothing Spline Example and Test
solve
     block tridiagonal 5.6: Symmetric Block Tridiagonal Algorithm
step
     Newton 5.7: Affine Constrained Kalman Bucy Smoother Newton Step
sum of squares
     gradient 5.4: Affine Residual Sum of Squares Gradient
     Hessian 5.5: Affine Residual Sum of Squares Hessian
     residual objective 5.3: Affine Residual Sum of Squares Objective
sumsq_grad 5.4: Affine Residual Sum of Squares Gradient
     example and test 5.4.1: ckbs_sumsq_grad Example and Test
sumsq_hes 5.5: Affine Residual Sum of Squares Hessian
     example and test 5.5.1: ckbs_sumsq_hes Example and Test
sumsq_obj 5.3: Affine Residual Sum of Squares Objective
     example and test 5.3.1: ckbs_sumsq_obj Example and Test
T
Tucker
     Kuhn residual 5.8: Compute Residual in Kuhn-Tucker Conditions
test
     affine 4.1: ckbs_affine Box Constrained Smoothing Spline Example and Test
     blkdiag_mul 5.1.1: blkdiag_mul Example and Test
     blkdiag_mul_t 5.2.1: blkdiag_mul_t Example and Test
     ckbs_nonlinear 3.3: ckbs_nonlinear Sine Wave Constrained Tracking Example and Test
     ckbs_nonlinear 3.2: ckbs_nonlinear Box Constrained Tracking Example and Test
     ckbs_nonlinear 3.1: Simple ckbs_nonlinear Example and Test
     kuhn_tucker 5.8.1: ckbs_kuhn_tucker Example and Test
     newton_step 5.7.1: ckbs_newton_step Example and Test
     path 6.1: Ensure Path is Set for Testing
     run all 6: Run All Correctness Tests
     smoothing spline 4.1: ckbs_affine Box Constrained Smoothing Spline Example and Test
     sumsq_grad 5.4.1: ckbs_sumsq_grad Example and Test
     sumsq_hes 5.5.1: ckbs_sumsq_hes Example and Test
     sumsq_obj 5.3.1: ckbs_sumsq_obj Example and Test
     tracking 3.3: ckbs_nonlinear Sine Wave Constrained Tracking Example and Test
     tracking 3.2: ckbs_nonlinear Box Constrained Tracking Example and Test
     tridiag_solve 5.6.1: ckbs_tridiag_solve Example and Test
test_path 6.1: Ensure Path is Set for Testing
tracking
     example and test 3.3: ckbs_nonlinear Sine Wave Constrained Tracking Example and Test
     example and test 3.2: ckbs_nonlinear Box Constrained Tracking Example and Test
tridiag_solve 5.6: Symmetric Block Tridiagonal Algorithm
     example and test 5.6.1: ckbs_tridiag_solve Example and Test
tridiagonal
     block solve 5.6: Symmetric Block Tridiagonal Algorithm

10: External Internet References
Reference                                            Location
_priintable.xml                                      ckbs
_printable.htm                                       ckbs
ckbs.htm                                             ckbs
ckbs.xml                                             ckbs
http://www.coin-or.org                               7.c: whatsnew#10-02-28
http://www.coin-or.org/download/source/CoinBazaar    d.a: ckbs#Download.Release Tarballs
http://www.coin-or.org/download/source/CoinBazaar    d.b: ckbs#Download.Using Subversion
http://www.gnu.org/software/octave/                  c: ckbs#System Requirements
http://www.mathworks.com/                            c: ckbs#System Requirements
http://www.seanet.com/~bradbell                      7.c: whatsnew#10-02-28
http://www.seanet.com/~bradbell/packages.htm         b: ckbs#MathML