[xOut, uOut, info] = ckbs_affine(max_itr, epsilon, 
      z, b, g, h, db, dg, dh, qinv, rinv)
ckbs_affine minimizes the affine Kalman-Bucy smoother residual sum of squares objective
\[
\begin{array}{rcl}
S ( x_1 , \ldots , x_N ) & = & \sum_{k=1}^N S_k ( x_k , x_{k-1} ) \\
S_k ( x_k , x_{k-1} )    & = &
\frac{1}{2}
( z_k - h_k - H_k * x_k )^\R{T} * R_k^{-1} * ( z_k - h_k - H_k * x_k )
\\
& + &
\frac{1}{2}
( x_k - g_k - G_k * x_{k-1} )^\R{T} * Q_k^{-1} * ( x_k - g_k - G_k * x_{k-1} )
\\
\end{array}
\] 
where the matrices R_k and Q_k are symmetric positive definite and x_0 is the constant zero.
Note that g_1 is the initial state estimate and Q_1 is the corresponding covariance.
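As a concrete illustration, the objective can be evaluated directly from the arrays described below. The following is only a sketch, where x is a hypothetical n \times N array holding a candidate state sequence and the other arrays and sizes are as documented below; it is not part of ckbs itself.

    % Sketch: evaluate S( x_1 , ..., x_N ) for a candidate sequence x (n x N).
    % The arrays z, g, h, dg, dh, qinv, rinv and the sizes n, N are as below.
    S     = 0;
    x_km1 = zeros(n, 1);                 % x_0 is the constant zero
    for k = 1 : N
          x_k   = x(:, k);
          res_z = z(:, k) - h(:, k) - dh(:,:, k) * x_k;
          res_x = x_k - g(:, k) - dg(:,:, k) * x_km1;
          S     = S + res_z' * rinv(:,:, k) * res_z / 2 ...
                    + res_x' * qinv(:,:, k) * res_x / 2;
          x_km1 = x_k;
    end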
The constrained Kalman-Bucy smoother problem is
\[
\begin{array}{rll}
{\rm minimize}
      & S( x_1 , \ldots , x_N )
      & {\rm w.r.t.} \; x_1 \in \B{R}^n , \ldots , x_N \in \B{R}^n
\\
{\rm subject \; to}
      & b_k + B_k * x_k  \leq 0
      & {\rm for} \; k = 1 , \ldots , N
\end{array}
\] 
A sequence ( x_1 , \ldots , x_N ) is considered a solution if there is a Lagrange multiplier sequence ( u_1 , \ldots , u_N ) such that the following first order conditions are satisfied:
 \[
\begin{array}{rcl}
      b_k + B_k * x_k                 & \leq & \varepsilon    \\
      0                               & \leq & u_k         \\
      | ( B_k^T * u_k + d_k )_j |        & \leq & \varepsilon \\
      | (u_k)_i * ( b_k + B_k * x_k)_i | & \leq & \varepsilon
\end{array}
\] 
for j = 1 , \ldots , n, i = 1 , \ldots , \ell, and k = 1 , \ldots , N.
Here d_k is the partial derivative of S ( x_1 , \ldots , x_N ) with respect to x_k
and (u_k)_i denotes the i-th component of u_k.
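These conditions can be checked numerically for a candidate pair. The sketch below assumes x (n \times N), u (\ell \times N), and a hypothetical array d (n \times N) holding the partial derivatives d_k are all given; it is illustrative only.

    % Sketch: test the first order conditions for a candidate pair (x, u).
    % d(:, k) is assumed to hold d_k, the partial of S with respect to x_k.
    ok = true;
    for k = 1 : N
          c_k = b(:, k) + db(:,:, k) * x(:, k);
          ok  = ok & all( c_k <= epsilon );                           % feasibility
          ok  = ok & all( u(:, k) >= 0 );                             % nonnegative multipliers
          ok  = ok & all( abs( db(:,:, k)' * u(:, k) + d(:, k) ) <= epsilon );
          ok  = ok & all( abs( u(:, k) .* c_k ) <= epsilon );         % complementarity
    end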
max_itr
specifies the maximum number of iterations of the algorithm to execute.
It must be greater than or equal to zero.
Note that if it is zero, the first row of the info return value will still be computed.
This can be useful for deciding what is a good value for the argument epsilon.
epsilon
 specifies the convergence
criteria value; i.e.,
 \[
      \varepsilon = epsilon
\] 
z
is a two dimensional array with size m \times N; for k = 1 , \ldots , N,
 \[
      z_k = z(:, k)
\]
b
is a two dimensional array with size \ell \times N; for k = 1 , \ldots , N,
 \[
      b_k = b(:, k)
\]
If \ell = 0, the problem is not constrained; i.e., it is the affine Kalman-Bucy smoother problem.
g
is a two dimensional array with size n \times N; for k = 1 , \ldots , N,
 \[
      g_k = g(:, k)
\]
The value g_1 serves as the initial state estimate.
h
is a two dimensional array with size m \times N; for k = 1 , \ldots , N,
 \[
      h_k = h(:, k)
\]
db
is a three dimensional array with size \ell \times n \times N; for k = 1 , \ldots , N,
 \[
      B_k = db(:,:,k)
\]
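For example, hypothetical box constraints xmin \leq x_k \leq xmax (with xmin and xmax given n \times 1 bound vectors, not part of ckbs) can be written in the required form b_k + B_k * x_k \leq 0 as in the following sketch:

    % Sketch: encode xmin <= x_k <= xmax for every k, so that ell = 2 * n.
    ell = 2 * n;
    b   = zeros(ell, N);
    db  = zeros(ell, n, N);
    for k = 1 : N
          b(:, k)    = [ xmin ; - xmax ];        % xmin - x_k <= 0 and x_k - xmax <= 0
          db(:,:, k) = [ - eye(n) ; eye(n) ];
    end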
dg
is a three dimensional array with size n \times n \times N; for k = 1 , \ldots , N,
 \[
      G_k = dg(:,:,k)
\]
The initial state estimate g_1 does not depend on the value of x_0, hence G_1 should be zero.
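As an illustration, for a hypothetical time-invariant model with transition matrix A (n \times n), dg could be filled as follows, with G_1 set to zero:

    % Sketch: constant transition matrix A for k >= 2, and G_1 = 0 because
    % the initial state estimate g_1 does not depend on x_0.
    dg = zeros(n, n, N);
    for k = 2 : N
          dg(:,:, k) = A;
    end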
dh
is a three dimensional array with size m \times n \times N; for k = 1 , \ldots , N,
 \[
      H_k = dh(:,:,k)
\]
qinv
is a three dimensional array with size n \times n \times N; for k = 1 , \ldots , N,
 \[
      Q_k^{-1} = qinv(:,:, k)
\]
The value of Q_1 is the variance of the initial state estimate g_1.
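For instance, with a hypothetical constant transition covariance Q and a covariance Q1 for the initial estimate g_1 (both illustrative names), qinv could be built as:

    % Sketch: inverse covariances, where qinv(:,:, 1) corresponds to the
    % covariance of the initial state estimate g_1.
    qinv         = zeros(n, n, N);
    qinv(:,:, 1) = inv(Q1);
    for k = 2 : N
          qinv(:,:, k) = inv(Q);
    end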
rinv
is a three dimensional array with size m \times m \times N; for k = 1 , \ldots , N,
 \[
      R_k^{-1} = rinv(:,:, k)
\]
It is ok to signify a missing data value by setting the corresponding row and column of rinv to zero.
This also enables us to interpolate the state vector x_k to points where there are no measurements.
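For example, if component i of the measurement at time index k is missing, or the entire measurement at time index k is missing, rinv can be adjusted as follows (i and k here are hypothetical indices):

    % Sketch: component i of the measurement at time k is missing.
    rinv(i, :, k) = 0;
    rinv(:, i, k) = 0;
    % Sketch: the entire measurement at time k is missing.
    rinv(:,:, k)  = zeros(m, m);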
xOut
contains the optimal sequence ( x_1 , \ldots , x_N ).
For k = 1 , \ldots , N,
 \[
      x_k = xOut(:, k)
\]
and xOut is a two dimensional array with size n \times N.
uOut
contains the Lagrange multiplier sequence corresponding to xOut.
For k = 1 , \ldots , N,
 \[
      u_k = uOut(:, k)
\]
and uOut is a two dimensional array with size \ell \times N.
The pair xOut, uOut satisfies the first order conditions given above.
info
is a matrix with each row corresponding to an iteration of the algorithm.
Note that ckbs_affine has satisfied the convergence condition if and only if
      all( info(end, 1:3) <= epsilon )
We use x_k^q ( u_k^q ) to denote the state vector (dual vector) for time point k at the end of iteration q-1.
We use d_k^q for the partial derivative of S ( x_1^q , \ldots , x_N^q ) with respect to x_k.
info(q, 1)
is a bound on the constraint functions at the end of iteration q-1. To be specific,
 \[
info(q, 1) = \max_{i,k} ( b_k + B_k * x_k^q )_i
\]
info(q, 2)
is a bound on the partial derivative of the Lagrangian with respect to the state vector sequence at the end of iteration q-1:
 \[
info(q, 2) = \max \left[ | ( B_k^T * u_k^q + d_k^q )_j | \right]
\]
where the maximum is with respect to j = 1 , \ldots , n and k = 1 , \ldots , N.
info(q, 3)
is a bound on the complementarity conditions at the end of iteration q-1:
 \[
info(q, 3) = \max \left[ | (u_k^q)_i * ( b_k + B_k * x_k^q )_i | \right]
\]
where the maximum is with respect to i = 1 , \ldots , \ell and k = 1 , \ldots , N.
info(q, 4)
is the line search step size used during iteration q-1.
Small step sizes indicate problems with the interior point method used to solve the affine problem
(with the exception that info(1, 4) is always zero).
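Putting this together, a returned info matrix can be inspected as in the following sketch:

    % Sketch: report whether the convergence criteria were met and how many
    % iterations were carried out (the first row corresponds to the start point).
    converged = all( info(end, 1:3) <= epsilon );
    n_itr     = size(info, 1) - 1;
    fprintf('converged = %d after %d iterations\n', converged, n_itr);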
An example and test of ckbs_affine is provided; it returns true if ckbs_affine passes the test and false otherwise.
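The following is a minimal, self-contained sketch of a call to ckbs_affine for a scalar random walk with a nonnegativity constraint on the state; all numerical values are illustrative assumptions and not part of the distributed test.

    % Sketch: scalar state (n = 1), scalar measurement (m = 1), N time points.
    n       = 1;  m = 1;  N = 10;  ell = 1;
    max_itr = 30;
    epsilon = 1e-5;
    z       = rand(m, N);                     % illustrative measurements
    b       = zeros(ell, N);                  % constraint - x_k <= 0, i.e. x_k >= 0
    db      = - ones(ell, n, N);
    g       = zeros(n, N);                    % g_1 = 0 is the initial state estimate
    h       = zeros(m, N);
    dg      = ones(n, n, N);
    dg(:,:, 1) = 0;                           % random walk dynamics with G_1 = 0
    dh      = ones(m, n, N);                  % the state is measured directly
    qinv    = ones(n, n, N);                  % Q_k = 1
    rinv    = ones(m, m, N);                  % R_k = 1
    [xOut, uOut, info] = ckbs_affine(max_itr, epsilon, ...
          z, b, g, h, db, dg, dh, qinv, rinv);
    ok = all( info(end, 1:3) <= epsilon );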