[xOut, rOut, sOut, pPlusOut, pMinusOut, info] = ckbs_L1_affine(...
       max_itr, epsilon, z, g, h, dg, dh, qinv, rinv)
ckbs_L1_affine minimizes the objective of the affine Kalman-Bucy smoother problem
in which the Gaussian measurement terms are replaced by L1 terms:
 \[
\begin{array}{rcl}
S ( x_1 , \ldots , x_N ) & = & \sum_{k=1}^N S_k ( x_k , x_{k-1} ) \\
S_k ( x_k , x_{k-1} )    & = &
\sqrt{2} \, \| R_k^{-1/2} ( z_k - h_k - H_k * x_k ) \|_1
\\
& + &
\frac{1}{2}
( x_k - g_k - G_k * x_{k-1} )^\R{T} * Q_k^{-1} * ( x_k - g_k - G_k * x_{k-1} )
\end{array}
\] 
where the matrices R_k and Q_k are symmetric positive definite and
x_0 is the constant zero.
Note that g_1 is the initial state estimate
and Q_1 is the corresponding covariance.
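For concreteness, the following Octave/MATLAB sketch shows one way the objective
above could be evaluated for a candidate state sequence; the helper name
l1_objective and the use of sqrtm for R_k^{-1/2} are illustrative choices and
are not part of the ckbs API.

     % Hypothetical helper (not part of ckbs): evaluate S(x_1, ..., x_N) above
     % for a candidate state sequence x of size n x N.
     function S = l1_objective(x, z, g, h, dg, dh, qinv, rinv)
         [n, N] = size(x);
         S      = 0;
         xkm    = zeros(n, 1);                    % x_0 is the constant zero
         for k = 1 : N
             xk    = x(:, k);
             res_q = xk - g(:, k) - dg(:, :, k) * xkm;       % process residual
             res_r = z(:, k) - h(:, k) - dh(:, :, k) * xk;   % measurement residual
             % sqrtm(rinv(:,:,k)) is one choice of R_k^{-1/2}
             S = S + sqrt(2) * norm(sqrtm(rinv(:, :, k)) * res_r, 1) ...
                   + 0.5 * res_q' * qinv(:, :, k) * res_q;
             xkm = xk;
         end
     end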
Writing the measurement residual b(x) + B(x) y in terms of its positive part
p^+ and negative part p^-, the smoothing problem is reformulated as the program
 \[
\begin{array}{ll}
{\rm minimize}
& \frac{1}{2} y^\R{T} H(x) y + d(x)^\R{T} y
      + \B{\sqrt{2}}^\R{T} (p^+ + p^-)
   \;{\rm w.r.t.} \;  y \in \B{R}^{nN}\; , \; p^+ \in \B{R}_+^{mN} \; , \; p^- \in \B{R}_+^{mN}
\\
{\rm subject \; to} &  b(x) + B(x) y - p^+ + p^- = 0
\end{array}
\] 
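Here p^+ and p^- play the role of the positive and negative parts of the
residual, so that p^+ - p^- equals b(x) + B(x) y and p^+ + p^- equals its
componentwise absolute value; a small illustration (the residual values are
arbitrary):

     % split a residual vector into its positive and negative parts, so that
     % p_plus - p_minus = res  and  p_plus + p_minus = abs(res)
     res     = [ 1.5; -0.3; 0.0; -2.0 ];     % stands in for b(x) + B(x) * y
     p_plus  = max(  res, 0 );
     p_minus = max( -res, 0 );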
We denote the Lagrange multiplier associated to the equality constraint by q,
the Lagrange multiplier associated to the non-negativity constraint on p^+ by r,
and the Lagrange multiplier associated to the non-negativity constraint on p^- by s.
We also use \B{1} to denote the vector of length mN with all its components
equal to one, and \B{\sqrt{2}} to denote the vector of length mN with all its
components equal to \sqrt{2}.
The Lagrangian corresponding to the L1 Kalman-Bucy problem is
given by
 \[
L(p^+, p^-, y, q, r, s)  =
\frac{1}{2} y^\R{T} H y + d^\R{T} y + \B{\sqrt{2}}^\R{T} (p^+ + p^-)
+ q^\R{T} (b + B y - p^+ + p^-) - r^\R{T} p^+ - s^\R{T} p^- \;.
\] 
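As a sketch (all variable names here are placeholders rather than ckbs
identifiers), this Lagrangian can be transcribed directly:

     % direct transcription of the Lagrangian above; pp and pm stand for p^+ and p^-
     L = @(pp, pm, y, q, r, s, H, d, b, B) ...
         0.5 * y' * H * y + d' * y + sqrt(2) * sum(pp + pm) ...
         + q' * (b + B * y - pp + pm) - r' * pp - s' * pm;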
The partial gradients of the Lagrangian are given by
 \[
\begin{array}{rcl}
\nabla_{p^+} L(p^+, p^-, y, q, r, s) & = & \B{\sqrt{2}} - q - r \\
\nabla_{p^-} L(p^+, p^-, y, q, r, s) & = & \B{\sqrt{2}} + q - s \\
\nabla_y L(p^+, p^-, y, q, r, s) & = & H y + d + B^\R{T} q \\
\nabla_q L(p^+, p^-, y, q, r, s) & = & b + B y - p^+ + p^-
\end{array}
\] 
and \nabla L denotes the vector obtained by stacking these four blocks.
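A corresponding sketch of the gradient blocks, and of the stacked one-norm used
in the convergence test below (again, all names are placeholders):

     % gradient blocks of the Lagrangian, written as anonymous functions;
     % nabla_L_1norm stacks them and takes the one-norm used for convergence
     grad_pp = @(q, r)             sqrt(2) - q - r;
     grad_pm = @(q, s)             sqrt(2) + q - s;
     grad_y  = @(H, d, B, y, q)    H * y + d + B' * q;
     grad_q  = @(b, B, y, pp, pm)  b + B * y - pp + pm;
     nabla_L_1norm = @(gp, gm, gy, gq) norm([ gp; gm; gy; gq ], 1);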
At optimality, all of these gradients will be zero
(or smaller than a tolerance \varepsilon), and the
complementarity conditions hold, i.e.,
\[ r_i p_i^+ = s_i p_i^- = 0 \; \forall \; i
\]
(or again, each pairwise product is smaller than \varepsilon).
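These conditions are easy to check numerically; for example (the vectors and
the tolerance are arbitrary placeholders):

     % approximate complementarity: r_i * p_i^+ and s_i * p_i^- below a tolerance
     tol     = 1e-6;
     p_plus  = [ 0.0; 1.2; 0.0 ];    r = [ 0.9; 0.0; 2.1 ];
     p_minus = [ 0.4; 0.0; 0.0 ];    s = [ 0.0; 1.1; 0.7 ];
     ok = all( r .* p_plus <= tol ) && all( s .* p_minus <= tol );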
max_itr specifies the maximum number of iterations of the algorithm to execute.
It must be greater than or equal to zero.
epsilon specifies the convergence criterion; i.e., the algorithm stops when
 \[
      \| \nabla L \|_1 \leq epsilon \;.
\] 
z is a two dimensional array of size m \times N; for k = 1 , \ldots , N,
 \[
      z_k = z(:, k) \;.
\]
g is a two dimensional array of size n \times N; for k = 1 , \ldots , N,
 \[
      g_k = g(:, k) \;.
\]
The value g_1 serves as the initial state estimate.
h is a two dimensional array of size m \times N; for k = 1 , \ldots , N,
 \[
      h_k = h(:, k) \;.
\]
dg is a three dimensional array of size n \times n \times N; for k = 1 , \ldots , N,
 \[
      G_k = dg(:, :, k) \;.
\]
The initial state estimate g_1 does not depend on the value of x_0,
hence G_1 should be zero.
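For example, when the transition matrix G is constant over time (the model and
sizes below are arbitrary), dg can be built as follows:

     % constant transition matrix for k >= 2, with G_1 = 0 because g_1
     % is the initial state estimate and does not depend on x_0
     n  = 2;  N = 10;  G = [ 1, 1; 0, 1 ];
     dg = repmat( G, [ 1, 1, N ] );
     dg(:, :, 1) = zeros( n, n );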
dh is a three dimensional array of size m \times n \times N; for k = 1 , \ldots , N,
 \[
      H_k = dh(:, :, k) \;.
\]
qinv is a three dimensional array of size n \times n \times N; for k = 1 , \ldots , N,
 \[
      Q_k^{-1} = qinv(:, :, k) \;.
\]
The value of Q_1 is the variance of the initial state estimate g_1.
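For example (the variance values are arbitrary), a weak prior on the initial
state estimate corresponds to a small first slice of qinv:

     % process precision for k >= 2, and a much smaller precision (large
     % variance, i.e. weak prior) for the initial state estimate g_1
     n    = 2;  N = 10;
     qinv = repmat( eye(n) / 0.01, [ 1, 1, N ] );   % process variance 0.01
     qinv(:, :, 1) = eye(n) / 100;                  % initial variance 100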
rinv is a three dimensional array of size m \times m \times N; for k = 1 , \ldots , N,
 \[
      R_k^{-1} = rinv(:, :, k) \;.
\]
It is ok to signify a missing data value by setting the corresponding
row and column of rinv to zero. This also makes it possible to interpolate
the state vector x_k at points where there are no measurements.
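For example, to treat the measurement at time index k_miss as missing (the
index and sizes are arbitrary), zero out the corresponding slice of rinv:

     % zero slice of rinv => measurement at time k_miss carries no information,
     % so x_{k_miss} is interpolated from the process model alone
     m    = 1;  N = 10;  k_miss = 4;
     rinv = repmat( eye(m) / 0.25, [ 1, 1, N ] );   % measurement variance 0.25
     rinv(:, :, k_miss) = zeros( m, m );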
xOut contains the optimal state sequence ( x_1 , \ldots , x_N ) as a two
dimensional array of size n \times N; for k = 1 , \ldots , N,
 \[
      x_k = xOut(:, k) \;.
\]
rOut contains the Lagrange multiplier sequence corresponding to the
non-negativity constraint on p^+ as a two dimensional array of size
m \times N; for k = 1 , \ldots , N,
 \[
      r_k = rOut(:, k) \;.
\]
sOut contains the Lagrange multiplier sequence corresponding to the
non-negativity constraint on p^- as a two dimensional array of size
m \times N; for k = 1 , \ldots , N,
 \[
      s_k = sOut(:, k) \;.
\]
pPlusOut contains the positive parts of the measurement residual b(x) + B(x) y
as a two dimensional array of size m \times N; for k = 1 , \ldots , N,
 \[
      p_k^+ = pPlusOut(:, k) \;.
\]
pMinusOut contains the negative parts of the measurement residual b(x) + B(x) y
as a two dimensional array of size m \times N; for k = 1 , \ldots , N,
 \[
      p_k^- = pMinusOut(:, k) \;.
\]
info is a matrix with each row corresponding to an iteration of the algorithm.
Five numbers are output per iteration:
the infinity norm of the Kuhn-Tucker conditions,
the one-norm of the Kuhn-Tucker conditions, the duality gap,
\mu, and the number of line search iterations.
Note that ckbs_L1_affine has satisfied the convergence condition if
       info(end, 2) <= epsilon
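As a usage sketch following the syntax above (the scalar random-walk model,
noise levels, and sizes are arbitrary illustrative choices, not taken from
the ckbs examples):

     % toy problem: scalar random walk observed directly through noisy measurements
     n = 1;  m = 1;  N = 50;
     max_itr = 100;  epsilon = 1e-5;
     x_true  = cumsum( 0.1 * randn(n, N) );           % simulated latent states
     z       = x_true + 0.5 * randn(m, N);            % noisy direct measurements
     g       = zeros(n, N);                           % g_1 = 0 is the initial estimate
     h       = zeros(m, N);
     dg      = repmat( eye(n), [ 1, 1, N ] );         % x_k = x_{k-1} + w_k
     dg(:, :, 1) = zeros(n, n);                       % G_1 must be zero
     dh      = repmat( eye(m), [ 1, 1, N ] );         % z_k = x_k + v_k
     qinv    = repmat( eye(n) / 0.01, [ 1, 1, N ] );  % process variance 0.01
     qinv(:, :, 1) = eye(n) / 100;                    % weak prior on the initial state
     rinv    = repmat( eye(m) / 0.25, [ 1, 1, N ] );  % measurement variance 0.25
     [xOut, rOut, sOut, pPlusOut, pMinusOut, info] = ckbs_L1_affine( ...
         max_itr, epsilon, z, g, h, dg, dh, qinv, rinv );
     converged = ( info(end, 2) <= epsilon );         % convergence check above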
The ckbs distribution includes an example and test of ckbs_L1_affine.
It returns true if ckbs_L1_affine passes the test
and false otherwise.