Input estimates can be obtained from

$$-\dot{\zeta}(t) = \big(A(t) - K(t)C(t)\big)^{T}\zeta(t) + C^{T}(t)R^{-1}(t)\alpha(t), \tag{96}$$

$$\hat{w}(t \mid T) = Q(t)\big(B(t) - K(t)D(t)\big)^{T}\zeta(t) + Q(t)D^{T}(t)R^{-1}(t)\alpha(t), \tag{97}$$

in which $\zeta(t) \in \mathbb{R}^{n}$ is an auxiliary state.

*Procedure 2.* Input estimates can be calculated via the following two steps. Step 1. Operate $\hat{\Delta}^{-1}$ on the measurements *z*(*t*) using (89) to obtain α(*t*). Step 2. In lieu of (96), operate the adjoint of (96) on the time-reversed transpose of α(*t*). Then take the time-reversed transpose of the result.

**State Estimation**

Smoothed state estimates can be obtained by defining a reference system and combining (89) with (96) – (97) in series. Suppose that two systems $\mathcal{H}_1$ and $\mathcal{H}_2$ have state-space parameters $\{A_1, B_1, C_1, D_1\}$ and $\{A_2, B_2, C_2, D_2\}$, respectively. Then the series system $\mathcal{H}_2\mathcal{H}_1$ is parameterised by

$$\begin{bmatrix} A_1 & 0 \\ B_2 C_1 & A_2 \end{bmatrix}, \quad \begin{bmatrix} B_1 \\ B_2 D_1 \end{bmatrix}, \quad \begin{bmatrix} D_2 C_1 & C_2 \end{bmatrix}, \quad D_2 D_1 .$$

That is, a smoother for state estimation is given by (89), (96) and (97). In frequency-domain estimation problems, minimum-order solutions are found by exploiting pole-zero cancellations, see Example 1.13 of Chapter 1. Here in the time domain, (89), (96), (97) is not a minimum-order solution and some numerical model order reduction may be required.

Suppose that *C*(*t*) is of rank *n* and *D*(*t*) = 0. In this special case, a 2*n*-order solution for state estimation can be obtained from (91) and

$$\hat{x}(t \mid T) = C^{\#}(t)\hat{y}(t \mid T), \tag{98}$$

where

$$C^{\#}(t) = \big(C^{T}(t)C(t)\big)^{-1}C^{T}(t) \tag{99}$$

denotes the Moore-Penrose pseudoinverse.

An analysis of minimum-variance smoother performance requires an identity which is described after introducing some additional notation. Let $\alpha = \mathcal{G}_0 w$ denote the output of a linear time-varying system $\mathcal{G}_0$ having the realisation

$$\dot{x}(t) = A(t)x(t) + w(t), \tag{100}$$

$$\alpha(t) = x(t), \tag{101}$$
where $w(t) \in \mathbb{R}^{n}$ and $A(t) \in \mathbb{R}^{n \times n}$. By inspection of (100) – (101), the output of the inverse system $w = \mathcal{G}_0^{-1}\alpha$ is given by

$$w(t) = \dot{\alpha}(t) - A(t)\alpha(t). \tag{102}$$
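The inverse relation (102) can be checked by simulation. The sketch below discretises $\mathcal{G}_0$ with an Euler step (the step size, horizon and stable two-state *A* are assumptions for illustration) and recovers *w* from α exactly, because the same discretisation is being inverted.

```python
import numpy as np

# Euler discretisation of G0 in (100)-(101); parameters are illustrative.
rng = np.random.default_rng(1)
dt, N = 1e-3, 5000
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
w = rng.standard_normal((N, 2))

alpha = np.zeros((N + 1, 2))            # alpha = x, the state of (100)
for k in range(N):
    alpha[k + 1] = alpha[k] + dt * (A @ alpha[k] + w[k])

# Inverse system (102): w = alpha' - A alpha (forward difference for alpha').
w_rec = (alpha[1:] - alpha[:-1]) / dt - alpha[:-1] @ A.T
assert np.allclose(w_rec, w)            # exact for this Euler discretisation
```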

Similarly, let $\beta = \mathcal{G}_0^{H} u$ denote the output of the adjoint system $\mathcal{G}_0^{H}$, which from Lemma 1 has the realisation

$$-\dot{\zeta}(t) = A^{T}(t)\zeta(t) + u(t), \tag{103}$$

$$
\beta(t) = \zeta(t) \,. \tag{104}
$$

It follows that the output of the inverse system $u = \mathcal{G}_0^{-H}\beta$ is given by

$$u(t) = -\dot{\beta}(t) - A^{T}(t)\beta(t). \tag{105}$$

The following identity is required in the characterisation of smoother performance

$$-P(t)A^{T}(t) - A(t)P(t) = P(t)\mathcal{G}_0^{-H} + \mathcal{G}_0^{-1}P(t), \tag{106}$$

where *P*(*t*) is an arbitrary matrix of compatible dimensions. The above equation can be verified by using (102) and (105) within (106). Using the above notation, the exact Wiener-Hopf factor satisfies

$$\Delta\Delta^{H} = C(t)\mathcal{G}_0 B(t)Q(t)B^{T}(t)\mathcal{G}_0^{H} C^{T}(t) + R(t). \tag{107}$$

It is observed below that the approximate Wiener-Hopf factor (86) approaches the exact Wiener-Hopf factor whenever the problem is locally stationary, that is, whenever *A*(*t*), *B*(*t*), *C*(*t*), *Q*(*t*) and *R*(*t*) change sufficiently slowly, so that $\dot{P}(t)$ of (87) approaches the zero matrix.
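The following sketch illustrates the locally stationary case by integrating a Riccati equation of the Kalman-Bucy form that (87) is assumed to take, using the time-invariant scalars of Example 5; *P*(*t*) settles to a constant and $\dot{P}(t)$ decays toward zero, so the approximate factor (86) approaches (107).

```python
import numpy as np

# Assumed Kalman-Bucy form for the Riccati equation referenced in (87), with
# time-invariant scalars as in Example 5: A = -1, B = C = Q = R = 1.
A, B, C, Q, R = -1.0, 1.0, 1.0, 1.0, 1.0
dt, P = 1e-3, 0.0
for _ in range(20000):
    # P' = AP + PA^T + BQB^T - P C^T R^{-1} C P
    P_dot = A * P + P * A + B * Q * B - P * C * (1.0 / R) * C * P
    P += dt * P_dot

print(f"P = {P:.4f}, P_dot = {P_dot:.2e}")   # P -> sqrt(2) - 1, P_dot -> 0
```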

*Lemma 10 [8]: In respect of the signal model (1) – (2) with D(t) = 0, E{w(t)} = E{v(t)} = 0, E{w(t)w^T(t)} = Q(t), E{v(t)v^T(t)} = R(t), E{w(t)v^T(t)} = 0 and the quantities defined above,*

$$\hat{\Delta}\hat{\Delta}^{H} = \Delta\Delta^{H} - C(t)\mathcal{G}_0\dot{P}(t)\mathcal{G}_0^{H}C^{T}(t). \tag{108}$$



*Proof: The approximate Wiener-Hopf factor may be written as* $\hat{\Delta} = C(t)\mathcal{G}_0K(t)R^{1/2}(t) + R^{1/2}(t)$. *It is easily shown that* $\hat{\Delta}\hat{\Delta}^{H} = C(t)\big(\mathcal{G}_0P(t) + P(t)\mathcal{G}_0^{H} + \mathcal{G}_0K(t)R(t)K^{T}(t)\mathcal{G}_0^{H}\big)C^{T}(t) + R(t)$, *and using* (106) *gives* $\hat{\Delta}\hat{\Delta}^{H} = C(t)\mathcal{G}_0\big(B(t)Q(t)B^{T}(t) - \dot{P}(t)\big)\mathcal{G}_0^{H}C^{T}(t) + R(t)$. *The result follows by comparing* $\hat{\Delta}\hat{\Delta}^{H}$ *and* (107). *□*

Consequently, the minimum-variance smoother (88) achieves the best-possible estimator performance, namely $\|\mathcal{R}_{ei2}\mathcal{R}_{ei2}^{H}\|_2 = 0$, whenever the problem is locally stationary.

*Lemma 11 [8]: The output estimation smoother (88) satisfies* 

$$\mathcal{R}_{ei2} = R(t)\Big[\big(\Delta\Delta^{H}\big)^{-1} - \big(\Delta\Delta^{H} - C(t)\mathcal{G}_0\dot{P}(t)\mathcal{G}_0^{H}C^{T}(t)\big)^{-1}\Big]\Delta. \tag{109}$$

*Proof: Substituting (88) into (77) yields* 

$$\mathcal{R}_{ei2} = R(t)\big[(\Delta\Delta^{H})^{-1} - (\hat{\Delta}\hat{\Delta}^{H})^{-1}\big]\Delta. \tag{110}$$

*The result is now immediate from (108) and (110). □*

Conditions for the convergence of the Riccati difference equation solution (87) and hence the asymptotic optimality of the smoother (88) are set out below.

*Lemma 12 [8]: Let* $S(t) = C^{T}(t)R^{-1}(t)C(t)$. *If (i) there exist solutions* $P(t) \geq P(t+\delta_t)$ *of* (87) *for a* $t > \delta_t > 0$; *and*

 *(ii)* 

$$\begin{bmatrix} Q(t) & A(t) \\ A^{T}(t) & -S(t) \end{bmatrix} \geq \begin{bmatrix} Q(t + \delta_t) & A(t + \delta_t) \\ A^{T}(t + \delta_t) & -S(t + \delta_t) \end{bmatrix} \tag{111}$$

*for all* $t > \delta_t$, *then*

$$\lim_{t \to \infty} \left\| \mathcal{R}_{ei2} \mathcal{R}_{ei2}^{H} \right\|_2 = 0. \tag{112}$$

*Proof: Conditions (i) and (ii) together with Theorem 1 imply* $P(t) \geq P(t+\delta_t)$ *for all* $t > \delta_t$ *and* $\lim_{t \to \infty}\dot{P}(t) = 0$. *The claim (112) is now immediate from Lemma 11. □*

#### **6.5.5 Performance Comparison**

The following scalar time-invariant examples compare the performance of the minimum-variance filter (92), maximum-likelihood smoother (50), Fraser-Potter smoother (73) and minimum-variance smoother (88) under Gaussian and nongaussian noise conditions.

*Example 5 [9].* Suppose that *A* = −1 and *B* = *C* = *Q* = 1. Simulations were conducted using *T* = 100 s, d*t* = 1 ms and 1000 realisations of Gaussian noise processes. The mean-square-error (MSE) exhibited by the filter and smoothers as a function of the input signal-to-noise ratio (SNR) is shown in Fig. 2. As expected, it can be seen that the smoothers outperform the filter. Although the minimum-variance smoother exhibits the lowest mean-square error, the performance benefit diminishes at high signal-to-noise ratios.

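A discrete-time Monte-Carlo sketch in the spirit of Example 5 is given below; the Euler discretisation, run counts, and the use of a discrete Kalman filter with a Rauch-Tung-Striebel backward pass (standing in for the book's continuous-time recursions) are assumptions for illustration.

```python
import numpy as np

# Discrete-time analogue of Example 5 (A = -1, B = C = Q = 1); parameters
# below are illustrative assumptions.
rng = np.random.default_rng(2)
dt, N, runs = 1e-3, 2000, 100
F, Qd, R = 1.0 - dt, dt, 1.0          # x[k+1] = F x[k] + w[k], z[k] = x[k] + v[k]

mse_f = mse_s = 0.0
for _ in range(runs):
    w = rng.normal(scale=np.sqrt(Qd), size=N)
    x = np.empty(N)
    x[0] = 0.0
    for k in range(N - 1):
        x[k + 1] = F * x[k] + w[k]
    z = x + rng.normal(scale=np.sqrt(R), size=N)

    # Kalman filter, storing predicted/corrected moments for the smoother.
    xp, Pp, xf, Pf = np.empty(N), np.empty(N), np.empty(N), np.empty(N)
    xc, Pc = 0.0, 1.0
    for k in range(N):
        xp[k], Pp[k] = F * xc, F * Pc * F + Qd
        K = Pp[k] / (Pp[k] + R)
        xc, Pc = xp[k] + K * (z[k] - xp[k]), (1.0 - K) * Pp[k]
        xf[k], Pf[k] = xc, Pc

    # Rauch-Tung-Striebel backward pass yields the smoothed estimates.
    xs = xf.copy()
    for k in range(N - 2, -1, -1):
        G = Pf[k] * F / Pp[k + 1]
        xs[k] = xf[k] + G * (xs[k + 1] - xp[k + 1])

    mse_f += np.mean((xf - x) ** 2) / runs
    mse_s += np.mean((xs - x) ** 2) / runs

print(f"filter MSE = {mse_f:.3e}, smoother MSE = {mse_s:.3e}")  # smoother lower
```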

Figure 2. MSE versus SNR for Example 5: (i) minimum-variance smoother, (ii) Fraser-Potter smoother, (iii) maximum-likelihood smoother and (iv) minimum-variance filter.

Figure 3. MSE versus SNR for Example 6: (i) minimum-variance smoother, (ii) Fraser-Potter smoother, (iii) maximum-likelihood smoother and (iv) minimum-variance filter.

*Example 6 [9].* Suppose instead that the process noise is the unity-variance deterministic signal $w(t) = \sigma_{\sin(t)}^{-1}\sin(t)$, where $\sigma_{\sin(t)}^{2}$ denotes the sample variance of $\sin(t)$. The results of simulations employing the sinusoidal process noise and Gaussian measurement noise are shown in Fig. 3. Once again, the smoothers exhibit better performance than the filter. It can be seen that the minimum-variance smoother provides the best mean-square-error performance. The minimum-variance smoother appears to be less perturbed by nongaussian noises because it does not rely on assumptions about the underlying distributions.

#### **6.6 Conclusion**

The fixed-point smoother produces state estimates at some previous point in time, that is,

$$\dot{\hat{x}}(\tau \mid t) = \Sigma(t)C^{T}(t)R^{-1}(t)\big(z(t) - C(t)\hat{x}(t \mid t)\big),$$

where *Σ*(*t*) is the smoother error covariance.

In fixed-lag smoothing, state estimates are calculated at a fixed time delay *τ* behind the current measurements. This smoother has the form

$$\dot{\hat{x}}(t \mid t+\tau) = A(t)\hat{x}(t \mid t+\tau) + B(t)Q(t)B^{T}(t)P^{-1}(t)\big(\hat{x}(t \mid t+\tau) - \hat{x}(t \mid t)\big) + P(t)\Phi^{T}(t+\tau, t)C^{T}(t+\tau)R^{-1}(t+\tau)\big(z(t+\tau) - C(t+\tau)\hat{x}(t+\tau)\big),$$

where Ф(*t* + *τ, t*) is the transition matrix of the minimum-variance filter.



Three common fixed-interval smoothers are listed in Table 1; these are for retrospective (or off-line) data analysis. The Rauch-Tung-Striebel (RTS) smoother and Fraser-Potter (FP) smoother are minimum-order solutions. The RTS smoother differential equation evolves backward in time, in which $G(t) = B(t)Q(t)B^{T}(t)P^{-1}(t)$ is the smoothing gain. The FP smoother employs a linear combination of forward state estimates and backward state estimates obtained by running a filter over the time-reversed measurements. The optimum minimum-variance solution, in which $A(t) - K(t)C(t)$ appears and *K*(*t*) is the predictor gain, involves a cascade of forward and adjoint predictions. It can be seen that the optimum minimum-variance smoother is the most complex, and so any performance benefits need to be reconciled with the increased calculation cost.


| | ASSUMPTIONS | MAIN RESULTS |
|---|---|---|
| Signals and system | $E\{w(t)\} = E\{v(t)\} = 0$. $E\{w(t)w^{T}(t)\} = Q(t) > 0$ and $E\{v(t)v^{T}(t)\} = R(t) > 0$ are known. $A(t)$, $B(t)$ and $C(t)$ are known. | $\dot{x}(t) = A(t)x(t) + B(t)w(t)$, $y(t) = C(t)x(t)$, $z(t) = y(t) + v(t)$ |
| RTS smoother | Assumes that the filtered and smoothed states are normally distributed. $\hat{x}(t \mid t)$ previously calculated by Kalman filter. | $\dot{\hat{x}}(t \mid T) = A(t)\hat{x}(t \mid T) + G(t)\big(\hat{x}(t \mid T) - \hat{x}(t \mid t)\big)$ |
| FP smoother | $\hat{x}(t \mid t)$ previously calculated by Kalman filter. | $\hat{x}(t \mid T) = \big(P_f^{-1}(t) + P_b^{-1}(t)\big)^{-1}\big(P_f^{-1}(t)\hat{x}_f(t \mid t) + P_b^{-1}(t)\hat{x}_b(t \mid t)\big)$ |
| Optimal smoother | | $\dot{\hat{x}}(t) = \big(A(t) - K(t)C(t)\big)\hat{x}(t) + K(t)z(t)$, $\alpha(t) = -R^{-1/2}(t)C(t)\hat{x}(t) + R^{-1/2}(t)z(t)$; $-\dot{\zeta}(t) = \big(A(t) - K(t)C(t)\big)^{T}\zeta(t) - C^{T}(t)R^{-1/2}(t)\alpha(t)$, $\beta(t) = K^{T}(t)\zeta(t) + R^{-1/2}(t)\alpha(t)$; $\hat{y}(t \mid T) = z(t) - R(t)\beta(t)$ |

Table 1. Continuous-time fixed-interval smoothers.
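The FP row of Table 1 can be read as inverse-covariance fusion of forward and backward estimates. The sketch below shows that fusion step alone, under the assumption that the two estimates and covariances have already been produced by a forward filter and a filter run over time-reversed measurements.

```python
import numpy as np

def fp_combine(x_f, P_f, x_b, P_b):
    """Inverse-covariance fusion of forward and backward state estimates."""
    P = np.linalg.inv(np.linalg.inv(P_f) + np.linalg.inv(P_b))
    x = P @ (np.linalg.inv(P_f) @ x_f + np.linalg.inv(P_b) @ x_b)
    return x, P

# Assumed filter outputs (values are illustrative only).
x_f, P_f = np.array([1.1, 0.4]), np.diag([0.5, 0.2])
x_b, P_b = np.array([0.9, 0.6]), np.diag([0.3, 0.4])
x_s, P_s = fp_combine(x_f, P_f, x_b, P_b)
print(x_s, np.diag(P_s))    # fused variances are below either input's
```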

The output estimation error covariance for the general estimation problem can be written as $\mathcal{R}_{ei}\mathcal{R}_{ei}^{H} = \mathcal{R}_{ei1}\mathcal{R}_{ei1}^{H} + \mathcal{R}_{ei2}\mathcal{R}_{ei2}^{H}$, where $\mathcal{R}_{ei1}\mathcal{R}_{ei1}^{H}$ specifies a lower performance bound and


$\mathcal{R}_{ei2}\mathcal{R}_{ei2}^{H}$ is a function of the estimator solution. The optimal smoother solution achieves $\|\mathcal{R}_{ei2}\mathcal{R}_{ei2}^{H}\|_2 = 0$ and provides the best mean-square-error performance, provided of course that the problem assumptions are correct. The minimum-variance smoother solution also attains best-possible performance whenever the problem is locally stationary, that is, when *A*(*t*), *B*(*t*), *C*(*t*), *Q*(*t*) and *R*(*t*) change sufficiently slowly.

#### **6.7 Problems**


**Problem 1.** Write down augmented state-space matrices $A^{(a)}(t)$, $B^{(a)}(t)$ and $C^{(a)}(t)$ for the continuous-time fixed-point smoother problem. (i) Substitute the above matrices into $\dot{P}^{(a)}(t) = A^{(a)}(t)P^{(a)}(t) + P^{(a)}(t)\big(A^{(a)}(t)\big)^{T} - P^{(a)}(t)\big(C^{(a)}(t)\big)^{T}R^{-1}(t)C^{(a)}(t)P^{(a)}(t) + B^{(a)}(t)Q(t)\big(B^{(a)}(t)\big)^{T}$ to obtain the component Riccati differential equations. (ii) Develop expressions for the continuous-time fixed-point smoother estimate and the smoother gain.


**Problem 2.** The Hamiltonian equations (60) were derived from the forward version of the maximum likelihood smoother (54). Derive the alternative form

$$\begin{bmatrix} \dot{\hat{x}}(t \mid T) \\ \dot{\xi}(t \mid T) \end{bmatrix} = \begin{bmatrix} A(t) & B(t)Q(t)B^{T}(t) \\ C^{T}(t)R^{-1}(t)C(t) & A^{T}(t) \end{bmatrix} \begin{bmatrix} \hat{x}(t \mid T) \\ \xi(t \mid T) \end{bmatrix} - \begin{bmatrix} 0 \\ C^{T}(t)R^{-1}(t)z(t) \end{bmatrix}$$

from the backward smoother (50). Hint: use the backward Kalman-Bucy filter and the backward Riccati equation.

**Problem 3.** It is shown in [6] and [17] that the intermediate variable within the Hamiltonian equations (60) is given by

$$\xi(t \mid T) = \int_t^T \Phi^{T}(s, t)C^{T}(s)R^{-1}(s)\big(z(s) - C(s)\hat{x}(s \mid s)\big)\,ds,$$

where $\Phi(s, t)$ is the transition matrix of the Kalman-Bucy filter. Use the above equation to derive

$$-\dot{\xi}(t \mid T) = -C^{T}(t)R^{-1}(t)C(t)\hat{x}(t \mid T) + A^{T}(t)\xi(t \mid T) - C^{T}(t)R^{-1}(t)z(t).$$

**Problem 4.** Show that the adjoint of the system having state-space parameters $\begin{bmatrix} A(t) & B(t) \\ C(t) & D(t) \end{bmatrix}$ is parameterised by $\begin{bmatrix} A^{T}(t) & C^{T}(t) \\ B^{T}(t) & D^{T}(t) \end{bmatrix}$.
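As a numerical plausibility check for Problem 4 (not a proof), one can discretise a system and its transposed-parameter adjoint and confirm the adjoint inner-product identity ⟨𝓗u, y⟩ = ⟨u, 𝓗^H y⟩; the scalar model and Euler step below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, N = 1e-2, 300
A = np.array([[-1.0]])                      # assumed scalar model for the check
B = C = D = np.array([[1.0]])

def operate(u, A, B, C, D):
    """Euler discretisation of x' = Ax + Bu, y = Cx + Du with x(0) = 0."""
    x, y = np.zeros(A.shape[0]), np.empty(len(u))
    for k in range(len(u)):
        y[k] = (C @ x + D @ u[[k]])[0]
        x = x + dt * (A @ x + B @ u[[k]])
    return y

u, y = rng.standard_normal(N), rng.standard_normal(N)
Hu = operate(u, A, B, C, D)
# Adjoint: run the transposed parameters over the time-reversed input, then
# reverse the result -- the recipe encoded by Problem 4's parameterisation.
HHy = operate(y[::-1], A.T, C.T, B.T, D.T)[::-1]
assert np.isclose(np.dot(Hu, y), np.dot(u, HHy))   # <Hu, y> = <u, H^H y>
```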


#### **6.8 Glossary**

*G*(*t*): Gain of the minimum-variance smoother developed by Rauch, Tung and Striebel.

$\mathcal{R}_{ei}$: A linear system that operates on the inputs $i = \begin{bmatrix} v^T & w^T \end{bmatrix}^T$ and generates the output estimation error *e*.

$\Delta$: The Wiener-Hopf factor, which satisfies $\Delta\Delta^{H} = \mathcal{G}Q\mathcal{G}^{H} + R$.

#### **6.9 References**

[1] J. S. Meditch, "A Survey of Data Smoothing for Linear and Nonlinear Dynamic Systems", *Automatica*, vol. 9, pp. 151 – 162, 1973.

[2] T. Kailath, "A View of Three Decades of Linear Filtering Theory", *IEEE Transactions on Information Theory*, vol. IT-20, no. 2, pp. 146 – 181, Mar. 1974.

[3] H. E. Rauch, F. Tung and C. T. Striebel, "Maximum Likelihood Estimates of Linear Dynamic Systems", *AIAA Journal*, vol. 3, no. 8, pp. 1445 – 1450, Aug. 1965.

[4] D. C. Fraser and J. E. Potter, "The Optimum Linear Smoother as a Combination of Two Optimum Linear Filters", *IEEE Transactions on Automatic Control*, vol. AC-14, no. 4, pp. 387 – 390, Aug. 1969.

[5] J. S. Meditch, "On Optimal Fixed Point Linear Smoothing", *International Journal of Control*.

[6] A. P. Sage and J. L. Melsa, *Estimation Theory with Applications to Communications and Control*, McGraw-Hill, New York, 1971.

[7] J. B. Moore, "Fixed-Lag Smoothing Results for Linear Dynamical Systems", *A.T.R.*, vol. 7, no. 2, pp. 16 – 21, 1973.

[8] G. A. Einicke, "Asymptotic Optimality of the Minimum-Variance Fixed-Interval Smoother", *IEEE Transactions on Signal Processing*, vol. 55, no. 4, pp. 1543 – 1547, Apr. 2007.

[9] G. A. Einicke, J. C. Ralston, C. O. Hargrave, D. C. Reid and D. W. Hainsworth, "Longwall Mining Automation, An Application of Minimum-Variance Smoothing", *IEEE Control Systems Magazine*, vol. 28, no. 6, pp. 28 – 37, Dec. 2008.

[10] G. A. Einicke, "A Solution to the Continuous-Time H-infinity Fixed-Interval Smoother Problem", *IEEE Transactions on Automatic Control*, vol. 54, no. 12, pp. 2904 – 2908, Dec. 2009.