Then we formulate the following optimal hybrid estimation problems for the performance indices (4) and (5).

**The Optimal Hybrid Filtering Problem for Linear Discrete-Time Systems:** Given the matrices $M_d$, $N_d$, $D_0$ and $D_N$, find the pair $(\hat{r}_f(l), \hat{x}_f^{(\hat{r}_f(l))}(l))$, $l \in [0, k]$, minimizing the performance index (4) based on the causal part $Y^k = \{y(l) \mid 0 \le l \le k\}$ of the observed information $Y^N$. The pair $(\hat{r}_f(k), \hat{x}_f^{(\hat{r}_f(k))}(k))$, $k \ge 0$, is called an optimal filter (in the MPT sense) if it minimizes $V_f^{(r)}(k, x)$.

**The Optimal Hybrid Smoothing Problem for Linear Discrete-Time Systems:** Find the pair $(\hat{r}_s(k), \hat{x}_s^{(\hat{r}_s(k))}(k))$, $k \in [0, N]$, minimizing the performance index (4)+(5) based on the whole observed information $Y^N$. The pair $(\hat{r}_s(k), \hat{x}_s^{(\hat{r}_s(k))}(k))$, $0 \le k \le N$, is called an optimal smoother (in the MPT sense) if it minimizes $V_s^{(r)}(k, x)$.

**Remark 2.1.** *In general, if we directly adopt the dynamic programming (DP) method for mode-dependent systems, the computational complexity can increase exponentially with time k ([5,11]). In this chapter, on the other hand, we consider averaged systems and averaged performance indices with regard to the candidates of the mode distributions. Note that this introduction of the averaged systems and performance indices prevents the computational complexity from increasing exponentially when the DP method is applied, as seen in the next section.*

**3. Hybrid Estimation Algorithms**

We assume the following condition:

**A1:** The matrices $\overline{A_d}^{(r)}(k)$, $k = 0, 1, \cdots$, are invertible.

**Remark 3.1.** *As described in [24], note that A1 is a reasonable assumption for discrete-time models. First we consider the following continuous-time model:*

$$\dot{x}_c(t) = A_c(t, \theta_c(t)) x_c(t) + w_c(t, \theta_c(t))$$

*where $\theta_c(t) \in M$, $t \ge 0$, is the switching mode process. If we discretize this model with stepsize $h > 0$ and let $x(k) = x_c(kh)$, the following discretized equation holds:*

$$x(k+1) = [I + h A_c(kh, \theta_c(kh))] x(k) + w_d(k, \theta(k))$$

*where $w_d(k, \theta(k)) = h w_c(kh, \theta_c(kh))$. Let*

$$A_d(k, i) = I + h A_c(kh, i)$$

*and then we obtain*

$$\overline{A_d}^{(r)}(k) = \sum_{i=1}^m \phi_i^{(r)}(k) A_d(k, i) = \sum_{i=1}^m \phi_i^{(r)}(k) [I + h A_c(kh, i)] = I + h \sum_{i=1}^m \phi_i^{(r)}(k) A_c(kh, i).$$

*If we assume that $A_c(t, i)$ is uniformly bounded, then $\overline{A_d}^{(r)}(k)$, $k = 0, 1, \cdots$, is invertible for h small enough.*
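Remark 3.1 can be checked numerically. The sketch below is illustrative only: the matrix `Ac` is borrowed from the Mode 1 matrix of Section 4, and the stepsizes are arbitrary; it verifies that $A_d = I + hA_c$ stays invertible once $h\|A_c\| < 1$.

```python
import numpy as np

# Euler discretization A_d = I + h * A_c. For h small enough that ||h * A_c|| < 1,
# I + h * A_c has no zero eigenvalue, so A_d is invertible (assumption A1).
Ac = np.array([[0.0, 1.0], [-0.8, 0.6]])  # illustrative matrix (Mode 1 of Section 4)
dets = [np.linalg.det(np.eye(2) + h * Ac) for h in (0.5, 0.1, 0.01)]
```

All three determinants stay bounded away from zero, in line with the remark.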

#### **3.1 Optimal Hybrid Filtering**

The dynamic programming (DP) equations associated with the forward control problem to minimize $J_f^{(r)}$ with regard to $w_d(\cdot)$ are given as follows:

$$V_f^{(r)}(k+1, x) = \min_{w_d} \left\{ L^{(r)}(k, \overline{A_d}^{(r),-1}(k)(x - w_d), w_d, y(k)) + V_f^{(r)}(k, \overline{A_d}^{(r),-1}(k)(x - w_d)) \right\}$$

$$V_f^{(r)}(0, x) = \Phi_0(x), \quad r \in \mathbb{N}_0$$

Let


$$V_f^{(r)}(k, x) = x' K_f^{(r)}(k) x + 2 (p_f^{(r)}(k))' x + q_f^{(r)}(k) \tag{6}$$

for some functions $K_f^{(r)}$, $p_f^{(r)}$ and $q_f^{(r)}$ with appropriate dimensions. Then we obtain the following minimizing $w_d(\cdot)$:

$$w_{d,f}^{(r)*}(k, x) = x - \overline{A_d}^{(r)}(k) S_d^{(r)}(k) \left[ \overline{A_d' M_d}^{(r)}(k) x + \overline{H_d' N_d}^{(r)}(k) y(k) - p_f^{(r)}(k) \right]$$

where

$$S_d^{(r)}(k) = \left[ \overline{A_d' M_d A_d}^{(r)}(k) + \overline{H_d' N_d H_d}^{(r)}(k) + K_f^{(r)}(k) \right]^{-1}.$$

Then we obtain the following matrix difference equations, forward vector equations and scalar equations with initial conditions:

$$K_f^{(r)}(k+1) = \overline{M_d}^{(r)}(k) - \overline{M_d A_d}^{(r)}(k) S_d^{(r)}(k) \overline{A_d' M_d}^{(r)}(k), \quad K_f^{(r)}(0) = D_0 \tag{7}$$

$$p_f^{(r)}(k+1) = -\overline{M_d A_d}^{(r)}(k) S_d^{(r)}(k) \left[ \overline{H_d' N_d}^{(r)}(k) y(k) - p_f^{(r)}(k) \right], \quad p_f^{(r)}(0) = -D_0 \hat{x}_0 \tag{8}$$

$$\begin{aligned} q_f^{(r)}(k+1) &= -\left[ \overline{H_d' N_d}^{(r)}(k) y(k) - p_f^{(r)}(k) \right]' S_d^{(r)}(k) \left[ \overline{H_d' N_d}^{(r)}(k) y(k) - p_f^{(r)}(k) \right] \\ &\quad + y'(k) \overline{N_d}^{(r)}(k) y(k) + q_f^{(r)}(k), \quad q_f^{(r)}(0) = \hat{x}_0' D_0 \hat{x}_0 \end{aligned} \tag{9}$$

For any given $k$, by letting $\partial V_f^{(r)} / \partial x = 0$, we obtain

$$K_f^{(r)}(k) x + p_f^{(r)}(k) = 0.$$

Since it can be shown that the matrix $K_f^{(r)}(k)$ is positive-definite, we obtain

$$\overset{\wedge}{\mathbf{x}}\_{f}^{(r)}(k) = -(K\_{f}^{(r)}(k))^{-1} p\_{f}^{(r)}(k)$$


as the minimizer of *Vf* (*r*) (*k*, *x*). Then we obtain

$$\begin{aligned} \hat{x}_f^{(r)}(k+1) &= -(K_f^{(r)}(k+1))^{-1} p_f^{(r)}(k+1) \\ &= \left[ \overline{M_d}^{(r)}(k) - \overline{M_d A_d}^{(r)}(k) S_d^{(r)}(k) \overline{A_d' M_d}^{(r)}(k) \right]^{-1} \overline{M_d A_d}^{(r)}(k) S_d^{(r)}(k) \\ &\quad \times \left[ \overline{H_d' N_d}^{(r)}(k) y(k) + K_f^{(r)}(k) \hat{x}_f^{(r)}(k) \right], \quad \hat{x}_f^{(r)}(0) = \hat{x}_0 \end{aligned} \tag{10}$$

and

$$\begin{aligned} q_f^{(r)}(k+1) &= -\left[ \overline{H_d' N_d}^{(r)}(k) y(k) + K_f^{(r)}(k) \hat{x}_f^{(r)}(k) \right]' S_d^{(r)}(k) \left[ \overline{H_d' N_d}^{(r)}(k) y(k) + K_f^{(r)}(k) \hat{x}_f^{(r)}(k) \right] \\ &\quad + y'(k) \overline{N_d}^{(r)}(k) y(k) + q_f^{(r)}(k), \quad q_f^{(r)}(0) = \hat{x}_0' D_0 \hat{x}_0 \end{aligned} \tag{11}$$

We also obtain

$$V_f^{(r)}(k) = -(\hat{x}_f^{(r)}(k))' K_f^{(r)}(k) \hat{x}_f^{(r)}(k) + q_f^{(r)}(k).$$

Now we have the following filtering algorithm, which gives the solution of **the Optimal Hybrid Filtering Problem for Linear Discrete-Time Systems**.

***Optimal hybrid filtering algorithm***

Step 1) Obtain $K_f^{(r)}(k)$, $\hat{x}_f^{(r)}(k)$ and $q_f^{(r)}(k)$ for $r \in \mathbb{N}_0$ and $k \in [0, N]$ by solving (7), (10) and (11) with initial conditions.

Step 2) Choose $\hat{r}_f(k)$ that minimizes

$$V_f^{(r)}(k) = -(\hat{x}_f^{(r)}(k))' K_f^{(r)}(k) \hat{x}_f^{(r)}(k) + q_f^{(r)}(k).$$

Then the most probable distribution is $\phi^{(\hat{r}_f(k))}(k)$ and the optimal filter is given by

$$(\hat{r}_f(k), \hat{x}_f(k)) = (\hat{r}_f(k), \hat{x}_f^{(\hat{r}_f(k))}(k)).$$
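The two steps above can be sketched numerically. The following is a minimal NumPy sketch of the forward recursions (7)-(11) and of the candidate selection in Step 2; it assumes time-invariant per-mode matrices, realizes each averaged matrix $\overline{\,\cdot\,}^{(r)}$ as a $\phi$-weighted sum over the modes, and the function names (`averaged`, `forward_filter`) as well as the short measurement sequence are illustrative, not from the chapter.

```python
import numpy as np

def averaged(mats, phi):
    # The bar{.}^{(r)} operation: a phi-weighted sum over the modes.
    return sum(p * M for p, M in zip(phi, mats))

def forward_filter(A, H, M, Nw, phi, D0, x0_hat, ys):
    # Forward recursions (7), (10), (11) for one candidate distribution phi.
    m = len(phi)
    M_bar = averaged(M, phi)                                           # bar{M_d}
    MA_bar = averaged([M[i] @ A[i] for i in range(m)], phi)            # bar{M_d A_d}
    AMA_bar = averaged([A[i].T @ M[i] @ A[i] for i in range(m)], phi)  # bar{A_d' M_d A_d}
    HNH_bar = averaged([H[i].T @ Nw[i] @ H[i] for i in range(m)], phi)
    HN_bar = averaged([H[i].T @ Nw[i] for i in range(m)], phi)         # bar{H_d' N_d}
    N_bar = averaged(Nw, phi)
    K = D0.copy()                                                      # K_f(0) = D_0
    p = -D0 @ x0_hat                                                   # p_f(0) = -D_0 x_hat_0
    q = (x0_hat.T @ D0 @ x0_hat).item()                                # q_f(0)
    xh, Vf = [x0_hat], [-(x0_hat.T @ K @ x0_hat).item() + q]
    for y in ys:
        S = np.linalg.inv(AMA_bar + HNH_bar + K)                       # S_d^{(r)}(k)
        e = HN_bar @ y - p                                             # bar{H_d'N_d} y - p_f
        K = M_bar - MA_bar @ S @ MA_bar.T                              # (7); bar{A_d'M_d} = bar{M_d A_d}'
        q = -(e.T @ S @ e).item() + (y.T @ N_bar @ y).item() + q       # (9)/(11)
        p = -MA_bar @ S @ e                                            # (8)
        x = -np.linalg.solve(K, p)                                     # (10): x_f = -K_f^{-1} p_f
        xh.append(x)
        Vf.append(-(x.T @ K @ x).item() + q)                           # criterion of Step 2
    return xh, Vf

# Two-mode data borrowed from Section 4; the measurement sequence ys is made up.
A = [np.array([[0.0, 1.0], [-0.8, 0.6]]), np.array([[0.5, 1.0], [-0.4, 0.6]])]
H = [np.array([[1.0, 0.0]])] * 2
M = [np.eye(2)] * 2
Nw = [np.array([[1.0]])] * 2
phis = [np.array([0.4, 0.6]), np.array([0.5, 0.5]), np.array([0.6, 0.4])]
D0, x0_hat = np.eye(2), np.array([[-0.1], [0.0]])
ys = [np.array([[v]]) for v in (0.1, -0.05, 0.02, 0.0)]
runs = [forward_filter(A, H, M, Nw, phi, D0, x0_hat, ys) for phi in phis]
# Step 2) at the final time: pick the candidate r minimizing V_f^{(r)}(k).
r_hat = min(range(len(phis)), key=lambda r: runs[r][1][-1])
```

The per-candidate recursions run independently, which is the point of Remark 2.1: the cost grows linearly in the number of candidates, not exponentially in time.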

#### **3.2 Optimal Hybrid Smoothing**

The dynamic programming (DP) equations associated with the backward control problem to minimize $J_b^{(r)}$ with regard to $w_d(\cdot)$ are given as follows:

An Approach to Hybrid Smoothing for Linear Discrete-Time Systems with Non-Gaussian Noises (http://dx.doi.org/10.5772/51385)

$$\begin{aligned} V_b^{(r)}(k, x) &= \min_{w_d} \left\{ L^{(r)}(k, x, w_d, y(k)) + V_b^{(r)}(k+1, \overline{A_d}^{(r)}(k) x + w_d) \right\} \\ V_b^{(r)}(N, x) &= \Phi_N(x), \quad r \in \mathbb{N}_0 \end{aligned}$$

Let

$$V\_b^{(r)}(k,\ \mathbf{x}) = \mathbf{x}^\prime K\_b^{(r)}(k)\mathbf{x} + 2(p\_b^{(r)}(k))^\prime \mathbf{x} + q\_b^{(r)}(k) \tag{12}$$

for some functions $K_b^{(r)}$, $p_b^{(r)}$ and $q_b^{(r)}$ with appropriate dimensions. Then we obtain the following minimizing $w_d(\cdot)$:

$$w_{d,b}^{(r)*}(k, x) = \left\{ -\overline{A_d}^{(r)}(k) + T_d^{(r)}(k) \overline{M_d A_d}^{(r)}(k) \right\} x - T_d^{(r)}(k) p_b^{(r)}(k+1)$$

where

$$T_d^{(r)}(k) = \left[ \overline{M_d}^{(r)}(k) + K_b^{(r)}(k+1) \right]^{-1}.$$

Let

$$V\_b^{(r)}(k,\ \mathbf{x}) = \mathbf{x}^\prime K\_b^{(r)}(k)\mathbf{x} + 2(p\_b^{(r)}(k))^\prime \mathbf{x} + q\_b^{(r)}(k) \tag{13}$$

for some functions $K_b^{(r)}$, $p_b^{(r)}$ and $q_b^{(r)}$ with appropriate dimensions. Then we obtain the following matrix difference equations, backward vector equations and scalar equations with terminal conditions:

$$K_b^{(r)}(k) = \overline{A_d' M_d A_d}^{(r)}(k) - \overline{A_d' M_d}^{(r)}(k) T_d^{(r)}(k) \overline{M_d A_d}^{(r)}(k) + \overline{H_d' N_d H_d}^{(r)}(k), \quad K_b^{(r)}(N) = D_N \tag{14}$$

$$p_b^{(r)}(k) = \overline{A_d' M_d}^{(r)}(k) T_d^{(r)}(k) p_b^{(r)}(k+1) - \overline{H_d' N_d}^{(r)}(k) y(k), \quad p_b^{(r)}(N) = -D_N \hat{x}_N \tag{15}$$

$$q_b^{(r)}(k) = -p_b^{(r)'}(k+1) T_d^{(r)}(k) p_b^{(r)}(k+1) + q_b^{(r)}(k+1) + y'(k) \overline{N_d}^{(r)}(k) y(k), \quad q_b^{(r)}(N) = \hat{x}_N' D_N \hat{x}_N \tag{16}$$

For any given *k*, by letting ∂*Vb* (*r*) / ∂*x* =0, we obtain

$$K_b^{(r)}(k) x + p_b^{(r)}(k) = 0.$$

Since it can also be shown that the matrix $K_b^{(r)}(k)$ is positive-definite, we obtain

$$\overset{\wedge}{\mathbf{x}}\_{b}^{(r)}(k) = -(K\_{b}^{(r)}(k))^{-1} p\_{b}^{(r)}(k)$$

as the minimizer of *Vb* (*r*) (*k*, *x*). Then we obtain

$$\begin{aligned} \hat{x}_b^{(r)}(k) &= -(K_b^{(r)}(k))^{-1} p_b^{(r)}(k) \\ &= \left[ \overline{A_d' M_d A_d}^{(r)}(k) - \overline{A_d' M_d}^{(r)}(k) T_d^{(r)}(k) \overline{M_d A_d}^{(r)}(k) + \overline{H_d' N_d H_d}^{(r)}(k) \right]^{-1} \\ &\quad \times \left[ \overline{A_d' M_d}^{(r)}(k) T_d^{(r)}(k) K_b^{(r)}(k+1) \hat{x}_b^{(r)}(k+1) + \overline{H_d' N_d}^{(r)}(k) y(k) \right], \quad \hat{x}_b^{(r)}(N) = \hat{x}_N \end{aligned} \tag{17}$$


and

$$\begin{aligned} q_b^{(r)}(k) &= -\hat{x}_b^{(r)'}(k+1) K_b^{(r)}(k+1) T_d^{(r)}(k) K_b^{(r)}(k+1) \hat{x}_b^{(r)}(k+1) \\ &\quad + q_b^{(r)}(k+1) + y'(k) \overline{N_d}^{(r)}(k) y(k), \quad q_b^{(r)}(N) = \hat{x}_N' D_N \hat{x}_N \end{aligned} \tag{18}$$

We also obtain

$$V_b^{(r)}(k) = -(\hat{x}_b^{(r)}(k))' K_b^{(r)}(k) \hat{x}_b^{(r)}(k) + q_b^{(r)}(k).$$

Using (6) and (12), we can express $V_s^{(r)}(k, x)$ as

$$V_s^{(r)}(k, x) = x' \left[ K_f^{(r)}(k) + K_b^{(r)}(k) \right] x + 2 \left[ p_f^{(r)}(k) + p_b^{(r)}(k) \right]' x + q_f^{(r)}(k) + q_b^{(r)}(k)$$

Letting $\partial V_s^{(r)} / \partial x = 0$, we obtain the following form.

$$\hat{x}_s^{(r)}(k) = -\left[ K_f^{(r)}(k) + K_b^{(r)}(k) \right]^{-1} (p_f^{(r)}(k) + p_b^{(r)}(k))$$

Since $p_f^{(r)}(k) = -K_f^{(r)}(k) \hat{x}_f^{(r)}(k)$ and $p_b^{(r)}(k) = -K_b^{(r)}(k) \hat{x}_b^{(r)}(k)$, for each candidate $r$ of given distributions, we can obtain the following form of the smoothed estimate at time $k$ from the forward and backward filtered estimates.

$$\hat{x}_s^{(r)}(k) = K_s^{(r)}(k) \left[ K_f^{(r)}(k) \hat{x}_f^{(r)}(k) + K_b^{(r)}(k) \hat{x}_b^{(r)}(k) \right]$$

where $K_s^{(r)}(k) = \left[ K_f^{(r)}(k) + K_b^{(r)}(k) \right]^{-1}$.
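The algebraic identity behind this combination can be checked numerically: with $p = -K\hat{x}$ for both filters, the fused estimate $K_s[K_f \hat{x}_f + K_b \hat{x}_b]$ coincides with $-(K_f + K_b)^{-1}(p_f + p_b)$. A small sketch with made-up positive-definite matrices and made-up estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical forward/backward outputs at some time k (K's symmetric positive-definite).
B1 = rng.standard_normal((2, 2)); Kf = B1 @ B1.T + np.eye(2)
B2 = rng.standard_normal((2, 2)); Kb = B2 @ B2.T + np.eye(2)
xf = rng.standard_normal((2, 1)); xb = rng.standard_normal((2, 1))
pf, pb = -Kf @ xf, -Kb @ xb                       # p = -K x_hat for each filter
Ks = np.linalg.inv(Kf + Kb)                       # K_s^{(r)}(k)
xs = Ks @ (Kf @ xf + Kb @ xb)                     # smoothed estimate (fused form)
xs_direct = -np.linalg.solve(Kf + Kb, pf + pb)    # -(K_f + K_b)^{-1}(p_f + p_b)
```

Both expressions agree up to floating-point error, confirming that the smoother is an information-weighted average of the two filters.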

Now we have the following smoothing algorithm, which gives the solution of **the Optimal Hybrid Smoothing Problem for Linear Discrete-Time Systems**.

***Optimal hybrid smoothing algorithm***

Step 1) Obtain $K_b^{(r)}(k)$, $\hat{x}_b^{(r)}(k)$ and $q_b^{(r)}(k)$ for $r \in \mathbb{N}_0$ and $k \in [0, N]$ by solving (14), (17) and (18) with terminal conditions.

Step 2) Choose $\hat{r}_s(k)$ that minimizes

$$V\_s^{\ (r)}(k) = V\_f^{\ (r)}(k) + V\_b^{\ (r)}(k)$$

where

$$V_b^{(r)}(k) = -(\hat{x}_b^{(r)}(k))' K_b^{(r)}(k) \hat{x}_b^{(r)}(k) + q_b^{(r)}(k).$$

Then the most probable distribution is $\phi^{(\hat{r}_s(k))}(k)$ and the optimal smoother is given by

$$(\hat{r}_s(k), \hat{x}_s(k)) = (\hat{r}_s(k), \hat{x}_s^{(\hat{r}_s(k))}(k)) = \left( \hat{r}_s(k), K_s^{(\hat{r}_s(k))}(k) \left[ K_f^{(\hat{r}_s(k))}(k) \hat{x}_f^{(\hat{r}_s(k))}(k) + K_b^{(\hat{r}_s(k))}(k) \hat{x}_b^{(\hat{r}_s(k))}(k) \right] \right)$$

where $K_s^{(\hat{r}_s(k))}(k) = \left[ K_f^{(\hat{r}_s(k))}(k) + K_b^{(\hat{r}_s(k))}(k) \right]^{-1}$.
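Step 1 of the smoothing algorithm rests on the backward triplet (14)-(16). A minimal NumPy sketch of that backward sweep, under the same illustrative assumptions as the filtering sketch (time-invariant per-mode matrices, $\phi$-weighted averages, and a made-up measurement sequence; the function name `backward_filter` is not from the chapter):

```python
import numpy as np

def averaged(mats, phi):
    # The bar{.}^{(r)} operation: a phi-weighted sum over the modes.
    return sum(p * M for p, M in zip(phi, mats))

def backward_filter(A, H, M, Nw, phi, DN, xN_hat, ys):
    # Backward recursions (14)-(16), run from k = N down to k = 0.
    m = len(phi)
    M_bar = averaged(M, phi)                                           # bar{M_d}
    MA_bar = averaged([M[i] @ A[i] for i in range(m)], phi)            # bar{M_d A_d}
    AMA_bar = averaged([A[i].T @ M[i] @ A[i] for i in range(m)], phi)  # bar{A_d' M_d A_d}
    HNH_bar = averaged([H[i].T @ Nw[i] @ H[i] for i in range(m)], phi)
    HN_bar = averaged([H[i].T @ Nw[i] for i in range(m)], phi)         # bar{H_d' N_d}
    N_bar = averaged(Nw, phi)
    K = DN.copy()                                                      # K_b(N) = D_N
    p = -DN @ xN_hat                                                   # p_b(N) = -D_N x_hat_N
    q = (xN_hat.T @ DN @ xN_hat).item()                                # q_b(N)
    Ks, xb = [K], [xN_hat]
    for y in reversed(ys):                                             # y(N-1), ..., y(0)
        T = np.linalg.inv(M_bar + K)                                   # T_d(k); K holds K_b(k+1)
        qn = -(p.T @ T @ p).item() + q + (y.T @ N_bar @ y).item()      # (16)
        pn = MA_bar.T @ T @ p - HN_bar @ y                             # (15); bar{A_d'M_d} = bar{M_d A_d}'
        Kn = AMA_bar - MA_bar.T @ T @ MA_bar + HNH_bar                 # (14)
        K, p, q = Kn, pn, qn
        Ks.insert(0, K)
        xb.insert(0, -np.linalg.solve(K, p))                           # x_b(k) = -K_b(k)^{-1} p_b(k)
    return Ks, xb

# Two-mode data borrowed from Section 4; ys and the terminal estimate are made up.
A = [np.array([[0.0, 1.0], [-0.8, 0.6]]), np.array([[0.5, 1.0], [-0.4, 0.6]])]
H = [np.array([[1.0, 0.0]])] * 2
M = [np.eye(2)] * 2
Nw = [np.array([[1.0]])] * 2
phi = np.array([0.6, 0.4])
DN, xN_hat = np.eye(2), np.zeros((2, 1))
ys = [np.array([[v]]) for v in (0.1, -0.05, 0.02, 0.0)]
Ks, xb = backward_filter(A, H, M, Nw, phi, DN, xN_hat, ys)
```

Combining `xb` with the forward estimates through $K_s^{(r)}(k)$ then yields the smoothed trajectory of Step 2.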

**Remark 3.2.** *Note that, if the system (1) is a single mode system, i.e., the system (1) is independent of θ(k), the forms of the filter and smoother presented in this section are reduced to the well-known ones of the Kalman filter and smoother.*

#### **4. Numerical Examples**

In this section, we study numerical examples to demonstrate the effectiveness of the presented design algorithms.

We consider the following two mode systems and assume that the system parameters are as follows:

$$\begin{aligned} x(k+1) &= A_d(k, \theta(k)) x(k) + w_d(k, \theta(k)), \quad x(0) = x_0, \ \theta(0) = i_0 \\ y(k) &= H_d(k, \theta(k)) x(k) + v_d(k, \theta(k)) \end{aligned} \tag{19}$$

where


$$\begin{array}{cccc} \cdot & \text{Mode 1:} & & \cdot & \text{Mode 2:}\\ A\_1 = \begin{bmatrix} 0 & 1\\ -0.8 & 0.6 \end{bmatrix} & & A\_2 = \begin{bmatrix} 0.5 & 1\\ -0.4 & 0.6 \end{bmatrix} \\\\ H = \begin{bmatrix} 1, & 0 \end{bmatrix} \end{array}$$

and

$$M(t, \ i) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \quad N(t, \ i) = 1, \ D\_0 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

for $i = 1, 2$. We set $\hat{x}_0 = \mathrm{col}(-0.1, 0)$ and the distribution of the initial mode $i_0$ as $(1/2, 1/2)$. $w_d(\cdot, \cdot)$ and $v_d(\cdot, \cdot)$ are stochastic noises which are not restricted to be Gaussian white. The candidates of the mode distributions are given as follows:

$$\phi^{(1)} = \begin{pmatrix} \frac{2}{5} & \frac{3}{5} \end{pmatrix}, \phi^{(2)} = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} \end{pmatrix}, \phi^{(3)} = \begin{pmatrix} \frac{3}{5} & \frac{2}{5} \end{pmatrix}.$$

The paths of $\theta(k)$ are generated randomly, and the performances are compared under the same circumstance, that is, on the same set of paths, so that they can be easily compared.
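One way to generate such paths is sketched below. This is illustrative only: it assumes $\theta(k)$ is drawn independently at each step from the true distribution $\phi^{(3)} = (3/5, 2/5)$, and uses uniform noises as one example of the non-Gaussian disturbances the chapter allows.

```python
import numpy as np

rng = np.random.default_rng(1)
A = [np.array([[0.0, 1.0], [-0.8, 0.6]]), np.array([[0.5, 1.0], [-0.4, 0.6]])]
H = np.array([[1.0, 0.0]])
phi3 = [0.6, 0.4]                                   # true mode distribution phi^(3)
x = np.array([[-0.1], [0.0]])
xs_path, ys_path = [x], []
for k in range(100):
    mode = rng.choice(2, p=phi3)                    # theta(k) sampled from phi^(3)
    w = rng.uniform(-0.05, 0.05, size=(2, 1))       # non-Gaussian process noise (illustrative)
    v = rng.uniform(-0.05, 0.05, size=(1, 1))       # non-Gaussian measurement noise
    ys_path.append(H @ x + v)                       # y(k) from (19)
    x = A[mode] @ x + w                             # x(k+1) from (19)
    xs_path.append(x)
```

The same `ys_path` can then be fed to every candidate $r = 1, 2, 3$ so that the filtering and smoothing performances are compared on identical data.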

We consider the whole system (19) with the true mode distribution $\phi^{(3)}$ over the time interval $k \in [0, 100]$. We verify the effectiveness of the presented hybrid estimation algorithms and compare the estimation performances of the optimal filtering and smoothing algorithms. In order to carry out these algorithms, we solve the forward or backward triplet of difference equations (7), (10), (11) or (14), (17), (18) with the initial or terminal conditions for a given observation $y(\cdot)$ and each candidate $r = 1, 2, 3$ of given distributions, and obtain the pair $(\hat{r}_f(k), \hat{x}_f(k))$ minimizing $V_f^{(r)}(k, x)$ in the filtering case or the pair $(\hat{r}_s(k), \hat{x}_s(k))$ minimizing $V_s^{(r)}(k, x)$ in the smoothing case for $k \in [0, 100]$.

**Figure 1.** The state of the system and filtered values: 1st components

**Figure 2.** The state of the system and smoothed values: 1st components

**Figure 3.** The square errors between the state and filtered values: 1st components

**Figure 4.** The square errors between the state and smoothed values: 1st components

**Figure 5.** The state of the system and filtered values: 2nd components

**Figure 6.** The state of the system and smoothed values: 2nd components

Filtered and smoothed values of the first components of the whole system states are given in Fig. 1 and Fig. 2 respectively. Fig. 3 and Fig. 4 show the square errors between the states and the filtered values, and between the states and the smoothed values, respectively. The mean square errors over the time interval *[0, 100]* are 0.0276 in the filtering case and 0.0151 in the smoothing case. Filtered and smoothed values of the second components of the whole system states are given in Fig. 5 and Fig. 6 respectively. Fig. 7 and Fig. 8 show the square errors between the states and the filtered values, and between the states and the smoothed values, respectively. The mean square errors over the time interval *[0, 100]* are 0.0151 in the filtering case and 0.0118 in the smoothing case. From these figures and calculation results it is shown that the smoother gives better estimates than the filter. Filtered and smoothed mode distributions are given in Fig. 9 and Fig. 10. Notice that the vertical axes show the candidates of the mode distributions, not the modes themselves. In Fig. 9 the filtered values of the mode distributions change rapidly and are left undecided. On the contrary, in Fig. 10 the smoothed values of the mode distributions are firmly decided. Through these ten figures it is shown that the optimal smoother presented in this chapter gives better estimation performance than the optimal filter presented in the previous work [24], from the point of view of both state and mode estimation.

An Approach to Hybrid Smoothing for Linear Discrete-Time Systems with Non-Gaussian Noises http://dx.doi.org/10.5772/51385

We consider the whole system (19) with the true mode distribution *ϕ*<sup>(3)</sup>( ⋅ ) over the time interval *k* ∈ *[0, 100]*. We verify the effectiveness of the presented hybrid estimation algorithms and compare the estimation performances of the optimal filtering and smoothing algorithms. In order to carry out these algorithms we solve the forward or backward triplet of the difference equations (7)(10)(11) or (14)(17)(18) with the initial or terminal conditions for the given observation *y*( ⋅ ) and each candidate *r* = 1, 2, 3 of the given distributions, and obtain the pair (*r̂*<sub>*f*</sub>(*k*), *x̂*(*k*)) minimizing *V*<sub>*f*</sub><sup>(*r*)</sup>(*k*, *x*) in the filtering case or the pair (*r̂*<sub>*s*</sub>(*k*), *x̂*(*k*)) minimizing *V*<sub>*s*</sub><sup>(*r*)</sup>(*k*, *x*) in the smoothing case for *k* ∈ *[0, 100]*. The paths of *θ*(*k*) are generated randomly, and the performances are compared under the same circumstance, that is, the same set of paths, so that they can be easily compared.

Filtered and smoothed values of the first components of the whole system states are given by Fig. 1 and Fig. 2 respectively. Fig. 3 and Fig. 4 show the square errors between the states and the filtered values, and between the states and the smoothed values, respectively. The mean square errors over the time interval *[0, 100]* are 0.0276 in the filtering case and 0.0151 in the smoothing case. These figures and calculation results show that the smoother gives better estimation than the filter. Filtered and smoothed values of the second components of the whole system states are given by Fig. 5 and Fig. 6 respectively. Fig. 7 and Fig. 8 show the square errors between the states and the filtered values, and between the states and the smoothed values, respectively. The mean square errors over the time interval *[0, 100]* are 0.0151 in the filtering case and 0.0118 in the smoothing case. Again, these figures and calculation results show that the smoother gives better estimation than the filter. Filtered and smoothed mode distributions are given by Fig. 9 and Fig. 10. Notice that the vertical axes show the candidates of the mode distributions, not the modes themselves. In Fig. 9 the filtered values of the mode distributions change rapidly and are left undecided. On the contrary, in Fig. 10 the smoothed values of the mode distributions are firmly decided. Through these ten figures it is shown that the optimal smoother presented in this chapter gives better estimation performance than the optimal filter presented in the previous work [24] from the point of view of both state and mode estimation.
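The filter-versus-smoother comparison above can be illustrated with a standard linear-Gaussian analogue. The sketch below is *not* the chapter's hybrid non-Gaussian algorithm: it runs an ordinary Kalman filter forward and a Rauch-Tung-Striebel smoother backward on a hypothetical scalar system (all parameters are invented for illustration) and prints the two mean square errors. The smoothed error variance is never larger than the filtered one, which is the mechanism behind the improvement seen in the figures.

```python
import numpy as np

# Hypothetical scalar model (illustrative only): x(k+1) = a x(k) + w(k),
# y(k) = h x(k) + v(k), with Gaussian noises of variances q and rmeas.
rng = np.random.default_rng(0)
a, hobs, q, rmeas, N = 0.95, 1.0, 0.1, 0.5, 100

x = np.zeros(N + 1); y = np.zeros(N + 1)
for k in range(N):
    x[k + 1] = a * x[k] + rng.normal(0.0, np.sqrt(q))
    y[k + 1] = hobs * x[k + 1] + rng.normal(0.0, np.sqrt(rmeas))

# Forward pass: Kalman filter (xp/Pp predicted, xf/Pf filtered).
xf = np.zeros(N + 1); Pf = np.zeros(N + 1)
xp = np.zeros(N + 1); Pp = np.zeros(N + 1)
Pf[0] = 1.0
for k in range(N):
    xp[k + 1] = a * xf[k]                       # time update
    Pp[k + 1] = a * Pf[k] * a + q
    K = Pp[k + 1] * hobs / (hobs * Pp[k + 1] * hobs + rmeas)
    xf[k + 1] = xp[k + 1] + K * (y[k + 1] - hobs * xp[k + 1])  # measurement update
    Pf[k + 1] = (1.0 - K * hobs) * Pp[k + 1]

# Backward pass: Rauch-Tung-Striebel smoother.
xs = xf.copy(); Ps = Pf.copy()
for k in range(N - 1, -1, -1):
    G = Pf[k] * a / Pp[k + 1]                   # smoother gain
    xs[k] = xf[k] + G * (xs[k + 1] - xp[k + 1])
    Ps[k] = Pf[k] + G * (Ps[k + 1] - Pp[k + 1]) * G

mse_f = np.mean((x - xf) ** 2)
mse_s = np.mean((x - xs) ** 2)
print(f"filter MSE {mse_f:.4f}  smoother MSE {mse_s:.4f}")
```

The backward pass reuses the forward quantities, so the smoother costs only one extra sweep over the data, mirroring the two-filter structure of the chapter's algorithm.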

tion of the averaged systems and performance indices prevents the computational complexity from increasing exponentially with the passage of time. For these performance indices we have formulated the optimal filtering and smoothing problems based on the available observed information. The estimation problems have been reduced to optimal control problems of finding the noises minimizing the introduced performance indices. We have derived the forward and backward matrix difference equations and the forward and backward filter equations with the initial and terminal conditions, respectively, which give the necessary conditions for the solvability of the optimal estimation problems. Then we have presented the optimal hybrid smoothing algorithm by the two-filter approach. Finally, we have studied numerical examples to compare the estimation performances of filtering and smoothing. We have obtained better estimation performance with the smoothing algorithm than with the filtering algorithm from the point of view of both state and mode estimation.
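The complexity claim above can be made concrete. With *m* modes, conditioning on every mode sequence would require *m*<sup>*N*</sup> branches after *N* steps, whereas propagating the averaged matrix *Ā*<sub>*d*</sub><sup>(*r*)</sup>(*k*) = Σ<sub>*i*</sub> *ϕ*<sub>*i*</sub><sup>(*r*)</sup>(*k*) *A*<sub>*d*</sub>(*k*, *i*) costs one matrix product per step and per candidate *r*. The matrices and the mode distribution in the sketch below are hypothetical placeholders, not the chapter's numerical example.

```python
import numpy as np

# m modes, state dimension n, horizon N (all values illustrative).
m, n, N = 3, 2, 50
A_list = [np.array([[0.9, 0.1], [0.0, 0.8]]),
          np.array([[0.7, 0.2], [0.1, 0.9]]),
          np.array([[1.0, 0.0], [0.2, 0.6]])]

def phi(k):
    # Hypothetical mode-distribution candidate phi^(r)(k); entries sum to 1.
    return np.array([0.5, 0.3, 0.2])

x = np.array([1.0, -1.0])
for k in range(N):
    # Averaged matrix: A_bar(k) = sum_i phi_i(k) * A_d(k, i).
    A_bar = sum(phi(k)[i] * A_list[i] for i in range(m))
    x = A_bar @ x   # one n x n product per step: O(N) total,
                    # versus m**N branches for exhaustive mode conditioning
print(x)
```

Assumption A1 (invertibility of the averaged matrices) can be checked numerically at each step via the determinant of `A_bar`, which is what the discretization argument of Remark 3.1 guarantees for small enough stepsize.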


With regard to continuous-time cases, refer to [16,17,25]. In particular, in [17,25] the case where the systems concerned are assumed to be Markovian jump systems is also considered. In these papers the concept of quasi-stationary distributions is introduced for the Markovian mode processes, and the near optimality of limiting estimators with the quasi-stationary distributions is shown. It is well known that the concept of a quasi-stationary distribution is very important and highly practical for grasping the behavior of stochastic processes over a long run time. As a further research issue, it is very significant that the quasi-stationary distributions of stochastic mode processes, and estimators with these distributions, be investigated for the discrete-time hybrid systems.

**Appendix: Principle of Hybrid Optimality**

With regard to the optimal control problems considered in this chapter, it is obvious that the principle of optimality does not hold for the optimal trajectory *x*<sup>∗</sup>( ⋅ ) with the optimal control input *w*<sub>*d*</sub><sup>(*r*)∗</sup>( ⋅ ) and each performance index for each mode distribution candidate *r* ∈ ℕ<sub>0</sub>. However, for the pair (*r̂*( ⋅ ), *x̂*( ⋅ )) of the optimal mode distribution candidate and the optimal trajectory with the optimal control inputs *w*<sub>*d*</sub><sup>(*r*)∗</sup>( ⋅ ), the following principles of hybrid optimality hold. These principles give a basis of validity for the hybrid estimation algorithms presented in this chapter.

Consider the following system

$$\boldsymbol{x}(k+1) = \overline{A\_d}^{(r)}(k)\boldsymbol{x}(k) + \boldsymbol{w}\_d(k) \tag{20}$$

$$\boldsymbol{y}(k) = H\_d(k, \ \theta(k))\boldsymbol{x}(k) + \boldsymbol{v}\_d(k, \ \theta(k))$$

and the performance indices
