**3.1. LMI characterizations**

**Theorem 1**. Consider the time-varying system (1), the observer dynamics (3), and the error dynamics (4) satisfying system properties (S1) and (S2). Then (T1) implies (T2), where (T1) and (T2) are as follows.

**(T1)** There exist matrices *P*1 ≻ 0, *P*2 ≻ 0, *K*, and *L*, and positive scalars *γ* and *β*, such that

$$
\begin{pmatrix}
\Pi\_1(P\_1, K) & \star \\
K^T B\_1^T P\_1 + P\_2(I - F(t))A & \Pi\_2(P\_2, L)
\end{pmatrix} \prec 0,\tag{10}
$$

and matrices *P*3 ≻ 0 and *Q* ≻ 0 with the adaptive scheme of *ϵ*(*t*) satisfying

$$\dot{\epsilon}(t) = -P\_3 \left( Q + \frac{1}{2} \beta^2 \operatorname{diag}[\dot{\varrho}(t)]^T \operatorname{diag}[\dot{\varrho}(t)] \right) \epsilon(t). \tag{11}$$

The matrices *Π*1 and *Π*2 defined in (10) are

$$\Pi\_1(P\_1, K) = \left(F(t)A + B\_1K\right)^T P\_1 + P\_1\left(F(t)A + B\_1K\right) + \gamma^{-2} P\_1 B\_2 B\_2^T P\_1 + D^T D,\tag{12}$$

$$\int\_{0}^{\infty} \left\| z(t) \right\|^{2} \, dt \le \gamma^{2} \int\_{0}^{\infty} \left\| w(t) \right\|^{2} \, dt. \tag{13}$$

**(T2)** (O1) and (O2) hold, that is, the problem of observer-based control via contaminated measured feedback is solvable.

**Proof**: The implication from (T1) to (T2) is shown in the Appendix.

**Remark 5**. Theorem 1 shows that if the matrix inequality (10) is satisfied and *ϵ*(*t*) is adjusted according to (11), then the overall closed-loop system is not merely quadratically stabilizable, but the performance index (9) is also fulfilled. It is highlighted that *ϵ*(*t*) in (11) approaches zero exponentially as *t* → *∞* for any *ŷ*(*t*). One problem remains, however: to compute the observed state *x̂*(*t*) in (3), the time-varying vector function *ς*(*t*) is needed in addition to the input *u*(*t*) and the exogenous signal *w*(*t*). Therefore, the following modified least-squares algorithm is derived for the recursive estimation of the time-varying vector-valued function *ς*(*t*).
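To make the exponential-decay claim for *ϵ*(*t*) concrete, the scalar version of (11), *ϵ̇*(*t*) = −*p*3(*q* + ½*β*²*ϱ̇*²(*t*))*ϵ*(*t*), can be integrated by forward Euler. The sketch below is illustrative only: all numerical values (*p*3, *q*, *β*, the profile of *ϱ̇*(*t*), and the step size) are assumptions, not taken from the chapter.

```python
import math

# Scalar instance of the adaptive scheme (11):
#   eps_dot = -p3 * (q + 0.5 * beta**2 * rho_dot(t)**2) * eps
# Illustrative (assumed) constants; the chapter only requires p3, q > 0.
p3, q, beta = 1.0, 0.5, 0.3
rho_dot = lambda t: math.sin(t)   # assumed bounded profile for the derivative of rho
dt, eps = 1e-3, 1.0               # step size and initial condition eps(0) = 1

trace = [eps]
t = 0.0
for _ in range(int(10.0 / dt)):   # integrate over t in [0, 10]
    gain = p3 * (q + 0.5 * beta**2 * rho_dot(t) ** 2)
    eps += dt * (-gain * eps)     # forward-Euler step of (11)
    t += dt
    trace.append(eps)

# Since the gain stays >= p3*q > 0, eps(t) decays monotonically toward zero.
```

The decay rate is bounded below by *p*3*q*, which is why *ϵ*(*t*) → 0 exponentially regardless of the particular bounded profile chosen for *ϱ̇*(*t*).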

### **3.2. Modified least-squares algorithms**

Prior to stating the modified least-squares scheme for computing *ς*(*t*), the following assumption is made:

$$
\varsigma(a) = \varsigma(b), \quad a, b \in I\_i, \quad i = 0, 1, 2, \cdots, \tag{14}
$$

where *Ii* = {*t* | *ti* ≤ *t* < *ti* + *Δt*}. That is, *ς*(*t*) is kept constant within each small time interval of length *Δt*; equivalently, *ς*(*t*) is assumed to be a piecewise-constant time-varying function. The problem in this section is to determine an adaptation law for the vector-valued function *ς*(*t*) such that the *x̃*(*t*) computed from the model (4) agrees as closely as possible with zero in the least-squares sense. The following least-squares algorithm is developed by summing the indices of the small time intervals, with the cost function defined as follows:

$$\mathcal{J} = \min\_{\varsigma} \sum\_{i} \mathcal{J}\_{i}(\varsigma) = \min\_{\varsigma} \sum\_{i} \left\{ \frac{1}{2} \int\_{t\_{i}}^{t\_{i} + \Delta t} \tilde{x}^{T} \tilde{x} \, d\tau \right\}. \tag{15}$$

To minimize the cost function 𝒥, each index 𝒥*i* must be minimized as well, and the following condition is obtained for each time interval:

Robust Observer-Based Output Feedback Control of a Nonlinear Time-Varying System http://dx.doi.org/10.5772/62697 9

$$\frac{\partial}{\partial \varsigma} \mathcal{J}\_{i} = \int\_{t\_i}^{t} \mathcal{W}^T(\tau) \left( F(\tau)A\tilde{x}(\tau) + (I - F(\tau))A\hat{x}(\tau) + \mathcal{W}(\tau)\varsigma(\tau) - Ly(\tau) \right) d\tau = 0,\tag{16}$$

where *t* ∈*Ii* and


8 Robust Control - Theoretical Models and Case Studies

$$\mathcal{W}(t) = L \operatorname{diag} [\hat{y}(t)].$$

In view of (16), the *least-squares estimate* for *ς*(*t*) is given by

$$\hat{\varsigma}(t) = \Gamma(t) \int\_{t\_i}^{t} \mathcal{W}^T(\tau) \left( Ly(\tau) - F(\tau)A\tilde{x}(\tau) - (I - F(\tau))A\hat{x}(\tau) \right) d\tau,\tag{17}$$

where Γ(*t*) is called the *covariance matrix* and is defined as follows:

$$\Gamma(t) = \left(\int\_{t\_i}^{t} \mathcal{W}^T(\tau)\mathcal{W}(\tau)d\tau\right)^{-1}.$$

To ensure positive definiteness, and thus invertibility, the covariance matrix will be further refined in the sequel. The covariance matrix plays an important role in the estimation of *ς*(*t*), and it is worth noting that

$$\frac{d}{dt}\left(\Gamma^{-1}(t)\right) = \mathcal{W}^T(t)\mathcal{W}(t). \tag{18}$$
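In the scalar case, (18) states that Γ<sup>−1</sup>(*t*) accumulates the nonnegative quantity 𝒲²(*t*), so Γ(*t*) can only shrink over a time window. A minimal discrete-time sketch follows; the regressor signal and step size are illustrative assumptions, not from the chapter.

```python
# Scalar illustration of (18): d/dt Gamma^{-1} = W(t)^2 >= 0, so Gamma^{-1}
# is nondecreasing and Gamma itself can only shrink over the time window.
dt = 1e-2
W = lambda t: 1.0 + 0.5 * (t % 1.0)   # assumed bounded scalar regressor
gamma_inv = 1.0                        # Gamma^{-1}(t_i), i.e. Gamma(t_i) = 1
gammas = []
t = 0.0
for _ in range(500):
    gamma_inv += dt * W(t) ** 2        # forward-Euler step of (18)
    gammas.append(1.0 / gamma_inv)     # record Gamma(t)
    t += dt

# Gamma decays monotonically; without resetting it would keep shrinking,
# which is exactly the slow-adaptation problem the resetting law addresses.
```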

To obtain the least-squares estimator in recursive form, whose parameters are updated continuously on the basis of the available data, we differentiate (17) with respect to time and obtain

$$\frac{d}{dt}\hat{\varsigma}(t) = -\Gamma(t)\mathcal{W}^T(t)\mathcal{W}(t)\hat{\varsigma}(t) + f(t),\ \ \hat{\varsigma}(t\_i) = \hat{\varsigma}\_i, \tag{19}$$

where

$$f(t) = \Gamma(t)\mathcal{W}^{T}(t)\left(Ly(t) - F(t)A\tilde{x}(t) - (I - F(t))A\hat{x}(t)\right),\tag{20}$$

for *t* ∈ *Ii*, *i* = 0, 1, 2, ⋯. The covariance matrix Γ(*t*) acts in the *ς̂*(*t*) update law as a time-varying, directional adaptation gain. Observe from (18) that (*d*/*dt*)Γ<sup>−1</sup>(*t*) is positive semidefinite, so Γ<sup>−1</sup>(*t*) may grow without bound; consequently, Γ(*t*) can become very small in some directions, and adaptation in those directions becomes very slow. Therefore, to avoid slowing the adaptation and to ensure the positive definiteness, and hence invertibility, of the covariance matrix Γ(*t*), the following *covariance resetting propagation law* is developed. Within each time window, we modify (18) as follows,

$$\frac{d}{dt}\left(\Gamma^{-1}(t)\right) = g\,\mathcal{W}^T(t)\mathcal{W}(t),\ \ \Gamma(t\_i) = k\_0 I,\ \ t \in I\_{i}, \tag{21}$$

and

$$\Gamma(t\_r^{+}) = k\_0 I \ \ \text{whenever} \ \ \lambda\_{\min}(\Gamma(t\_r)) \le k\_1. \tag{22}$$

The scalar *g* > 0 is chosen such that the adaptation maintains a suitable rate of propagation. The covariance resetting propagation is governed by (21), in which the initial condition is also reset. The condition (22) shows that the covariance matrix can also be reset within the time window whenever it comes close to singularity; that is, the covariance matrix is reset if its minimum eigenvalue satisfies *λ*min(Γ(*t*)) ≤ *k*1. The following Lemma shows that, under the covariance resetting propagation law (21) and (22), the covariance matrix Γ(*t*) is bounded and positive definite.
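A scalar sketch of the resetting mechanism may help: Γ<sup>−1</sup> is propagated at rate *g*𝒲², and Γ is reset to *k*0 whenever it falls to *k*1, the scalar analogue of *λ*min(Γ(*t*)) ≤ *k*1. All numerical values below are illustrative assumptions.

```python
# Scalar sketch of the covariance resetting propagation law (21)-(22):
# grow Gamma^{-1} at rate g*W(t)^2 and reset Gamma to k0 whenever it
# falls to k1 (the scalar analogue of lambda_min(Gamma) <= k1).
dt, g = 1e-2, 2.0
k0, k1 = 1.0, 0.1                  # assumed resetting thresholds, k0 > k1 > 0
W = lambda t: 1.0                  # assumed persistently exciting (constant) signal
gamma, resets = k0, 0
history = []
t = 0.0
for _ in range(2000):
    gamma_inv = 1.0 / gamma + dt * g * W(t) ** 2   # forward-Euler step of (21)
    gamma = 1.0 / gamma_inv
    if gamma <= k1:                # resetting condition (22)
        gamma, resets = k0, resets + 1
    history.append(gamma)
    t += dt

# Lemma 1 in the scalar case: k1 <= Gamma(t) <= k0 at all recorded samples.
```

The recorded samples never leave the interval (*k*1, *k*0], which is the scalar counterpart of the bound *k*0*I* ≽ Γ(*t*) ≽ *k*1*I* asserted in Lemma 1 below.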

**Lemma 1**. Assume that (21) and (22) hold. Then *k*0*I* ≽ Γ(*t*) ≽ *k*1*I* ≻ 0 and, thus, *k*0 ≥ ∥Γ(*t*)∥ ≥ *k*1 for *t* ∈ *Ii*, *i* = 0, 1, 2, ⋯.

**Proof**: At a resetting instant *t* = *tr*, the covariance matrix is reset so that Γ(*tr*<sup>+</sup>) = *k*0*I*. Then, since (*d*/*dt*)Γ<sup>−1</sup>(*t*) = *gW*<sup>T</sup>(*t*)*W*(*t*) ≽ 0, we have Γ<sup>−1</sup>(*t*1) − Γ<sup>−1</sup>(*t*2) ≽ 0 for all *t*1 ≥ *t*2 > *tr* between covariance resettings; hence Γ(*t*) is nonincreasing there. The propagation proceeds until the next resetting time *t*, if it exists, at which *λ*min(Γ(*t*)) ≤ *k*1. We may therefore conclude that *k*0*I* ≽ Γ(*t*) ≽ *k*1*I* ≻ 0, which implies *k*0 ≥ ∥Γ(*t*)∥ ≥ *k*1.

Before presenting the theorem showing that the modified least-squares estimate *ς̂*(*t*) is bounded, the following transition-matrix Lemma for the solutions of (19) is essential.

**Lemma 2**. There exists a positive number *k* such that the transition matrix Φ(*t*, *τ*) of (19) is bounded; that is, ∥Φ(*t*, *τ*)∥ ≤ *k* < *∞* for *t* ∈ *Ii*, *i* = 0, 1, 2, ⋯.

**Proof**: The proof is constructive. We first notice that the solution to (19) is given by

$$
\hat{\varsigma}(t) = \Phi(t, t\_i)\hat{\varsigma}\_i + \int\_{t\_i}^{t} \Phi(t, \tau)f(\tau)d\tau,
$$

where Φ(*t*, *τ*) is the transition matrix of the homogeneous part of (19), that is, the unique solution of

$$
\dot{\Phi}(t,\tau) = -\Gamma(t)\mathcal{W}^{T}(t)\mathcal{W}(t)\Phi(t,\tau), \ \Phi(\tau,\tau) = I. \tag{23}
$$

The construction proceeds by introducing the auxiliary differential equation *η̇*(*t*) = −*η*(*t*), *η*(*ti*) = *ηi*, where *η*(*t*) is a vector of appropriate dimension; clearly, *η*(*t*), *η̇*(*t*) ∈ L2 ∩ L*∞*.

Let *π*(*t*) = Φ(*t*, *ti*)*η*(*t*) and consider the Lyapunov candidate *Vπ* = *π*<sup>T</sup>(*t*)Γ<sup>−1</sup>(*t*)*π*(*t*), where Γ(*t*) is chosen to satisfy Lemma 1. Computing *V̇π* along the solutions of *π̇*(*t*) = Φ̇(*t*, *ti*)*η*(*t*) + Φ(*t*, *ti*)*η̇*(*t*) between covariance resettings gives

$$\dot{V}\_{\pi} = \pi^T(t) \left( (-2 + g) \mathcal{W}^T(t) \mathcal{W}(t) - \Gamma^{-1}(t) \right) \pi(t).$$

Without loss of generality, let *g* = 2. Then,


$$
\dot{V}\_{\pi} = \pi^T(t) \left( -\Gamma^{-1}(t) \right) \pi(t) = -V\_{\pi} < 0. \tag{24}
$$

At the point of resetting, that is, the point of discontinuity of Γ(*t*), we obtain

$$V\_{\pi}(t\_r^+) - V\_{\pi}(t\_r) = \pi^T(t) \left(\Gamma^{-1}(t\_r^+) - \Gamma^{-1}(t\_r)\right) \pi(t) \le 0. \tag{25}$$

From (24) and (25), we conclude that the Lyapunov candidate along the solution *π*(*t*) satisfies 0 ≤ *Vπ*(*t*) ≤ *Vπ*(*ti*). This shows that *π*(*t*) ∈ L*∞*, which implies that ∥Φ(*t*, *ti*)∥ ≤ *k* < *∞* for some *k* > 0.

**Theorem 2**. Assume that the problem of observer-based control via contaminated measured feedback is solvable. If the identifier is constructed from the least-squares algorithm (19) with the covariance resetting propagation law (21) and (22), then *ς̂*(*t*) ∈ L*∞* for all *t* ≥ 0.

**Proof**: To prove the claim, we need to show that ∥*ς̂*(*t*)∥*∞* = sup*t* ∥*ς̂*(*t*)∥ < *∞* for *t* ∈ *Ii*, *i* = 0, 1, 2, ⋯. The solution to (19) is given by

$$
\hat{\varsigma}(t) = \Phi(t, t\_i)\hat{\varsigma}\_i + \int\_{t\_i}^t \Phi(t, \tau)f(\tau)d\tau,
$$

where Φ(*t*, *τ*) is the transition matrix shown in (23). In view of Lemma 2, we obtain

$$\|\hat{\varsigma}(t)\|\_{\infty} = \left\|\Phi(t, t\_i)\hat{\varsigma}\_i + \int\_{t\_i}^{t} \Phi(t, \tau)f(\tau)d\tau\right\|\_{\infty} \le k \left( \|\hat{\varsigma}\_i\|\_{\infty} + \int\_{t\_i}^{t} \|f(\tau)\|\_{\infty} d\tau \right).$$

The boundedness of ∫*ti*<sup>*t*</sup> ∥*f*(*τ*)∥*∞* *dτ* follows by inspecting (20): *x̂*(*t*), *x̃*(*t*), and *W*(*t*) = *L* diag[*ŷ*(*t*)] = *L* diag[*Cx̂*(*t*)] are bounded by Theorem 1, and *x̂*(*t*), *x̃*(*t*), *W*(*t*) → 0 as *t* → *∞*. The covariance matrix Γ(*t*) satisfies (21) and is therefore bounded by Lemma 1. By system property (S1), *F*(*t*) is bounded for all *t* ≥ 0. The measured signal *y* = *Cx* tends, by Theorem 1, to 0 as *t* → *∞*. In summary, there exists a positive finite number *k*3 such that

$$\int\_{t\_i}^{t} \|f(\tau)\|\_{\infty} d\tau \le k\_3 \int\_{t\_i}^{t} d\tau \le k\_3 \Delta t.$$

Therefore,

$$\|\hat{\varsigma}(t)\|\_{\infty} \le k \left( \|\hat{\varsigma}\_i\|\_{\infty} + k\_3 \Delta t \right) < \infty,\tag{26}$$

which indicates that *ς̂*(*t*) ∈ L*∞* for *t* ∈ *Ii*. As time evolves, (26) holds on every small time interval; hence, we may let *t* → *∞*. This completes the proof.
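The estimator assembled in this section, the recursive update (19) together with the covariance propagation (21) and the resetting condition (22), can be sketched in scalar form. Everything numerical below (the regressor, the piecewise-constant target, the window length, and the constants *g*, *k*0, *k*1, *Δt*) is an illustrative assumption, not data from the chapter.

```python
# Scalar end-to-end sketch: least-squares update law (19) with covariance
# propagation (21) and resetting (22), tracking an assumed piecewise-constant
# varsigma(t) over two windows of length 1.
dt, g, k0, k1 = 1e-3, 2.0, 1.0, 0.05
W = lambda t: 1.0                                # assumed scalar regressor
varsigma = lambda t: 1.0 if t < 1.0 else -0.5    # assumed piecewise-constant target
gamma, est = k0, 0.0                             # Gamma(t_0) = k0, varsigma_hat(0) = 0
t, errors = 0.0, []
for _ in range(2000):                            # t in [0, 2)
    if abs(t - 1.0) < dt / 2:                    # start of the next window I_i
        gamma = k0                               # re-initialize Gamma per (21)
    w, target = W(t), varsigma(t)
    f = gamma * w * (w * target)                 # drive term f(t) built from the data
    est += dt * (-gamma * w * w * est + f)       # forward-Euler step of (19)
    gamma = 1.0 / (1.0 / gamma + dt * g * w * w) # forward-Euler step of (21)
    if gamma <= k1:                              # resetting condition (22)
        gamma = k0
    errors.append(abs(est - target))
    t += dt

# Consistent with Theorem 2, the estimation error stays bounded and
# decays within each window on which varsigma(t) is constant.
```

In this sketch, the error decreases inside each window and remains bounded across the jump of *ς*(*t*) at the window boundary, mirroring the boundedness conclusion (26).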

**Remark 6**. In this section, a modified least-squares algorithm has been derived to estimate *ς*(*t*); it is intentionally designed to compensate for the effects of the time-varying function *F*(*t*) produced in the plant (1). **Figure 3** depicts the complete structure of the observer–error dynamics shown in **Figure 2**, in which two filters, namely the *observer dynamics* and the *error dynamics*, together with one least-squares algorithm, constitute the feedback control. The observer dynamics produce the estimated state of the plant by filtering the signals *u*(*t*), *w*(*t*), and *e*(*t*). It is worth noting that the signal *e*(*t*) from the least-squares algorithm acts as an additional driving force on the observer dynamics. The error dynamics produce the error state *x̃*(*t*), which is then injected into the least-squares algorithm so that the time-varying function *ς*(*t*) is estimated.

**Figure 3.** Mass-damper-spring system.
