The boundedness of $\int\_{t\_i}^{t} \| f(\tau) \|\_{\infty} \, d\tau$ can be easily seen by observing (20), in which $\hat{x}(t)$, $\tilde{x}(t)$, and $W(t) = L \, \mathrm{diag}(\hat{y}) = L \, \mathrm{diag}(C\hat{x})$ have bounds by Theorem 1, and $\hat{x}(t) \in L\_{\infty}$ for $t \in I\_i$. The covariance matrix $\Gamma(t)$ satisfies (21) and is then bounded by Lemma 1. By system property (S1), $F(t)$ is clearly bounded for all $t \geq 0$. In summary, there exists a positive finite number $k\_3$ such that (26) always holds. As time evolves, (26) holds on each small time interval, and hence we may extend to $t \to \infty$. Therefore, the measured signal $y = Cx$ is, by Theorem 1, $0$ as $t \to \infty$, which indicates that $\hat{\varsigma}(t)$, $\tilde{x}(t)$, $W(t) \to 0$ as $t \to \infty$. This completes the proof.

**Figure 3.** Mass-damper-spring system.

**Remark 6**. In this section, a modified least-squares algorithm is shown to find the estimate of *ς*(*t*), which is intentionally designed to account for the effects of the time-varying function *F*(*t*) produced in the plant (1). **Figure 3** depicts the complete structure of the observer–error dynamics shown in **Figure 2**, in which two filters, namely the *observer dynamics* and the *error dynamics*, together with one least-squares algorithm, construct the feedback control. The observer dynamics produces the estimated state of the plant by filtering the signals *u*(*t*), *w*(*t*), and *e*(*t*). It is worth noting that the signal *e*(*t*) from the least-squares algorithm acts as an additional driving force on the observer dynamics. The error dynamics finds the error state *x*˜(*t*), which is then injected into the least-squares algorithm so that the time-varying function *ς*(*t*) is estimated.

12 Robust Control - Theoretical Models and Case Studies

**4. Control and observer gain synthesis**

The synthesis of control and observer gains is addressed in Theorem 1. For simplicity of expression, the time argument of the matrix-valued function *F*(*t*) will be dropped, and the function will be denoted by *F*. A useful and important lemma is stated first for clarity:

**Lemma 3** (Elimination Lemma, see [32]). Given $\mathcal{H} = \mathcal{H}^{\mathsf{T}} \in \Re^{n \times n}$, $\mathcal{V} \in \Re^{n \times m}$, and $\mathcal{U} \in \Re^{n \times p}$ with $\mathrm{rank}(\mathcal{V}) < n$ and $\mathrm{rank}(\mathcal{U}^{\mathsf{T}}) < n$, there exists a matrix $\mathbb{K}$ such that

$$\mathcal{H} + \mathcal{V}\mathbb{K}\mathcal{U}^{\mathsf{T}} + \mathcal{U}\mathbb{K}^{\mathsf{T}}\mathcal{V}^{\mathsf{T}} \prec 0$$

if and only if

$$\mathcal{V}\_{\perp}^{\mathsf{T}}\mathcal{H}\mathcal{V}\_{\perp} \prec 0 \text{ and } \mathcal{U}\_{\perp}^{\mathsf{T}}\mathcal{H}\mathcal{U}\_{\perp} \prec 0,$$

where $\mathcal{V}\_{\perp}$ and $\mathcal{U}\_{\perp}$ are the orthogonal complements of $\mathcal{V}$ and $\mathcal{U}$, respectively; that is, $\mathcal{V}\_{\perp}^{\mathsf{T}}\mathcal{V} = 0$ and $(\mathcal{V} \;\; \mathcal{V}\_{\perp})$ is of maximum rank.
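The necessity direction of Lemma 3 can be sanity-checked numerically. The sketch below, a minimal illustration with random matrices (none of the data comes from the chapter's plant), builds an instance in which a feasible 𝕂 exists by construction and verifies that both projected conditions are negative definite; the complement routine via a full SVD is an assumed helper.

```python
# Numerical sanity check of the "only if" direction of Lemma 3 (Elimination
# Lemma). All matrices here are random illustrative data, not chapter data.
import numpy as np

def orth_complement(M):
    """Columns spanning the orthogonal complement of range(M), via full SVD."""
    P, s, _ = np.linalg.svd(M, full_matrices=True)
    rank = int((s > 1e-10).sum())
    return P[:, rank:]

rng = np.random.default_rng(0)
n, m, p = 6, 2, 3
V = rng.standard_normal((n, m))
U = rng.standard_normal((n, p))
K = rng.standard_normal((m, p))

# Choose H so that H + V K U^T + U K^T V^T = -I, i.e. a feasible K exists.
H = -np.eye(n) - V @ K @ U.T - U @ K.T @ V.T
H = 0.5 * (H + H.T)  # symmetrize against round-off

V_perp = orth_complement(V)
U_perp = orth_complement(U)
assert np.allclose(V_perp.T @ V, 0)  # defining property of the complement
assert np.allclose(U_perp.T @ U, 0)

# Both projected blocks must then be negative definite.
print(np.linalg.eigvalsh(V_perp.T @ H @ V_perp).max() < 0,
      np.linalg.eigvalsh(U_perp.T @ H @ U_perp).max() < 0)
```

Only necessity is exercised here; checking sufficiency would require actually solving an LMI for 𝕂, e.g. with a semidefinite-programming tool.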

**Lemma 4**. Given a real number γ>0 and {*A*(*t*), *B*1, *B*2, *C*, *D*, *D*2} satisfying system properties (S1) and (S2), the following statements (Q1), (Q2), and (Q3) are equivalent.

**(Q1)** There exist matrices *P*1≻0, *P*2≻0, matrices *K* and *L*, and positive scalars *β* and *δ* such that the following inequality holds,

$$
\begin{pmatrix}
\Pi\_1(P\_1, K) + \delta^{-2} A^T (I - F)^T (I - F) A & P\_1 B\_1 K \\
K^T B\_1^T P\_1 & \Pi\_2(P\_2, L) + \delta^2 P\_2 P\_2
\end{pmatrix} \prec 0. \tag{27}
$$

**(Q2)** There exist matrices *P*1≻0, *P*2≻0, matrices *K* and *L*, and positive scalars *β* and *δ* such that the following inequalities hold,

$$\left(B\_{1}\right)\_{\perp}^{T}P\_{1}^{-1}\left(\Pi\_1(P\_{1},K)+\delta^{-2}A^{T}(I-F)^{T}(I-F)A\right)P\_{1}^{-1}(B\_{1})\_{\perp}\prec 0,\tag{28}$$

$$
\Pi\_2(P\_2, L) + \delta^2 P\_2 P\_2 \prec 0,\tag{29}
$$

**(Q3)** There exist matrices *X* ≻0 and *P*2≻0, matrices *W* and *Y*, and positive scalars *γ*, *δ*, and *β* such that the following two matrix inequalities hold,

$$
\begin{pmatrix}
X A^T F^T + F A X + W^T B\_1^T + B\_1 W + \gamma^{-2} B\_2 B\_2^T & X \\
X & - \left(D^T D + \delta^{-2} A^T (I - F)^T (I - F) A\right)^{-1}
\end{pmatrix} \prec 0,\tag{30}
$$

$$
\begin{pmatrix}
A^T P\_2 + P\_2 A + C^T Y^T + YC & Y & P\_2 \\
Y^T & -\beta^2 I & 0 \\
P\_2 & 0 & -\delta^{-2} I
\end{pmatrix} \prec 0.\tag{31}
$$

**Proof**: To prove (Q1)⇔(Q2), note that inequality (27) fits into Lemma 3 with

$$\mathcal{H} = \begin{pmatrix} \Pi\_1(P\_1) + \delta^{-2} A^T (I - F)^T (I - F) A & 0 \\ 0 & \Pi\_2(P\_2, L) + \delta^2 P\_2 P\_2 \end{pmatrix},$$

$$\mathcal{V} = \begin{pmatrix} P\_1 B\_1 \\ 0 \end{pmatrix}, \text{ and } \mathcal{U} = \begin{pmatrix} I \\ I \end{pmatrix}.$$

Next, the orthogonal complements of V and U are given by V⊥ and U⊥, respectively, which are

$$\mathcal{V}\_{\perp} = \begin{pmatrix} P\_1^{-1}(\mathcal{B}\_1)\_{\perp} & 0 \\ 0 & I \end{pmatrix}, \text{ and } \mathcal{U}\_{\perp} = \begin{pmatrix} I \\ -I \end{pmatrix}.$$

where $(B\_1)\_{\perp}$ is defined as the orthogonal complement of $B\_1$, such that $(B\_1)\_{\perp}^{T} B\_1 = 0$ and $(B\_1 \;\; (B\_1)\_{\perp})$ is of maximum rank. By applying Lemma 3, we obtain the following inequalities,

$$\mathcal{V}\_{\perp}^{T}\mathcal{H}\mathcal{V}\_{\perp} = \begin{pmatrix} (B\_{1})\_{\perp}^{T}P\_{1}^{-1} \left\{ \Pi\_1(P\_{1},K) + \delta^{-2}A^{T}(I-F)^{T}(I-F)A \right\} P\_{1}^{-1}(B\_{1})\_{\perp} & 0\\ 0 & \Pi\_{2}(P\_{2},L) + \delta^{2}P\_{2}P\_{2} \end{pmatrix} \prec 0,\tag{32}$$

and

$$\mathcal{U}\_{\perp}^{T}\mathcal{H}\mathcal{U}\_{\perp} = \Pi\_1(P\_1, K) + \delta^{-2}A^T(I - F)^T(I - F)A + \Pi\_2(P\_2, L) + \delta^2P\_2P\_2 \prec 0. \tag{33}$$

It is seen that matrix inequalities (28) and (29) hold if and only if (32) is true. Given (32), (33) is also true. Therefore, by Lemma 3, (Q1)⇔(Q2).

To prove (Q2)⇔(Q3), let $X = P\_1^{-1}$; we find the following *iff* condition for inequality (28),

$$
\begin{aligned}
& X A^T F^T + F A X + W^T B\_1^T + B\_1 W + \gamma^{-2} B\_2 B\_2^T + X \left(D^T D + \delta^{-2} A^T (I - F)^T (I - F) A\right) X \prec 0, \\
\Leftrightarrow\; & \begin{pmatrix} X A^T F^T + F A X + W^T B\_1^T + B\_1 W + \gamma^{-2} B\_2 B\_2^T & X \\ X & -\left(D^T D + \delta^{-2} A^T (I - F)^T (I - F) A\right)^{-1} \end{pmatrix} \prec 0,
\end{aligned} \tag{34}
$$

where $W = KX$. It is noted that the last *iff* holds due to the Schur complement, for which the positive definiteness of $D^T D + \delta^{-2} A^T (I - F)^T (I - F) A$ must be ensured. As for the matrix inequality (29), letting $Y = P\_2 L$, we have

$$\begin{aligned} & \Pi\_2(P\_2, L) + \delta^2 P\_2 P\_2 \prec 0, \\ \Leftrightarrow & A^T P\_2 + P\_2 A + \mathcal{C}^T Y^T + Y \mathcal{C} + \beta^{-2} Y Y^T + \delta^2 P\_2 P\_2 \prec 0, \\ \Leftrightarrow & \begin{pmatrix} A^T P\_2 + P\_2 A + \mathcal{C}^T Y^T + Y \mathcal{C} & Y & P\_2 \\ Y^T & -\beta^2 I & 0 \\ P\_2 & 0 & -\delta^{-2} I \end{pmatrix} \prec 0. \end{aligned} \tag{35}$$

Again, the last *iff* of (35) follows from the Schur complement, with *β* >0 and *δ* >0 ensuring that the diagonal blocks −*β*<sup>2</sup>*I* and −*δ*<sup>−2</sup>*I* are negative definite. Therefore, (Q2)⇔(Q3). This completes the proof.
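The Schur-complement step behind (35) can be exercised numerically: for β, δ > 0, the reduced inequality S + β⁻²YYᵀ + δ²P₂P₂ ≺ 0 corresponds to negative definiteness of the 3×3 block matrix. The sketch below checks one direction on purely illustrative data (square Y and P₂ = I are assumptions for simplicity, not chapter values).

```python
# Numerical check of the Schur-complement step in (35): the reduced inequality
# S + beta^-2 Y Y^T + delta^2 P2 P2 < 0 is forced to hold by construction, and
# the 3x3 block matrix is then verified to be negative definite.
import numpy as np

rng = np.random.default_rng(1)
n = 3
beta, delta = 2.0, 0.5

Y = rng.standard_normal((n, n))   # square Y, illustrative only
P2 = np.eye(n)                    # a simple positive definite choice

# Force the reduced inequality: S + beta^-2 Y Y^T + delta^2 P2 P2 = -I
S = -(beta**-2) * (Y @ Y.T) - delta**2 * (P2 @ P2) - np.eye(n)

block = np.block([
    [S,   Y,                     P2                      ],
    [Y.T, -beta**2 * np.eye(n),  np.zeros((n, n))        ],
    [P2,  np.zeros((n, n)),      -delta**-2 * np.eye(n)  ],
])

reduced = S + (beta**-2) * (Y @ Y.T) + delta**2 * (P2 @ P2)
print(np.linalg.eigvalsh(reduced).max() < 0,
      np.linalg.eigvalsh(block).max() < 0)
```

Since the (2,2) and (3,3) blocks are negative definite whenever β, δ > 0, negative definiteness of the Schur complement and of the full block matrix coincide, which is exactly the last *iff* of (35).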

**Remark 7**. It is seen that *δ* is the only scalar common to matrix inequalities (34) and (35). For ease of computation, and without loss of generality, we may fix *δ* at a certain constant. The advantage, in addition to the ease of computation, is that the gains *K* and *L* are then solely determined by (34) and (35), respectively. From a rigorous point of view, we may not claim that the separation principle is completely valid for this case; loosely speaking, however, it applies after this small modification.
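Remark 7's decoupled synthesis can be sketched for a first-order plant, where both inequalities become scalar and can be searched by brute force with δ fixed. Every numerical value below (a, b1, b2, c, d, f, γ, β, and the grids) is an assumed placeholder, not data from the chapter; the scalar forms follow (30) after removing the Schur block and the reduced inequality in (35).

```python
# Sketch of Remark 7: with delta fixed, the control inequality (scalar form of
# (30) before the Schur step) and the observer inequality (reduced form in
# (35)) are searched independently. All plant numbers are assumed toy data.
import numpy as np

a, b1, b2, c, d, f = 1.0, 1.0, 0.1, 1.0, 1.0, 0.9   # toy scalar plant
gamma, beta, delta = 1.0, 2.0, 1.0                   # delta fixed (Remark 7)

def control_lhs(X, W):
    # scalar: X a f*2 + 2 b1 W + gamma^-2 b2^2 + X^2 (d^2 + delta^-2 a^2 (1-f)^2)
    return (2*f*a*X + 2*b1*W + gamma**-2 * b2**2
            + X**2 * (d**2 + delta**-2 * a**2 * (1 - f)**2))

def observer_lhs(P2, Y):
    # scalar reduced form of (35): 2 a P2 + 2 c Y + beta^-2 Y^2 + delta^2 P2^2
    return 2*a*P2 + 2*c*Y + beta**-2 * Y**2 + delta**2 * P2**2

# Independent grid searches: K from (X, W), L from (P2, Y).
K = L = None
for X in np.linspace(0.1, 2.0, 20):
    for W in np.linspace(-5.0, 0.0, 51):
        if control_lhs(X, W) < 0:
            K = W / X        # since W = K X
            break
    if K is not None:
        break
for P2 in np.linspace(0.1, 2.0, 20):
    for Y in np.linspace(-6.0, 0.0, 61):
        if observer_lhs(P2, Y) < 0:
            L = Y / P2       # since Y = P2 L
            break
    if L is not None:
        break

print("K =", K, " L =", L)
```

Because δ is the only shared scalar, fixing it really does split the search into two independent feasibility problems, which is the computational point of Remark 7; in the matrix case the grids would be replaced by an LMI solver.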

**Lemma 5**. (Q1) implies (10).


**Proof**: let (*π*1, *π*2) ≠(0, 0). Then

Thus, (Q1) implies (10). This completes the proof.

**Theorem 3**. Given a real number γ>0 and {*A*(*t*), *B*1, *B*2, *C*, *D*, *D*2} satisfying system properties (S1) and (S2), (Q3) with scheme (11) implies (T2).

**Proof**: by Lemma 5, (Q1) implies matrix inequality (10). Moreover, by Lemma 4, we have (Q1)⇔(Q3). Therefore, (Q3) with scheme (11) implies (T1). Finally, by Theorem 1, the claim is true.

**Remark 8**. Theorem 3 states that the problems posed for observer-based control via contaminated measured feedback, that is, (O1) and (O2), are solvable by proving that (T2) holds.
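As a closing illustration of the observer-based feedback structure that Theorem 3 certifies, the following sketch simulates a standard Luenberger observer with state feedback u = K x̂ on a toy double-integrator plant. The matrices and gains are assumed placeholders chosen by pole placement, not gains computed from (30)–(31), and the estimation scheme (11) is not reproduced here.

```python
# Toy simulation of observer-based state feedback u = K x_hat (a standard
# Luenberger structure, used only as an illustration; the gains below are
# hand-picked by pole placement, not solved from the LMIs (30)-(31)).
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # assumed toy plant: double integrator
B1 = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

K = np.array([[-2.0, -3.0]])             # closed-loop poles at -1, -2
L = np.array([[4.0], [4.0]])             # observer poles at -2, -2

dt, steps = 0.01, 1000                   # forward-Euler integration to t = 10
x = np.array([[1.0], [0.0]])             # true state
x_hat = np.zeros((2, 1))                 # observer state

for _ in range(steps):
    u = K @ x_hat                        # control from the estimated state
    y = C @ x                            # measured output
    x = x + dt * (A @ x + B1 @ u)
    x_hat = x_hat + dt * (A @ x_hat + B1 @ u + L @ (y - C @ x_hat))

print(float(np.linalg.norm(x)), float(np.linalg.norm(x - x_hat)))
```

Both the state and the estimation error decay toward zero, mirroring the separation-style argument of Remark 7: K shapes the state dynamics and L shapes the error dynamics.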
