**3. Spatial-based output feedback backstepping robust adaptive repetitive control (OFBRARC)**

Consider the same NPI model (3), which is transformed from the NTI model (1), under the same set of assumptions (Assumptions 2.1 and 2.2). The NPI model will be used for the subsequent design and discussion.

### **3.1 Nonlinear state observer**

Drop the *θ* notation and note that (3) can be expressed as a standard nonlinear system:

$$
\dot{\hat{x}} = f(\hat{x}, \phi_f) + g(\hat{x}, \phi_g)\hat{u} + \hat{d}_s, \qquad \hat{y} = h(\hat{x}) + \hat{d}_y \tag{30}
$$

*Robust Adaptive Repetitive and Iterative Learning Control for Rotary Systems Subject to Spatially Periodic Uncertainties*, http://dx.doi.org/10.5772/63082

where terms involving unstructured uncertainty are merged into $\hat{d}_s = \Delta f(\hat{x}, \phi_f) + \Delta g(\hat{x}, \phi_g)\hat{u}$ with

$$
\Delta f(\hat{x}, \phi_f) = \frac{\Delta f_t(\hat{x}, \phi_f)}{\hat{x}_1}, \qquad \Delta g(\hat{x}, \phi_g) = \frac{\Delta g_t(\hat{x}, \phi_g)}{\hat{x}_1}.
$$

In addition, we have

**Theorem 2.2** Consider an exponentially minimum-phase nonlinear system with parameter uncertainty and subject to output disturbance as given by (4), which is augmented with a state observer (or K-filters) described by (13) [35]. Specify the control laws as (19), (20), and (23). Let Assumptions (1) to (9) be satisfied. Assume that $\hat{y}_m, \dot{\hat{y}}_m, \cdots, \hat{y}_m^{(r-1)}$ (where $r$ is the relative degree) and $\hat{d}_{\bar{M}}$ are bounded with an upper bound $\hat{d}_{\bar{M}0}$; $f$, $g$, $h$, $L_f^r h$, $L_g L_f^{r-1} h$ are Lipschitz continuous functions; and $W$ has bounded derivative with respect to $\xi$, $v$, and $\tilde{\Theta}$. In addition, assume that a stable and proper controller $\hat{C}(\tilde{s})$ is specified such that the feedback system shown in **Figure 1** is stable. Then, the parametric adaptation law given by (29) yields the bounded tracking error, i.e., $|\hat{y}(\theta) - \hat{y}_m(\theta)| < \hat{d}_{\bar{M}0}$ as $\theta \to \infty$.

Proof: Follow the same steps as in the proof of Theorem 3.2 in [24] with some differences.

**Figure 1.** Repetitive controller and stabilizing compensator.

The functions in (30) are given by

$$
f(\hat{x}, \phi_f) = \frac{f_t(\hat{x}, \phi_f)}{\hat{x}_1}, \qquad g(\hat{x}, \phi_g) = \frac{g_t(\hat{x}, \phi_g)}{\hat{x}_1}, \qquad h(\hat{x}) = \hat{\omega} = \hat{x}_1.
$$

The state variables have been specified such that the angular velocity $\hat{\omega}$ is equal to $\hat{x}_1$, i.e., the undisturbed output $h(\hat{x})$. It is not difficult to verify that (30) has the same relative degree in $D_0 = \{\hat{x} \in \mathbb{R}^n \mid \hat{x}_1 \neq 0\}$ as the NTI model in (1). If (30) has relative degree $r$, we can use the same nonlinear coordinate transformation defined previously. With respect to the new coordinates, i.e., $\hat{z}_1$ and $\hat{z}_2$, (30) can be transformed into the so-called normal form, i.e., (5). With the zero dynamics assumed to be asymptotically stable, we may focus on designing a nonlinear state observer for the external dynamics of (5), i.e., (7).
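The relative-degree claim can be checked mechanically with symbolic Lie derivatives. The sketch below does this for a toy second-order system with output $h(\hat{x}) = \hat{x}_1$; the vector fields are made up for illustration (they only mimic the division-by-$\hat{x}_1$ structure of the NPI model and are not the chapter's identified plant):

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
x = sp.Matrix([x1, x2])

# Hypothetical drift and input fields (illustrative only, not the chapter's model);
# note the 1/x1 structure inherited from the spatial reformulation.
f = sp.Matrix([x2 / x1, -x1])
g = sp.Matrix([0, 1 / x1])
h = x1  # undisturbed output: angular velocity

def lie(vec, scalar):
    """Lie derivative of a scalar field along a vector field."""
    return (sp.Matrix([scalar]).jacobian(x) * vec)[0]

Lgh = sp.simplify(lie(g, h))            # input influence on the 1st derivative
LgLfh = sp.simplify(lie(g, lie(f, h)))  # input influence on the 2nd derivative

assert Lgh == 0      # input absent from the first output derivative
assert LgLfh != 0    # ... but present in the second: relative degree r = 2
```

On $D_0 = \{\hat{x}_1 \neq 0\}$ the second Lie derivative ($1/\hat{x}_1^2$ here) is nonzero, so the toy system has relative degree 2 there, mirroring the argument used for (30).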

Because $f(\hat{x})$ and $g(\hat{x})$ are linearly related to the system parameters, $L_f^r h(\hat{x})$ and $L_g L_f^{r-1} h(\hat{x})$ can be written as $L_f^r h(\hat{x}) = \Theta^T W_f(\hat{x})$ and $L_g L_f^{r-1} h(\hat{x}) = \Theta^T W_g(\hat{x})$, where $W_f(\hat{x})$ and $W_g(\hat{x})$ are two nonlinear functions and $\Theta = [\phi_{f1} \cdots \phi_{fk}\;\; \phi_{g1} \cdots \phi_{gl}]^T = [\phi_1 \cdots \phi_\ell]^T \in \mathbb{R}^\ell$, where $\ell$ is the number of unknown parameters. Next, we adopt the following observer structure: $\dot{\bar{z}}_1 = A_0\bar{z}_1 + \bar{k}y + F(y, u)^T\Theta$, where $\bar{z}_1 = [\bar{z}_{11} \cdots \bar{z}_{1r}]^T$ is the estimate of $z_1$, and $\bar{W}_f(y)$ and $\bar{W}_g(y)$ are nonlinear functions with the same structure as $W_f(x)$ and $W_g(x)$, except that each entry of $x$ is replaced by $y$. Furthermore,

$$
A_0 = \begin{bmatrix} -k_1 & \\ \vdots & I_{(r-1)\times(r-1)} \\ -k_r & 0_{1\times(r-1)} \end{bmatrix}, \qquad \bar{k} = \begin{bmatrix} k_1 & \cdots & k_r \end{bmatrix}^T, \qquad F(y, u)^T = \begin{bmatrix} 0_{(r-1)\times\ell} \\ \bar{W}_f^T(y) + \bar{W}_g^T(y)u \end{bmatrix} \in \mathbb{R}^{r\times\ell}.
$$
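Because $A_0$ is in observer companion form, its characteristic polynomial is $\lambda^r + k_1\lambda^{r-1} + \cdots + k_r$, so $\bar{k}$ can be read off from any desired stable polynomial. A quick numerical sanity check (with an arbitrary $r = 3$ and illustrative poles at $-1, -2, -3$, not values from the text):

```python
import numpy as np

# Desired observer poles (illustrative choice, not from the chapter)
poles = [-1.0, -2.0, -3.0]
# (s+1)(s+2)(s+3) = s^3 + 6 s^2 + 11 s + 6, so k_bar = [6, 11, 6]
k_bar = np.poly(poles)[1:]
r = len(k_bar)

# A_0 = [ -k_bar | [I_{r-1}; 0] ] exactly as defined for the K-filters
A0 = np.zeros((r, r))
A0[:, 0] = -k_bar
A0[:-1, 1:] = np.eye(r - 1)

eigs = np.linalg.eigvals(A0)
assert np.all(eigs.real < 0), "A_0 must be Hurwitz"
```

The recovered eigenvalues match the requested poles, confirming that choosing $\bar{k}$ by pole placement makes $A_0$ Hurwitz.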

By properly choosing $\bar{k}$, the matrix $A_0$ can be made Hurwitz. Define the state estimation error as $\varepsilon \triangleq [\varepsilon_{z_{11}} \cdots \varepsilon_{z_{1r}}]^T \triangleq z_1 - \bar{z}_1$. The dynamics of the estimation error can be obtained as $\dot{\varepsilon} = A_0\varepsilon + \Delta$, where $\Delta = -\bar{k}d_y + B_c\Theta^T\left[W_g(x) - \bar{W}_g(y)\right]u + B_c\Theta^T\left[W_f(x) - \bar{W}_f(y)\right] + d_{si}$. To proceed, the role of the state observer is replaced by $\bar{z}_1 \triangleq \xi + \Omega^T\Theta$ and the following two K-filters:

$$
\dot{\xi} = A_0\xi + \bar{k}y, \qquad \dot{\Omega}^T = A_0\Omega^T + F(y, u)^T \tag{31}
$$

such that $\xi = [\xi_{11} \cdots \xi_{1r}]^T \in \mathbb{R}^r$ and $\Omega^T \triangleq [v_1 \cdots v_\ell] \in \mathbb{R}^{r\times\ell}$. Decompose the second equation of (31) into $\dot{v}_j = A_0 v_j + e_r\sigma_j$, $j = 1, 2, \cdots, \ell$, where $e_r = [0 \cdots 0\;\; 1]^T \in \mathbb{R}^r$ and $\sigma_j = w_{1j} + w_{2j}u$, with $w_{1j}$ and $w_{2j}$ being the $j$th columns of $\bar{W}_f^T(y)$ and $\bar{W}_g^T(y)$, respectively. With the definition of the state estimation error $\varepsilon$, the state estimate $\bar{z}_1$, and (31), we acquire the following set of equations that will be used in the subsequent design:

$$
z_{1k} = \bar{z}_{1k} + \varepsilon_{z_{1k}} = \xi_{1k} + \sum_{j=1}^{\ell} v_{j,k}\phi_j + \varepsilon_{z_{1k}}, \qquad k = 1, \dots, r \tag{32}
$$

where $(\cdot)_{j,k}$ denotes the $k$th row of $(\cdot)_j$.
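The practical payoff of (31)–(32) is that $\xi$ and $\Omega$ are driven by measured signals only, while the unknown $\Theta$ enters linearly afterward. The sketch below verifies this superposition numerically by Euler-integrating the observer and the two K-filters side by side ($r = 2$, $\ell = 2$; the regressors, gains, and signals are all made up for illustration):

```python
import numpy as np

r, ell, dt, N = 2, 2, 1e-3, 1000
k_bar = np.array([3.0, 2.0])                       # makes A_0 Hurwitz
A0 = np.array([[-k_bar[0], 1.0], [-k_bar[1], 0.0]])
Theta = np.array([0.7, -0.4])                      # "true" parameters (unknown in practice)

def F_T(y, u):
    """F(y,u)^T: zero rows except the last, holding sigma_j = w1j + w2j*u.
    The regressors below are invented purely for this demonstration."""
    w1 = np.array([y, y * y])      # stand-ins for columns of W_f^T(y)
    w2 = np.array([1.0, y])        # stand-ins for columns of W_g^T(y)
    out = np.zeros((r, ell))
    out[-1, :] = w1 + w2 * u
    return out

z_bar = np.zeros(r)                # observer state, integrated directly
xi = np.zeros(r)                   # K-filter 1
Omega_T = np.zeros((r, ell))       # K-filter 2
for n in range(N):
    t = n * dt
    y, u = np.sin(t), np.cos(2 * t)            # arbitrary measured signals
    z_bar = z_bar + dt * (A0 @ z_bar + k_bar * y + F_T(y, u) @ Theta)
    xi = xi + dt * (A0 @ xi + k_bar * y)
    Omega_T = Omega_T + dt * (A0 @ Omega_T + F_T(y, u))

# Superposition: xi + Omega^T Theta reproduces the observer state
assert np.max(np.abs(z_bar - (xi + Omega_T @ Theta))) < 1e-9
```

Because the dynamics are linear, the identity $\bar{z}_1 = \xi + \Omega^T\Theta$ holds step by step; this is what lets the backstepping design below treat $\Theta$ as a parameter to be adapted rather than a quantity that must be known before filtering.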

### **3.2 Spatial domain output feedback adaptive control system**

To apply the adaptive backstepping method, we first rewrite the derivative of the output $\hat{y}$ as

$$
\dot{\hat{y}} = \dot{\hat{z}}_{11} + \dot{\hat{d}}_y = \hat{z}_{12} + \hat{d}_{si_1} + \dot{\hat{d}}_y = \bar{z}_{12} + \varepsilon_{z_{12}} + \hat{d}_{si_1} + \dot{\hat{d}}_y \tag{33}
$$

With the second equation in (32), (33) can be written as

$$
\dot{\hat{y}} = \bar{z}_{12} + \varepsilon_{z_{12}} + \hat{d}_{si_1} + \dot{\hat{d}}_y = \xi_{12} + v_{\ell,2}\phi_\ell + \bar{\omega}^T\Theta + \varepsilon_{z_{12}} + \hat{d}_{si_1} + \dot{\hat{d}}_y
$$

where $\bar{\omega}^T = [v_{1,2} \cdots v_{\ell-1,2}\;\; 0]$.

In view of designing output feedback backstepping with K-filters, we need to find a set of K-filter parameters, i.e., $v_{\ell,2}, \cdots, v_{1,2}$, separated from $\hat{u}$ by the same number of integrators as between $\hat{z}_{12}$ and $\hat{u}$. From (31), we see that $v_{\ell,2}, \cdots, v_{1,2}$ are all candidates if the $w_{2j}$ are not zero. In the subsequent derivation, we assume that $v_{\ell,2}$ is selected. Therefore, the system incorporating the K-filters can be represented by

$$
\begin{aligned} \dot{\hat{y}} &= \xi_{12} + v_{\ell,2}\phi_\ell + \bar{\omega}^T\Theta + \varepsilon_{z_{12}} + \hat{d}_{si_1} + \dot{\hat{d}}_y, \\ \dot{v}_{\ell,i} &= v_{\ell,i+1} - k_i v_{\ell,1}, \quad i = 2, \cdots, r-1, \\ \dot{v}_{\ell,r} &= -k_r v_{\ell,1} + w_{1\ell} + w_{2\ell}\hat{u} \end{aligned} \tag{34}
$$

To apply adaptive backstepping to (34), a new set of coordinates will be introduced

$$
z_1 = \hat{y} - \hat{y}_m, \qquad z_i = v_{\ell,i} - \alpha_{i-1}, \quad i = 2, \cdots, r \tag{35}
$$

where $\hat{y}_m$ is the prespecified reference output, and $\alpha_{i-1}$ is the virtual input to be used for stabilizing each state equation. For simplicity, we define $\partial\alpha_0/\partial\hat{y} \triangleq -1$ for subsequent derivations.

**Step 1** ($i = 1$): With (35), the first state equation in (34) can be expressed as

$$
\dot{z}_1 = \xi_{12} + z_2\phi_\ell + \alpha_1\phi_\ell + \bar{\omega}^T\Theta + \varepsilon_{z_{12}} + \hat{d}_{si_1} + \dot{\hat{d}}_y - \dot{\hat{y}}_m \tag{36}
$$

Consider a Lyapunov function $V_1 = \frac{1}{2}z_1^2$ and calculate its derivative

$$
\dot{V}_1 = z_1\dot{z}_1 = z_1\left(\xi_{12} + z_2\phi_\ell + \alpha_1\phi_\ell + \bar{\omega}^T\Theta + \varepsilon_{z_{12}} + \hat{d}_{si_1} + \dot{\hat{d}}_y - \dot{\hat{y}}_m\right) \tag{37}
$$

Define the estimate of $\phi_i$ as $\tilde{\phi}_i$ and the parameter error as $\Phi = [\Phi_1 \cdots \Phi_\ell]^T = \Theta - \tilde{\Theta}$, where $\tilde{\Theta} = [\tilde{\phi}_{f1} \cdots \tilde{\phi}_{fk}\;\; \tilde{\phi}_{g1} \cdots \tilde{\phi}_{gl}]^T = [\tilde{\phi}_1 \cdots \tilde{\phi}_\ell]^T \in \mathbb{R}^\ell$. Note that $\Theta$ is the "true" parameter vector, whereas $\tilde{\Theta}$ is the estimated parameter vector. Design the virtual input $\alpha_1$ as $\alpha_1 = \bar{\alpha}_1/\tilde{\phi}_\ell$ and specify

$$
\bar{\alpha}_1 = \frac{1}{z_1}\left(-z_1\xi_{12} - z_1z_2\tilde{\phi}_\ell - z_1\bar{\omega}^T\tilde{\Theta} + z_1\dot{\hat{y}}_m - c_1z_1^2 - d_1z_1^2 - g_1z_1^2\right) \tag{38}
$$

where $c_i$, $d_i$, and $g_i$ are positive design parameters. Therefore, (37) becomes

$$
\dot{V}_1 = -c_1z_1^2 - d_1z_1^2 - g_1z_1^2 + \tau_1\Phi + z_1\left(\varepsilon_{z_{12}} + \hat{d}_{si_1} + \dot{\hat{d}}_y\right) \tag{39}
$$

where $\tau_1\Phi = z_1z_2\Phi_\ell + z_1\alpha_1\Phi_\ell + z_1\bar{\omega}^T\Phi$.
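The cancellation that produces (39) can be spot-checked numerically: substituting $\alpha_1 = \bar{\alpha}_1/\tilde{\phi}_\ell$ from (38) into (37) must leave only the damping terms, $\tau_1\Phi$, and the disturbance term. A scalar check with random illustrative numbers ($\ell = 2$):

```python
import numpy as np

rng = np.random.default_rng(0)
# Random illustrative values for every quantity appearing in (36)-(39)
z1, z2, xi12, ydot_m = rng.normal(size=4)
eps, d_si, d_y_dot = rng.normal(size=3)        # eps_{z12}, d^_{si1}, d^'_y
c1, d1, g1 = 0.5, 0.3, 0.2                     # design parameters
Theta = rng.normal(size=2)                     # true parameters; phi_ell = Theta[-1]
Theta_t = rng.normal(size=2)                   # estimates (Theta tilde)
Phi = Theta - Theta_t                          # parameter error
omega_bar = np.array([rng.normal(), 0.0])      # [v_{1,2}, 0] for ell = 2

# Virtual input (38): alpha_1 = alpha_bar_1 / phi_tilde_ell
a_bar1 = (-z1 * xi12 - z1 * z2 * Theta_t[-1] - z1 * omega_bar @ Theta_t
          + z1 * ydot_m - c1 * z1**2 - d1 * z1**2 - g1 * z1**2) / z1
alpha1 = a_bar1 / Theta_t[-1]

# (36)-(37): V1_dot = z1 * z1_dot with the TRUE parameters
z1_dot = (xi12 + z2 * Theta[-1] + alpha1 * Theta[-1] + omega_bar @ Theta
          + eps + d_si + d_y_dot - ydot_m)
V1_dot = z1 * z1_dot

# (39): the claimed closed form with tau_1 * Phi
tau1_Phi = z1 * z2 * Phi[-1] + z1 * alpha1 * Phi[-1] + z1 * omega_bar @ Phi
rhs = -(c1 + d1 + g1) * z1**2 + tau1_Phi + z1 * (eps + d_si + d_y_dot)

assert abs(V1_dot - rhs) < 1e-8
```

The known terms ($\xi_{12}$, $\dot{\hat{y}}_m$, and the $\tilde{\Theta}$-dependent pieces) cancel exactly, so only the designed damping, the parameter error, and the unmatched disturbances survive, as (39) states.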


**Step 2** ($i = 2, \cdots, r-1$): With respect to the new set of coordinates (35), the second equation of (34) can be rewritten as

$$
\begin{aligned} \dot{z}_i = {}& z_{i+1} + \alpha_i - k_i v_{\ell,1} - \Bigg[\frac{\partial\alpha_{i-1}}{\partial\hat{y}}\left(\xi_{12} + v_{\ell,2}\phi_\ell + \bar{\omega}^T\Theta + \varepsilon_{z_{12}} + \hat{d}_{si_1} + \dot{\hat{d}}_y\right) + \frac{\partial\alpha_{i-1}}{\partial\xi}\left(A_0\xi + \bar{k}\hat{y}\right) \\ & + \frac{\partial\alpha_{i-1}}{\partial\tilde{\Theta}}\dot{\tilde{\Theta}} + \sum_{j=1}^{\ell}\frac{\partial\alpha_{i-1}}{\partial v_j}\left(A_0v_j + e_r\sigma_j\right) + \sum_{j=1}^{i-1}\frac{\partial\alpha_{i-1}}{\partial\hat{y}_m^{(j-1)}}\hat{y}_m^{(j)}\Bigg] \end{aligned}
$$

Consider a Lyapunov function $V_i = V_{i-1} + \frac{1}{2}z_i^2$ and specify the virtual input $\alpha_i$ analogously to the first step.

The derivative of $V_i$ then becomes

$$
\dot{V}_i = -\sum_{j=1}^{i}\left(c_jz_j^2 + d_j\left(\frac{\partial\alpha_{j-1}}{\partial\hat{y}}\right)^2z_j^2 + g_j\left(\frac{\partial\alpha_{j-1}}{\partial\hat{y}}\right)^2z_j^2\right) + \tau_i\Phi - \sum_{j=1}^{i}z_j\frac{\partial\alpha_{j-1}}{\partial\hat{y}}\left(\varepsilon_{z_{12}} + \hat{d}_{si_1} + \dot{\hat{d}}_y\right)
$$

where $\tau_i\Phi = \tau_1\Phi - \sum_{j=2}^{i}\frac{\partial\alpha_{j-1}}{\partial\hat{y}}\left(z_jv_{\ell,2}\Phi_\ell + z_j\bar{\omega}^T\Phi\right)$.

**Step 3** ($i = r$): With respect to the new set of coordinates (35), the third equation of (34) can be written as

$$
\begin{aligned} \dot{z}_r = {}& -k_rv_{\ell,1} + w_{1\ell} + w_{2\ell}\hat{u} - \Bigg[\frac{\partial\alpha_{r-1}}{\partial\hat{y}}\left(\xi_{12} + v_{\ell,2}\phi_\ell + \bar{\omega}^T\Theta + \varepsilon_{z_{12}} + \hat{d}_{si_1} + \dot{\hat{d}}_y\right) + \frac{\partial\alpha_{r-1}}{\partial\xi}\left(A_0\xi + \bar{k}\hat{y}\right) \\ & + \frac{\partial\alpha_{r-1}}{\partial\tilde{\Theta}}\dot{\tilde{\Theta}} + \sum_{j=1}^{\ell}\frac{\partial\alpha_{r-1}}{\partial v_j}\left(A_0v_j + e_r\sigma_j\right) + \sum_{j=1}^{r-1}\frac{\partial\alpha_{r-1}}{\partial\hat{y}_m^{(j-1)}}\hat{y}_m^{(j)}\Bigg] \end{aligned}
$$

The overall Lyapunov function may now be chosen as

$$
V_r = V_{r-1} + \frac{1}{2}z_r^2 + \frac{1}{2}\Phi^T\Gamma^{-1}\Phi + \sum_{j=1}^{r}\frac{1}{4d_j}\varepsilon^TP\varepsilon \tag{40}
$$

where $\Gamma$ is a symmetric positive definite matrix, i.e., $\Gamma = \Gamma^T > 0$. With the definition of the state estimation error $\varepsilon$, we can obtain that


Specify the control input as

where $\hat{u}_{\hat{R}}$ is an additional (repetitive) control input to be specified below. Substituting (41) into $\dot{V}_r$, and writing $\tau_r\Phi = \tau_{r-1}\Phi - \frac{\partial\alpha_{r-1}}{\partial\hat{y}}\left(z_rv_{\ell,2}\Phi_\ell + z_r\bar{\omega}^T\Phi\right)$, we arrive at

$$
\begin{aligned} \dot{V}_r = {}& -\sum_{j=1}^{r}\left(c_jz_j^2 + d_j\left(\frac{\partial\alpha_{j-1}}{\partial\hat{y}}\right)^2z_j^2 + g_j\left(\frac{\partial\alpha_{j-1}}{\partial\hat{y}}\right)^2z_j^2\right) + \left(\tau_r + \dot{\Phi}^T\Gamma^{-1}\right)\Phi + z_r\hat{u}_{\hat{R}} \\ & - \sum_{j=1}^{r}z_j\frac{\partial\alpha_{j-1}}{\partial\hat{y}}\left(\varepsilon_{z_{12}} + \hat{d}_{si_1} + \dot{\hat{d}}_y\right) - \sum_{j=1}^{r}\frac{1}{4d_j}\varepsilon^T\varepsilon + \sum_{j=1}^{r}\frac{1}{4d_j}\left(\varepsilon^TP\Delta + \Delta^TP\varepsilon\right) \end{aligned} \tag{42}
$$

From (42), we may specify the parameter update law to cancel the term $\left(\tau_r + \dot{\Phi}^T\Gamma^{-1}\right)\Phi$. To guarantee that the estimated parameters always lie within an allowable region $w$, a projected parametric update law is specified as

$$
\dot{\tilde{\Theta}} = \begin{cases} \Gamma\tau_r^T, & \text{if } \tilde{\Theta} \in w^0, \\ P_R\left(\Gamma\tau_r^T\right), & \text{if } \tilde{\Theta} \in \partial w \text{ and } \tau_r\Gamma\tilde{\Theta}_{prp} > 0, \end{cases} \tag{43}
$$

where $w$ is the allowable parametric set; it is compact and convex, with its interior and boundary denoted by $w^0$ and $\partial w$, respectively. If the current estimated parameter vector lies within the allowable parametric set, the regular update law is used. If it is on the boundary of the allowable parametric set, the projected update law denoted by $P_R(\cdot)$ is employed to stop the parameter vector from leaving the set. With (43), adding and subtracting the terms $\sum_{j=1}^{r}\frac{1}{4g_j}\big|\hat{d}_{si_1} + \dot{\hat{d}}_y\big|^2$ in (42), we have

$$
\begin{aligned} \dot{V}_r \le {}& -\sum_{j=1}^{r}c_jz_j^2 - \sum_{j=1}^{r}d_j\left(\frac{\partial\alpha_{j-1}}{\partial\hat{y}}z_j + \frac{1}{2d_j}\varepsilon_{z_{12}}\right)^2 - \sum_{j=1}^{r}g_j\left(\frac{\partial\alpha_{j-1}}{\partial\hat{y}}z_j + \frac{1}{2g_j}\left|\hat{d}_{si_1} + \dot{\hat{d}}_y\right|\right)^2 \\ & + \sum_{j=1}^{r}\frac{1}{4g_j}\left|\hat{d}_{si_1} + \dot{\hat{d}}_y\right|^2 + \sum_{j=1}^{r}\frac{1}{4d_j}\left(\varepsilon^TP\Delta + \Delta^TP\varepsilon\right) - \sum_{j=1}^{r}\frac{1}{4d_j}\left(\varepsilon_{z_{11}}^2 + \varepsilon_{z_{13}}^2 + \cdots + \varepsilon_{z_{1r}}^2\right) + z_r\hat{u}_{\hat{R}} \end{aligned} \tag{44}
$$
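To make the projection in (43) concrete, the sketch below realizes $P_R(\cdot)$ for the simplest case of a ball-shaped allowable set $w = \{\tilde{\Theta} : \|\tilde{\Theta}\| \le \rho\}$: on the boundary, the outward radial component of the raw update $\Gamma\tau_r^T$ is stripped, so the estimate cannot leave $w$. This is one standard realization under an assumed set shape, not necessarily the exact $P_R$ intended in the chapter:

```python
import numpy as np

rho = 1.0                      # radius of the (illustrative) allowable ball w
Gamma = np.diag([0.5, 0.2])    # adaptation gain, symmetric positive definite

def projected_update(theta_t, tau_r_T):
    """One realization of (43) for w = {theta : ||theta|| <= rho}."""
    v = Gamma @ tau_r_T                        # raw update Gamma * tau_r^T
    on_boundary = np.isclose(np.linalg.norm(theta_t), rho)
    outward = theta_t @ v > 0                  # update points out of the set
    if on_boundary and outward:
        n = theta_t / np.linalg.norm(theta_t)  # outward normal of the ball
        v = v - (n @ v) * n                    # P_R: strip the outward component
    return v

# Boundary case: the projected update must be tangent to the boundary
theta = np.array([1.0, 0.0])                   # estimate sitting on the boundary
upd = projected_update(theta, np.array([2.0, 1.0]))
assert abs(theta @ upd) < 1e-12                # no outward radial component left
```

In the interior the regular law $\Gamma\tau_r^T$ is returned unchanged, so the projection only intervenes exactly when the estimate would otherwise exit the allowable set.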

The tracking error $Z_1(\tilde{s})$ and the control input $\hat{U}_{\hat{R}}(\tilde{s})$ are related by

$$
\hat{U}_{\hat{R}}(\tilde{s}) = -\hat{R}(\tilde{s})\hat{C}(\tilde{s})Z_1(\tilde{s}) \tag{45}
$$

where we have chosen $\hat{R}(\tilde{s})$ as a low-order and attenuated-type internal model filter, i.e.,

$$
\hat{R}(\tilde{s}) = \prod_{i=1}^{k}\frac{\tilde{s}^2 + 2\zeta_i\omega_{ni}\tilde{s} + \omega_{ni}^2}{\tilde{s}^2 + 2\xi_i\omega_{ni}\tilde{s} + \omega_{ni}^2} \tag{46}
$$

where $k$ is the number of periodic frequencies, $\omega_{ni}$ is the $i$th disturbance frequency in rad/rev, and $\xi_i$ and $\zeta_i$ are damping ratios satisfying $0 < \xi_i < \zeta_i < 1$. The gain of $\hat{R}(\tilde{s})$ at those periodic frequencies can be varied by adjusting the values of $\xi_i$ and $\zeta_i$.
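For one factor of $\hat{R}(\tilde{s})$, the gain at $\tilde{s} = j\omega_{ni}$ reduces to the ratio of the two damping ratios, since the quadratic terms cancel there. The check below evaluates this numerically (illustrative numbers; it assumes the smaller ratio $\xi_i$ sits in the denominator so that the factor amplifies at $\omega_{ni}$):

```python
import numpy as np

w_n = 10.0            # periodic frequency in rad/rev (illustrative)
zeta, xi = 0.5, 0.05  # damping ratios with 0 < xi < zeta < 1

def R_factor(w):
    """One factor of the attenuated-type internal model filter,
    evaluated on the frequency axis s_tilde = j*w."""
    s = 1j * w
    num = s * s + 2 * zeta * w_n * s + w_n**2
    den = s * s + 2 * xi * w_n * s + w_n**2
    return num / den

# At w = w_n the +-w_n^2 terms cancel and the gain is exactly zeta / xi
assert np.isclose(abs(R_factor(w_n)), zeta / xi)
# Away from w_n the factor is nearly transparent (gain close to 1)
assert np.isclose(abs(R_factor(0.1 * w_n)), 1.0, atol=0.05)
```

With these numbers the factor provides a finite gain of $\zeta_i/\xi_i = 10$ at $\omega_{ni}$ while staying near unity elsewhere; letting $\xi_i \to 0$ recovers an undamped internal model with unbounded gain at the periodic frequency.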

### **Theorem 3.1**

Consider the control laws (41) and (45) applied to a nonlinear system with unmodeled dynamics, parametric uncertainty, and output disturbance given by (30). Suppose that $\hat{y}_m, \dot{\hat{y}}_m, \cdots, \hat{y}_m^{(r)}$ (where $r$ is the relative degree) and $\hat{d}_y, \dot{\hat{d}}_y, \cdots, \hat{d}_y^{(r)}$ are known and bounded; $\hat{d}_{si_1}^{(r-1)}, \hat{d}_{si_2}^{(r-2)}, \cdots, \dot{\hat{d}}_{si_{r-1}}$ are sufficiently smooth; $f$, $g$, $h$, $L_f^rh$, $L_gL_f^{r-1}h$ are Lipschitz continuous functions; and at least one column of $\bar{W}(\hat{y})$ is bounded away from zero. Moreover, suppose that a loop-shaping filter $\hat{C}(\tilde{s})$ is specified to stabilize the feedback system. Then, the parametric update law given by (43) yields bounded tracking error.

Proof: Refer to [36].
