are significant. The restraint and conservatism in modeling uncertainties were relieved by Chen and Yang [24] by formulating a spatial‐based repetitive control system with the adoption of adaptive feedback linearization. However, this method is only applicable to systems with measurement of all states available in real time.

The design of spatial‐based repetitive control has become sophisticated enough to cope with a class of uncertain nonlinear systems. In contrast, existing spatial‐based iterative learning controls [27,28] are still primitive and address only linear systems. It is not apparent whether those methods can be generalized to nonlinear and high‐order systems. Knowing that spatial uncertainties in rotary systems may be tackled as periodic disturbances or periodic parameters [29–31], treating the uncertainties as disturbances seems to be more prevalent in the literature.

This book chapter reviews and summarizes recent progress in the design of spatial‐based robust adaptive repetitive and iterative learning control. In particular, the collection of methods targets rotary systems that are subject to spatially periodic uncertainties and is based on nonlinear control paradigms, e.g., adaptive feedback linearization and adaptive backstepping. We will elaborate on the design procedure (applicable to generic *n*th‐order systems) of each method and the corresponding stability and convergence theorems. The outline of the chapter is as follows.

Section 2 presents a spatial‐based robust repetitive control design that builds on the design paradigm of feedback linearization. This design evolves from the work of Chen and Yang [24]. The proposed design resolves the major shortcoming of their design, i.e., the requirement of full‐state feedback, by incorporating a K‐filter‐type state observer. The system is allowed to operate at varying speed, and the open‐loop nonlinear time‐invariant (NTI) plant model identified for controller design is assumed to have both unknown parameters and unmodeled dynamics. To attain robust stabilization and high‐performance tracking, we propose a two‐degrees‐of‐freedom control configuration. The controller consists of two modules, one aiming at robust stabilization and the other at tracking performance. One control module applies adaptive feedback linearization with projected parametric adaptation to stabilize the system and account for parametric uncertainty. Adaptive control plays the role of tuning the estimated parameters, which differs from those methods (e.g., [22,23]) where it tunes the period of the repetitive kernel. The other control module comprises a spatial low‐order and attenuated repetitive controller combined with a loop‐shaping filter and is integrated with the adaptively controlled system. The overall system may operate at variable speed, is robust to model uncertainties, and is capable of rejecting spatially periodic and nonperiodic disturbances. The stability of the design can be proven under bounded disturbance and uncertainties.

Section 3 presents another spatial‐based robust repetitive control design that resorts to the design paradigm of backstepping. This design builds on the work of Yang and Chen [32]. The method has been extended to a category of nonlinear systems (instead of just LTI systems). Furthermore, the main deficiency of requiring full‐state feedback in Yang and Chen's design is resolved by incorporating a K‐filter‐type state observer. To achieve robust stabilization and high‐performance tracking, a two‐module control configuration is constructed. One

102 Robust Control - Theoretical Models and Case Studies

### **Assumption 2.1**

(1) $d_y(t)$ belongs to a class of bounded signals with (dominant) spatially periodic and band‐limited (or nonperiodic) components.

Here, band‐limited disturbances are signals whose Fourier transform or power spectral density is zero above a certain finite frequency. The number of distinct spatial frequencies and the spectral distribution are the only information available about the disturbances.

(2) $f_t(x(t), \phi_f)$ and $g_t(x(t), \phi_g)$ are known vector‐valued functions with unknown but bounded system parameters, i.e., $\phi_f = \begin{bmatrix} \phi_{f1} & \cdots & \phi_{fk} \end{bmatrix}$ and $\phi_g = \begin{bmatrix} \phi_{g1} & \cdots & \phi_{gl} \end{bmatrix}$.

(3) $\Delta f_t(x(t), \phi_f)$ and $\Delta g_t(x(t), \phi_g)$ represent unmodeled dynamics, which are also assumed to be bounded.

Consider an alternate variable $\theta = \lambda(t)$, i.e., the angular displacement, instead of time $t$ as the independent variable. Because $\lambda(t) = \int_0^t \omega(\tau)\,d\tau + \lambda(0)$, where $\omega(t)$ is the angular velocity, the following condition

$$
\omega(t) = \frac{d\theta}{dt} > 0, \; \forall \; t \ge 0 \tag{2}
$$

will ensure that $\lambda(t)$ is strictly monotonic, so that $t = \lambda^{-1}(\theta)$ exists. Hence, all the time‐domain variables can be transformed into their counterparts in the $\theta$‐domain, i.e.,

$$\hat{x}(\theta) = x\left(\lambda^{-1}(\theta)\right), \ \hat{y}(\theta) = y\left(\lambda^{-1}(\theta)\right), \ \hat{u}(\theta) = u\left(\lambda^{-1}(\theta)\right), \ \hat{d}(\theta) = d\left(\lambda^{-1}(\theta)\right), \ \hat{\omega}(\theta) = \omega\left(\lambda^{-1}(\theta)\right)$$

where we denote $\hat{\bullet}$ as the $\theta$‐domain representation of $\bullet$. Note that, in practice, (2) can usually be satisfied for most rotational motion systems where the rotary component rotates only in one direction. Because

$$dx(t)/dt = d\theta/dt \cdot d\hat{x}(\theta)/d\theta = \hat{\omega}(\theta) \cdot d\hat{x}(\theta)/d\theta$$

(1) can be rewritten as

$$\begin{aligned} \hat{\omega}(\theta)\frac{d\hat{x}(\theta)}{d\theta} &= \left[f_t\left(\hat{x}(\theta), \phi_f\right) + \Delta f_t\left(\hat{x}(\theta), \phi_f\right)\right] + \left[g_t\left(\hat{x}(\theta), \phi_g\right) + \Delta g_t\left(\hat{x}(\theta), \phi_g\right)\right]\hat{u}(\theta) \\ \hat{y}(\theta) &= h\left(\hat{x}(\theta)\right) + \hat{d}_y(\theta) = \hat{x}_1(\theta) + \hat{d}_y(\theta). \end{aligned} \tag{3}$$

Equation (3) is a nonlinear position‐invariant (NPI, as opposed to time‐invariant) system with $\theta$ as the independent variable. Note that we define the Laplace transform of a signal $\hat{g}(\theta)$ in the angular displacement domain as $\hat{G}(\tilde{s}) = \int_0^\infty \hat{g}(\theta)e^{-\tilde{s}\theta}\,d\theta$.

This definition will be useful for describing the linear portion of the overall control system.
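Condition (2) is what makes this change of variables computable in practice: since $\lambda(t)$ is strictly increasing, $\lambda^{-1}$ can be evaluated numerically and any time‐domain signal resampled onto a uniform $\theta$ grid. The following is a minimal sketch of that procedure; the speed profile $\omega(t)$ and the state trajectory are made‐up illustration values, not taken from the chapter:

```python
import numpy as np

# Made-up speed profile with omega(t) > 0, so lambda(t) = integral of omega
# is strictly increasing and lambda^{-1}(theta) exists (condition (2)).
t = np.linspace(0.0, 2.0, 2001)                     # time grid [s]
omega = 2.0 + 1.5 * t                               # angular velocity [rad/s]
lam = np.concatenate(([0.0],
      np.cumsum(0.5 * (omega[1:] + omega[:-1]) * np.diff(t))))  # trapezoidal lambda(t)

x = np.sin(3.0 * lam) + 0.1 * t                     # made-up state trajectory x(t)

# x_hat(theta) = x(lambda^{-1}(theta)): invert lambda by interpolation, then resample.
theta = np.linspace(0.0, lam[-1], 2001)
t_of_theta = np.interp(theta, lam, t)               # numerical lambda^{-1}(theta)
x_hat = np.interp(t_of_theta, t, x)

# The sin(3*lambda(t)) component, aperiodic in t at varying speed, becomes the
# exactly theta-periodic component sin(3*theta) after the transformation.
err = float(np.max(np.abs(x_hat - (np.sin(3.0 * theta) + 0.1 * t_of_theta))))
print(err < 1e-3)  # → True (only interpolation error remains)
```

This is precisely why spatially periodic disturbances, which look aperiodic in time when the speed varies, become stationary periodic signals once $\theta$ is the independent variable.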

Drop the *θ* notation and rewrite (3) in the form

$$\dot{\hat{x}} = f(\hat{x}, \phi_f) + g(\hat{x}, \phi_g)\hat{u} + \hat{d}_s, \ \hat{y} = h(\hat{x}) + \hat{d}_y = \hat{\omega} + \hat{d}_y \tag{4}$$

where the overdot denotes differentiation with respect to $\theta$ and the terms involving unstructured uncertainty are merged into $\hat{d}_s = \Delta f(\hat{x}, \phi_f) + \Delta g(\hat{x}, \phi_g)\hat{u}$ with $\Delta f(\hat{x}, \phi_f) = \Delta f_t(\hat{x}, \phi_f)/\hat{x}_1$ and $\Delta g(\hat{x}, \phi_g) = \Delta g_t(\hat{x}, \phi_g)/\hat{x}_1$. In addition, we have


$$f(\hat{\mathbf{x}}, \boldsymbol{\phi}\_f) = f\_t(\hat{\mathbf{x}}, \boldsymbol{\phi}\_f) / \hat{\mathbf{x}}\_1 \text{ , } \operatorname{g}(\hat{\mathbf{x}}, \boldsymbol{\phi}\_{\boldsymbol{\varrho}}) = \operatorname{g}\_t(\hat{\mathbf{x}}, \boldsymbol{\phi}\_{\boldsymbol{\varrho}}) / \hat{\mathbf{x}}\_1 \text{ , } h(\hat{\mathbf{x}}) = \hat{\boldsymbol{\phi}} = \hat{\mathbf{x}}\_1$$

The state variables have been specified such that the angular velocity $\hat{\omega}$ is equal to $\hat{x}_1$, i.e., the undisturbed output $h(\hat{x})$. To proceed, we will adopt the definitions and notations given in [24] for the Lie derivative, relative degree, and diffeomorphism.

It can be verified that (4) has the same relative degree in $D_0 = \{\hat{x} \in \mathbb{R}^n \mid \hat{x}_1 \neq 0\}$ as the NTI model in (1). If (4) has relative degree $r$, the following nonlinear coordinate transformation can be defined:

$$\hat{z} = T(\hat{x}) = \begin{bmatrix} \psi_1(\hat{x}) & \cdots & \psi_{n-r}(\hat{x}) & h(\hat{x}) & \cdots & L_f^{r-1}h(\hat{x}) \end{bmatrix}^T \triangleq \begin{bmatrix} \hat{z}_2 \\ \hat{z}_1 \end{bmatrix}$$

where $\psi_1$ to $\psi_{n-r}$ are chosen such that $T(\hat{x})$ is a diffeomorphism on $D_0 \subset D$ and

$$L_g\psi_i(\hat{x}) = 0, \ 1 \le i \le n - r$$

$\forall \hat{x} \in D_0$. With respect to the new coordinates, i.e., $\hat{z}_1$ and $\hat{z}_2$, (4) can be transformed into the so‐called normal form, i.e.,

$$\begin{aligned} \dot{\hat{z}}_2 &= \left.L_f\psi(\hat{x})\right|_{\hat{x} = T^{-1}(\hat{z})} + \hat{d}_{so} \triangleq \Psi\left(\hat{z}_1, \hat{z}_2\right) \\ \dot{\hat{z}}_1 &= A_c\hat{z}_1 + B_c\left[\left.L_gL_f^{r-1}h(\hat{x})\right|_{\hat{x} = T^{-1}(\hat{z})}\hat{u} + \left.L_f^{r}h(\hat{x})\right|_{\hat{x} = T^{-1}(\hat{z})}\right] + \hat{d}_{si}, \quad \hat{y} = C_c\hat{z}_1 + \hat{d}_y \end{aligned} \tag{5}$$

where $\hat{d}_{so}$ and $\hat{d}_{si} = \begin{bmatrix} \hat{d}_{si1} & \cdots & \hat{d}_{sir} \end{bmatrix}^T$ come from $\hat{d}_s$ going through the indicated coordinate transformation, $\hat{z}_1 = \begin{bmatrix} \hat{z}_{11} & \cdots & \hat{z}_{1r} \end{bmatrix}^T \in \mathbb{R}^r$, $\hat{z}_2 \in \mathbb{R}^{n-r}$, and $(A_c, B_c, C_c)$ is a canonical form representation of a chain of $r$ integrators. The first equation in (5) describes the internal dynamics and is not affected by the control $\hat{u}$. By setting $\hat{z}_1 = 0$, we obtain $\dot{\hat{z}}_2 = \Psi(0, \hat{z}_2)$, which is the zero dynamics of (4) or (5). The system is called minimum phase if the zero dynamics has an asymptotically stable equilibrium point in the domain of interest. To allow us to present the proposed algorithm and stability analysis in a simpler context, we will make the following assumptions for the subsequent derivation.

### **Assumption 2.2**

(1) $f(\hat{x}(\theta), \phi_f)$ and $g(\hat{x}(\theta), \phi_g)$ are linearly related to the unknown system parameters, i.e.,

$$f\left(\hat{x}(\theta), \phi_f\right) = \phi_{f1}f_1\left(\hat{x}(\theta)\right) + \cdots + \phi_{fk}f_k\left(\hat{x}(\theta)\right), \quad g\left(\hat{x}(\theta), \phi_g\right) = \phi_{g1}g_1\left(\hat{x}(\theta)\right) + \cdots + \phi_{gl}g_l\left(\hat{x}(\theta)\right) \tag{6}$$

(2) System (4) is exponentially minimum phase, i.e., the zero dynamics is exponentially stable;

(3) The output disturbance is sufficiently smooth, i.e., $\dot{\hat{d}}_y, \cdots, \hat{d}_y^{(r)}$ exist;

(4) $\hat{d}_{si1}^{(r-1)}, \hat{d}_{si2}^{(r-2)}, \cdots, \dot{\hat{d}}_{si\,r-1}$ exist, i.e., the transformed unstructured uncertainty is sufficiently smooth; and

(5) The reference command $\hat{y}_m$ and its first $r$ derivatives are known and bounded. Moreover, $\hat{y}_m^{(r)}$ is piecewise continuous.

With Assumption 2.2, the design of a nonlinear state observer may focus on the external dynamics of (5), i.e.,

$$\dot{\hat{z}}_1 = A_c\hat{z}_1 + B_c\left[\left.L_gL_f^{r-1}h(\hat{x})\right|_{\hat{x} = T^{-1}(\hat{z})}\hat{u} + \left.L_f^{r}h(\hat{x})\right|_{\hat{x} = T^{-1}(\hat{z})}\right] + \hat{d}_{si} \tag{7}$$

#### **2.1 State observer design**

In this section, we show how to establish a state observer for the transformed NPI system (5). Because $f(\hat{x}, \phi_f)$ and $g(\hat{x}, \phi_g)$ are assumed to be linearly related to the system parameters, $L_f^r h(\hat{x})$ and $L_gL_f^{r-1}h(\hat{x})$ can be expressed as

$$L_f^r h(\hat{x}) = \Theta^T W_f\left(\hat{x}\right), \ L_gL_f^{r-1}h(\hat{x}) = \Theta^T W_g\left(\hat{x}\right)$$

where $W_f(\hat{x})$ and $W_g(\hat{x})$ are two nonlinear functions, and

$$\Theta = \begin{bmatrix} \phi_{f1} & \cdots & \phi_{fk} & \phi_{g1} & \cdots & \phi_{gl} \end{bmatrix}^T = \begin{bmatrix} \phi_1 & \cdots & \phi_\ell \end{bmatrix}^T \in \mathbb{R}^\ell$$

where ℓ denotes the number of unknown parameters. Hence, (7) can be rewritten as

$$\dot{\hat{z}}_1 = A_c\hat{z}_1 + B_c\left[\Theta^T W_g\left(\hat{x}\right)\hat{u} + \Theta^T W_f\left(\hat{x}\right)\right] + \hat{d}_{si} \tag{8}$$

Equation (8) can be further written in the form


$$\dot{\hat{z}}_1 = A_0\hat{z}_1 + \overline{k}\,\hat{z}_{11} + B_c\left[\Theta^T W_g\left(\hat{x}\right)\hat{u} + \Theta^T W_f\left(\hat{x}\right)\right] + \hat{d}_{si} \tag{9}$$

where $A_0 = \begin{bmatrix} -k_1 & \\ \vdots & I_{(r-1)\times(r-1)} \\ -k_r & 0_{1\times(r-1)} \end{bmatrix} = A_c - \overline{k}\,e_1^T$ and $\overline{k} = \begin{bmatrix} k_1 & \cdots & k_r \end{bmatrix}^T$.

By properly choosing $\overline{k}$, the matrix $A_0$ can be made Hurwitz. Next, we adopt the following observer structure:

$$\dot{\overline{z}}_1 = A_0\overline{z}_1 + \overline{k}\hat{y} + B_c\left[\Theta^T \overline{W}_g\left(\hat{y}\right)\hat{u} + \Theta^T \overline{W}_f\left(\hat{y}\right)\right] \tag{10}$$

where $\overline{z}_1 = \begin{bmatrix} \overline{z}_{11} & \cdots & \overline{z}_{1r} \end{bmatrix}^T$ is the estimate of $\hat{z}_1$, and $\overline{W}_f(\hat{y})$ and $\overline{W}_g(\hat{y})$ are nonlinear functions with the same structure as $W_f(\hat{x})$ and $W_g(\hat{x})$, except that each entry of $\hat{x}$ is replaced by $\hat{y}$. Equation (10) can be further expressed as

$$\dot{\overline{z}}_1 = A_0\overline{z}_1 + \overline{k}\hat{y} + F(\hat{y}, \hat{u})^T\Theta \ \text{ with } \ F(\hat{y}, \hat{u})^T = \begin{bmatrix} 0_{(r-1)\times\ell} \\ \overline{W}_f^T\left(\hat{y}\right) + \overline{W}_g^T\left(\hat{y}\right)\hat{u} \end{bmatrix} \in \mathbb{R}^{r\times\ell} \tag{11}$$

Define the state estimation error as $\varepsilon \triangleq \begin{bmatrix} \varepsilon_{\hat{z}_{11}} & \cdots & \varepsilon_{\hat{z}_{1r}} \end{bmatrix}^T \triangleq \hat{z}_1 - \overline{z}_1$. The dynamics of the estimation error can be obtained by subtracting (10) from (9), i.e.,

$$\dot{\varepsilon} = A_0\varepsilon + \Delta, \quad \Delta = -\overline{k}\hat{d}_y + B_c\Theta^T\left[W_g\left(\hat{x}\right) - \overline{W}_g\left(\hat{y}\right)\right]\hat{u} + B_c\Theta^T\left[W_f\left(\hat{x}\right) - \overline{W}_f\left(\hat{y}\right)\right] + \hat{d}_{si}. \tag{12}$$

Here, we further assume that

### **Assumption 2.3**

(9) $W_g(\hat{x}) - \overline{W}_g(\hat{y})$ and $W_f(\hat{x}) - \overline{W}_f(\hat{y})$ are bounded, which ensures the boundedness of the estimation error. To see this, note that the solution of (12) may be viewed as the sum of the zero‐input response $\varepsilon_u$ and the zero‐state response $\varepsilon_s$, i.e., $\varepsilon = \varepsilon_u + \varepsilon_s$. The zero‐input response, governed by $\dot{\varepsilon}_u = A_0\varepsilon_u$, decays to zero exponentially because $A_0$ is Hurwitz, and the zero‐state response $\varepsilon_s$ is bounded due to the bounded disturbance $\hat{d}_y$, $W_g(\hat{x}) - \overline{W}_g(\hat{y})$, and $W_f(\hat{x}) - \overline{W}_f(\hat{y})$.
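The two stability facts used in this argument, namely that $\overline{k}$ places the eigenvalues of $A_0$ and that the response of (12) splits into a decaying zero‐input part plus a bounded zero‐state part, can be checked numerically. Below is a minimal sketch with $r = 2$; the gains $\overline{k}$, the target poles, the initial error, and the bounded input $\Delta$ are all made‐up illustration values:

```python
import numpy as np

# Build A0 = A_c - k_bar e1^T for r = 2: -k_bar in the first column, shifted identity.
r = 2
k_bar = np.array([3.0, 2.0])                       # made-up observer gains
A_c = np.eye(r, k=1)                               # chain of r integrators
e1 = np.zeros((r, 1)); e1[0] = 1.0
A0 = A_c - k_bar.reshape(-1, 1) @ e1.T             # eigenvalues = roots of s^2 + 3s + 2

assert np.all(np.real(np.linalg.eigvals(A0)) < 0)  # A0 is Hurwitz (poles -1, -2)

# Superposition argument: eps = eps_u + eps_s, with eps_u decaying to zero.
dt, N = 1e-3, 20000
Delta = lambda th: np.array([0.2 * np.sin(5.0 * th), 0.1])   # any bounded input

eps = np.array([1.0, -1.0])        # full response (nonzero initial state)
eps_u = eps.copy()                 # zero-input response
eps_s = np.zeros(r)                # zero-state response
for i in range(N):                 # forward-Euler integration in theta
    th = i * dt
    eps = eps + dt * (A0 @ eps + Delta(th))
    eps_u = eps_u + dt * (A0 @ eps_u)
    eps_s = eps_s + dt * (A0 @ eps_s + Delta(th))

print(np.allclose(eps, eps_u + eps_s), float(np.linalg.norm(eps_u)) < 1e-6)  # → True True
```

The zero‐state part stays bounded exactly because the driving term $\Delta$ is bounded, which is what condition (9) guarantees for (12).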

Equation (10) or (11) cannot be readily implemented due to the unknown parametric vector $\Theta$, but it motivates the subsequent mathematical manipulation. Define the state estimate as $\overline{z}_1 \triangleq \xi + \Omega^T\Theta$ such that $\xi = \begin{bmatrix} \xi_{11} & \cdots & \xi_{1r} \end{bmatrix}^T \in \mathbb{R}^r$ and $\Omega^T \in \mathbb{R}^{r\times\ell}$, and employ the following two K‐filters:

$$
\dot{\xi} = A_0\xi + \overline{k}\hat{y}, \quad \dot{\Omega}^T = A_0\Omega^T + F(\hat{y}, \hat{u})^T. \tag{13}
$$

It can be easily verified that (13) is equivalent to (11). Hence, (13) may replace the role of (11) for providing the state estimate. With $\Omega^T \triangleq \begin{bmatrix} v_1 & \cdots & v_\ell \end{bmatrix}$, the second equation of (13) may be further decomposed into

$$\dot{v}_j = A_0 v_j + e_r\sigma_j, \quad j = 1, 2, \cdots, \ell \tag{14}$$

where $e_r = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix}^T \in \mathbb{R}^r$ and $\sigma_j = w_{1j} + w_{2j}\hat{u}$, with $w_{1j}$ and $w_{2j}$ the $j$th columns of $\overline{W}_f^T(\hat{y})$ and $\overline{W}_g^T(\hat{y})$, respectively. Equation (13) is still not applicable due to $\Theta$. However, with the definition of the state estimation error $\varepsilon$, the state estimate, the first equation of (13), and (14), we acquire the following relationship that is not available from (11):

$$\hat{z}_{11} = \overline{z}_{11} + \varepsilon_{\hat{z}_{11}} = \xi_{11} + \sum_{j=1}^{\ell} v_{j,1}\phi_j + \varepsilon_{\hat{z}_{11}}, \ \cdots, \ \hat{z}_{1r} = \overline{z}_{1r} + \varepsilon_{\hat{z}_{1r}} = \xi_{1r} + \sum_{j=1}^{\ell} v_{j,r}\phi_j + \varepsilon_{\hat{z}_{1r}} \tag{15}$$

where $\bullet_{j,i}$ denotes the $i$th row of $\bullet_j$. Equation (15) will be used in the subsequent design.
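The claimed equivalence between the K‐filters (13) and the observer (11) follows from linearity: $\overline{z}_1 = \xi + \Omega^T\Theta$ satisfies (11) whenever $\xi$ and $\Omega^T$ satisfy (13). A minimal numerical check with $r = \ell = 2$; the output $\hat{y}$, the regressor $F$, the parameter vector $\Theta$, and the gains are made‐up illustration values:

```python
import numpy as np

r, ell, dth, N = 2, 2, 1e-4, 20000
k_bar = np.array([3.0, 2.0])                        # made-up observer gains
A0 = np.array([[-3.0, 1.0], [-2.0, 0.0]])           # A0 built from k_bar (Hurwitz)
Theta = np.array([0.7, -1.3])                       # "unknown" parameters (comparison only)

y_hat = lambda th: np.sin(th)                       # made-up measured output
# Regressor F(y,u)^T with the structure of (11): zero rows on top, data in the last row.
F_T = lambda th: np.array([[0.0, 0.0], [np.cos(th), np.sin(th) ** 2]])

xi, Om_T, z_direct = np.zeros(r), np.zeros((r, ell)), np.zeros(r)
for i in range(N):                                  # forward-Euler integration in theta
    th = i * dth
    xi = xi + dth * (A0 @ xi + k_bar * y_hat(th))                 # first K-filter of (13)
    Om_T = Om_T + dth * (A0 @ Om_T + F_T(th))                     # second K-filter of (13)
    z_direct = z_direct + dth * (A0 @ z_direct + k_bar * y_hat(th) + F_T(th) @ Theta)  # (11)

print(np.allclose(xi + Om_T @ Theta, z_direct))     # → True
```

The point of the decomposition is that $\xi$ and $\Omega^T$ are driven by measured signals only; $\Theta$ enters the estimate linearly, which is what the parametric adaptation later exploits.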

#### **2.2 Output feedback robust adaptive repetitive control system**

In this section, we show how to incorporate the state observer established in the previous section into an output feedback adaptive repetitive control system. The control configuration consists of two layers. The first layer is the adaptive feedback linearization, which tackles system nonlinearity and parametric uncertainty. The second layer is a repetitive control module comprising a repetitive controller and a loop‐shaping filter. This layer not only enhances the ability of the overall system to reject disturbances and reduces its sensitivity to model uncertainty and state estimation error but also improves the robustness of the parametric adaptation. Although inclusion of the state observer relieves the design of the need for full‐state feedback, it introduces extra dynamics into the system. Hence, the stability of the resulting system needs to be further justified.

Suppose that (4) has relative degree $r$. To perform input/output feedback linearization, differentiate the output $\hat{y}$ until the control input $\hat{u}$ appears, obtaining

$$
\hat{y}^{(r)} = \hat{z}_{11}^{(r)} + \hat{d}_y^{(r)} = \dot{\hat{z}}_{1r} + \hat{d}_y^{(r)} = \dot{\overline{z}}_{1r} + \dot{\varepsilon}_{\hat{z}_{1r}} + \hat{d}_y^{(r)} \tag{16}
$$

Substituting the $r$th state equation of (10) into (16), we have

Robust Adaptive Repetitive and Iterative Learning Control for Rotary Systems Subject to Spatially Periodic Uncertainties http://dx.doi.org/10.5772/63082 109

$$\hat{y}^{(r)} = \dot{\overline{z}}_{1r} + \dot{\varepsilon}_{\hat{z}_{1r}} + \hat{d}_y^{(r)} = -k_r\overline{z}_{11} + k_r\hat{y} + \Theta^T\overline{W}_f\left(\hat{y}\right) + \Theta^T\overline{W}_g\left(\hat{y}\right)\hat{u} + \dot{\varepsilon}_{\hat{z}_{1r}} + \hat{d}_y^{(r)} \tag{17}$$

To put the previously developed state observer into use, we substitute the first equation of (15) into (17) and arrive at

$$\hat{y}^{(r)} = -k_r\left(\xi_{11} + \sum_{j=1}^{\ell} v_{j,1}\phi_j\right) + k_r\hat{y} + \Theta^T\overline{W}_f\left(\hat{y}\right) + \Theta^T\overline{W}_g\left(\hat{y}\right)\hat{u} + \dot{\varepsilon}_{\hat{z}_{1r}} + \hat{d}_y^{(r)} \tag{18}$$

Define the estimated parametric vector of *Θ* as

$$\tilde{\Theta} = \begin{bmatrix} \tilde{\phi}_{f1} & \cdots & \tilde{\phi}_{fk} & \tilde{\phi}_{g1} & \cdots & \tilde{\phi}_{gl} \end{bmatrix}^T = \begin{bmatrix} \tilde{\phi}_1 & \cdots & \tilde{\phi}_\ell \end{bmatrix}^T \in \mathbb{R}^\ell.$$

The control law using the estimated system parameters and states is

$$\hat{u} = \frac{1}{\tilde{\Theta}^T\overline{W}_g\left(\hat{y}\right)}\left(-\tilde{\Theta}^T\overline{W}_f\left(\hat{y}\right) + k_r\left(\xi_{11} + \sum_{j=1}^{\ell} v_{j,1}\tilde{\phi}_j\right) - k_r\hat{y} + \tilde{\hat{v}}_d + \hat{u}_R\right) \tag{19}$$

where we introduce two designable inputs, $\tilde{\hat{v}}_d$ and $\hat{u}_R$. Specify $\tilde{\hat{v}}_d$, the estimate of $\hat{v}_d$, as

$$
\tilde{\hat{v}}_d = \hat{y}_m^{(r)} + \alpha_1\left(\hat{y}_m^{(r-1)} - \tilde{\hat{y}}^{(r-1)}\right) + \cdots + \alpha_{r-1}\left(\dot{\hat{y}}_m - \dot{\tilde{\hat{y}}}\right) + \alpha_r\left(\hat{y}_m - \hat{y}\right), \tag{20}
$$

where $\hat{y}_m$ is a prespecified reference trajectory, $\tilde{\hat{y}}^{(k)}$ denotes the estimate of $\hat{y}^{(k)}$, and the $\alpha_i$'s are adjustable parameters. Substituting (19) back into (18) and defining the tracking error $\hat{e} \triangleq \hat{y} - \hat{y}_m$, we arrive at the following error equation:

$$\begin{aligned} \hat{e}^{(r)} + \alpha_1\hat{e}^{(r-1)} + \cdots + \alpha_{r-1}\dot{\hat{e}} + \alpha_r\hat{e} = {} & \Phi^T W + \hat{u}_R + \hat{d}_y^{(r)} + \dot{\varepsilon}_{\hat{z}_{1r}} \\ & + \alpha_1\left(\hat{d}_y + \varepsilon_{\hat{z}_{11}}\right)^{(r-1)} + \cdots + \alpha_{r-1}\left(\dot{\hat{d}}_y + \dot{\varepsilon}_{\hat{z}_{11}}\right) \end{aligned} \tag{21}$$

where $\Phi = \Theta - \tilde{\Theta}$ and $W$ is a function of $\xi$, $v$, and $\tilde{\Theta}$. If we denote $M(\tilde{s}) = 1/\left(\tilde{s}^r + \alpha_1\tilde{s}^{r-1} + \cdots + \alpha_r\right)$, (21) implies that

$$\begin{aligned} \frac{1}{M(\tilde{s})} \hat{E}(\tilde{s}) &= \Phi^T \mathcal{W} + \hat{U}_{\hat{R}}(\tilde{s}) + \left( \tilde{s}^r + \alpha_1 \tilde{s}^{r-1} + \dots + \alpha_{r-1} \tilde{s} \right) \hat{d}_y \\ &+ \left( \alpha_1 \tilde{s}^{r-1} + \dots + \alpha_{r-1} \tilde{s} \right) \varepsilon_{z_{11}} + \tilde{s}\, \varepsilon_{z_{1r}}. \end{aligned} \tag{22}$$
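The adjustable parameters $\alpha_i$ are typically chosen so that the polynomial $\tilde{s}^r + \alpha_1 \tilde{s}^{r-1} + \dots + \alpha_r$ in $1/M(\tilde{s})$ is Hurwitz. As a minimal sketch (not from the chapter; the function name and pole locations are illustrative), the $\alpha_i$ can be obtained by expanding a set of desired stable poles:

```python
def char_poly_coeffs(poles):
    """Expand prod_i (s - p_i) into s^r + alpha_1*s^(r-1) + ... + alpha_r
    and return [alpha_1, ..., alpha_r]."""
    coeffs = [1.0]                       # the polynomial "1"
    for p in poles:
        nxt = [0.0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            nxt[i] += c                  # contribution from multiplying by s
            nxt[i + 1] -= p * c          # contribution from multiplying by -p
        coeffs = nxt
    return coeffs[1:]                    # drop the leading coefficient of s^r

# e.g., poles at -1 and -2 give the error dynamics e'' + 3 e' + 2 e = 0
print(char_poly_coeffs([-1.0, -2.0]))    # [3.0, 2.0]
```

With all poles in the open left half-plane, the homogeneous part of (21) decays exponentially, which is the premise of the filtered-error analysis that follows.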

Neglecting the details of $\Phi^T \mathcal{W}$, we can view (21) or (22) as a linear system (with output $\hat{e}$) subject to five inputs. We propose adding another control loop between $\hat{E}(\tilde{s})$ and $\hat{U}_{\hat{R}}(\tilde{s})$. This loop provides an additional degree of freedom for reducing the effect of the unstructured uncertainty, the state estimation error, and the output disturbance. The tracking error $\hat{E}(\tilde{s})$ and the control input $\hat{U}_{\hat{R}}(\tilde{s})$ are related by

$$\hat{U}_{\hat{R}}(\tilde{s}) = -\hat{R}(\tilde{s}) \hat{C}(\tilde{s}) \hat{E}(\tilde{s}), \qquad \hat{R}(\tilde{s}) = \prod_{i=1}^{k} \frac{\tilde{s}^2 + 2\zeta_i \omega_{ni} \tilde{s} + \omega_{ni}^2}{\tilde{s}^2 + 2\xi_i \omega_{ni} \tilde{s} + \omega_{ni}^2} \ \text{(low-order repetitive controller)} \tag{23}$$

where $k$ is the number of periodic frequencies, $\omega_{ni}$ is the $i$th disturbance frequency in rad/rev, and $\xi_i$ and $\zeta_i$ are damping ratios satisfying $0 < \xi_i < \zeta_i < 1$. The gain of $\hat{R}(\tilde{s})$ at those periodic frequencies can be varied by adjusting the values of $\xi_i$ and $\zeta_i$. Furthermore, $\hat{C}(\tilde{s})$ is a controller that should ensure the stability of the overall system. Substituting (23) back into (22), we obtain

$$\begin{aligned} \left[ 1/M(\tilde{s}) + \hat{R}(\tilde{s}) \hat{C}(\tilde{s}) \right] \hat{E}(\tilde{s}) &= \Phi^T \mathcal{W} + \left( \tilde{s}^r + \alpha_1 \tilde{s}^{r-1} + \dots + \alpha_{r-1} \tilde{s} \right) \hat{d}_y \\ &+ \left( \alpha_1 \tilde{s}^{r-1} + \dots + \alpha_{r-1} \tilde{s} \right) \varepsilon_{z_{11}} + \tilde{s}\, \varepsilon_{z_{1r}} \end{aligned} \tag{24}$$
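As a side check, the gain that $\hat{R}(\tilde{s})$ injects at each periodic frequency can be evaluated numerically. The sketch below assumes each factor has numerator damping $\zeta_i$ and denominator damping $\xi_i$, so that its gain at the center frequency $\omega_{ni}$ is about $\zeta_i/\xi_i$; the specific numbers are made up for illustration:

```python
def repetitive_gain(w, modes):
    """|R(j*w)| for R(s) = prod (s^2 + 2*zeta*wn*s + wn^2)/(s^2 + 2*xi*wn*s + wn^2).
    `modes` is a list of (wn, xi, zeta) with 0 < xi < zeta < 1."""
    s = 1j * w
    g = 1.0
    for wn, xi, zeta in modes:
        g *= abs((s * s + 2 * zeta * wn * s + wn * wn)
                 / (s * s + 2 * xi * wn * s + wn * wn))
    return g

# a single mode at 10 rad/rev: gain at the mode is zeta/xi = 0.5/0.1 = 5,
# while far from the mode the factor's gain returns to about 1
print(round(repetitive_gain(10.0, [(10.0, 0.1, 0.5)]), 3))   # 5.0
```

Sweeping `w` over a grid gives the familiar comb-like magnitude response of a low-order repetitive controller: finite peaks at the $\omega_{ni}$ instead of the infinite gain of a classical internal-model design.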

Define

$$
\overline{M}(\tilde{s}) \triangleq \left[ 1/M(\tilde{s}) + \hat{R}(\tilde{s}) \hat{C}(\tilde{s}) \right]^{-1}. \tag{25}
$$

Equation (24) becomes

$$\hat{e} = \overline{M}(\tilde{s})\, \Phi^T \mathcal{W} + \hat{d}_{\overline{M}} \tag{26}$$

where

$$\hat{d}_{\overline{M}} = \overline{M}(\tilde{s}) \left[ \left( \tilde{s}^r + \alpha_1 \tilde{s}^{r-1} + \dots + \alpha_{r-1} \tilde{s} \right) \hat{d}_y + \left( \alpha_1 \tilde{s}^{r-1} + \dots + \alpha_{r-1} \tilde{s} \right) \varepsilon_{z_{11}} + \tilde{s}\, \varepsilon_{z_{1r}} \right]$$
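Because $\overline{M}(\tilde{s})$ determines how $\hat{d}_y$ and the observer errors are shaped into $\hat{e}$, inspecting its magnitude over frequency is a natural design check. A minimal evaluation sketch (`R` and `C` are passed as plain callables; the coefficients are illustrative, not from the chapter):

```python
def Mbar(w, alphas, R, C):
    """Evaluate Mbar(s) = [1/M(s) + R(s)*C(s)]^(-1) at s = j*w,
    where 1/M(s) = s^r + alpha_1*s^(r-1) + ... + alpha_r."""
    s = 1j * w
    r = len(alphas)
    Minv = s ** r + sum(a * s ** (r - 1 - i) for i, a in enumerate(alphas))
    return 1.0 / (Minv + R(s) * C(s))

# with R = C = 0 the filter reduces to M(s); at w = 0, |M(0)| = 1/alpha_r
print(abs(Mbar(0.0, [2.0, 1.0], lambda s: 0.0, lambda s: 0.0)))   # 1.0
```

Plugging in the repetitive controller for `R` shows $|\overline{M}(j\omega)|$ dipping at the periodic frequencies, which is exactly the attenuation of $\hat{d}_{\overline{M}}$ that the extra loop is meant to provide.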

Because $\dot{\hat{e}}, \ddot{\hat{e}}, \dots, \hat{e}^{(r-1)}$ cannot be measured directly, the so-called augmented error scheme will be used. The augmented error is defined as

$$
\hat{e}_1 = \hat{e} + \left( \Phi^T \overline{M}(\tilde{s}) \mathcal{W} - \overline{M}(\tilde{s}) \Phi^T \mathcal{W} \right). \tag{27}
$$

Substituting (26) into (27), we obtain


110 Robust Control - Theoretical Models and Case Studies


$$
\hat{e}_1 = \Phi^T \bar{\varsigma} + \hat{d}_{\overline{M}}, \tag{28}
$$

where $\bar{\varsigma} = \overline{M}(\tilde{s})\mathcal{W}$. The parametric adaptation law to be used is modified from the normalized gradient method proposed in [33], i.e.,

$$
\dot{\tilde{\Theta}} = -\dot{\Phi} =
\begin{cases}
\dfrac{\rho\, \hat{e}_1 \bar{\varsigma}}{1 + \bar{\varsigma}^T \bar{\varsigma}} & \text{if } |\hat{e}_1| > \hat{d}_{\overline{M}0} \text{ and } \tilde{\Theta} \in w^0, \\[2ex]
\mathrm{Pr}\!\left( \dfrac{\rho\, \hat{e}_1 \bar{\varsigma}}{1 + \bar{\varsigma}^T \bar{\varsigma}} \right) & \text{if } |\hat{e}_1| > \hat{d}_{\overline{M}0},\ \tilde{\Theta} \in \partial w, \text{ and the unprojected update points out of } w, \\[2ex]
0 & \text{if } |\hat{e}_1| \le \hat{d}_{\overline{M}0},
\end{cases} \tag{29}
$$

where $\mathrm{Pr}(\cdot)$ denotes the projection operator; $w$ is the allowable parametric variation set (compact and convex), with interior and boundary denoted by $w^0$ and $\partial w$, respectively; $\hat{d}_{\overline{M}0}$ is an upper bound on the magnitude of $\hat{d}_{\overline{M}}$; and $\rho$ is an adjustable adaptation rate that affects the convergence properties. If the magnitude of $\hat{e}_1$ is small and dominated by that of $\hat{d}_{\overline{M}}$, adaptation is disabled to prevent the parameters from being adjusted on the basis of the disturbance. If $|\hat{e}_1|$ exceeds the bound on $\hat{d}_{\overline{M}}$, two scenarios must be considered. If the current parameter estimate lies in the interior of the allowable set, the regular adaptation law is applied. If it lies on the boundary, the projected adaptation law is employed to keep the parameter vector from leaving the variation set.
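The three-branch logic of (29) can be sketched as a discrete-time update step. In the sketch below, the box bounds and projection-by-clipping are illustrative stand-ins for the compact convex set $w$ and the projection operator, and all names are hypothetical:

```python
def adapt_step(theta, e1, sbar, rho, d_bound, lo, hi):
    """One normalized-gradient update with dead zone and projection.
    theta: current parameter estimate; sbar: filtered regressor;
    d_bound: upper bound on the disturbance term; [lo, hi]: box stand-in for w."""
    if abs(e1) <= d_bound:                          # dead zone: disturbance dominates
        return list(theta)
    denom = 1.0 + sum(s * s for s in sbar)          # normalization 1 + sbar^T sbar
    step = [rho * e1 * s / denom for s in sbar]     # normalized gradient direction
    # projection by clipping keeps the estimate inside the allowable set
    return [min(max(t + u, l), h) for t, u, l, h in zip(theta, step, lo, hi)]

print(adapt_step([0.0], 1.0, [1.0], 1.0, 0.1, [-1.0], [1.0]))   # [0.5]
print(adapt_step([0.9], 1.0, [1.0], 1.0, 0.1, [-1.0], [1.0]))   # [1.0] (projected)
print(adapt_step([0.0], 0.05, [1.0], 1.0, 0.1, [-1.0], [1.0]))  # [0.0] (dead zone)
```

Clipping to a box is the simplest convex projection; for a general convex $w$ one would project onto its boundary along the outward normal, as the chapter's $\mathrm{Pr}(\cdot)$ does.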

In the following, we present the stability theorem for the proposed spatial-based OFLRARC system. The theorem extends the results in the literature [33,34] to take into account the addition of the repetitive control module. It will be seen that the overall OFLRARC system remains stable and the tracking error remains bounded as long as a stable and proper loop-shaping filter stabilizes a certain feedback system.

**Theorem 2.1** The error equation (28) with the parametric update law (29) leads to $\Phi \in L_\infty$, $\dot{\Phi} \in L_2 \cap L_\infty$, and $\left( \Phi^T \bar{\varsigma}(\theta) \right)^2 \le \gamma \left( 1 + \bar{\varsigma}^T \bar{\varsigma} \right)$ for all $\theta$.

Proof: Follow the same steps as in the proof of Theorem 3.1 in [24].

**Theorem 2.2** Consider an exponentially minimum-phase nonlinear system with parametric uncertainty and subject to an output disturbance, as given by (4), augmented with a state observer (or K-filters) as described by (13) [35]. Specify the control laws as (19), (20), and (23), and let Assumptions (1) to (9) be satisfied. Assume that $\hat{y}_m, \dot{\hat{y}}_m, \dots, \hat{y}_m^{(r-1)}$ (where $r$ is the relative degree) are bounded; that $\hat{d}_{\overline{M}}$ is bounded with upper bound $\hat{d}_{\overline{M}0}$; that $f$, $g$, $h$, $L_f^k h$, and $L_g L_f h$ are Lipschitz continuous; and that $\mathcal{W}$ has bounded derivatives with respect to $\xi$, $v$, and $\tilde{\Theta}$. In addition, assume that a stable and proper controller $\hat{C}(\tilde{s})$ is specified such that the feedback system shown in **Figure 1** is stable. Then the parametric adaptation law (29) yields a bounded tracking error, i.e., $|\hat{y}(\theta) - \hat{y}_m(\theta)| < \hat{d}_{\overline{M}0}$ as $\theta \to \infty$.

Proof: Follow the same steps as in the proof of Theorem 3.2 in [24], with some differences.

**Figure 1.** Repetitive controller and stabilizing compensator.
